<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Posts on But it works on my PC!</title>
    <link>https://blog.richardfennell.net/posts/</link>
    <description>Recent content in Posts on But it works on my PC!</description>
    <image>
      <title>But it works on my PC!</title>
      <url>https://blog.richardfennell.net/images/papermod-cover.png</url>
      <link>https://blog.richardfennell.net/images/papermod-cover.png</link>
    </image>
    <generator>Hugo -- 0.147.0</generator>
    <language>en</language>
    <lastBuildDate>Tue, 03 Mar 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://blog.richardfennell.net/posts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Options for Migrating DevOps Toolsets</title>
      <link>https://blog.richardfennell.net/posts/options-migrating-devops-toolsets/</link>
      <pubDate>Tue, 03 Mar 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/options-migrating-devops-toolsets/</guid>
      <description>&lt;p&gt;Whilst presenting on &amp;lsquo;Migrating DevOps Toolsets&amp;rsquo; at &lt;a href=&#34;https://www.dddnorth.co.uk/&#34;&gt;DDDNorth 2026&lt;/a&gt; last weekend, I mentioned a &lt;a href=&#34;https://tinyurl.com/azdomig&#34;&gt;blog post &amp;amp; flowchart&lt;/a&gt; I had created a few years ago to guide people through their options when migrating from what was then called TFS (now Azure DevOps Server) to what was then called VSTS (now Azure DevOps Services).&lt;/p&gt;
&lt;p&gt;When I got home, I thought this old post &amp;amp; flowchart were worth bringing up to date. I also took the chance to make them a little more generic, so they serve as a guide for a wider range of DevOps toolset migrations, not just Azure DevOps.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst presenting on &lsquo;Migrating DevOps Toolsets&rsquo; at <a href="https://www.dddnorth.co.uk/">DDDNorth 2026</a> last weekend, I mentioned a <a href="https://tinyurl.com/azdomig">blog post &amp; flowchart</a> I had created a few years ago to guide people through their options when migrating from what was then called TFS (now Azure DevOps Server) to what was then called VSTS (now Azure DevOps Services).</p>
<p>When I got home, I thought this old post &amp; flowchart were worth bringing up to date. I also took the chance to make them a little more generic, so they serve as a guide for a wider range of DevOps toolset migrations, not just Azure DevOps.</p>
<p>The flowchart is still skewed towards Azure DevOps and GitHub Enterprise, as those are the toolsets I have the most experience with, but the general principles work for any DevOps toolset migration.</p>
<p><img alt="image" loading="lazy" src="/images/rfennell/devops-migration-choices.png" title="DevOps Migration options flowchart"></p>
<p><a href="/files/DevOps-Migration-Choices.pdf">Click to download a PDF version of the flowchart</a>  </p>
<p>In the flowchart I mention a few tools, so here are some useful links:</p>
<ul>
<li>
<p><strong>Azure DevOps Server to the Azure DevOps Services migration</strong></p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/azure/devops/migrate/migration-get-started?view=azure-devops">Azure DevOps Database Migration Tool</a> – the official Microsoft full fidelity TPC migration service</li>
</ul>
</li>
<li>
<p><strong>Source Control migration tools</strong></p>
<ul>
<li><a href="https://github.com/git-tfs/git-tfs">Git TFS</a> – OSS tool to move TFVC into Git</li>
<li><a href="https://git-scm.com/docs/git-svn">Git SVN</a> – OSS tool to move SVN into Git</li>
<li><a href="https://github.com/trevorr/vss2git">Git VSS</a> – OSS tool to move VSS into Git</li>
<li><a href="https://learn.microsoft.com/en-us/azure/devops/repos/git/import-git-repository?view=azure-devops">Azure DevOps Git Import</a> - built-in feature of Azure DevOps to import a Git repo into Azure DevOps</li>
<li><a href="https://learn.microsoft.com/en-us/azure/devops/repos/git/import-from-tfvc?view=azure-devops">Azure DevOps TFVC Import</a> - built-in feature of Azure DevOps to import TFVC into Azure DevOps</li>
<li><a href="https://docs.github.com/en/migrations/importing-source-code/using-github-importer/importing-a-repository-with-github-importer">GitHub Git Import</a> - built-in feature of GitHub to import a Git repo</li>
<li><a href="https://github.com/github/gh-gei">GitHub Enterprise Import</a> - GitHub CLI tool to import repos from other toolsets at scale, and potentially re-point CI/CD tooling to GitHub Enterprise</li>
<li><a href="https://www.timelymigration.com/">Timely Migrations</a> - Commercial tool to migrate TFVC with history between servers</li>
</ul>
</li>
<li>
<p><strong>Work Tracking migration tools</strong></p>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=nkdagility.vsts-sync-migration">Migration Tools for Azure DevOps</a> – OSS tool to migrate Azure DevOps Work items between projects/instances</li>
<li><a href="https://github.com/solidify/jira-azuredevops-migrator">Jira to Azure DevOps work item migration tool</a> - OSS tool to migrate Jira tickets to Azure DevOps Work items</li>
<li><a href="https://www.opshub.com/products/opshub-migration-manager/">OpsHub Migration Manager</a> - Commercial migration tool that supports many work tracking toolsets</li>
</ul>
</li>
<li>
<p><strong>CI/CD migration tools</strong></p>
<ul>
<li><a href="https://github.com/github/gh-actions-importer">GitHub Actions Import</a> - GitHub CLI tool to convert CI/CD definitions from one toolset to another</li>
</ul>
</li>
</ul>
<p>Hope some of this is useful to anyone considering a migration.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Unexpected &#39;task is dependent on a Node version that is end-of-life&#39; warning with Azure DevOps Pipelines</title>
      <link>https://blog.richardfennell.net/posts/unexpected-is-dependent-on-a-node-version-that-is-end-of-life-error/</link>
      <pubDate>Wed, 18 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/unexpected-is-dependent-on-a-node-version-that-is-end-of-life-error/</guid>
      <description>&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;
&lt;p&gt;I have been doing the regular maintenance in our Azure DevOps Pipelines of updating the versions of tasks. This usually means you perform one of the following actions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Just increment the major version number in the YAML, e.g. &lt;code&gt;task: MyTask@1&lt;/code&gt; to &lt;code&gt;task: MyTask@2&lt;/code&gt;, when there is a newer version of the task available.&lt;/li&gt;
&lt;li&gt;If the task is out of support and abandoned, swap to a different task that does the same job and is still supported.&lt;/li&gt;
&lt;li&gt;Fork the out-of-date task and perform the updates needed to bring it back into support.&lt;/li&gt;
&lt;li&gt;Swap to a PowerShell/Bash script that wraps a CLI tool to do the same job - increasingly my go-to solution.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It is a shame the first option is often not possible, but &lt;a href=&#34;https://jessehouwing.net/security-state-of-the-azure-devops-marketplace/&#34;&gt;don&amp;rsquo;t get me, or other MVPs, started on the long-running subject of abandoned insecure Azure DevOps extensions&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="background">Background</h1>
<p>I have been doing the regular maintenance in our Azure DevOps Pipelines of updating the versions of tasks. This usually means you perform one of the following actions:</p>
<ul>
<li>Just increment the major version number in the YAML, e.g. <code>task: MyTask@1</code> to <code>task: MyTask@2</code>, when there is a newer version of the task available.</li>
<li>If the task is out of support and abandoned, swap to a different task that does the same job and is still supported.</li>
<li>Fork the out-of-date task and perform the updates needed to bring it back into support.</li>
<li>Swap to a PowerShell/Bash script that wraps a CLI tool to do the same job - increasingly my go-to solution.</li>
</ul>
<p>It is a shame the first option is often not possible, but <a href="https://jessehouwing.net/security-state-of-the-azure-devops-marketplace/">don&rsquo;t get me, or other MVPs, started on the long-running subject of abandoned insecure Azure DevOps extensions</a>.</p>
<h1 id="the-problem">The Problem</h1>
<p>The problem I hit today was with the <a href="https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup">Post Build Cleanup extension from Microsoft Premier Services</a>. This had started reporting the warning:</p>
<pre tabindex="0"><code>##[warning]Task &#39;Post Build Cleanup&#39; version 4 (PostBuildCleanup@4) is dependent on a Node version (16) that is end-of-life. Contact the extension owner for an updated version of the task. Task maintainers should review Node upgrade guidance: https://aka.ms/node-runner-guidance
</code></pre><p>On checking <a href="https://github.com/MicrosoftPremier/VstsExtensions">the project repo</a>, I saw that version 4 was the latest version of this task. There is an <a href="https://github.com/MicrosoftPremier/VstsExtensions/issues/263">issue #263</a> that says the extension has no maintainer, and it is recommended you <a href="https://github.com/MicrosoftPremier/VstsExtensions/blob/master/PostBuildCleanup/en-US/cleanupRepository.yml">swap to using a PowerShell script</a>. However, I wondered if the extension could be forked, and that is where it got interesting.</p>
<p>The problem is that the Microsoft Premier Services team chose not to open source the code for their extensions; the project repo is only used to track issues and store archived .VSIX packages. However, with Azure DevOps that is not a complete blocker, as you can:</p>
<ol>
<li>Pick the option to download the extension&rsquo;s .VSIX package for Azure DevOps Server use from the <a href="https://marketplace.visualstudio.com/">Azure DevOps Marketplace</a></li>
<li>Rename the .VSIX file to .ZIP</li>
<li>Expand the ZIP file into a folder, and you can then have a look at the code</li>
</ol>
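<p>The steps above can be sketched from the command line. A minimal, self-contained demonstration: the package here is fabricated (a real .VSIX contains far more than a single manifest, and all names are illustrative), and <code>python3 -m zipfile</code> stands in for whatever ZIP tooling you have to hand.</p>

```shell
# Fabricate a minimal "extension package" so the steps are self-contained.
# (A real .VSIX holds the task folders, code, and manifests.)
echo '{"name": "demo-task"}' > task.json
python3 -m zipfile -c example.vsix task.json

# A .VSIX is just a ZIP archive: rename it, expand it, then inspect the contents.
mv example.vsix example.zip
python3 -m zipfile -e example.zip extracted
cat extracted/task.json
```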
<p>As I said, this is where it got interesting. When I looked in the <code>task.json</code> file I could see the task was set up to use any of three Node versions, including Node 20:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"> <span class="s2">&#34;execution&#34;</span><span class="err">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;Node10&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;target&#34;</span><span class="p">:</span> <span class="s2">&#34;enableCleanup.js&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;argumentFormat&#34;</span><span class="p">:</span> <span class="s2">&#34;&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">},</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;Node16&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;target&#34;</span><span class="p">:</span> <span class="s2">&#34;enableCleanup.js&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;argumentFormat&#34;</span><span class="p">:</span> <span class="s2">&#34;&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">},</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;Node20&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;target&#34;</span><span class="p">:</span> <span class="s2">&#34;enableCleanup.js&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;argumentFormat&#34;</span><span class="p">:</span> <span class="s2">&#34;&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span><span class="err">,</span>
</span></span></code></pre></div><p>So why was I getting the warning about Node 16 when it should have been using Node 20?</p>
<h1 id="the-answer">The Answer</h1>
<p>The issue is that the current Azure DevOps agents, and the GitHub runners that use a related codebase, no longer ship with Node 20. If you look in a self-hosted runner&rsquo;s <code>externals</code> folder you will see folders for:</p>
<ul>
<li>node10</li>
<li>node16</li>
<li>node20_1</li>
<li>node24</li>
</ul>
<p>The task can&rsquo;t find Node 20, as it has been replaced by Node 20.1 with its numerous security fixes, so the runner falls back to using Node 16 and hence generates the version warning.</p>
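<p>The fallback is easy to see by recreating the folder layout listed above. A toy sketch (the paths are illustrative, not a real agent install):</p>

```shell
# Recreate the runtime folders shipped in the agent's externals directory.
mkdir -p demo-agent/externals/node10 demo-agent/externals/node16 \
         demo-agent/externals/node20_1 demo-agent/externals/node24

# A task.json declaring "Node20" is matched against the folder names,
# so the exact-name lookup fails and an older runtime is used instead.
if [ -d demo-agent/externals/node20 ]; then
  echo "node20 found"
else
  echo "node20 missing - falls back to an older Node"
fi
```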
<p>The fix, if you have forked the extension, or have your own extension showing the same issue, is to change the execution block in the <code>task.json</code> to target Node 20.1:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"> <span class="s2">&#34;execution&#34;</span><span class="err">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;Node10&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;target&#34;</span><span class="p">:</span> <span class="s2">&#34;enableCleanup.js&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;argumentFormat&#34;</span><span class="p">:</span> <span class="s2">&#34;&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">},</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;Node16&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;target&#34;</span><span class="p">:</span> <span class="s2">&#34;enableCleanup.js&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;argumentFormat&#34;</span><span class="p">:</span> <span class="s2">&#34;&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">},</span>
</span></span><span class="line"><span class="cl">        <span class="nt">&#34;Node20_1&#34;</span><span class="p">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;target&#34;</span><span class="p">:</span> <span class="s2">&#34;enableCleanup.js&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="nt">&#34;argumentFormat&#34;</span><span class="p">:</span> <span class="s2">&#34;&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span><span class="err">,</span>
</span></span></code></pre></div><p>Once this is done, Node 16 is no longer used.</p>
<h1 id="summing-up">Summing Up</h1>
<p>In this case swapping to the recommended <a href="https://github.com/MicrosoftPremier/VstsExtensions/blob/master/PostBuildCleanup/en-US/cleanupRepository.yml">clean-up template</a> is the obvious answer, but it was an interesting exploration to find the root cause of this confusing warning.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Caught out again by the cache in a build pipeline</title>
      <link>https://blog.richardfennell.net/posts/caught-out-again-by-a-cache-in-a-build-pipeline/</link>
      <pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/caught-out-again-by-a-cache-in-a-build-pipeline/</guid>
      <description>&lt;p&gt;I have &lt;a href=&#34;https://blog.richardfennell.net/posts/interesting-side-effect-with-azdo-cache/&#34;&gt;posted in the past&lt;/a&gt; about the issues a misconfigured cache can cause in Azure DevOps Pipelines or GitHub Actions. Well, it caught me out again today, wasting a few hours of my time.&lt;/p&gt;
&lt;p&gt;Today, I had a failing integration test in a CI/CD pipeline, but all the tests passed locally. The failing test was looking for a certain number of rows returned from an API with known seed test data. I had revised the seed data and my test, but in my CI/CD pipeline my test was failing as it was still looking for the old number of rows.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have <a href="/posts/interesting-side-effect-with-azdo-cache/">posted in the past</a> about the issues a misconfigured cache can cause in Azure DevOps Pipelines or GitHub Actions. Well, it caught me out again today, wasting a few hours of my time.</p>
<p>Today, I had a failing integration test in a CI/CD pipeline, but all the tests passed locally. The failing test was looking for a certain number of rows returned from an API with known seed test data. I had revised the seed data and my test, but in my CI/CD pipeline my test was failing as it was still looking for the old number of rows.</p>
<p>Even though I went as far as inspecting the IL code in my test DLL, using <a href="https://www.jetbrains.com/decompiler/">dotPeek</a>, to confirm that the expected number of rows was the old incorrect value and not the new one, it took me far too long to consider that my problem could be a misconfigured cache (again).</p>
<p>My issue was that the folder the cache task was checking was no longer being created, due to changes in the pipeline unrelated to my tests. As the cache task could not find its target folder, it fell back to caching everything in the working directory, in effect reverting any changes made in the pipeline back to the state of the last successful run.</p>
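<p>This failure mode can be reproduced in miniature: when a cache falls back to saving and restoring the whole working directory, a restore silently overwrites anything changed since the cached run. A toy sketch, with illustrative file names:</p>

```shell
# Run 1: the over-broad "cache" captures the whole working directory,
# old value included.
mkdir -p work cache
echo "expected-rows=5" > work/test-config.txt
cp -R work/. cache/

# Run 2: the pipeline updates the file...
echo "expected-rows=7" > work/test-config.txt
# ...but restoring the over-broad cache silently reverts it.
cp -R cache/. work/
cat work/test-config.txt   # shows the stale value: expected-rows=5
```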
<p>The fix, in this case, was to remove the cache as it was no longer required. However, the real moral of the story is to always consider any caching if your build artifacts are not being updated as you expect.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot create a new Azure DevOps Agent Pool on an Azure DevOps Server</title>
      <link>https://blog.richardfennell.net/posts/cannot-create-azdo-agent-pool/</link>
      <pubDate>Wed, 21 Jan 2026 01:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-create-azdo-agent-pool/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;I recently had an issue trying to add a new Azure DevOps Pipeline Agent Pool to an existing Azure DevOps Server 2022 instance via the Team Project Collection Settings UI.&lt;/p&gt;
&lt;p&gt;When I tried to add the agent pool I got the error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Access denied &lt;user name&gt; needs Manage permissions to perform this action. For more information, contact the Azure DevOps Server administrator&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The problem was that I was the Azure DevOps Server administrator.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>I recently had an issue trying to add a new Azure DevOps Pipeline Agent Pool to an existing Azure DevOps Server 2022 instance via the Team Project Collection Settings UI.</p>
<p>When I tried to add the agent pool I got the error:</p>
<blockquote>
<p>Access denied &lt;user name&gt; needs Manage permissions to perform this action. For more information, contact the Azure DevOps Server administrator</p></blockquote>
<p>The problem was that I was the Azure DevOps Server administrator.</p>
<h2 id="the-solution">The Solution</h2>
<p>The answer was to create the new Agent Pool at the Team Project level.</p>
<p>The difference when creating the new Agent Pool at the Team Project level is that, although you can see the new Agent Pool at the Team Project Collection level, it is only usable in the Team Project it was created in.</p>
<p>For any other Team Projects, you have to add the newly created Agent Pool as an &lsquo;Existing Agent Pool&rsquo; via the Team Project&rsquo;s Settings &gt; Agent Pools menu.</p>
<h2 id="so-why-did-this-work">So why did this work?</h2>
<p>My assumption is that in one or more Team Projects in the Team Project Collection there are some custom permissions that deny access to the Agent Pool settings.</p>
<p>As adding a new Agent Pool at the collection level tries to add it to all existing Team Projects, any such deny permission could cause the error I was seeing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Issues parsing xUnit Test Coverage data into SonarQube</title>
      <link>https://blog.richardfennell.net/posts/errors-getting-test-coverage-into-sonarqube/</link>
      <pubDate>Wed, 21 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/errors-getting-test-coverage-into-sonarqube/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;I have been chasing what turned out to be a non-existent fault when trying to ingest test code coverage data into our SonarQube instance.&lt;/p&gt;
&lt;p&gt;I saw my &amp;lsquo;problem&amp;rsquo; in a .NET 8.0 solution with xUnit v3 based unit tests. This solution was being built using the following Azure DevOps Pipelines YAML:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;SonarQubePrepare@7&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;SonarQube&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;SonarQube&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;scannerMode&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;dotnet&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;jdkversion&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;JAVA_HOME_17_X64&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;projectKey&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;${{ parameters.sonarQubeProjectKey }}&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;projectName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;${{ parameters.sonarQubeProjectName }}&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;projectVersion&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;$(GitVersion_Major).$(GitVersion_Minor)&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;extraProperties&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;sd&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        # Additional properties that will be passed to the scanner,
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        # Put one key=value per line, example:
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        sonar.cpd.exclusions=**/AssemblyInfo.cs,**/*.g.cs
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        # Ingest the test results and coverage data
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        sonar.cs.vscoveragexml.reportsPaths=$(Agent.TempDirectory)/**/*.coveragexml
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        sonar.cs.vstest.reportsPaths=$(Agent.TempDirectory)/**/*.trx&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;DotNetCoreCLI@2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;.NET Build&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;command&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;build&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;arguments&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;&amp;gt;&lt;/span&gt;&lt;span class=&#34;sd&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        --configuration ${{ parameters.buildConfiguration }}
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        --no-restore&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;projects&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;$(Build.SourcesDirectory)/src/MySolution.sln&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;DotNetCoreCLI@2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;.NET Test&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;command&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;test&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;projects&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;$(Build.SourcesDirectory)/src/MySolution.sln&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;arguments&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;&amp;gt;&lt;/span&gt;&lt;span class=&#34;sd&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        --configuration ${{ parameters.buildConfiguration }}
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        --collect &amp;#34;Code coverage&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        --no-restore
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        --no-build&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;SonarQubeAnalyze@7&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Complete the SonarQube analysis&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;jdkversion&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;JAVA_HOME_17_X64&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;SonarQubePublish@7&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Publish Quality Gate Result&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;pollingTimeoutSec&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;300&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;At the start of the &lt;code&gt;SonarQubeAnalyze@7&lt;/code&gt; task log I could see that the &lt;code&gt;.coverage&lt;/code&gt; file was found and converted into a &lt;code&gt;.coveragexml&lt;/code&gt; file. However, there were multiple &amp;lsquo;The device is not ready&amp;rsquo; errors when parsing this file later in the process.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>I have been chasing what turned out to be a non-existent fault while trying to ingest test code coverage data into our SonarQube instance.</p>
<p>I saw my &lsquo;problem&rsquo; in a .NET 8.0 solution with xUnit v3 based unit tests. The solution was being built using the following Azure DevOps Pipelines YAML:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w"> </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">SonarQubePrepare@7</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">SonarQube</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;SonarQube&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">scannerMode</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;dotnet&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jdkversion</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;JAVA_HOME_17_X64&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectKey</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${{ parameters.sonarQubeProjectKey }}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${{ parameters.sonarQubeProjectName }}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectVersion</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(GitVersion_Major).$(GitVersion_Minor)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extraProperties</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Additional properties that will be passed to the scanner,
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Put one key=value per line, example:
</span></span></span><span class="line"><span class="cl"><span class="sd">        sonar.cpd.exclusions=**/AssemblyInfo.cs,**/*.g.cs
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Ingest the test results and coverage data
</span></span></span><span class="line"><span class="cl"><span class="sd">        sonar.cs.vscoveragexml.reportsPaths=$(Agent.TempDirectory)/**/*.coveragexml
</span></span></span><span class="line"><span class="cl"><span class="sd">        sonar.cs.vstest.reportsPaths=$(Agent.TempDirectory)/**/*.trx</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DotNetCoreCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;.NET Build&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;build&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">arguments</span><span class="p">:</span><span class="w"> </span><span class="p">&gt;</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        --configuration ${{ parameters.buildConfiguration }}
</span></span></span><span class="line"><span class="cl"><span class="sd">        --no-restore</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projects</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.SourcesDirectory)/src/MySolution.sln&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DotNetCoreCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;.NET Test&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;test&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projects</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.SourcesDirectory)/src/MySolution.sln&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">arguments</span><span class="p">:</span><span class="w"> </span><span class="p">&gt;</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        --configuration ${{ parameters.buildConfiguration }}
</span></span></span><span class="line"><span class="cl"><span class="sd">        --collect &#34;Code coverage&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        --no-restore
</span></span></span><span class="line"><span class="cl"><span class="sd">        --no-build</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">SonarQubeAnalyze@7</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Complete the SonarQube analysis&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jdkversion</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;JAVA_HOME_17_X64&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">SonarQubePublish@7</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Publish Quality Gate Result&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pollingTimeoutSec</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;300&#34;</span><span class="w">
</span></span></span></code></pre></div><p>At the start of the <code>SonarQubeAnalyze@7</code> task log I could see that the <code>.coverage</code> file was found and converted into a <code>.coveragexml</code> file. However, there were multiple &lsquo;The device is not ready&rsquo; errors when parsing this file later in the process.</p>
<pre tabindex="0"><code>Falling back on locating coverage files in the agent temp directory.
Searching for coverage files in E:\Agent\_work\_temp
All matching files: count=2
	E:\Agent\_work\_temp\a3b45684-38dd-4511-a371-1326896d8e57\BMAGENT2025_2026-01-20.16_40_35.coverage
	E:\Agent\_work\_temp\BMAGENT2025_2026-01-20_16_40_32\In\BMAGENT2025\BMAGENT2025_2026-01-20.16_40_35.coverage
Unique coverage files: count=1
	E:\Agent\_work\_temp\a3b45684-38dd-4511-a371-1326896d8e57\BMAGENT2025_2026-01-20.16_40_35.coverage
Converting coverage file &#39;E:\Agent\_work\_temp\a3b45684-38dd-4511-a371-1326896d8e57\BMAGENT2025_2026-01-20.16_40_35.coverage&#39; to &#39;E:\Agent\_work\_temp\a3b45684-38dd-4511-a371-1326896d8e57\BMAGENT2025_2026-01-20.16_40_35.coveragexml&#39;.
Coverage report conversion completed successfully.

... other analysis ...

INFO: Sensor C# Tests Coverage Report Import [csharpenterprise]
INFO: Parsing the Visual Studio coverage XML report E:\Agent\_work\_temp\a3b45684-38dd-4511-a371-1326896d8e57\BMAGENT2025_2026-01-20.16_40_35.coveragexml
WARN: Skipping the import of Visual Studio XML code coverage for the invalid file path: D:\a\_work\1\s\src\DotNetWorker.Core\StartupHook.cs at line 3085
java.io.IOException: The device is not ready
</code></pre><p>Initially my project contained no &lsquo;real&rsquo; tests, just some placeholders, as I was still getting the CI/CD process in place. These placeholder tests generated no significant test coverage, so I expected the coverage in SonarQube to be near 0%. However, I never actually checked the coverage value in the SonarQube UI, as I assumed these errors in the log meant that coverage data was not being processed at all.</p>
<p>I now know this was not the case, but it took me a while to realise this.</p>
<h2 id="the-exploration">The Exploration</h2>
<p>On seeing the errors, I started trying to isolate the issue.</p>
<ul>
<li>I created a small sample solution containing a .NET 8.0 class library and some MSTest-based unit tests. Using the same YAML pipeline settings as above, the test coverage was imported without error.</li>
<li>I next swapped from MSTest to xUnit; I could still import the coverage without errors.</li>
</ul>
<p>So the issue looked to be due to my codebase, not the basic tools in use.</p>
<p>An Internet search showed that other people had seen similar issues, and one suggested answer was to gather the code coverage data using <a href="https://github.com/coverlet-coverage/coverlet">Coverlet</a> rather than the built-in dotnet test coverage tooling.</p>
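<p>For anyone wanting to try that route, the change is just to the test task&rsquo;s arguments. This is a sketch rather than my final pipeline: it assumes the test projects reference the <code>coverlet.collector</code> NuGet package, and it swaps the built-in collector for Coverlet&rsquo;s &lsquo;XPlat Code Coverage&rsquo; collector, which produces XML output instead of the binary <code>.coverage</code> file.</p>
<pre tabindex="0"><code>  - task: DotNetCoreCLI@2
    displayName: ".NET Test (Coverlet collector)"
    inputs:
      command: "test"
      projects: "$(Build.SourcesDirectory)/src/MySolution.sln"
      arguments: >
        --configuration ${{ parameters.buildConfiguration }}
        --collect "XPlat Code Coverage"
        --no-restore
        --no-build
</code></pre>
<p>Note that SonarQube would then need to be pointed at the Coverlet output (in a format it understands, such as OpenCover) via the matching scanner property, rather than <code>sonar.cs.vscoveragexml.reportsPaths</code>.</p>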
<p>So I did that; my errors disappeared and the test coverage was imported. However, the downside of using Coverlet is that you lose the Coverage tab in the Azure DevOps pipeline summary view, which needs the binary <code>.coverage</code> format file.</p>
<p>As I did not want to lose the built-in coverage tab in the Azure DevOps pipeline summary view, I was back where I started, or so I thought.</p>
<p>It was then I realised that although I was seeing errors in the log, I was also seeing code coverage details in SonarQube. This was easier to see now because, while I had been fiddling, tests had been added to the codebase, so I now had significant code coverage.</p>
<h2 id="a-possible-explanation">A Possible Explanation</h2>
<p>I am not 100% certain of the cause. There is certainly an issue with SonarQube parsing <code>.coveragexml</code> files; the question is, does it matter for my purposes?</p>
<p>I suspect the issue is that my <code>.coveragexml</code> file contains coverage data for both my project assemblies and externally referenced ones. I can see this in the Azure DevOps pipeline summary code coverage view.</p>
<p><img alt="Azure DevOps Coverage View" loading="lazy" src="/images/rfennell/sonarqube-test-coverage.jpg"></p>
<p>I think it is the external assemblies that are causing the parsing issue, but these errors are not actually important to me, as they concern files outside my project&rsquo;s codebase under analysis in SonarQube.</p>
<p>So, I would have saved myself a lot of time if I had noticed that the import issues were warnings, not errors, and had read the line after the warning block in the SonarQube log, which told me what had actually been imported successfully.</p>
<pre tabindex="0"><code>WARN: Skipping the import of Visual Studio XML code coverage for the invalid file path: D:\a\_work\1\s\src\DotNetWorker.Core\StartupHook.cs at line 3085
java.io.IOException: The device is not ready

INFO: Coverage Report Statistics: 88 files, 58 main files, 58 main files with coverage, 30 test files, 0 project excluded files, 0 other language files.
</code></pre><p>The moral of the story is to read the whole of the log before jumping to conclusions, like you were told to do for exam questions at school!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updating powerBI connections after a SQL migration</title>
      <link>https://blog.richardfennell.net/posts/updating-powerbi-db-connections/</link>
      <pubDate>Thu, 15 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updating-powerbi-db-connections/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;I have some long standing PowerBI reports that I use for summarizing project data. They use a variety of data sources, including Azure hosted SQL instances. I recently moved the Azure hosted SQL databases to a new instance as part of a major tidy up of my Azure resources. This of course caused my reports to break.&lt;/p&gt;
&lt;p&gt;I thought swapping the SQL connection details in PowerBI would be easy, and I guess it was, but it took me too long to work out how.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>I have some long standing PowerBI reports that I use for summarizing project data. They use a variety of data sources, including Azure hosted SQL instances. I recently moved the Azure hosted SQL databases to a new instance as part of a major tidy up of my Azure resources. This of course caused my reports to break.</p>
<p>I thought swapping the SQL connection details in PowerBI would be easy, and I guess it was, but it took me too long to work out how.</p>
<p>The main issue was that my reports were quick, &lsquo;hacked together&rsquo; reports, just pushed to my Power BI personal workspace. I had never considered a formal publication with shared data connectors and gateways. This had come back to bite me.</p>
<h2 id="the-solution">The Solution</h2>
<p>You can&rsquo;t just change the dataset (now called the semantic model) in the Power BI web UI. The docs say you should be able to, and I could for a newly created report, but not for my very old ones! I guess they must be using some older schema?</p>
<p>After many false starts trying to use the web UI, I ended up using Power BI Desktop as follows:</p>
<ol>
<li>Open my PowerBI workspace in the browser</li>
<li>Download each of my reports as .PBIX</li>
<li>Edit the .PBIX in the <a href="https://apps.microsoft.com/detail/9NTXR16HNW1T?hl=en-us&amp;gl=GB&amp;ocid=pdpshare">PowerBI desktop app</a></li>
<li>Go to &lsquo;Transform Data&rsquo; (on the top menu bar), a new Window appears</li>
<li>Go to &lsquo;Data source settings&rsquo; (on the top menu bar), a dialog appears</li>
<li>Press the &lsquo;Change Source&hellip;&rsquo; button, and enter the new connection details</li>
<li>When the &lsquo;Change Source&hellip;&rsquo; dialog is closed you will see a banner message &lsquo;Reauthenticate and reapply&rsquo;. Press the button to start this process</li>
<li>Your local copy of the report should now be using the new data source</li>
<li>Save the updated .PBIX, I used a new name but this is not essential</li>
<li>Upload the revised .PBIX to your Power BI Workspace.</li>
</ol>
<p>So I got there in the end. I am sure there are easier ways; I am just not familiar enough with Power BI to know them (yet).</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updating a project&#39;s SonarQube and OWASP Dependency Checker Plugin Configuration</title>
      <link>https://blog.richardfennell.net/posts/updating-our-sonarqube-configuration/</link>
      <pubDate>Thu, 08 Jan 2026 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updating-our-sonarqube-configuration/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;We have used &lt;a href=&#34;https://www.sonarsource.com/products/sonarqube/&#34;&gt;SonarQube&lt;/a&gt; and the &lt;a href=&#34;https://github.com/dependency-check/dependency-check-sonar-plugin&#34;&gt;OWASP Dependency Checker Plugin&lt;/a&gt; for many years to perform analysis and vulnerability checking within our Azure DevOps Pipelines.&lt;/p&gt;
&lt;p&gt;Recently, whilst picking up an old project for a new phase of development, I came across a couple of problems due to changes in both tools since the project CI/CD pipelines were last run.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The OWASP Dependency Checker vulnerabilities were not appearing in SonarQube as issues&lt;/li&gt;
&lt;li&gt;The OWASP Dependency Checker HTML report could not (always) be loaded in SonarQube&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The issues were just down to changes in both tools over time. It just goes to show that you can&amp;rsquo;t just setup a CI/CD system and expect it work forever, changes are always being introduced in cloud based tools.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>We have used <a href="https://www.sonarsource.com/products/sonarqube/">SonarQube</a> and the <a href="https://github.com/dependency-check/dependency-check-sonar-plugin">OWASP Dependency Checker Plugin</a> for many years to perform analysis and vulnerability checking within our Azure DevOps Pipelines.</p>
<p>Recently, whilst picking up an old project for a new phase of development, I came across a couple of problems due to changes in both tools since the project CI/CD pipelines were last run.</p>
<ul>
<li>The OWASP Dependency Checker vulnerabilities were not appearing in SonarQube as issues</li>
<li>The OWASP Dependency Checker HTML report could not (always) be loaded in SonarQube</li>
</ul>
<p>The issues were just down to changes in both tools over time. It just goes to show that you can&rsquo;t set up a CI/CD system and expect it to work forever; changes are always being introduced in cloud-based tools.</p>
<h1 id="the-solution">The Solution</h1>
<h2 id="changes-in-owasp-dependency-checker-supported-output-formats">Changes in OWASP Dependency Checker supported output formats</h2>
<p>The reason the OWASP Dependency Checker vulnerabilities were not appearing in SonarQube as issues was that the data file format was incorrect. We were exporting XML, which was historically correct, but JSON is now required.</p>
<p>The <a href="https://github.com/dependency-check/dependency-check-sonar-plugin">OWASP Dependency Checker Plugin</a> changed the way it handles importing issues in the PR <a href="https://github.com/dependency-check/dependency-check-sonar-plugin/pull/755">Remove depreacted XML-Report Parser #755</a>. This means the following changes are required</p>
<ul>
<li>Get the OWASP Dependency Checker tool to produce JSON and HTML reports, as opposed to XML and HTML</li>
<li>Update the SonarQube scanner configuration to use the <a href="https://github.com/dependency-check/dependency-check-sonar-plugin">plugin&rsquo;s</a> currently supported parameter, i.e. swap <code>sonar.dependencyCheck.reportPath</code> for <code>sonar.dependencyCheck.jsonReportPath</code></li>
</ul>
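<p>In practice, both changes are one-liners. As a sketch (the exact report path here is an assumption, chosen to match the pipeline shown later), the report generation asks for JSON instead of XML, and the scanner property is renamed:</p>
<pre tabindex="0"><code># Dependency Checker report formats: request JSON (and HTML) instead of XML
--format JSON --format HTML

# SonarQubePrepare extraProperties, before:
sonar.dependencyCheck.reportPath=$(Build.ArtifactStagingDirectory)/vulnerabilityscan/dependency-check-report.xml

# After:
sonar.dependencyCheck.jsonReportPath=$(Build.ArtifactStagingDirectory)/vulnerabilityscan/dependency-check-report.json
</code></pre>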
<h2 id="viewing-html-dependency-reports">Viewing HTML Dependency Reports</h2>
<p>The issue with viewing the OWASP Dependency Checker HTML report was down to size. If an HTML report that is too big has been uploaded to any branch/PR in a SonarQube project, you get heap memory issues whenever you try to access that SonarQube project. This persists until the offending branch/PR is removed, which will probably happen via SonarQube&rsquo;s housekeeping automation: inactive branches/PRs are removed after 30 days.</p>
<p>You could of course fiddle with the SonarQube container settings to add memory/performance, but the simple answer is to just not upload the HTML reports to SonarQube.</p>
<p>As long as the JSON report is uploaded, all the vulnerabilities are presented within the SonarQube system as issues, and as we attach the HTML report as a pipeline artifact, it is still available in Azure DevOps if needed.</p>
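<p>Attaching the HTML report as a pipeline artifact is straightforward; a minimal sketch (the path and artifact name here are assumptions chosen to match the rest of this pipeline) would be:</p>
<pre tabindex="0"><code>  # Keep the HTML report available in Azure DevOps without uploading it to SonarQube
  - task: PublishPipelineArtifact@1
    displayName: "Publish the OWASP Dependency Check HTML report"
    inputs:
      targetPath: "$(Build.ArtifactStagingDirectory)/vulnerabilityscan"
      artifact: "vulnerabilityscan"
</code></pre>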
<p>So the simplest answer is to not define, or comment out, the <code>sonar.dependencyCheck.htmlReportPath</code> SonarQube Dependency Check parameter.</p>
<h2 id="the-final-pipeline">The Final Pipeline</h2>
<p>The pipeline I ended up with is as follows</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="w"> </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">SonarQubePrepare@7</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">SonarQube</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;SonarQube Service Connection&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">scannerMode</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;dotnet&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jdkversion</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;JAVA_HOME_17_X64&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectKey</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;MPK&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;My Project Name&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectVersion</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(GitVersion_Major).$(GitVersion_Minor)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">extraProperties</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Additional properties that will be passed to the scanner,
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Put one key=value per line, example:
</span></span></span><span class="line"><span class="cl"><span class="sd">        sonar.dependencyCheck.jsonReportPath=$(Build.ArtifactStagingDirectory)/vulnerabilityscan/dependency-check-report.json
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Comment out the HTML report path to avoid SonarQube heap memory issues
</span></span></span><span class="line"><span class="cl"><span class="sd">        # sonar.dependencyCheck.htmlReportPath=$(Build.ArtifactStagingDirectory)/vulnerabilityscan/dependency-check-report.html
</span></span></span><span class="line"><span class="cl"><span class="sd">        sonar.cpd.exclusions=**/AssemblyInfo.cs,**/*.g.cs
</span></span></span><span class="line"><span class="cl"><span class="sd">        sonar.cs.vscoveragexml.reportsPaths=$(Agent.TempDirectory)/**/*.coveragexml
</span></span></span><span class="line"><span class="cl"><span class="sd">        sonar.cs.vstest.reportsPaths=$(Agent.TempDirectory)/**/*.trx</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DotNetCoreCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;.NET Build&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;build&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">arguments</span><span class="p">:</span><span class="w"> </span><span class="p">&gt;</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        --configuration ${{ parameters.buildConfiguration }}
</span></span></span><span class="line"><span class="cl"><span class="sd">        --no-restore</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projects</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.SourcesDirectory)/src/MySolution.sln&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DotNetCoreCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;.NET Test&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;test&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projects</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.SourcesDirectory)/src/MySolution.sln&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">arguments</span><span class="p">:</span><span class="w"> </span><span class="p">&gt;</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        --configuration ${{ parameters.buildConfiguration }}
</span></span></span><span class="line"><span class="cl"><span class="sd">        --collect &#34;Code coverage&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        --no-restore
</span></span></span><span class="line"><span class="cl"><span class="sd">        --no-build</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">CodeCoverage-Format-Convertor@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;CodeCoverage Format Converter (to allow import to SonarQube)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">ProjectDirectory</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Agent.TempDirectory)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># set the version of JAVA required by the dependency-check-build-task</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">JavaToolInstaller@0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">versionSpec</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;17&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jdkArchitectureOption</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;x64&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jdkSourceOption</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;PreInstalled&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">dependency-check-build-task@6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;Run OWASP Vulnerability Scan&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${{ parameters.sonarQubeProjectName }}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">scanPath</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.SourcesDirectory)/src/**&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">format</span><span class="p">:</span><span class="w"> </span><span class="l">HTML, JSON</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">reportsDirectory</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.ArtifactStagingDirectory)/vulnerabilityscan&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">uploadReports</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w"> </span><span class="c"># false as we publish the reports as an artifact with a name of our choice</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">additionalArguments</span><span class="p">:</span><span class="w"> </span><span class="p">&gt;</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        --nvdApiKey &#34;${{ parameters.nvdApiKey }}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># Not needed if uploadReports: true above</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PublishPipelineArtifact@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;Publish Vulnerability Scan Report&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">targetPath</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.ArtifactStagingDirectory)/vulnerabilityscan&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">publishLocation</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;pipeline&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">artifactName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${{ parameters.artifactName }}VulnerabilityScan&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">SonarQubeAnalyze@7</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Complete the SonarQube analysis&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jdkversion</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;JAVA_HOME_17_X64&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">SonarQubePublish@7</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Publish Quality Gate Result&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pollingTimeoutSec</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;300&#34;</span><span class="w">
</span></span></span></code></pre></div><h1 id="in-summary">In Summary</h1>
<p>It just shows you have to budget some time to keep your CI/CD automation up to date on any project.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Grey Matter Podcast &#39;Secure by design: The DevSecOps mindset&#39;</title>
      <link>https://blog.richardfennell.net/posts/grey-matter-nov25-podcast/</link>
      <pubDate>Fri, 28 Nov 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/grey-matter-nov25-podcast/</guid>
      <description>&lt;p&gt;Another podcast I recently recorded with our friends at &lt;a href=&#34;https://greymatter.com/&#34;&gt;Grey Matter&lt;/a&gt; has just been published &lt;a href=&#34;https://shows.acast.com/the-grey-matter-podcast/episodes/secure-by-design-the-devsecops-mindset&#34;&gt;Secure by design: The DevSecOps mindset&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://shows.acast.com/the-grey-matter-podcast/episodes/secure-by-design-the-devsecops-mindset&#34;&gt;&lt;img alt=&#34;Podcast image&#34; loading=&#34;lazy&#34; src=&#34;https://open-images.acast.com/shows/65b7c372124cd20018028989/1764241771965-f008e476-919a-4f4d-ba50-fe2b4cc4601d.jpeg?height=750&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Another podcast I recently recorded with our friends at <a href="https://greymatter.com/">Grey Matter</a> has just been published: <a href="https://shows.acast.com/the-grey-matter-podcast/episodes/secure-by-design-the-devsecops-mindset">Secure by design: The DevSecOps mindset</a></p>
<p><a href="https://shows.acast.com/the-grey-matter-podcast/episodes/secure-by-design-the-devsecops-mindset"><img alt="Podcast image" loading="lazy" src="https://open-images.acast.com/shows/65b7c372124cd20018028989/1764241771965-f008e476-919a-4f4d-ba50-fe2b4cc4601d.jpeg?height=750"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Interesting side effect with the Azure DevOps cache if misconfigured</title>
      <link>https://blog.richardfennell.net/posts/interesting-side-effect-with-azdo-cache/</link>
      <pubDate>Fri, 31 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/interesting-side-effect-with-azdo-cache/</guid>
      <description>&lt;p&gt;I recently came across an interesting side effect with the Azure DevOps cache task if its settings are not correctly configured. One that caused me to get somewhat confused before I realised what had occurred.&lt;/p&gt;
&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;I had a working pipeline that, as part of its build process, ran the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=dependency-check.dependencycheck&#34;&gt;OWASP Dependency Checker task&lt;/a&gt;. This can be slow to run as it has to download the current vulnerability database. To try to speed up my builds I have been using the &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/cache-v2?view=azure-pipelines&#34;&gt;cache task&lt;/a&gt; to cache the current pipeline run&amp;rsquo;s downloaded vulnerability database, so on the next run the vast majority of the database is already downloaded.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently came across an interesting side effect with the Azure DevOps cache task if its settings are not correctly configured, one that caused me to get somewhat confused before I realised what had occurred.</p>
<h1 id="the-problem">The Problem</h1>
<p>I had a working pipeline that, as part of its build process, ran the <a href="https://marketplace.visualstudio.com/items?itemName=dependency-check.dependencycheck">OWASP Dependency Checker task</a>. This can be slow to run as it has to download the current vulnerability database. To try to speed up my builds I have been using the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/cache-v2?view=azure-pipelines">cache task</a> to cache the current pipeline run&rsquo;s downloaded vulnerability database, so on the next run the vast majority of the database is already downloaded.</p>
<p>The pipeline YAML is as follows</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Find the NVD DB path to cache</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="l">inline</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        $nvdcachepath = $(get-childitem &#34;$(Agent.WorkFolder)\_tasks\dependency-check-build-task*\*.*.*\dependency-check\data&#34;).FullName
</span></span></span><span class="line"><span class="cl"><span class="sd">        echo &#34;##vso[task.setvariable variable=nvdcachepath;]$nvdcachepath&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">Cache@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Cache NVD data</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">key</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;&#34;NVDCache&#34; | &#34;$(Agent.OS)&#34;&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l">$(nvdcachepath)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">restoreKeys</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        NVDCache | &#34;$(Agent.OS)&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        NVDCache</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># other build tasks</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">JavaToolInstaller@0 </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Install Java needed for the Dependency Check</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">versionSpec</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;11&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jdkArchitectureOption</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;x64&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jdkSourceOption</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;PreInstalled&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">dependency-check.dependencycheck.dependency-check-build-task.dependency-check-build-task@6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Dependency Check</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">projectName</span><span class="p">:</span><span class="w"> </span><span class="l">Identity Server</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">scanPath</span><span class="p">:</span><span class="w"> </span><span class="l">CCC.Web.IdentityAndSSO.IdentityServer</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">format</span><span class="p">:</span><span class="w"> </span><span class="l">HTML,XML</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">additionalArguments</span><span class="p">:</span><span class="w"> </span>--<span class="l">nvdApiKey $(nvdapikey)</span><span class="w">
</span></span></span></code></pre></div><p>This has all been working well, but the YAML has a potential flaw; can you spot it?</p>
<p>The issue appeared when the dependency check task was temporarily disabled because the server that provides the OWASP vulnerability database was unavailable.</p>
<p>This meant that the dependency check task was not downloaded, so the PowerShell script to find the vulnerability database folder returned an empty string as it could find no matching folder.</p>
<p>This is important as it is this string that controls what is cached. With no folder passed into the cache task, the whole of the pipeline working folder is cached. In effect, restoring the cache checks out the source as it was on the last successful build on the branch. So you are not building the commit you think you are, but an older commit: the source code that was cached.</p>
<h1 id="the-solution">The Solution</h1>
<p>There are a number of options to avoid this problem</p>
<ol>
<li>Don&rsquo;t use the Cache task</li>
<li>Don&rsquo;t disable any task that produces folders your Cache task is meant to cache</li>
<li>If you disable a task whose data you cache, also disable the Cache task</li>
<li>Put some logic around the cache task, e.g.
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">Cache@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">condition</span><span class="p">:</span><span class="w"> </span><span class="l">and(succeeded(), ne(variables[&#39;nvdcachepath&#39;], &#39;&#39;))</span><span class="w">
</span></span></span></code></pre></div></li>
</ol>
<p>All are valid for different scenarios; the choice is down to your use case.</p>
<p>The overall point is to make sure you are caching what you think you are caching; if not constrained, the Cache task will cache everything it can.</p>
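<p>Putting option 4 together, a minimal sketch of the guarded Cache task, reusing the <code>nvdcachepath</code> variable set by the PowerShell step earlier in the pipeline:</p>
<pre tabindex="0"><code># skip the whole task if the PowerShell step found no folder to cache
- task: Cache@2
  displayName: Cache NVD data
  condition: and(succeeded(), ne(variables['nvdcachepath'], ''))
  inputs:
    key: '"NVDCache" | "$(Agent.OS)"'
    path: $(nvdcachepath)
    restoreKeys: |
      NVDCache | "$(Agent.OS)"
      NVDCache
</code></pre>
<p>With the condition in place, a run where the dependency check task is disabled simply skips the cache step rather than silently caching the whole working folder.</p>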
]]></content:encoded>
    </item>
    <item>
      <title>Getting a Surface Hub v1 Working with a Replacement PC</title>
      <link>https://blog.richardfennell.net/posts/getting_a_surfacehub_working_with_a_replacement_pc/</link>
      <pubDate>Tue, 28 Oct 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting_a_surfacehub_working_with_a_replacement_pc/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Updated 30-Oct-2025&lt;/strong&gt; Added more details on Screen Refresh Rate and Teams Room App&lt;/p&gt;&lt;/blockquote&gt;
&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;We have owned a Surface Hub v1 for a number of years, and it has served us well. However, with Microsoft ending support for Windows 10 it was in danger of becoming a large piece of sculpture in the office. This is not just because we did not want to run a Windows 10 device when security patches were not available, but that the embedded version of Teams would not even load.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<blockquote>
<p><strong>Updated 30-Oct-2025</strong> Added more details on Screen Refresh Rate and Teams Room App</p></blockquote>
<h1 id="the-problem">The Problem</h1>
<p>We have owned a Surface Hub v1 for a number of years, and it has served us well. However, with Microsoft ending support for Windows 10 it was in danger of becoming a large piece of sculpture in the office. This is not just because we did not want to run a Windows 10 device when security patches were not available, but that the embedded version of Teams would not even load.</p>
<p>In theory the answer was easy: <a href="https://learn.microsoft.com/en-us/surface-hub/surface-hub-v1-plan-eos">flip a switch on the Surface Hub so it uses an external replacement PC</a> as opposed to its built-in one. However, this process turned out to be harder than we expected as the Surface Hub is very picky over hardware settings.</p>
<h1 id="the-solution">The Solution</h1>
<p>The whole process is very well documented by <a href="https://rwold.net/how-to-use-surface-hub-v1-as-external-interactive-display/">Ryan Wold</a>, so I won&rsquo;t repeat the details. However, I will mention the kit and settings we ended up using as we found getting any output on the Surface Hub meant using exactly the correct settings.</p>
<p>We aimed to use spare kit we had around, so our replacement PC was an old <a href="https://www.lenovo.com/us/en/p/desktops/thinkcentre/m-series-sff/thinkcentre-m910s">Lenovo M910s desktop PC</a>. We picked this particular unit as we had at some point in the past fitted it with an <a href="https://www.nvidia.com/en-us/geforce/graphics-cards/geforce-gtx-750-ti/">Nvidia Geforce GTX 750Ti </a> as we knew that a dedicated GPU would be needed to drive the 4K screen.</p>
<p>The consistent issue we had trying to get this working was that the PC would boot: we would see the BIOS boot screen and Windows would start to load, but as soon as the loading Windows spinner appeared the Surface Hub screen would go blank. If we had a second display attached, we could see that Windows could not detect the Surface Hub as a display.</p>
<p>The critical item, it turned out, was the display cable: it needed to be an active one. We ended up with a <a href="https://www.amazon.co.uk/dp/B0BYJP8QQG?ref=ppx_yo2ov_dt_b_fed_asin_title">BENFEI HDMI to DisplayPort Adapter</a>. Once we used this, some of our display issues were sorted; Windows could detect the Surface Hub consistently.</p>
<p>The other critical step was that we had to set the resolution to 4K (3840 x 2160) and specifically with the refresh rate set at 30Hz. If we had a more powerful GPU we think we could have run at the default rate of 120Hz, but that was not an option with the kit we had.</p>
<p>30Hz is not the default refresh rate when the Surface Hub is detected as a monitor, so we got the blank screen. To change the refresh rate we had to have a second display attached. Not a problem, you might think: make the edit, then remove the second display. However, Windows stores the refresh rates separately for each screen setup, e.g. single display, dual display, etc. This meant that when we disconnected the second display the refresh rate for the Surface Hub returned to its 120Hz default.</p>
<p>So, for now, we had to keep the second display attached.</p>
<p>But not all the setup was so hard: getting the Surface Hub cameras, audio devices, touch screen and pens working was easy. All that was needed was a USB cable and the <a href="https://www.microsoft.com/en-us/download/details.aspx?id=52210&amp;msockid=34c33284916a6b933d7324a3908a6aed">right drivers</a>.</p>
<h1 id="sorting-the-refresh-rate">Sorting the Refresh Rate</h1>
<p>The solution to the refresh rate issue was to use the <a href="https://customresolutionutility.net/">CRU Display Utility</a>.</p>
<p>With the PC running with dual monitors:</p>
<ol>
<li>Download, unzip and run the <strong>CRU Utility</strong></li>
<li>Select the <strong>Surface Hub (active)</strong> display and alter the detailed resolutions order. You need the 30Hz option to be the first entry in the list</li>
<li>Repeat Step 2 for any other <strong>Surface Hub</strong> entries in the list (not sure if this step is essential, but quick to do)</li>
<li>Press Save and exit CRU</li>
<li><strong>This is the important step we initially missed:</strong> use the CRU-provided <strong>restart.exe</strong> tool to restart the graphics subsystem so it re-reads the changed order</li>
<li>Disconnect the 2nd monitor; a display rescan was done, but unlike in the past the working 30Hz refresh rate was picked.</li>
</ol>
<p>The PC could now be rebooted, with a single monitor, and it still worked.</p>
<h1 id="trying-to-setup-the-team-room-client">Trying to Set Up the Teams Room Client</h1>
<p>It is good that we can now use the Surface Hub for Teams calls again, but having to log in to start Teams is not as nice as being able to invite the device as a location to meetings.</p>
<p>To try to make the device a Teams Room we downloaded the client from the <a href="https://teams.microsoft.com/l/message/48:notes/1761823413289?context=%7B%22contextType%22%3A%22chat%22%2C%22oid%22%3A%228%3Aorgid%3A7523db6f-265c-4c93-8633-81a9965e64a2%22%7D">Windows App Store</a> where it is still confusingly called the &lsquo;Skype Team Room&rsquo;. I guess it must be hard/impossible to change the name of a published package in the store. However, it is the correct one, as the release history shows the app being recently updated and it has the current Teams branding.</p>
<p>When you install and run this application it initially looks OK, but then fails saying</p>
<blockquote>
<p><strong>Microsoft Teams Room device isn&rsquo;t certified</strong></p>
<p>You must insert the HDMI adaptor that came with this device</p></blockquote>
<p>You have the option to restart the application, but you get the same result.</p>
<p>The bottom line is that this application seems to be locked down to certified hardware; there is no option for a DIY &lsquo;build your own unit&rsquo;, or at least not one I have been able to find thus far.</p>
<h1 id="where-we-are-now">Where we are now</h1>
<p>So now we have a working replacement PC connected to the Surface Hub. This PC is domain-joined, so staff can log in for Teams meetings and use all the physical features of the Surface Hub. This is not a perfect replacement, but it at least means the Surface Hub is not just e-waste.</p>
<p>I will post again if we have any more successes.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Changing mySql SSL Certificates when running SnipeIT in Azure</title>
      <link>https://blog.richardfennell.net/posts/mysql-cert-changes-and-snipeit-in-azure/</link>
      <pubDate>Mon, 22 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mysql-cert-changes-and-snipeit-in-azure/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;I have previously &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/setting-up-snipe-it-on-azure&#34;&gt;posted&lt;/a&gt; about setting up &lt;a href=&#34;https://github.com/grokability/snipe-it&#34;&gt;Snipe IT&lt;/a&gt; on Azure using a Docker container running in an Azure WebApp with a MySQL DB.&lt;/p&gt;
&lt;p&gt;I recently had to revisit my setup as the site was failing to load with a DB connection failure. The issue was that the SSL certificate chain used by Microsoft for their hosted MySQL service had changed and my settings needed to be updated.&lt;/p&gt;
&lt;h1 id=&#34;the-solution&#34;&gt;The Solution&lt;/h1&gt;
&lt;ol&gt;
&lt;li&gt;In the Azure Portal find your MySQL instance and select the networking blade, then click the &amp;lsquo;Download SSL Certificate&amp;rsquo; link; you are taken to &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/mysql/flexible-server/how-to-connect-tls-ssl#download-the-public-ssl-certificate&#34;&gt;learn.microsoft.com&lt;/a&gt;
. Download the &lt;a href=&#34;https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem&#34;&gt;DigiCert Global Root G2 certificate&lt;/a&gt; and the &lt;a href=&#34;https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt&#34;&gt;Microsoft RSA Root Certificate Authority 2017 certificate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;You &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/mysql/flexible-server/concepts-root-certificate-rotation#steps&#34;&gt;need to combine these certificates&lt;/a&gt; i.e. put the base64 coded versions of each in a single text file.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;-----BEGIN CERTIFICATE-----
(Content from DigiCertGlobalRootCA.crt.pem)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Content from DigiCertGlobalRootG2.crt.pem)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Content from MicrosoftRSARootCertificateAuthority2017.crt.pem)
-----END CERTIFICATE-----
&lt;/code&gt;&lt;/pre&gt;&lt;/blockquote&gt;
&lt;ol start=&#34;3&#34;&gt;
&lt;li&gt;So, I downloaded the &lt;code&gt;DigiCertGlobalRootCA.crt.pem&lt;/code&gt; file I was using previously from my Snipe IT&amp;rsquo;s Azure File Storage and opened it in a text editor (the first block)&lt;/li&gt;
&lt;li&gt;I then appended the contents of &lt;code&gt;DigiCertGlobalRootG2.crt.pem&lt;/code&gt; to this file (the second block)&lt;/li&gt;
&lt;li&gt;Before I could do the same with the &lt;code&gt;Microsoft RSA Root Certificate Authority 2017.crt&lt;/code&gt; file I had to import the .CRT file onto my local Windows machine, then export it is a Base64 encoded file. Once this was done, this too could be add to the &lt;code&gt;DigiCertGlobalRootCA.crt.pem&lt;/code&gt; file (the third block)&lt;/li&gt;
&lt;li&gt;Once all the edits were made I save the edited file and uploaded it into the Azure Storage to overwrite the old version. This avoid the need to edit any other configuration settings that referenced the certificate file name.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Once this was done my Snipe IT instance loaded as expected.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>I have previously <a href="https://blogs.blackmarble.co.uk/rfennell/setting-up-snipe-it-on-azure">posted</a> about setting up <a href="https://github.com/grokability/snipe-it">Snipe IT</a> on Azure using a Docker container running in an Azure WebApp with a MySQL DB.</p>
<p>I recently had to revisit my setup as the site was failing to load with a DB connection failure. The issue was that the SSL certificate chain used by Microsoft for their hosted MySQL service had changed and my settings needed to be updated.</p>
<h1 id="the-solution">The Solution</h1>
<ol>
<li>In the Azure Portal, find your MySQL instance, select the Networking blade and click the &lsquo;Download SSL Certificate&rsquo; link; you are taken to <a href="https://learn.microsoft.com/en-us/azure/mysql/flexible-server/.how-to-connect-tls-ssl#download-the-public-ssl-certificate">learn.microsoft.com</a>. Download the <a href="https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem">DigiCert Global Root G2 certificate</a> and the <a href="https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt">Microsoft RSA Root Certificate Authority 2017 certificate</a></li>
<li>You <a href="https://learn.microsoft.com/en-us/azure/mysql/flexible-server/concepts-root-certificate-rotation#steps">need to combine these certificates</a>, i.e. put the Base64-encoded version of each into a single text file.</li>
</ol>
<blockquote>
<pre tabindex="0"><code>-----BEGIN CERTIFICATE-----
(Content from DigiCertGlobalRootCA.crt.pem)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Content from DigiCertGlobalRootG2.crt.pem)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(Content from MicrosoftRSARootCertificateAuthority2017.crt.pem)
-----END CERTIFICATE-----
</code></pre></blockquote>
<ol start="3">
<li>So, I downloaded the <code>DigiCertGlobalRootCA.crt.pem</code> file I was using previously from my Snipe IT&rsquo;s Azure File Storage and opened it in a text editor (the first block)</li>
<li>I then appended the contents of <code>DigiCertGlobalRootG2.crt.pem</code> to this file (the second block)</li>
<li>Before I could do the same with the <code>Microsoft RSA Root Certificate Authority 2017.crt</code> file I had to import the .CRT file onto my local Windows machine, then export it as a Base64-encoded file. Once this was done, this too could be added to the <code>DigiCertGlobalRootCA.crt.pem</code> file (the third block)</li>
<li>Once all the edits were made I saved the edited file and uploaded it to Azure Storage, overwriting the old version. This avoided the need to edit any other configuration settings that referenced the certificate file name.</li>
</ol>
<p>Once this was done my Snipe IT instance loaded as expected.</p>
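<p>If you have a shell with standard tools available, the combining steps can also be scripted rather than done in a text editor. The sketch below is illustrative only: the <code>printf</code> lines create placeholder files standing in for the real downloaded certificates, and a real DER-format .crt can be converted to PEM with <code>openssl x509 -inform der</code> instead of the Windows import/export dance.</p>

```shell
# Sketch of the combining steps. Placeholder files stand in for the real
# downloaded certificates; a real DER .crt could be converted first with:
#   openssl x509 -inform der -in "Microsoft RSA Root Certificate Authority 2017.crt" \
#     -out MicrosoftRSARootCertificateAuthority2017.crt.pem
printf -- '-----BEGIN CERTIFICATE-----\n(DigiCertGlobalRootCA)\n-----END CERTIFICATE-----\n' \
  > DigiCertGlobalRootCA.crt.pem
printf -- '-----BEGIN CERTIFICATE-----\n(DigiCertGlobalRootG2)\n-----END CERTIFICATE-----\n' \
  > DigiCertGlobalRootG2.crt.pem
printf -- '-----BEGIN CERTIFICATE-----\n(MicrosoftRSARoot2017)\n-----END CERTIFICATE-----\n' \
  > MicrosoftRSARootCertificateAuthority2017.crt.pem

# Append the two new roots after the original certificate, writing back over
# the file name the WebApp configuration already references
cat DigiCertGlobalRootG2.crt.pem MicrosoftRSARootCertificateAuthority2017.crt.pem \
  >> DigiCertGlobalRootCA.crt.pem

grep -c 'BEGIN CERTIFICATE' DigiCertGlobalRootCA.crt.pem   # expect 3
```

<p>Keeping the original certificate first in the bundle and reusing its file name means no other configuration settings need to change.</p>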
]]></content:encoded>
    </item>
    <item>
      <title>Another confusing Azure DevOps Pipelines YAML error message using StringList parameters</title>
      <link>https://blog.richardfennell.net/posts/another-confusing-ado-pipelines-yaml-error-message/</link>
      <pubDate>Tue, 02 Sep 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/another-confusing-ado-pipelines-yaml-error-message/</guid>
      <description>&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;The recent addition to Azure DevOps of the &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/release-notes/2025/sprint-257-update#stringlist-parameter-type&#34;&gt;&lt;code&gt;StringList&lt;/code&gt; parameter type&lt;/a&gt; can be really useful to dynamically create parallel stages or jobs in an Azure DevOps YAML pipeline.&lt;/p&gt;
&lt;p&gt;A &lt;code&gt;StringList&lt;/code&gt; parameter can be used to present a list of values to the user queuing a pipeline run, thus allowing the selection of one or more values that can be accessed using a YAML expression loop.&lt;/p&gt;
&lt;p&gt;This can be combined with YAML templates, where the &lt;code&gt;StringList&lt;/code&gt; is passed into a template as an &lt;code&gt;Object&lt;/code&gt;. Note that you can&amp;rsquo;t use the &lt;code&gt;StringList&lt;/code&gt; type for a parameter in the template definition; however, this is not a problem, as there is a dynamic conversion from &lt;code&gt;StringList&lt;/code&gt; to &lt;code&gt;Object&lt;/code&gt;, e.g.&lt;/p&gt;
      <content:encoded><![CDATA[<h1 id="introduction">Introduction</h1>
<p>The recent addition to Azure DevOps of the <a href="https://learn.microsoft.com/en-us/azure/devops/release-notes/2025/sprint-257-update#stringlist-parameter-type"><code>StringList</code> parameter type</a> can be really useful to dynamically create parallel stages or jobs in an Azure DevOps YAML pipeline.</p>
<p>A <code>StringList</code> parameter can be used to present a list of values to the user queuing a pipeline run, thus allowing the selection of one or more values that can be accessed using a YAML expression loop.</p>
<p>This can be combined with YAML templates, where the <code>StringList</code> is passed into a template as an <code>Object</code>. Note that you can&rsquo;t use the <code>StringList</code> type for a parameter in the template definition; however, this is not a problem, as there is a dynamic conversion from <code>StringList</code> to <code>Object</code>, e.g.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="c"># Azure-Pipeline.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">environments</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">stringList</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">default</span><span class="p">:</span><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">dev</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">values</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">dev</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">test</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">prod</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">trigger</span><span class="p">:</span><span class="w"> </span><span class="l">none</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">extends</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">template.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">environments</span><span class="p">:</span><span class="w"> </span><span class="l">${{parameters.environments }}</span><span class="w">
</span></span></span></code></pre></div><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="c"># template.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">environments</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">object</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">stages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">Deploy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">DeployJob</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="nt">checkout</span><span class="p">:</span><span class="w"> </span><span class="l">none</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="l">${{ each value in parameters.environments }}:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">                echo &#34;Selected environments: ${{ value }}&#34;</span><span class="w">
</span></span></span></code></pre></div><h1 id="the-problem">The Problem</h1>
<p>Whilst implementing this new parameter type on an existing pipeline we got a very strange set of error messages: &lsquo;Unexpected a mapping&rsquo;, &lsquo;Unexpected at least one-value pair in the mapping&rsquo; and &lsquo;Unexpected state while attempting to read the mapping end state&rsquo;.</p>
<p><img alt="StringList Error" loading="lazy" src="/images/rfennell/stringlist-error.png"></p>
<p>After much experimentation we found the issue was the indenting of a <code>steps</code> block later in the YAML. When the new loop expression was added, only part of the jobs block had been correctly indented.</p>
<p>The indenting was</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">${{ each environment in parameters.Environments }}:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">build_${{ environment }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="l">...</span><span class="w">
</span></span></span></code></pre></div><p>When it should have been</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">${{ each environment in parameters.Environments }}:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">build_${{ environment }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="l">...</span><span class="w">
</span></span></span></code></pre></div><p>So as usual when you get an unexpected and unclear error in Azure DevOps YAML, assume the problem is due to whitespace.</p>
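<p>The confusing messages make more sense if you look at what a generic YAML parser does with the two versions. The sketch below uses the third-party PyYAML package, and the pipeline fragment is illustrative rather than taken from the real pipeline: with the wrong indentation, <code>steps</code> parses as a second key sitting alongside the each-expression in the same mapping, which is why Azure DevOps complains about unexpected value pairs; with the correct indentation it nests inside the job.</p>

```python
import yaml  # third-party PyYAML package

# Mis-indented: 'steps' lines up with the ${{ each }} key, not with 'job'
bad = """\
jobs:
- ${{ each environment in parameters.Environments }}:
  - job: build_${{ environment }}
  steps:
  - script: echo hi
"""

# Correct: 'steps' is indented under the job
good = """\
jobs:
- ${{ each environment in parameters.Environments }}:
  - job: build_${{ environment }}
    steps:
    - script: echo hi
"""

# Bad version: the sequence item is a mapping with TWO keys -
# the each-expression plus a stray 'steps' (the unexpected value pair)
print(sorted(yaml.safe_load(bad)["jobs"][0]))

# Good version: the each-expression is the only key, as Azure DevOps expects
print(sorted(yaml.safe_load(good)["jobs"][0]))
```

<p>Azure DevOps template expressions are just ordinary strings to a generic parser, so a check like this only reveals structural problems, but that is exactly the class of problem these errors point at.</p>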
]]></content:encoded>
    </item>
    <item>
      <title>Finally I have working SONOFF SNZB-02P sensors with Home Assistant</title>
      <link>https://blog.richardfennell.net/posts/finally-have-working-temperature-sensors-with-ha/</link>
      <pubDate>Fri, 22 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/finally-have-working-temperature-sensors-with-ha/</guid>
      <description>&lt;p&gt;&lt;img alt=&#34;SONOFF Sensors&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/images/rfennell/ha-sensors.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;A while ago I &lt;a href=&#34;https://blog.richardfennell.net/posts/late-to-the-game-with-homeassistant/&#34;&gt;posted about starting to use Home Assistant&lt;/a&gt;. This has been working well for monitoring our &lt;a href=&#34;https://github.com/binsentsu/home-assistant-solaredge-modbus/wiki/Using-Templated-Sensors-to-Calculate-Power-Flow-and-Energy&#34;&gt;SolarEdge PV and Battery system&lt;/a&gt;, controlling whether we should charge the battery with cheap overnight power if there has not been enough sun to fully charge it.&lt;/p&gt;
&lt;p&gt;Bitten by the Home Assistant bug, I decided I wanted to add more sensors to the system, so bought some cheap Zigbee equipment from AliExpress.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><img alt="SONOFF Sensors" loading="lazy" src="/images/rfennell/ha-sensors.png"></p>
<p>A while ago I <a href="https://blog.richardfennell.net/posts/late-to-the-game-with-homeassistant/">posted about starting to use Home Assistant</a>. This has been working well for monitoring our <a href="https://github.com/binsentsu/home-assistant-solaredge-modbus/wiki/Using-Templated-Sensors-to-Calculate-Power-Flow-and-Energy">SolarEdge PV and Battery system</a>, controlling whether we should charge the battery with cheap overnight power if there has not been enough sun to fully charge it.</p>
<p>Bitten by the Home Assistant bug, I decided I wanted to add more sensors to the system, so bought some cheap Zigbee equipment from AliExpress:</p>
<ul>
<li><a href="https://sonoff.tech/en-uk/products/sonoff-zigbee-temperature-and-humidity-sensor-snzb-02p">SONOFF Zigbee Temperature and Humidity Sensor | SNZB-02P</a></li>
<li><a href="https://sonoff.tech/en-uk/products/sonoff-zigbee-3-0-usb-dongle-plus-zbdongle-p">SONOFF Zigbee 3.0 USB Dongle Plus | ZBDongle-P</a></li>
</ul>
<p>The installation was easy using <a href="https://www.home-assistant.io/integrations/zha/">ZHA</a>; the gateway dongle and sensors were all found. However, the sensor readings I got were strange. On startup a reading was received; this stayed as a flat line for 6 hours, then nothing for 6 hours, then a new, usually different, value for 6 hours, and on and on.</p>
<p>After looking at forums it seems that I am not alone in having this problem with SONOFF kit, and nothing seemed to fix the problem. The general consensus seemed to be to avoid the brand.</p>
<p>Well, yesterday they fixed themselves, or rather the sensors got a firmware update, delivered automatically via ZHA, from 0x00002100 to 0x00002200, which seems to have fixed the problem. As the chart at the top of this post shows, I now have the type of data I was expecting.</p>
<p>So if you are seeing similar problems with SNZB-02P sensors, look out for a firmware update.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Your out of support DevOps tooling is hurting you</title>
      <link>https://blog.richardfennell.net/posts/your-out-of-support-devops-tooling-is-hurting-you/</link>
      <pubDate>Mon, 18 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/your-out-of-support-devops-tooling-is-hurting-you/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Note that this is a repost of my &lt;a href=&#34;https://www.linkedin.com/pulse/your-out-support-devops-tooling-hurting-you-richard-fennell-z8wme/&#34;&gt;LinkedIn article&lt;/a&gt; of the same name&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;h1 id=&#34;the-changing-face-of-devops&#34;&gt;The changing face of DevOps&lt;/h1&gt;
&lt;p&gt;Over the years the style of DevOps consultancy I have done has changed, along with its name: from the simple &amp;lsquo;source control&amp;rsquo;, to SDLC, then ALM, and now DevOps.&lt;/p&gt;
&lt;p&gt;Back in the days of Team Foundation Server (TFS), I used to do a lot of on-premise installs and upgrades. Getting our customers up and running, and helping them improve their adoption of the tools.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<blockquote>
<p><em>Note that this is a repost of my <a href="https://www.linkedin.com/pulse/your-out-support-devops-tooling-hurting-you-richard-fennell-z8wme/">LinkedIn article</a> of the same name</em></p></blockquote>
<h1 id="the-changing-face-of-devops">The changing face of DevOps</h1>
<p>Over the years the style of DevOps consultancy I have done has changed, along with its name: from the simple &lsquo;source control&rsquo;, to SDLC, then ALM, and now DevOps.</p>
<p>Back in the days of Team Foundation Server (TFS), I used to do a lot of on-premise installs and upgrades. Getting our customers up and running, and helping them improve their adoption of the tools.</p>
<p>With the move to cloud services the profile of this work changed to increasingly helping customers migrate their tooling and processes to the cloud. Specifically doing TFS/Azure DevOps Server to Azure DevOps Services cloud migrations, and then helping them get the best out of this new platform.</p>
<p>Today the profile has changed again. The majority of our clients have already moved to the cloud, so most of my engagements are focused on best practice usage of the tools and the day-to-day development lifecycle, as opposed to installation, upgrade and migration.</p>
<p>That does not mean I don’t see installation, upgrade and migration projects in both the Azure DevOps and GitHub space, but certainly not in the volumes I used to.</p>
<h1 id="who-is-left-on-premises-and-why">Who is left on premises and why?</h1>
<p>Now we seem to be past the peak of cloud migrations, most clients who want to make the move have moved. Those remaining on-premises fall into two broad groups:</p>
<ul>
<li>Companies with a governance reason, or maybe just a perceived reason, to stay on-premises. It is worth noting that I do see clients who assume a governance restriction, but don’t check with their regulatory/auditor body for current guidance to see what their options are. They are often surprised that the cloud, when used securely, is a viable option for them.</li>
<li>Companies who have neglected their on-premise server</li>
</ul>
<p>It is the latter I want to talk about in this article.</p>
<h1 id="falling-into-the-gap-between-it-and-development">Falling into the gap between IT and Development</h1>
<p>I have often seen that it is easy for a TFS/Azure DevOps Server to fall into a gap between the IT and Development teams. Wrongly, the server is not considered a business-critical system to be managed by IT, and the Development team does not treat it with the rigor an IT team would.</p>
<p>Because of this, I still see a number of old TFS/Azure DevOps Servers that could well have not been patched since their installation, maybe over a decade ago. The only reason they are being looked at now is the underlying Windows Operating Systems and SQL versions are reaching end of life and company wide IT audits are forcing changes to be made.</p>
<p>I have long said that for an on-premises TFS/Azure DevOps Server you should, on top of the usual OS patching, be patching Azure DevOps at least every 3 to 6 months. This is the rough cadence at which features from the cloud-hosted Azure DevOps Services are packaged and made available to the on-premise version. These updates include not just new features, but also critical security patches.</p>
<p>It is common that customers who have not patched their server have also not kept their development practices up to date. They often lack CI/CD processes, which I feel are the core of modern DevOps good practice, the heartbeat of the process, and are commonly still using older source control patterns, e.g. TFVC. This is not in itself wrong, but it is potentially limiting, as modern practices and the skillsets of newer/younger developers are going to be based around the de-facto standard of Git.</p>
<h1 id="but-my-upgrade-is-too-complex">But my upgrade is too complex</h1>
<p>The issue with upgrades for such old systems is that there is often no direct path to the current version of Azure DevOps Server. Upgrades may require multiple steps, using temporary intermediate environments to address the complexities due to the limitations of supported SQL and OS versions that underpin TFS/Azure DevOps Server.</p>
<p><img alt="Azure DevOps Migration Path" loading="lazy" src="https://learn.microsoft.com/en-us/azure/devops/server/upgrade/media/upgrade-2022.png"></p>
<p>A common question from clients is ‘how long is this upgrade going to take’? A very difficult one to answer as everyone’s systems are different. You can think of the time required for this upgrade as taking the time for all the smaller updates you should have done over the life of your server and bundling it up into one massive job.</p>
<p>The majority of the time in a TFS upgrade is related to SQL operations, backups, copies, restores and schema/data updates. The sheer volume of data to be moved is usually the limiting factor. Anything involving potentially Terabytes of data takes time.</p>
<p>For major upgrades such as this, I always recommend a dry run to make sure the process is understood, timings known and to give the client&rsquo;s staff the opportunity for training and adoption of the new tooling available, such as Git and modern CI/CD.</p>
<p>But you can’t get away from the fact that you will probably need an amount of downtime, while your old TFS server is not available, before the new Azure DevOps Server is ready.</p>
<h1 id="but-cant-i-jump-to-the-cloud-now">But can’t I jump to the cloud now?</h1>
<p>Once I start this form of major upgrade engagement it is common for clients to ask &ldquo;can’t I just bypass all this and go to the cloud?&rdquo;. Of course the answer is ‘yes, but&hellip;’.</p>
<p>If they wish to do a full fidelity migration to Azure DevOps Services then they still have to get to the current version of Azure DevOps Server as the migration start point. So they cannot avoid the on-premise upgrades.</p>
<p>If they don’t need a full fidelity upgrade, or are considering a move to another toolset such as GitHub Enterprise, there are more options that do not require an update to the on-premise server, but these come with constraints and compromises, as you would expect whenever you swap toolsets.</p>
<h1 id="so-are-you-in-this-position">So are you in this position?</h1>
<p>The moral of the story is don’t neglect your DevOps tools, a craftsman does not let their tools get rusty and neither should you.</p>
<p>If you are not keeping your DevOps tool chain up to date you are hampering the efficiency and potentially the recruitment/retention of your developers as they look to work on projects using modern tooling.</p>
<p>So, if this sounds like your situation, why not get in touch with me or <a href="https://www.blackmarble.com/contact/">Black Marble</a> to discuss your DevOps options?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Recent Grey Matter Events</title>
      <link>https://blog.richardfennell.net/posts/grey-matter-events/</link>
      <pubDate>Tue, 12 Aug 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/grey-matter-events/</guid>
      <description>&lt;p&gt;&lt;img alt=&#34;Grey Matter logo&#34; loading=&#34;lazy&#34; src=&#34;https://greymatter.com/wp-content/uploads/2025/01/greymatter-logo-25.svg&#34;&gt;&lt;/p&gt;
&lt;p&gt;I have recently done a couple of streamable events with our friends at &lt;a href=&#34;https://greymatter.com/&#34;&gt;Grey Matter&lt;/a&gt;. Both are now available to enjoy on demand:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://shows.acast.com/the-grey-matter-podcast/episodes/inside-github&#34;&gt;Grey Matter Talks Tech podcast &amp;lsquo;Inside GitHub: The Platform Powering Open Source&amp;rsquo;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://event.on24.com/eventRegistration/console/apollox/mainEvent?&amp;amp;eventid=4997493&amp;amp;sessionid=1&amp;amp;username=&amp;amp;partnerref=&amp;amp;format=fhvideo1&amp;amp;mobile=&amp;amp;flashsupportedmobiledevice=&amp;amp;helpcenter=&amp;amp;key=925FBA53595C0B90D262F5B1A9C917B3&amp;amp;newConsole=true&amp;amp;nxChe=true&amp;amp;newTabCon=true&amp;amp;consoleEarEventConsole=true&amp;amp;consoleEarCloudApi=false&amp;amp;text_language_id=en&amp;amp;playerwidth=748&amp;amp;playerheight=526&amp;amp;eventuserid=770896916&amp;amp;contenttype=A&amp;amp;mediametricsessionid=662933860&amp;amp;mediametricid=7020330&amp;amp;usercd=770896916&amp;amp;mode=launch&#34;&gt;Grey Matter&amp;rsquo;s on-demand Webinar &amp;lsquo;Inside the Developer&amp;rsquo;s Toolkit&amp;rsquo;&lt;/a&gt; with &lt;a href=&#34;https://www.jetbrains.com/&#34;&gt;JetBrains&lt;/a&gt; and &lt;a href=&#34;https://smartbear.com/&#34;&gt;SmartBear&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If these sorts of subjects are of interest, why not come to an in-person event?&lt;/p&gt;
&lt;p&gt;I am speaking, as are many other great speakers including my colleague &lt;a href=&#34;https://blogs.blackmarble.co.uk/awilson/&#34;&gt;Andrew Wilson&lt;/a&gt;, at &lt;a href=&#34;https://greymatter.com/grey-matter-tech-summit/&#34;&gt;Grey Matter&amp;rsquo;s Tech Summit&lt;/a&gt; in London on the 24th of September, and best news of all, it is free.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><img alt="Grey Matter logo" loading="lazy" src="https://greymatter.com/wp-content/uploads/2025/01/greymatter-logo-25.svg"></p>
<p>I have recently done a couple of streamable events with our friends at <a href="https://greymatter.com/">Grey Matter</a>. Both are now available to enjoy on demand:</p>
<ul>
<li><a href="https://shows.acast.com/the-grey-matter-podcast/episodes/inside-github">Grey Matter Talks Tech podcast &lsquo;Inside GitHub: The Platform Powering Open Source&rsquo;</a></li>
<li><a href="https://event.on24.com/eventRegistration/console/apollox/mainEvent?&amp;eventid=4997493&amp;sessionid=1&amp;username=&amp;partnerref=&amp;format=fhvideo1&amp;mobile=&amp;flashsupportedmobiledevice=&amp;helpcenter=&amp;key=925FBA53595C0B90D262F5B1A9C917B3&amp;newConsole=true&amp;nxChe=true&amp;newTabCon=true&amp;consoleEarEventConsole=true&amp;consoleEarCloudApi=false&amp;text_language_id=en&amp;playerwidth=748&amp;playerheight=526&amp;eventuserid=770896916&amp;contenttype=A&amp;mediametricsessionid=662933860&amp;mediametricid=7020330&amp;usercd=770896916&amp;mode=launch">Grey Matter&rsquo;s on-demand Webinar &lsquo;Inside the Developer&rsquo;s Toolkit&rsquo;</a> with <a href="https://www.jetbrains.com/">JetBrains</a> and <a href="https://smartbear.com/">SmartBear</a></li>
</ul>
<p>If these sorts of subjects are of interest, why not come to an in-person event?</p>
<p>I am speaking, as are many other great speakers including my colleague <a href="https://blogs.blackmarble.co.uk/awilson/">Andrew Wilson</a>, at <a href="https://greymatter.com/grey-matter-tech-summit/">Grey Matter&rsquo;s Tech Summit</a> in London on the 24th of September, and best news of all, it is free.</p>
<p>Hope to see you there!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why can&#39;t I see my Lenovo Thunderbolt 3 Dock?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-see-my-thunderbolt-dock/</link>
      <pubDate>Thu, 31 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-see-my-thunderbolt-dock/</guid>
      <description>&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;I recently rebuilt my trusty Lenovo X1 Carbon laptop. As you usually find, a fresh install of Windows 11 meant a leap in performance.&lt;/p&gt;
&lt;p&gt;All was good for a couple of weeks, until I started to get problems with my Thunderbolt dock. It still supplied power, and the HDMI connection to my external monitor worked, but the USB devices could not be seen.&lt;/p&gt;
&lt;p&gt;I also checked with the &lt;a href=&#34;https://support.lenovo.com/us/en/solutions/nvid500262-lenovo-dock-manager-overview&#34;&gt;Lenovo Dock Manager&lt;/a&gt; and it could no longer see the dock.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-problem">The Problem</h1>
<p>I recently rebuilt my trusty Lenovo X1 Carbon laptop. As you usually find, a fresh install of Windows 11 meant a leap in performance.</p>
<p>All was good for a couple of weeks, until I started to get problems with my Thunderbolt dock. It still supplied power, and the HDMI connection to my external monitor worked, but the USB devices could not be seen.</p>
<p>I also checked with the <a href="https://support.lenovo.com/us/en/solutions/nvid500262-lenovo-dock-manager-overview">Lenovo Dock Manager</a> and it could no longer see the dock.</p>
<h1 id="the-solution">The Solution</h1>
<p>I am not sure why it happened, but the fix was:</p>
<ol>
<li>Open Device Manager: Press the Windows key + X and select &ldquo;Device Manager&rdquo;.</li>
<li>Show Hidden Devices: Click &ldquo;View&rdquo; in the menu bar and select &ldquo;Show hidden devices&rdquo;.</li>
<li>Locate Thunderbolt Controller: Expand &ldquo;System Devices&rdquo; and find the &ldquo;Thunderbolt Controller&rdquo; entry.</li>
<li>Uninstall: Right-click on &ldquo;Thunderbolt Controller&rdquo; and select &ldquo;Uninstall device&rdquo;. In the &ldquo;Uninstall Device&rdquo; window, check the box that says &ldquo;Delete the driver software for this device&rdquo;.</li>
</ol>
<p>Once this was done I disconnected the dock and reconnected it, and this dialog appeared</p>
<p><img alt="Dock Dialog" loading="lazy" src="/images/rfennell/dockfix.png"></p>
<p>I had to approve access to the dock.</p>
<p>My guess is that somehow this access control setting had been left in the wrong state. I have no idea how, as I was never shown this dialog after the Windows reinstall, but at least it is fixed now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creating a GitHub App based Azure DevOps Pipelines Service Connection</title>
      <link>https://blog.richardfennell.net/posts/setting-up-github-app-ado-service-connection-on-another-org/</link>
      <pubDate>Wed, 30 Jul 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-up-github-app-ado-service-connection-on-another-org/</guid>
      <description>&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;The &lt;a href=&#34;https://github.com/marketplace/azure-pipelines&#34;&gt;GitHub &amp;lsquo;Azure Pipelines&amp;rsquo; App&lt;/a&gt; enables some really interesting &amp;lsquo;better together&amp;rsquo; scenarios mixing the usage of Azure DevOps Pipelines for CI/CD processes while your source is stored on GitHub. It is particularly useful if an enterprise is migrating towards GitHub Enterprise over a period of time, maybe using &lt;a href=&#34;https://docs.github.com/en/enterprise-cloud@latest/migrations/using-github-enterprise-importer/migrating-from-azure-devops-to-github-enterprise-cloud/migrating-repositories-from-azure-devops-to-github-enterprise-cloud&#34;&gt;GitHub Enterprise Importer (GEI)&lt;/a&gt; to do the migration.&lt;/p&gt;
&lt;p&gt;I was recently working on such a migration for a client that involved numerous Azure DevOps organisations. In the past someone had set up the GitHub &amp;lsquo;Azure Pipelines&amp;rsquo; App on their GitHub organisation and got it working with one of their Azure DevOps organisations.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-problem">The Problem</h1>
<p>The <a href="https://github.com/marketplace/azure-pipelines">GitHub &lsquo;Azure Pipelines&rsquo; App</a> enables some really interesting &lsquo;better together&rsquo; scenarios mixing the usage of Azure DevOps Pipelines for CI/CD processes while your source is stored on GitHub. It is particularly useful if an enterprise is migrating towards GitHub Enterprise over a period of time, maybe using <a href="https://docs.github.com/en/enterprise-cloud@latest/migrations/using-github-enterprise-importer/migrating-from-azure-devops-to-github-enterprise-cloud/migrating-repositories-from-azure-devops-to-github-enterprise-cloud">GitHub Enterprise Importer (GEI)</a> to do the migration.</p>
<p>I was recently working on such a migration for a client that involved numerous Azure DevOps organisations. In the past someone had set up the GitHub &lsquo;Azure Pipelines&rsquo; App on their GitHub organisation and got it working with one of their Azure DevOps organisations.</p>
<p>The question was how do you set up a GitHub App based service connection in their other Azure DevOps organisations, without breaking what was already installed?</p>
<h1 id="service-connections--github-authentication">Service Connections &amp; GitHub Authentication</h1>
<p>The way Azure DevOps Pipelines communicate with GitHub is via a Service Connection.</p>
<p>To create a service connection you go to <code>Project Settings &gt; Pipelines &gt; Service Connections</code> then press the <code>New Service Connection</code> button, picking the service you require.</p>
<p>When you do this and pick the <code>GitHub</code> option, you are offered two ways to authenticate:</p>
<ul>
<li>A <a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens">PAT</a> - linked to a specific user and will expire after a fixed period</li>
<li>OAuth - again linked to a specific user (the one creating the service connection), but it does not expire, so it is the preferred of these two options</li>
</ul>
<p>But neither of these authenticates via the <code>GitHub App</code>. So the question is: how do you create a service connection of that type?</p>
<h1 id="solution">Solution</h1>
<p>It turns out adding the &lsquo;right type of service connection&rsquo; is easy, but really poorly documented. Every document seems to point to re-installing the Azure Pipelines App in GitHub; this is not required.</p>
<p>The process is as follows, assuming the Azure Pipeline GitHub App is installed and configured in your GitHub organisation.</p>
<ol>
<li>In a Team Project on the Azure DevOps organisation where you need the service connection create a new pipeline</li>
<li>Pick GitHub as the source location. You may be prompted to authenticate, and then will be given a list of repositories to pick from; pick one, the choice of which is not important.</li>
<li>Once this is done you are presented with a list of pipeline templates to use; at this point you can cancel the creation of the pipeline, as completing it is not required.</li>
<li>If you look in <code>Project Settings &gt; Pipelines &gt; Service Connections</code> you will see a new GitHub service connection has been created, named after the Azure DevOps Team Project, with the authentication method GitHub App, which is exactly what we require.</li>
</ol>
<p>In my scenario I was then able to rewire all the pipelines in my Azure DevOps organisation to point to GitHub, using the GEI command <code>gh ado2gh share-service-connection</code> to share the new service connection with other Team Projects, and then <code>gh ado2gh rewire-pipeline</code> to change the source location.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A GitHub Actions equivalent to Azure DevOps Pipelines $(rev:r) revisions</title>
      <link>https://blog.richardfennell.net/posts/github-actions-equivalent-to-ado-revisions/</link>
      <pubDate>Tue, 17 Jun 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/github-actions-equivalent-to-ado-revisions/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;In Azure DevOps Pipelines there is a feature to dynamically configure the build number e.g. at the top of a YAML build add the following to get a version in the form 1.2.25123.1&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yml&#34; data-lang=&#34;yml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;$(major).$(minor).$(year:YY)$(dayofyear).$(rev:r)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Major&lt;/code&gt; &amp;amp; &lt;code&gt;Minor&lt;/code&gt; are user created and managed variables&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Year&lt;/code&gt;, &lt;code&gt;Dayofyear&lt;/code&gt; and &lt;code&gt;Rev&lt;/code&gt; are built-in Azure DevOps pre-defined variables. The first two are obviously based on the current date; the &lt;code&gt;rev&lt;/code&gt; is the number of times a pipeline has been run on the current date.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The problem is that there is no equivalent feature to &lt;code&gt;name&lt;/code&gt; in GitHub Actions as build names/numbers are not really a thing in Actions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>In Azure DevOps Pipelines there is a feature to dynamically configure the build number e.g. at the top of a YAML build add the following to get a version in the form 1.2.25123.1</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">$(major).$(minor).$(year:YY)$(dayofyear).$(rev:r)</span><span class="w">
</span></span></span></code></pre></div><ul>
<li><code>Major</code> &amp; <code>Minor</code> are user created and managed variables</li>
<li><code>Year</code>, <code>Dayofyear</code> and <code>Rev</code> are built-in Azure DevOps pre-defined variables. The first two are obviously based on the current date; the <code>rev</code> is the number of times a pipeline has been run on the current date.</li>
</ul>
<p>The problem is that there is no equivalent feature to <code>name</code> in GitHub Actions as build names/numbers are not really a thing in Actions.</p>
<p>However, there are use cases where you want to use some or all of this dynamic build number for other purposes, such as version stamping a file.</p>
<h1 id="the-solution">The Solution</h1>
<p>The date-related parts of the build number are easy to recreate with PowerShell <code>Get-Date</code>.</p>
<p>The revision takes a bit more work, but can be done with more PowerShell and the GitHub CLI.</p>
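<p>As an illustration of the date parts, here is a minimal sketch (in Python rather than the PowerShell used here, and with hypothetical <code>major</code>, <code>minor</code> and <code>rev</code> values) of rebuilding the <code>$(year:YY)$(dayofyear)</code> portion of the build number:</p>

```python
from datetime import datetime, timezone
from typing import Optional

def build_number(major: int, minor: int, rev: int,
                 now: Optional[datetime] = None) -> str:
    """Recreate the $(major).$(minor).$(year:YY)$(dayofyear).$(rev:r) format."""
    now = now or datetime.now(timezone.utc)
    yy = now.strftime("%y")                # two-digit year, e.g. '25'
    day_of_year = now.timetuple().tm_yday  # 1..366
    return f"{major}.{minor}.{yy}{day_of_year}.{rev}"

# 3 May 2025 is day 123 of the year, so with rev 1 this gives '1.2.25123.1'
print(build_number(1, 2, 1, datetime(2025, 5, 3, tzinfo=timezone.utc)))
```

<p>The <code>rev</code> value itself still has to come from counting the day&rsquo;s runs, for example via the GitHub CLI, which is what the gist that follows does.</p>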
<script src="https://gist.github.com/rfennell/c72aa92491dc905da6d4f6cf2d2c5b92.js"></script>
]]></content:encoded>
    </item>
    <item>
      <title>Problems migrating Non-English Azure DevOps Servers to Azure DevOps Services and how to solve them</title>
      <link>https://blog.richardfennell.net/posts/problems-migrating-non-english-ado-server-to-azure/</link>
      <pubDate>Fri, 13 Jun 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-migrating-non-english-ado-server-to-azure/</guid>
      <description>&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;Over the years, I have been involved in numerous migrations of TFS/Azure DevOps Server from on-premises servers to the cloud hosted Azure DevOps Services using the &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/migrate/migration-overview?view=azure-devops#option-2-azure-devops-data-migration-tool&#34;&gt;Microsoft Azure DevOps Data Migration Tool&lt;/a&gt;. As long as you carefully followed the instructions, the process was relatively straightforward.&lt;/p&gt;
&lt;p&gt;I was recently involved in a cloud migration of a set of non-English Azure DevOps Servers i.e. the Azure DevOps Servers had been installed in French not English. Migrating these servers to Azure DevOps Services proved a challenge.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="introduction">Introduction</h1>
<p>Over the years, I have been involved in numerous migrations of TFS/Azure DevOps Server from on-premises servers to the cloud hosted Azure DevOps Services using the <a href="https://learn.microsoft.com/en-us/azure/devops/migrate/migration-overview?view=azure-devops#option-2-azure-devops-data-migration-tool">Microsoft Azure DevOps Data Migration Tool</a>. As long as you carefully followed the instructions, the process was relatively straightforward.</p>
<p>I was recently involved in a cloud migration of a set of non-English Azure DevOps Servers i.e. the Azure DevOps Servers had been installed in French not English. Migrating these servers to Azure DevOps Services proved a challenge.</p>
<h1 id="the-problem">The Problem</h1>
<p>The core of the problem with this type of migration is that the target system, Azure DevOps Services, is only available in English.</p>
<p>At this point it is important to explain what is meant by the previous statement. Azure DevOps Services allows you to set a language preference in your <a href="https://learn.microsoft.com/en-us/azure/devops/organizations/settings/set-your-preferences?view=azure-devops&amp;tabs=current-page">user profile</a>, but the underlying database is always in English. This means that if you have a non-English Azure DevOps Server, where the database is non-English, the migration will have problems.</p>
<p>This problem becomes apparent when you perform the first step of using the Azure DevOps Data Migration Tool, the <code>migrator validate</code> command to validate your TPC (Team Project Collection) prior to migration.</p>
<blockquote>
<p><strong>Note</strong> Remember you migrate a TPC, not a whole Azure DevOps Server, which could have many TPCs</p></blockquote>
<p>This command checks the source Azure DevOps Server TPC for issues that will block a migration. In the case of my Non-English server TPC the validation returned many issues.</p>
<p>The first was a warning about the database collation being <code>French_CI_AS</code>, so any migration could result in some characters not being migrated correctly e.g. Non-English characters like &lsquo;é&rsquo; might become equivalent to the English &rsquo;e&rsquo; after migration. However, this was just a warning, and not an error, so can be ignored for now and reviewed as part of the planned dry-run of the migration.</p>
<p>The bigger problem was the many errors in the form</p>
<pre tabindex="0"><code>VS403443: In order to migrate successfully, you must rename field &#39;System.Title&#39; to &#39;Title&#39;. Given name for &#39;System.Title&#39; is &#39;Titre&#39;
</code></pre><p>When you do these migrations there are often VSxxxxxx or TFxxxxxx errors, especially on TPCs that have been upgraded through various versions of TFS and Azure DevOps over the years. These are usually fixable by editing the process template <a href="https://learn.microsoft.com/en-us/azure/devops/migrate/migration-troubleshooting">as discussed in the migration documentation</a>.</p>
<p>However, when I tried to fix these issues I saw two completely separate sets of problems, depending on whether the TPC was using the older <a href="https://learn.microsoft.com/en-us/azure/devops/organizations/settings/work/inheritance-versus-hosted-xml?view=azure-devops">XML Process Templates or the newer Inherited Process Templates</a>.</p>
<h2 id="when-the-tpc-is-using-xml-process-templates">When the TPC is using XML Process Templates</h2>
<p>When the migration was for a TPC using the older XML Process templates, I found that once I had fixed the VS403443 errors using commands like</p>
<pre tabindex="0"><code>witadmin.exe changefield /collection:http://server:8080/tfs/defaultcollection, /n:System.Title /name:&#34;Title&#34; /noprompt
</code></pre><p>When I reran the <code>migrator validate</code> command, I got a new set of errors, but they were the same errors as before, just with the field names reversed, e.g.</p>
<pre tabindex="0"><code>VS403443: In order to migrate successfully, you must rename field &#39;System.Title&#39; to &#39;Titre&#39;. Given name for &#39;System.Title&#39; is &#39;Title&#39;
</code></pre><p>I was in an endless loop of errors.</p>
<h2 id="when-the-tpc-is-using-inherited-process-templates">When the TPC is using Inherited Process Templates</h2>
<p>When the migration was for a TPC using the newer Inherited Process Templates, the problem was that I could not use the <code>witadmin</code> tool. There is in fact no supported way to update the display name of built-in work item type fields.</p>
<p>This is because the Inherited Process Templates are stored in the database, and not as XML files.</p>
<h1 id="the-solutions">The Solutions</h1>
<p>We have to consider the solutions to these problems separately.</p>
<h2 id="xml-process-templates">XML Process Templates</h2>
<p>After much unsuccessful experimentation, I reached out to the community to see if anyone had a solution to the problem, and a fellow DevOps MVP, <a href="https://mvp.microsoft.com/en-US/MVP/profile/c183d172-3c9a-e411-93f2-9cb65495d3c4">Neno Loje</a>, suggested he had had success in the past migrating TPCs originally set up in German, but the process he used came with some constraints.</p>
<p>The process I ended up with, based on his suggestions, centred on migrating the Non-English Team Project Collection (TPC) via a temporary English Azure DevOps Server:</p>
<ol>
<li>Build a new Azure DevOps Server, making sure you pick the default language English</li>
<li>Detach the TPC to be migrated on the Non-English Azure DevOps Server and make a SQL Backup of the detached TPC Database on the Non-English Azure DevOps Server
<blockquote>
<p><strong>Note</strong> Once the backup is done, you will usually reattach the TPC on the source Non-English server so users can continue to work while you do the dry-run migration</p></blockquote>
</li>
<li>Restore the TPC SQL backup on the new English Azure DevOps Server and attach the exported Non-English TPC to the new English Azure DevOps Server</li>
<li>On the new English Azure DevOps Server run the <code>migrator validate</code> command to check for errors, and fix them. This should be possible without getting into the error loops
<ul>
<li>
<p>Update all WIT fields causing the VS403443 errors, renaming display names to the English names using the <code>witadmin</code> command as discussed above</p>
<blockquote>
<p><strong>Note</strong> I wrote this <a href="https://gist.github.com/rfennell/e29cd9250de0eb955c55373721b9f786">PowerShell script</a> to automate the process of generating the commands to do the renaming of fields in a process template based on the migration error log.</p></blockquote>
</li>
<li>
<p>Fixing any remaining errors in the migration should be possible with further XML template file edits which are published using the <code>witadmin</code> command as discussed in the <a href="https://learn.microsoft.com/en-us/azure/devops/migrate/migration-troubleshooting">migration documentation</a></p>
<blockquote>
<p><strong>Note</strong> Watch out for any <code>for=[group]</code> and <code>not=[group]</code> attributes in the WITD XML files, used to conditionally set field constraints e.g. make a field readonly for all users except those in a specific group. These constraints are not supported in Azure DevOps Services, but are not always picked up by <code>migrator validate</code>. If they are not removed, they will cause the import to fail, and a failure message will be received via email in the form</p>
<p><code>Step : ProcessValidation - Failure Type - Validation failed : Invalid process template: WorkItem Tracking\TypeDefinitions\Bug.xml:: Object reference not set to an instance of an object.</code></p></blockquote>
</li>
<li>
<p>You can also look at using the <a href="https://github.com/microsoft/process-customization-scripts">ConformProject.ps1</a> script to help with this process. ConformProject will take a defined process template that is on your local machine and apply it, in full, to a specified project. This is a good way to get rid of all the errors, but you will lose all your customisations, so use with care.</p>
</li>
</ul>
</li>
<li>Once all the errors are fixed, the validation should pass, and hence the rest of the migration process should be possible</li>
</ol>
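<p>To give a feel for the renaming step, this is a minimal sketch (not the gist linked above, and with a hypothetical collection URL) of turning VS403443 validation errors into <code>witadmin changefield</code> commands:</p>

```python
import re

# Matches the VS403443 errors, capturing the field reference name and the
# English display name the migrator expects.
ERROR_RE = re.compile(
    r"VS403443: In order to migrate successfully, you must rename field "
    r"'(?P<ref>[^']+)' to '(?P<name>[^']+)'"
)

def witadmin_commands(log_lines, collection_url):
    """Turn VS403443 validation errors into witadmin changefield commands."""
    commands = []
    for line in log_lines:
        match = ERROR_RE.search(line)
        if match:
            commands.append(
                f'witadmin.exe changefield /collection:{collection_url} '
                f'/n:{match.group("ref")} /name:"{match.group("name")}" /noprompt'
            )
    return commands

log = ["VS403443: In order to migrate successfully, you must rename field "
       "'System.Title' to 'Title'. Given name for 'System.Title' is 'Titre'"]
for command in witadmin_commands(log, "http://server:8080/tfs/defaultcollection"):
    print(command)
```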
<p>However, the fact your migration completed might not be the end of your issues. There is of course the potential problem of the database collation, but arguably that is minor. The more important point is something Neno had mentioned. He commented that this process is good for a read-only migration to Azure DevOps Services, but if you wish to continue using the migrated work items more work is required.</p>
<p>The reason for this is not the migration of the process template, it&rsquo;s the actual work item data. The imported work items are still using strings for state and other fields in the non-English language, and these need to be &ldquo;converted&rdquo; to values that match the constraints of the &lsquo;fixed&rsquo; English process template.</p>
<p>To fix this means edits are required for all active work items. This could of course be done one-by-one as Work Items are edited, or via scripts to make the changes at scale, but either way more work will be required to make the work item data usable in Azure DevOps Services.</p>
<h2 id="inherited-process-templates">Inherited Process Templates</h2>
<blockquote>
<p><strong>&mdash;&mdash; DANGER ZONE &mdash;&mdash;</strong></p>
<p>The following section describes a process that is <strong>not supported</strong> by Microsoft and could lead to serious issues with your Azure DevOps Server.</p>
<p>The process should only be used on a copy of a system, <strong>not on a live system</strong> itself.</p>
<p><strong>I accept no responsibility</strong> for any issues that arise from following this process, you follow it at your own risk.</p></blockquote>
<p>When the migration was for a TPC using Inherited Process Templates, the solution is very different, and can only be described as a <strong>massive hack</strong>.</p>
<p>There are two reasons for the need for a different process</p>
<ul>
<li>With inherited process templates the built-in work item types are by design read-only. Your customisations are additions to the built-in work item types; you cannot change them, you can only hide fields or add new fields to them. There is no equivalent of the <code>witadmin</code> command to change the display names of fields in the Inherited Process Templates; all changes are done via the Azure DevOps UI or REST API.</li>
<li>The other reason for the different process is that it is not possible to attach a Non-English Inherited Process TPC to an English Azure DevOps Server. Any &lsquo;fixing&rsquo; process must be done on a Non-English server.</li>
</ul>
<p>So, the process I ended up with is as follows:</p>
<ol>
<li><strong>IMPORTANT</strong> Create a duplicate of your Non-English Azure DevOps Server; this is the one you will use for the migration process. This step is very important, as you will be making SQL edits to the database, which is wholly unsupported.</li>
<li>On the duplicate Non-English Azure DevOps Server load <a href="https://learn.microsoft.com/en-us/ssms/install/install">SQL Server Management Studio (SSMS)</a> and connect to the TPC database.</li>
<li>For each VS403443 error run the following SQL command to update the display name of the field in the database:
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sql" data-lang="sql"><span class="line"><span class="cl"><span class="k">UPDATE</span><span class="w"> </span><span class="p">[</span><span class="n">dbo</span><span class="p">].[</span><span class="n">WorkItemTypeField</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="k">SET</span><span class="w"> </span><span class="p">[</span><span class="n">Name</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">&#39;Title&#39;</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="p">[</span><span class="n">ReportingName</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">&#39;Title&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="k">WHERE</span><span class="w"> </span><span class="p">[</span><span class="n">ReferenceName</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="s1">&#39;System.Title&#39;</span><span class="w">
</span></span></span></code></pre></div><blockquote>
<p><strong>Note</strong> Again I wrote a <a href="https://gist.github.com/rfennell/3fb21ce54d9baed33fcca0465f933b51">PowerShell script</a> to automate the process of generating the SQL commands to do the renaming of fields in a process template based on the migration error log.</p></blockquote>
</li>
<li>Rerun the <code>migrator validate</code> command to check for any remaining errors, and fix them.</li>
<li>Once all the errors are fixed, the validation should pass, and hence the rest of the migration process should be possible</li>
</ol>
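<p>For completeness, the SQL generation can be sketched as follows, a minimal illustration based on the sample error and UPDATE statement shown above, not the gist itself:</p>

```python
import re

# Capture the field reference name and the expected English display name
# from a VS403443 validation error.
ERROR_RE = re.compile(r"rename field '(?P<ref>[^']+)' to '(?P<name>[^']+)'")

SQL_TEMPLATE = (
    "UPDATE [dbo].[WorkItemTypeField]\n"
    "SET [Name] = '{name}',\n"
    "    [ReportingName] = '{name}'\n"
    "WHERE [ReferenceName] = '{ref}'"
)

def rename_sql(log_lines):
    """Emit one UPDATE statement per distinct VS403443 error in the log."""
    seen = set()
    statements = []
    for line in log_lines:
        match = ERROR_RE.search(line)
        if match and match.group("ref") not in seen:
            seen.add(match.group("ref"))
            statements.append(
                SQL_TEMPLATE.format(name=match.group("name"),
                                    ref=match.group("ref"))
            )
    return statements
```

<p>As stressed above, any SQL produced this way should only ever be run against the duplicate server, never the live one.</p>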
<p>Again, the fact your migration completed is not the end of your issues; in fact, the issues are much worse than with the XML Process Templates.</p>
<p>The reason for this is that work items are matched to the built-in work item types based on the work item type name string. So, for example, the migration will try to match the work item type &lsquo;Bogue&rsquo; in your French team project to the built-in work item type &lsquo;Bug&rsquo;, which will of course fail.</p>
<p>This is a serious problem, as it means much of the Azure Boards and Azure Test Plans UI will fail to load any work items, if they even load at all. The work item data is there, you can see it via the REST API e.g. <code>https://dev.azure.com/{organization}/{project}/_apis/wit/workitems/{id}</code>, but the UI will not work as the work item type names are not what the English based Azure DevOps Services instance is expecting.</p>
<p>In theory there must be a way to fix this, but it will require a lot of SQL edits prior to migration to Azure. Far more work than I have the appetite to do.</p>
<p>So in summary, you have to consider this process for migrating Non-English inherited process based TPCs to be acceptable only if your key goal is to migrate your TFVC repositories with no data loss to Azure DevOps Services, and you are prepared to accept that the work item data will not be usable.</p>
<p>Your only real option if you need the work item data from a Non-English on-premises Inherited process TPC in Azure DevOps Services is to look at using <a href="https://github.com/nkdAgility/azure-devops-migration-tools">Martin Hinshelwood&rsquo;s Azure DevOps Migration Tools</a> to migrate the on-premises work items to a new Azure DevOps Services project on the migrated Azure DevOps Services instance, one that is set up using an English based inherited process template. Given the need to address translation issues during this migration, this will also be a not insignificant undertaking.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Migrating a non-English Azure DevOps Server to Azure DevOps Services is possible, but it is <strong>not officially supported</strong> and requires some extra steps, some of them <strong>very &lsquo;hacky&rsquo;</strong>.</p>
<p>If you are considering either of the migration paths in this post, I would recommend in the strongest terms that you perform your migrations on a duplicate of your Azure DevOps Server and not on your live system.</p>
<p>I cannot stress enough, please <strong>DO NOT</strong> make SQL edits of your live Azure DevOps Server database, as this is not supported and could lead to serious issues with your live system.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using GitHub Copilot to write an Azure DevOps Test Plan Export Tool</title>
      <link>https://blog.richardfennell.net/posts/using-copilot-to-write-an-azure-devops-export-tool/</link>
      <pubDate>Wed, 28 May 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-copilot-to-write-an-azure-devops-export-tool/</guid>
      <description>&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;I got asked today by a client if there was a way to automate the export of Azure DevOps Test Plans to Excel files. They knew they could do it manually via the Azure DevOps UI, but had a lot of Test Plans to export and wanted to automate the process.&lt;/p&gt;
&lt;h1 id=&#34;the-options&#34;&gt;The Options&lt;/h1&gt;
&lt;p&gt;I considered a few options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/test/test-case-managment-reference?view=azure-devops&#34;&gt;TCM CLI&lt;/a&gt; - This is a command line tool that can be used to interact with Azure DevOps Test Plans. It can be used to import or clone Test Plans, but not to export them.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://learn.microsoft.com/en-us/cli/azure/devops?view=azure-cli-latest&#34;&gt;AZ DEVOPS CLI&lt;/a&gt; - This is a command line tool that can be used to interact with Azure DevOps. Unfortunately, it does not have any commands to export Test Plans.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.2&#34;&gt;Azure DevOps REST API&lt;/a&gt; - This is a powerful API that can be used to interact with Azure DevOps, but the documentation makes no mention of a call to export Test Plans.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But&amp;hellip;..&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="introduction">Introduction</h1>
<p>I got asked today by a client if there was a way to automate the export of Azure DevOps Test Plans to Excel files. They knew they could do it manually via the Azure DevOps UI, but had a lot of Test Plans to export and wanted to automate the process.</p>
<h1 id="the-options">The Options</h1>
<p>I considered a few options:</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/azure/devops/test/test-case-managment-reference?view=azure-devops">TCM CLI</a> - This is a command line tool that can be used to interact with Azure DevOps Test Plans. It can be used to import or clone Test Plans, but not to export them.</li>
<li><a href="https://learn.microsoft.com/en-us/cli/azure/devops?view=azure-cli-latest">AZ DEVOPS CLI</a> - This is a command line tool that can be used to interact with Azure DevOps. Unfortunately, it does not have any commands to export Test Plans.</li>
<li><a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.2">Azure DevOps REST API</a> - This is a powerful API that can be used to interact with Azure DevOps, but the documentation makes no mention of a call to export Test Plans.</li>
</ul>
<p>But&hellip;..</p>
<h1 id="the-solution">The Solution</h1>
<p>I used a process I have used before to write tools for Azure DevOps. I opened the page in the Azure DevOps UI that I wanted to automate, and then used the browser developer tools to inspect the network traffic. I found a call that was made to export the Test Plan to Excel, and then used that as the basis for my tool.</p>
<p>Turns out there is an undocumented API call that can be used to export Test Plans to Excel. A POST call is made to the following URL:</p>
<pre tabindex="0"><code>https://dev.azure.com/{organization}/{project}/_apis/testplan/TestCases/TestCaseFile?api-version=7.1-preview.1
</code></pre><p>with a payload that looks like this:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;testPlanId&#34;</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;testSuiteId&#34;</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;testCaseIds&#34;</span><span class="p">:</span> <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">],</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&#34;columnOptions&#34;</span><span class="p">:</span> <span class="p">[</span><span class="s2">&#34;System.Id&#34;</span><span class="p">,</span><span class="s2">&#34;System.AssignedTo&#34;</span><span class="p">,</span><span class="s2">&#34;System.State&#34;</span><span class="p">]</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><h1 id="the-code">The Code</h1>
<p>So I needed to write a tool to get all the Test Plans and Test Suites in an Azure DevOps Team Project, and then call the API to export them to a set of Excel files.</p>
<p>I usually write this type of tool in PowerShell, but try as I might, even though I appeared to get the expected data back from the API call, I could not write it to disk in such a way that Excel could read it. Excel said the file was corrupt.</p>
<p>As I had delivered a webinar today on using GitHub Copilot to write code, I thought I should practice what I preach and use it to fix the tool. As I suspected the issue was not the API call, but the PowerShell encoding of the data, I decided to ask Copilot to convert the code to Python, a language I am not too familiar with.</p>
<p>So with a couple of prompts my simple PowerShell proof of concept was converted to a Python script, and as I had hoped the resultant saved file loaded in Excel without a problem.</p>
<p>It was then a simple matter of adding more code, via Copilot prompts, to loop through all the Test Plans and Test Suites in the Team Project, and then call the API to export them to a set of Excel files.</p>
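<p>For anyone wanting to script this themselves, the call can be sketched in Python. This is only a sketch: the helper and parameter names are my own, and as the endpoint is undocumented it may change without notice.</p>

```python
import base64
import json

def build_export_request(org, project, plan_id, suite_id, case_ids, pat):
    """Assemble the URL, headers and JSON body for the undocumented
    Test Plan export call (helper and argument names are my own)."""
    url = (
        f"https://dev.azure.com/{org}/{project}"
        f"/_apis/testplan/TestCases/TestCaseFile?api-version=7.1-preview.1"
    )
    # A PAT is sent as HTTP basic auth with an empty username
    token = base64.b64encode(f":{pat}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "testPlanId": plan_id,
        "testSuiteId": suite_id,
        "testCaseIds": case_ids,
        "columnOptions": ["System.Id", "System.AssignedTo", "System.State"],
    })
    return url, headers, body
```

<p>The request itself can then be sent with any HTTP client (e.g. <code>requests.post(url, headers=headers, data=body)</code>), saving <code>response.content</code> to a file opened in <code>"wb"</code> mode, which side-steps the text-encoding corruption I hit in PowerShell.</p>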
<p>This is not to say I did not have to make some tweaks to the code. There were a couple of edits I had to make to the API URL strings, but Copilot got most of it correct and certainly wrote the Python code much faster than I could have done myself.</p>
<p>So certainly a win for GitHub Copilot, and I now have a tool that can be used to export Azure DevOps Test Plans to Excel files at scale.</p>
<script src="https://gist.github.com/rfennell/43f0746e93d57de55ea5989dc213fc2d.js"></script>
]]></content:encoded>
    </item>
    <item>
      <title>Presenting at Black Marble event &#39;GitHub Copilot how to harness the power of AI for Developers&#39;</title>
      <link>https://blog.richardfennell.net/posts/bm-github-copilot-event/</link>
      <pubDate>Wed, 21 May 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bm-github-copilot-event/</guid>
      <description>&lt;p&gt;There has been much talk at the &lt;a href=&#34;https://build.microsoft.com/en-US/home&#34;&gt;Microsoft Build&lt;/a&gt; conference of the new agentic world, where you get AI agents to perform tasks on your behalf. This could be in your Enterprise applications, but also in the DevOps process that you use to create these new AI aware applications.&lt;/p&gt;
&lt;p&gt;This can all seem a bit &amp;lsquo;in the future&amp;rsquo;, but it is a future that is arriving rapidly, and one that will make for a seismic change in what it means to be in a development team.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There has been much talk at the <a href="https://build.microsoft.com/en-US/home">Microsoft Build</a> conference of the new agentic world, where you get AI agents to perform tasks on your behalf. This could be in your Enterprise applications, but also in the DevOps process that you use to create these new AI aware applications.</p>
<p>This can all seem a bit &lsquo;in the future&rsquo;, but it is a future that is arriving rapidly, and one that will make for a seismic change in what it means to be in a development team.</p>
<p>So to find out what your next steps in this space could be, why not come to the <a href="https://www.blackmarble.com/events/2025-05-28-github-copilot">Black Marble &lsquo;GitHub Copilot how to harness the power of AI for Developers&rsquo;</a> event I am presenting at next week?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences Migrating CI/CD solutions with GitHub Copilot</title>
      <link>https://blog.richardfennell.net/posts/experiences-migrating-cicd-with-copilot/</link>
      <pubDate>Fri, 09 May 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-migrating-cicd-with-copilot/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I have recently been swapping some Azure DevOps Pipelines to GitHub Actions as part of a large GitHub Enterprise migration. The primary tool I have been using for this is GitHub Copilot in the new &lt;a href=&#34;https://code.visualstudio.com/blogs/2025/02/24/introducing-copilot-agent-mode&#34;&gt;Agent Mode&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Frankly, Copilot is like magic, it is amazing how close it gets to a valid solution. I say this after suffering years of undelivered marketing promises of no-code &amp;ldquo;we won&amp;rsquo;t need developers in the future&amp;rdquo;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>I have recently been swapping some Azure DevOps Pipelines to GitHub Actions as part of a large GitHub Enterprise migration. The primary tool I have been using for this is GitHub Copilot in the new <a href="https://code.visualstudio.com/blogs/2025/02/24/introducing-copilot-agent-mode">Agent Mode</a></p>
<p>Frankly, Copilot is like magic, it is amazing how close it gets to a valid solution. I say this after suffering years of undelivered marketing promises of no-code &ldquo;we won&rsquo;t need developers in the future&rdquo;.</p>
<p>That said, I have yet to find a Copilot-translated GitHub Actions workflow that works 100% first time. Interestingly, they often run without error, but don&rsquo;t produce the desired results of the source pipeline.</p>
<h2 id="so-what-have-i-learnt">So what have I learnt?</h2>
<h3 id="your-prompts-will-not-be-as-definitive-as-you-think">Your prompts will not be as &lsquo;definitive&rsquo; as you think.</h3>
<p>Many steps done using Azure DevOps tasks have no obvious 1-to-1 equivalent in GitHub Actions, so Copilot generates script blocks to perform the same function.</p>
<p>I was moving an Azure DevOps YAML pipeline that targeted Ubuntu to GitHub Actions, so not unreasonably the generated scripts were in Bash. The problem for me was that I don&rsquo;t have that much Bash experience, which made my debugging slower.</p>
<p>The solution in my case was as simple as adding an initial prompt stating that any generated scripts should be in PowerShell Core, or, as I did, using subsequent prompts to alter the generated scripts.</p>
<blockquote>
<p><strong>Note</strong> If you always want a given prompt to be present, such as to favour PowerShell in a given repo, then you could look at adding a <a href="https://docs.github.com/en/copilot/customizing-copilot/adding-repository-custom-instructions-for-github-copilot">.github/copilot-instructions.md</a> file to define instructions that are added to every prompt.</p></blockquote>
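<p>As an illustration, such a repo-level instructions file might contain entries like the following (the wording here is my own, not taken from the GitHub documentation):</p>

```markdown
<!-- .github/copilot-instructions.md -->
- When generating scripts inside GitHub Actions workflows, use PowerShell Core rather than Bash.
- Prefer built-in GitHub Actions over inline scripts where an equivalent action exists.
```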
<h3 id="silent-errors-are-the-root-of-many-issues">Silent errors are the root of many issues</h3>
<p>Most of the problems I had with migrated pipelines were the ones you experience in any CI/CD solution, i.e. you are in the wrong folder, a filter is wrong, a built URI is incorrect, etc.</p>
<p>Using Copilot for this migration, these types of errors seemed to be caused by either</p>
<ul>
<li>the move from Azure DevOps tasks, that may have hidden behaviors or defaults, to scripts</li>
<li>or the differences in CLI tools called between the tasks and generated scripts</li>
</ul>
<p>A good example was the way an archive/ZIP step worked. I ended up creating a ZIP file of static website content with an extra root folder, so all other folders were one level deeper than expected. This deployed without error, but obviously did not work as the old site content had not been overwritten; all the new content was in a sub-folder. A simple problem, but one where it took me too long to realise what the cause was, as I got no error messages.</p>
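<p>The nesting mistake is easy to reproduce outside any pipeline. This Python sketch (folder names invented for illustration) shows how the choice of base path when archiving creates the extra root folder:</p>

```python
import pathlib
import tempfile
import zipfile

def zip_dir(src_dir, zip_path, include_root):
    """Zip the files under src_dir; include_root=True reproduces the
    'extra root folder' mistake, include_root=False puts files at the
    archive root as the deployment expected."""
    src = pathlib.Path(src_dir)
    base = src.parent if include_root else src
    with zipfile.ZipFile(zip_path, "w") as zf:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(base).as_posix())
    with zipfile.ZipFile(zip_path) as zf:
        return zf.namelist()

# Throw-away demo site with a single file
tmp = pathlib.Path(tempfile.mkdtemp())
site = tmp / "site"
site.mkdir()
(site / "index.html").write_text("<html></html>")

nested = zip_dir(site, tmp / "bad.zip", include_root=True)   # ['site/index.html']
flat = zip_dir(site, tmp / "good.zip", include_root=False)   # ['index.html']
```

<p>Both archives are created without error; only the archive member paths differ, which is exactly why the deployment failed silently.</p>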
<h2 id="summary">Summary</h2>
<p>GitHub Copilot can get you 90% plus of the way to a solution, but as with all coding the &lsquo;devil is in the detail&rsquo;. That last few percent will take a disproportionate time.</p>
<p>Also there is the danger it generates code you don&rsquo;t understand, but that is a learning opportunity, is it not?</p>
<p>That said, with careful crafting of your prompts you can get a better solution faster, as long as you don&rsquo;t relax the quality checks you should be doing for any solution, whether human or AI created.</p>
]]></content:encoded>
    </item>
    <item>
      <title>You need to wait longer when restarting Logic Apps</title>
      <link>https://blog.richardfennell.net/posts/you-need-to-wait-longer-when-restarting-logic-apps/</link>
      <pubDate>Tue, 25 Mar 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-need-to-wait-longer-when-restarting-logic-apps/</guid>
      <description>&lt;p&gt;I have been doing some work on a Logic App that routes its traffic out via a vNet and accesses its underlying &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/storage/common/storage-private-endpoints&#34;&gt;Storage Account via private endpoints&lt;/a&gt;. &amp;lsquo;Nothing that special in that configuration&amp;rsquo; I hear you saying, but I did manage to confuse myself whilst testing.&lt;/p&gt;
&lt;p&gt;After changing my configuration of the vNet, I restarted the Logic App to make sure everything was working as expected on startup.&lt;/p&gt;
&lt;p&gt;The Logic App appeared to start within a few seconds, the Overview pane in the Azure Portal showing the correct values.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been doing some work on a Logic App that routes its traffic out via a vNet and accesses its underlying <a href="https://learn.microsoft.com/en-us/azure/storage/common/storage-private-endpoints">Storage Account via private endpoints</a>. &lsquo;Nothing that special in that configuration&rsquo; I hear you saying, but I did manage to confuse myself whilst testing.</p>
<p>After changing my configuration of the vNet, I restarted the Logic App to make sure everything was working as expected on startup.</p>
<p>The Logic App appeared to start within a few seconds, the Overview pane in the Azure Portal showing the correct values.</p>
<p>However, if I tried to access any of the workflows in the Logic App, I was presented with a &lsquo;bad gateway&rsquo; error. On receiving this error, I assumed that I had made a mistake in the configuration of the vNet and started to investigate.</p>
<p>However, I should not have been so hasty. After waiting a minute or two, the Logic App started to work as expected. It appears that the Logic App Overview pane is updated long before the Logic App is fully operational, so it can&rsquo;t be trusted as a source of truth.</p>
<p>So not a revelatory blog post, but a reminder to myself to be patient when restarting Logic Apps.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why cannot I see my Logic App action output?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-see-my-logic-app-action-output/</link>
      <pubDate>Sat, 15 Mar 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-see-my-logic-app-action-output/</guid>
      <description>&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;I was recently debugging an &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/logic-apps/add-run-powershell-scripts#:~:text=To%20perform%20custom%20integration%20tasks%20inline%20with%20your,the%20Inline%20Code%20action%20named%20Execute%20PowerShell%20Code.&#34;&gt;Execute PowerShell Code&lt;/a&gt; Logic App action, so wanted to see its output. However, when I reviewed the run history, the output for my code action was empty.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;No Output&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/images/rfennell/logic-app-actions-output-bad.png&#34;&gt;&lt;/p&gt;
&lt;h1 id=&#34;the-cause-and-solution&#34;&gt;The Cause (and Solution)&lt;/h1&gt;
&lt;p&gt;The problem turned out to be the Logic App&amp;rsquo;s Inbound traffic configuration. You can replicate this issue easily:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a new Logic App Workflow, use any trigger and then add an &lt;code&gt;Execute PowerShell Code&lt;/code&gt; action, you don&amp;rsquo;t need to edit the default code sample.&lt;/li&gt;
&lt;li&gt;Save the new Action and then run it.&lt;/li&gt;
&lt;li&gt;Review the run history and you&amp;rsquo;ll see the output as expected.
&lt;img alt=&#34;Output Shown&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/images/rfennell/logic-app-actions-output-good.png&#34;&gt;&lt;/li&gt;
&lt;li&gt;Next, change the Logic App&amp;rsquo;s &amp;lsquo;Inbound traffic configuration &amp;gt; Public Network Access&amp;rsquo; to &amp;lsquo;Disabled&amp;rsquo; and run the Logic App again, and the output now cannot be seen, as shown above.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You are actually warned about this in the Azure Portal the first time you view a run history with settings that block it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-problem">The Problem</h1>
<p>I was recently debugging an <a href="https://learn.microsoft.com/en-us/azure/logic-apps/add-run-powershell-scripts#:~:text=To%20perform%20custom%20integration%20tasks%20inline%20with%20your,the%20Inline%20Code%20action%20named%20Execute%20PowerShell%20Code.">Execute PowerShell Code</a> Logic App action, so wanted to see its output. However, when I reviewed the run history, the output for my code action was empty.</p>
<p><img alt="No Output" loading="lazy" src="/images/rfennell/logic-app-actions-output-bad.png"></p>
<h1 id="the-cause-and-solution">The Cause (and Solution)</h1>
<p>The problem turned out to be the Logic App&rsquo;s Inbound traffic configuration. You can replicate this issue easily:</p>
<ol>
<li>Create a new Logic App Workflow, use any trigger and then add an <code>Execute PowerShell Code</code> action, you don&rsquo;t need to edit the default code sample.</li>
<li>Save the new Action and then run it.</li>
<li>Review the run history and you&rsquo;ll see the output as expected.
<img alt="Output Shown" loading="lazy" src="/images/rfennell/logic-app-actions-output-good.png"></li>
<li>Next, change the Logic App&rsquo;s &lsquo;Inbound traffic configuration &gt; Public Network Access&rsquo; to &lsquo;Disabled&rsquo; and run the Logic App again, and the output now cannot be seen, as shown above.</li>
</ol>
<p>You are actually warned about this in the Azure Portal the first time you view a run history with settings that block it.</p>
<p><img alt="No Output Message" loading="lazy" src="/images/rfennell/logic-app-actions-output-msg.png"></p>
<p>But it is easy to miss, or as I did, just forget about the limitation.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Parsing CSV files in Azure Logic Apps</title>
      <link>https://blog.richardfennell.net/posts/parsing-csv-file-in-logic-apps/</link>
      <pubDate>Wed, 05 Mar 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/parsing-csv-file-in-logic-apps/</guid>
      <description>&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;I recently had the need in an Azure Logic App to read a CSV file from an Azure Storage account, parse the file, and then process the data row by row.&lt;/p&gt;
&lt;p&gt;Unfortunately, there is no built-in action in Logic Apps to parse CSV files.&lt;/p&gt;
&lt;p&gt;So as to avoid having to write an Azure function, or use a number of slow, low level Logic App actions, I decided to use the &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/logic-apps/add-run-powershell-scripts&#34;&gt;PowershellCode&lt;/a&gt; action to parse the CSV file quickly inline with the rest of the Logic App.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-problem">The Problem</h1>
<p>I recently had the need in an Azure Logic App to read a CSV file from an Azure Storage account, parse the file, and then process the data row by row.</p>
<p>Unfortunately, there is no built-in action in Logic Apps to parse CSV files.</p>
<p>So as to avoid having to write an Azure function, or use a number of slow, low level Logic App actions, I decided to use the <a href="https://learn.microsoft.com/en-us/azure/logic-apps/add-run-powershell-scripts">PowershellCode</a> action to parse the CSV file quickly inline with the rest of the Logic App.</p>
<p>The documented code sample suggests the following should work</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="c"># Retrieve outputs from prior steps</span>
</span></span><span class="line"><span class="cl"><span class="nv">$csvdata</span> <span class="p">=</span> <span class="nb">Get-ActionOutput</span> <span class="n">-ActionName</span> <span class="n">FileContent</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nv">$rows</span> <span class="p">=</span> <span class="nv">$csvdata</span> <span class="p">|</span> <span class="nb">ConvertFrom-csv</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="c"># Push outputs forward to subsequent actions</span>
</span></span><span class="line"><span class="cl"><span class="nb">Push-WorkflowOutput</span> <span class="n">-Output</span> <span class="nv">$rows</span>
</span></span></code></pre></div><p>However this resulted in a <code>null</code> value for <code>$rows</code>.</p>
<h1 id="the-solution">The Solution</h1>
<p>The problem turned out to be that the <code>getFileContentV2</code> action that I used to load the file from Azure storage, returned the file content as part of an object, not as a simple string. So I had to request the correct property from the object.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="c"># Retrieve outputs from prior steps</span>
</span></span><span class="line"><span class="cl"><span class="nv">$action</span> <span class="p">=</span> <span class="nb">Get-ActionOutput</span> <span class="n">-ActionName</span> <span class="n">FileContent</span>
</span></span><span class="line"><span class="cl"><span class="nv">$csvdata</span> <span class="p">=</span> <span class="nv">$action</span><span class="p">.</span><span class="n">outputs</span><span class="p">[</span><span class="s1">&#39;body&#39;</span><span class="p">].</span><span class="py">ToString</span><span class="p">();</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nv">$rows</span> <span class="p">=</span> <span class="nv">$csvdata</span> <span class="p">|</span> <span class="nb">ConvertFrom-csv</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="c"># Push outputs forward to subsequent actions</span>
</span></span><span class="line"><span class="cl"><span class="nb">Push-WorkflowOutput</span> <span class="n">-Output</span> <span class="nv">$rows</span>
</span></span></code></pre></div><p>Once this change was made, the CSV file was parsed correctly and I was able to process the data as required.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I know I am late to the game with Home Assistant....</title>
      <link>https://blog.richardfennell.net/posts/late-to-the-game-with-homeassistant/</link>
      <pubDate>Thu, 20 Feb 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/late-to-the-game-with-homeassistant/</guid>
      <description>&lt;p&gt;Since moving house a couple of years ago, I have gained an ever increasing set of apps on my phone to manage various things, such as the &lt;a href=&#34;https://www.solaredge.com/uk/home/solaredge-home&#34;&gt;Solaredge&lt;/a&gt; PV and battery system we installed and our &lt;a href=&#34;https://share.octopus.energy/fawn-may-441&#34;&gt;Octopus Energy&lt;/a&gt; account, to name but two.&lt;/p&gt;
&lt;p&gt;In the past, I had avoided IOT and the &amp;lsquo;Intelligent Home&amp;rsquo; as I did not want to get to the point where &amp;lsquo;I can&amp;rsquo;t do the washing up, the Internet is down&amp;rsquo;. However, I recently got sent an &lt;a href=&#34;https://octopus.energy/blog/octopus-home-mini/&#34;&gt;Octopus Mini&lt;/a&gt; which meant for the first time I could get real time information from their API, so no longer needing to wait until my Smart Meter got around to transferring data to them, which could be up to 48 hours later.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since moving house a couple of years ago, I have gained an ever increasing set of apps on my phone to manage various things, such as the <a href="https://www.solaredge.com/uk/home/solaredge-home">Solaredge</a> PV and battery system we installed and our <a href="https://share.octopus.energy/fawn-may-441">Octopus Energy</a> account, to name but two.</p>
<p>In the past, I had avoided IOT and the &lsquo;Intelligent Home&rsquo; as I did not want to get to the point where &lsquo;I can&rsquo;t do the washing up, the Internet is down&rsquo;. However, I recently got sent an <a href="https://octopus.energy/blog/octopus-home-mini/">Octopus Mini</a> which meant for the first time I could get real time information from their API, so no longer needing to wait until my Smart Meter got around to transferring data to them, which could be up to 48 hours later.</p>
<p>So I thought it time to look at unifying the management of all these systems, and potentially automating some behaviors, such as switching things on or off based on the current cost of power and the performance of our PV/battery system.</p>
<p>So enter <a href="https://www.home-assistant.io/">Home Assistant</a>, an OSS project I had heard very good things about, and I have to say it lived up to the hype.</p>
<p><img alt="Home Assistant" loading="lazy" src="/images/rfennell/ha-dashboard.png"></p>
<p>The <a href="https://www.home-assistant.io/installation/generic-x86-64/">installation was very easy</a>, I used an old Intel NUC PC I had lying around.</p>
<p>Once running on my home network, Home Assistant automatically detected far more devices than I had expected. These included an LG TV, AudioCast devices, an old NetGear NAS, and a good few more. Very impressive, using only &lsquo;standard device integrations&rsquo; shipped in the basic Home Assistant install, and no additional configuration by me.</p>
<p>Connecting to Octopus Energy and Solaredge took a bit more work as these integrations were not shipped as standard.</p>
<p>First, I installed the <a href="https://hacs.xyz/">Home Assistant Community Store (HACS)</a>. This allows you to easily install any community Home Assistant add-in hosted on GitHub. Once this was in place I could install the community integrations for <a href="https://github.com/BottlecapDave/HomeAssistant-OctopusEnergy">Octopus Energy</a> and <a href="https://github.com/binsentsu/home-assistant-solaredge-modbus">Solaredge Modbus</a>.</p>
<blockquote>
<p><strong>Note</strong> There is a <a href="https://www.home-assistant.io/integrations/solaredge">Solaredge</a> device integration shipped with Home Assistant, but it queries the Solaredge web-hosted API and not the local device, so any data is 15-30 minutes out of date. The community Modbus version queries the Solaredge inverter directly over Ethernet, so provides real time data. Much more useful if you are planning to do home automation based on current conditions.</p></blockquote>
<p>Again, this was all very easy: a few clicks and they were installed. The only complexity, where I had to do some reading and learning, was getting the correct data for the Home Assistant Energy Dashboard (shown above). This was because the Solaredge Modbus integration provides a lot of low level raw data, but not all the values the Energy Dashboard needs.</p>
<p>However, as I was learning to expect with Home Assistant, the solution was well documented in the <a href="https://github.com/binsentsu/home-assistant-solaredge-modbus/wiki/Using-Templated-Sensors-to-Calculate-Power-Flow-and-Energy">Solaredge Modbus integration GitHub Project WIKI</a>: use <a href="https://www.home-assistant.io/integrations/template/">Sensor Templates</a> to process the raw data into the forms needed. With hindsight, having to work my way through configuring this was an excellent way to learn Home Assistant better. I find it is always better to learn by trying to fix a problem you are experiencing, rather than just following tutorials.</p>
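<p>As a flavour of what these templated sensors look like, here is a minimal example of the kind the wiki describes. The entity name <code>sensor.solaredge_ac_power</code> is an assumption for illustration; the raw entities exposed will vary between installs:</p>

```yaml
template:
  - sensor:
      - name: "Solar AC Power kW"
        unit_of_measurement: "kW"
        device_class: power
        state_class: measurement
        # Convert the integration's raw watt reading to kilowatts
        state: "{{ (states('sensor.solaredge_ac_power') | float(0)) / 1000 }}"
```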
<p>So, my Home Assistant is up and running. It is great to see such a successful and well documented OSS project with a large community of contributors. I am looking forward to seeing what else I can do with Home Assistant, but still fear I am on the slippery slope to being &lsquo;unable to do the washing up due to an Internet outage&rsquo;.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What are my options authenticating the az devops CLI?</title>
      <link>https://blog.richardfennell.net/posts/what-are-my-options-authenticating-az-devops-api/</link>
      <pubDate>Fri, 31 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-are-my-options-authenticating-az-devops-api/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Edited:&lt;/strong&gt; 4th Feb 2025 to add detail on Azure DevOps Pipelines Service Connection&lt;/p&gt;&lt;/blockquote&gt;
&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;When automating administration tasks via scripts in Azure DevOps in the past I would have commonly used the &lt;a href=&#34;https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.2&#34;&gt;Azure DevOps REST API&lt;/a&gt;. However, today I tend to favour the &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/cli/?view=azure-devops&#34;&gt;Azure DevOps CLI&lt;/a&gt;. The reason for this is that the CLI wraps the REST API in such a way that it is easier to use and more consistent.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<blockquote>
<p><strong>Edited:</strong> 4th Feb 2025 to add detail on Azure DevOps Pipelines Service Connection</p></blockquote>
<h1 id="introduction">Introduction</h1>
<p>When automating administration tasks via scripts in Azure DevOps in the past I would have commonly used the <a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.2">Azure DevOps REST API</a>. However, today I tend to favour the <a href="https://learn.microsoft.com/en-us/azure/devops/cli/?view=azure-devops">Azure DevOps CLI</a>. The reason for this is that the CLI wraps the REST API in such a way that it is easier to use and more consistent.</p>
<p>One of the most noticeable advantages is in the area of authentication. The CLI supports a number of authentication methods that are not directly available to the REST API.</p>
<p>In this post, I will explore the options available to authenticate the Azure DevOps CLI.</p>
<h1 id="authentication-options">Authentication Options</h1>
<p>The Azure CLI has a login command <code>az login</code>, but so does the Azure DevOps extension <code>az devops login</code>. We have the option to use either depending on what we are trying to do.</p>
<p>The key thing to consider is scope: whether you are trying to authenticate with a whole Azure subscription or just an Azure DevOps instance.</p>
<h2 id="azure-cli-personal-access-token-pat">Azure CLI Personal Access Token (PAT)</h2>
<p><a href="https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&amp;tabs=Windows">PATs</a> have historically been the usual means to authenticate with Azure DevOps. They are still perfectly valid, and the easiest way to authenticate for command line usage. However, it is worth noting that <a href="https://devblogs.microsoft.com/devops/reducing-pat-usage-across-azure-devops/">Microsoft are now recommending a move away from PATs</a>.</p>
<p>The basic command to use a PAT is <code>az devops login</code>, when this is run you will be prompted for your PAT.</p>
<p>A script can build on this authentication mechanism and allow the PAT to be passed into the script by echoing it into the <code>az</code> command.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="nb">echo </span> <span class="s2">&#34;&lt;my-pat&gt;&#34;</span> <span class="p">|</span> <span class="n">az</span> <span class="n">devops</span> <span class="n">login</span> <span class="p">-</span><span class="n">-organization</span> <span class="s2">&#34;https://dev.azure.com/myorg&#34;</span>
</span></span></code></pre></div><h2 id="interactive-login-using-entra-id">Interactive Login using Entra ID</h2>
<p>An alternative to using a PAT is to login to the Azure CLI using your corporate ID that has access to your Azure Subscription. This can be done outside of any script by using the <code>az login</code> command prior to running the script, or making this command the first one in the script.</p>
<p>When the <code>az login</code> command is run, if it can, it will launch a browser allowing you to authenticate via Entra ID using your corporate ID. Once done, your session will be valid for a period of time, and you can use any <code>az</code> commands.</p>
<p>If a browser is not available, the <code>az login</code> command initiates the device code flow and instructs the user to open a browser page at <a href="https://aka.ms/devicelogin">https://aka.ms/devicelogin</a> on another device, and then enter the code displayed in the terminal. This flow can be forced by using the command</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"> <span class="n">az</span> <span class="n">login</span> <span class="p">-</span><span class="n">-use-device-code</span>
</span></span></code></pre></div><p>The problem with both those mechanisms is that they are not suitable for a script that needs to run unattended, as they require a user to be present to authenticate.</p>
<h2 id="using-a-service-principle">Using a Service Principal</h2>
<p>A <a href="https://learn.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/service-principal-managed-identity?view=azure-devops">Service Principal</a> is an application registration within Entra ID.</p>
<p>In Azure DevOps, a Service Principal can be granted access to an Azure DevOps Organisation and Projects as you would a user account, i.e. the Service Principal can be granted a license and given permissions inside a project.</p>
<p>You can create a Service Principal using the command</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="n">az</span> <span class="n">ad</span> <span class="nb">sp create-for</span><span class="n">-rbac</span> <span class="p">-</span><span class="n">-name</span> <span class="n">myServicePrincipalName1</span> <span class="p">-</span><span class="n">-role</span> <span class="n">reader</span> <span class="p">-</span><span class="n">-scopes</span> <span class="p">/</span><span class="n">subscriptions</span><span class="p">/</span><span class="mf">00000000</span><span class="p">-</span><span class="mf">0000</span><span class="p">-</span><span class="mf">0000</span><span class="p">-</span><span class="mf">0000</span><span class="p">-</span><span class="mf">000000000000</span> 
</span></span></code></pre></div><blockquote>
<p><strong>Note:</strong> The Service Principal must have at least the &lsquo;reader&rsquo; role in the Azure Subscription associated with the Entra ID</p></blockquote>
<p>When the above command is run, the result will be a block of JSON including an appId, password and tenant ID. These values should be stored securely.</p>
<p>Once the Service Principal has been created, you can grant it permissions to the Azure DevOps Organisation and Projects as needed.</p>
<p>Finally, using the securely stored values, you can now authenticate with the Azure CLI using the form</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="n">az</span> <span class="n">login</span> <span class="p">-</span><span class="n">-service-principal</span> <span class="n">-u</span> <span class="p">&lt;</span><span class="n">appID</span><span class="p">&gt;</span> <span class="n">-p</span> <span class="p">&lt;</span><span class="n">password</span><span class="p">&gt;</span> <span class="p">-</span><span class="n">-tenant</span> <span class="p">&lt;</span><span class="n">tenantid</span><span class="p">&gt;</span>
</span></span></code></pre></div><p>Unlike the interactive login, this mechanism lets you pass the values into a script as parameters, so it can be used for unattended scripts.</p>
<p>The issue with this mechanism is that the password will expire and has to be renewed, so there is a maintenance overhead.</p>
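<p>To make the shape of such an unattended script concrete, here is a minimal sketch. The helper name <code>build_login_cmd</code> and the placeholder values are my own for illustration; in a real script the three values would be supplied from a secret store (for example, secret pipeline variables) and passed straight to <code>az login</code>.</p>

```shell
# Sketch of an unattended Service Principal login. The function only builds
# the command string so its shape is easy to inspect; a real script would run
# 'az login' directly with values read from a secret store, never hard-coded.
build_login_cmd() {
  # $1 = appId, $2 = password, $3 = tenant ID
  printf 'az login --service-principal -u %s -p %s --tenant %s' "$1" "$2" "$3"
}

build_login_cmd "11111111-1111-1111-1111-111111111111" "placeholder-secret" "22222222-2222-2222-2222-222222222222"
```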
<h2 id="managed-identity">Managed Identity</h2>
<p>The final option is to use an <a href="https://learn.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/service-principal-managed-identity?toc=%2Fazure%2Fdevops%2Fmarketplace-extensibility%2Ftoc.json&amp;view=azure-devops">Azure Managed Identity</a>.</p>
<p>A <a href="https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview">Managed Identity</a> is created as a resource in Azure. Like a Service Principal, a Managed Identity can be granted a license and permissions in Azure DevOps.</p>
<p>To login with a Managed Identity on an Azure hosted VM you can use the command</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="n">az</span> <span class="n">login</span> <span class="p">-</span><span class="n">-identity</span>
</span></span></code></pre></div><p>This works because the Managed Identity is associated with the Azure VM on which you are running the az command.</p>
<p>The need to run on an Azure resource is the limitation of Managed Identities, so they are not appropriate if you wish to run your Azure CLI script from locations outside Azure.</p>
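<p>One hedged way to check whether the Managed Identity route is even available is to probe the Azure Instance Metadata Service (IMDS), which is what <code>az login --identity</code> relies on and which is only reachable from inside Azure. The link-local address and API version below are the documented IMDS values; the messages are my own.</p>

```shell
# Probe the Azure Instance Metadata Service; it only answers from inside Azure,
# so a failure here means 'az login --identity' cannot work on this machine.
if curl -s --max-time 2 -H "Metadata: true" \
     "http://169.254.169.254/metadata/instance?api-version=2021-02-01" \
     -o /dev/null; then
  imds_reachable=yes
  echo "IMDS reachable - 'az login --identity' should work here"
else
  imds_reachable=no
  echo "IMDS not reachable - use another authentication mechanism"
fi
```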
<h1 id="so-what-do-i-use">So what do I use?</h1>
<p>My most common use-case is to run a script that uses the Azure CLI from within an Azure DevOps Pipeline. This requirement leads to two ways of working.</p>
<h2 id="when-i-need-to-access-an-azure-subscription">When I need to access an Azure Subscription</h2>
<p>If my script needs permissions beyond Azure DevOps, interacting with a range of Azure resources in a subscription, I could create a Service Principal, grant it permissions to the required resources in Azure, and then pass the Service Principal values into the script as parameters from secret Azure DevOps Pipeline variables.</p>
<p>However, that is a lot of work, and there is a better option built into Azure DevOps: <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops">Service Connections</a>. When you add an &lsquo;Azure Resource Manager&rsquo; service connection you can pick from a variety of options such as App Registration (Service Principal) or Managed Identity. Most of the available options automatically create the required Azure resources and can support <a href="https://devblogs.microsoft.com/devops/public-preview-of-workload-identity-federation-for-azure-pipelines/">workload identity federation</a>, so you don&rsquo;t have to worry about expiring secrets.</p>
<p>You can then very easily run your scripts in a pre-authenticated PowerShell session using the <code>AzurePowerShell</code> task</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">AzurePowerShell@5</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Run a Script against Azure Resources</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">azureSubscription</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;MyServiceConnection&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">azurePowerShellVersion</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;LatestVersion&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">ScriptType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;InlineScript&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">Inline</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      # your az commands, no need to login as it is already done
</span></span></span><span class="line"><span class="cl"><span class="sd">      az ....</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">FailOnStandardError</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">pwsh</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span></code></pre></div><h2 id="when-i-only-need-to-access-azure-devops">When I only need to access Azure DevOps</h2>
<p>However, my scripts commonly only need to interact with Azure DevOps, and in this case I just use the Build Agent token as a PAT. This avoids the need to set anything up, as you can easily pass the token into the script as an environment variable.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="nt">trigger</span><span class="p">:</span><span class="w"> </span><span class="l">none</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Increment PBI count&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inline&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">pwsh</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="l">|   </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># I don&#39;t need to call az login as it is done automatically</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="l">az devops ....</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">AZURE_DEVOPS_EXT_PAT</span><span class="p">:</span><span class="w"> </span><span class="l">$(System.AccessToken)</span><span class="w">
</span></span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>Passing Azure DevOps WI field names in PowerShell to Az DevOps CLI as variables</title>
      <link>https://blog.richardfennell.net/posts/passing-ado-wi-fieldnames-in-powershell-to-azcli/</link>
      <pubDate>Mon, 27 Jan 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/passing-ado-wi-fieldnames-in-powershell-to-azcli/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;The Azure DevOps CLI command &lt;a href=&#34;https://learn.microsoft.com/en-us/cli/azure/boards/work-item?view=azure-cli-latest#az-boards-work-item-update&#34;&gt;az boards work-item update&lt;/a&gt; can take a list of fields as a set of field=value pairs e.g.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-ps&#34; data-lang=&#34;ps&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nf&#34;&gt;az&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;boards&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;work-item&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;update&lt;/span&gt;  &lt;span class=&#34;nf&#34;&gt;--id&lt;/span&gt; &lt;span class=&#34;mf&#34;&gt;123&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;--fields&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;Microsoft.VSTS.Scheduling.Effort=10&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;However, if you try to replace the field name with a variable in PowerShell like this&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-ps&#34; data-lang=&#34;ps&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nf&#34;&gt;$fieldname&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;&amp;#34;Microsoft.VSTS.Scheduling.Effort&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nf&#34;&gt;az&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;boards&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;work-item&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;update&lt;/span&gt;  &lt;span class=&#34;nf&#34;&gt;--id&lt;/span&gt; &lt;span class=&#34;mf&#34;&gt;123&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;--fields&lt;/span&gt; &lt;span class=&#34;nf&#34;&gt;$fieldname=10&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;you will get an error like this&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &amp;ndash;fields argument should consist of space separated &amp;ldquo;field=value&amp;rdquo; pairs.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id=&#34;the-solution&#34;&gt;The Solution&lt;/h2&gt;
&lt;p&gt;The solution is simple, and is the same one required if you wish to pass multiple fields into the command: you need to wrap the set of field=value pairs in quotes e.g.&lt;/p&gt;
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>The Azure DevOps CLI command <a href="https://learn.microsoft.com/en-us/cli/azure/boards/work-item?view=azure-cli-latest#az-boards-work-item-update">az boards work-item update</a> can take a list of fields as a set of field=value pairs e.g.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ps" data-lang="ps"><span class="line"><span class="cl"><span class="nf">az</span> <span class="nf">boards</span> <span class="nf">work-item</span> <span class="nf">update</span>  <span class="nf">--id</span> <span class="mf">123</span> <span class="nf">--fields</span> <span class="nf">Microsoft.VSTS.Scheduling.Effort=10</span>
</span></span></code></pre></div><p>However, if you try to replace the field name with a variable in PowerShell like this</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ps" data-lang="ps"><span class="line"><span class="cl"><span class="nf">$fieldname</span> <span class="nf">=</span> <span class="nf">&#34;Microsoft.VSTS.Scheduling.Effort&#34;</span>
</span></span><span class="line"><span class="cl"><span class="nf">az</span> <span class="nf">boards</span> <span class="nf">work-item</span> <span class="nf">update</span>  <span class="nf">--id</span> <span class="mf">123</span> <span class="nf">--fields</span> <span class="nf">$fieldname=10</span>
</span></span></code></pre></div><p>you will get an error like this</p>
<blockquote>
<p>The &ndash;fields argument should consist of space separated &ldquo;field=value&rdquo; pairs.</p></blockquote>
<h2 id="the-solution">The Solution</h2>
<p>The solution is simple, and is the same one required if you wish to pass multiple fields into the command: you need to wrap the set of field=value pairs in quotes e.g.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ps" data-lang="ps"><span class="line"><span class="cl"><span class="nf">$fieldname</span> <span class="nf">=</span> <span class="nf">&#34;Microsoft.VSTS.Scheduling.Effort&#34;</span>
</span></span><span class="line"><span class="cl"><span class="nf">az</span> <span class="nf">boards</span> <span class="nf">work-item</span> <span class="nf">update</span>  <span class="nf">--id</span> <span class="mf">123</span> <span class="nf">--fields</span> <span class="nf">&#34;$fieldname=10&#34;</span>
</span></span></code></pre></div><p>Once this is done the command will work as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Signing files in GitHub Actions</title>
      <link>https://blog.richardfennell.net/posts/signing-files-in-github-actions/</link>
      <pubDate>Mon, 20 Jan 2025 00:00:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/signing-files-in-github-actions/</guid>
      <description>&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;
&lt;p&gt;I recently &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/why-cant-i-digitally-sign-files-in-my-pipeline/&#34;&gt;wrote about the changes&lt;/a&gt; I had had to make to our Azure DevOps pipelines to address the changes required when code signing with a new DigiCert certificate due to new &lt;a href=&#34;https://knowledge.digicert.com/general-information/new-private-key-storage-requirement-for-standard-code-signing-certificates-november-2022&#34;&gt;private key storage requirements for Code Signing certificates&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Today I had to do the same for a GitHub Actions pipeline. The process is very similar, but there are a few differences in the syntax and the way the secrets are stored.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="background">Background</h1>
<p>I recently <a href="https://blogs.blackmarble.co.uk/rfennell/why-cant-i-digitally-sign-files-in-my-pipeline/">wrote about the changes</a> I had had to make to our Azure DevOps pipelines to address the changes required when code signing with a new DigiCert certificate due to new <a href="https://knowledge.digicert.com/general-information/new-private-key-storage-requirement-for-standard-code-signing-certificates-november-2022">private key storage requirements for Code Signing certificates</a></p>
<p>Today I had to do the same for a GitHub Actions pipeline. The process is very similar, but there are a few differences in the syntax and the way the secrets are stored.</p>
<h1 id="the-solution">The Solution</h1>
<h2 id="step-1-create-a-composite-action">Step 1: Create a Composite Action</h2>
<p>I stored these steps as a <a href="https://docs.github.com/en/actions/sharing-automations/creating-actions/creating-a-composite-action">Composite Action</a> for easier reuse, but you could put them within a workflow if you prefer.</p>
<p>The composite action installs the DigiCert tools in the first step, and then finds and signs the files in the second.</p>
<blockquote>
<p>Yes, I know I could just have a single step, but I wanted to follow the Azure DevOps flow as closely as possible.</p></blockquote>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Sign Code with DigiCert&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Signs the contents of a folder with DigiCert&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">digicert-api-key</span><span class="p">:</span><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;The DigiCert API key&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">tools-download-url</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;The URL for the DigiCert Windows MSI&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">default</span><span class="p">:</span><span class="w"> </span><span class="l">https://one.digicert.com/signingmanager/api-ui/v1/releases/smtools-windows-x64.msi/download</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">signer_p12_file</span><span class="p">:</span><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;The DigiCert Signer P12 File as base64 encoded string&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">cert_crt_file</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;The DigiCert Certificate CRT File as base64 encoded string&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">digicert_host</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;The DigiCert Host&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">default</span><span class="p">:</span><span class="w"> </span><span class="l">https://clientauth.one.digicert.com</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">keypair_alias</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;DigiCert certifiate alias&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">password</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Digicert password&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">file-path</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">description</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Path to scan for files&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">runs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">using</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;composite&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Install DigiCert Client Tools</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">shell</span><span class="p">:</span><span class="w"> </span><span class="l">pwsh</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l">| </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="l">curl -X GET  ${{ inputs.tools-download-url }} -H &#34;x-api-key:${{ inputs.digicert-api-key }} &#34; -o smtools-windows-x64.msi </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="l">msiexec /i smtools-windows-x64.msi /quiet /qn </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Code Sign Files</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">shell</span><span class="p">:</span><span class="w"> </span><span class="l">pwsh</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Define the base path where signtool.exe is located
</span></span></span><span class="line"><span class="cl"><span class="sd">      $basePath = &#34;C:\Program Files (x86)\Windows Kits\10\bin&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Filtering by architecture here; could also filter by SDK version, depending on needs
</span></span></span><span class="line"><span class="cl"><span class="sd">      $preferredVersion = &#34;x64&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Get the matching signtool path (pick the last in the list if multiple returned)
</span></span></span><span class="line"><span class="cl"><span class="sd">      $signtoolPath = (Get-ChildItem -Path $basePath -Recurse -Filter &#34;signtool.exe&#34; -File | Where-Object { $_.FullName -like &#34;*\$preferredVersion\*&#34; })[-1] | Select-Object -ExpandProperty FullName
</span></span></span><span class="line"><span class="cl"><span class="sd">      write-host &#34;Found signtool at $signtoolPath&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      
</span></span></span><span class="line"><span class="cl"><span class="sd">      set-content -Path &#39;signer.p12.base64&#39; -Value &#39;${{ inputs.signer_p12_file }}&#39; 
</span></span></span><span class="line"><span class="cl"><span class="sd">      certutil -decode -f &#39;signer.p12.base64&#39; &#39;DigiCert Signer Certificate_pkcs12.p12&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      set-content -Path &#39;cert.crt.base64&#39; -Value &#39;${{ inputs.cert_crt_file }}&#39; 
</span></span></span><span class="line"><span class="cl"><span class="sd">      certutil -decode -f &#39;cert.crt.base64&#39; &#39;cert.crt&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Find the files that match 
</span></span></span><span class="line"><span class="cl"><span class="sd">      write-host &#34;Finding files that match&#39;the path ./src/sample/bin/**/**/*.dll&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Get-ChildItem -Path &#34;${{ inputs.file-path }}&#34; | ForEach-Object {
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filePath = $_.FullName
</span></span></span><span class="line"><span class="cl"><span class="sd">        write-host &#34;Signing file $filePath&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        &amp; $signtoolPath sign /v /tr http://timestamp.digicert.com /td SHA256 /fd SHA256 /csp &#34;DigiCert Signing Manager KSP&#34; /kc &#34;${{ inputs.keypair_alias}}&#34; /f &#34;cert.crt&#34; $filePath
</span></span></span><span class="line"><span class="cl"><span class="sd">      }   </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">SM_HOST</span><span class="p">:</span><span class="w"> </span><span class="l">${{ inputs.digicert_host }} </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">SM_API_KEY</span><span class="p">:</span><span class="w"> </span><span class="l">${{ inputs.digicert-api-key }} </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">SM_CLIENT_CERT_PASSWORD</span><span class="p">:</span><span class="w"> </span><span class="l">${{ inputs.password }} </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">SM_CLIENT_CERT_FILE</span><span class="p">:</span><span class="w"> </span><span class="l">DigiCert Signer Certificate_pkcs12.p12</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">SM_TLS_SKIP_VERIFY</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span></code></pre></div><h2 id="step-2-store-the-certificates-as-secrets">Step 2: Store the Certificates as Secrets</h2>
<p>As with the Azure DevOps implementation, we now have a pair of files: the .CRT certificate and the .P12 signer’s certificate.</p>
<p>GitHub does not have a feature like <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/secure-files?view=azure-devops">Azure DevOps Secure Files</a>. So, we have to store the certificates as secrets.</p>
<p>To store these certificates as GitHub Secrets you need to encode the file content into a base64 string and then add it as a secret in your GitHub Repository, Organisation or Enterprise. Use whichever level works best for you; in my case I chose to store them as Organisation secrets.</p>
<p>Here are the steps:</p>
<ol>
<li>Open Command Prompt or PowerShell.</li>
<li>Use the <code>certutil</code> command to encode the file:
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">certutil -encode yourfile.crt yourfile.crt.base64
</span></span></code></pre></div></li>
<li>Open the generated <code>yourfile.crt.base64</code> file in a text editor (e.g., Notepad) and copy its content.</li>
<li>Add the secret to GitHub at the level you chose:
<ul>
<li>Name your secret (e.g. CRT_FILE).</li>
<li>Paste the base64-encoded content into the Value field.</li>
</ul>
</li>
</ol>
<p>Repeat this process for both the .CRT and the .P12 file.</p>
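The steps above can be sketched end to end. On a Linux or macOS machine (or in Git Bash on Windows), the <code>base64</code> utility plays the role of <code>certutil</code>; this is a self-contained sketch, not from the original workflow, and the file here is a dummy stand-in for the real certificate:

```shell
# Dummy stand-in for the real certificate file (the real one is binary)
printf 'dummy certificate bytes' > yourfile.crt

# Encode to a single-line base64 string, ready to paste into a GitHub secret
# (equivalent in spirit to: certutil -encode yourfile.crt yourfile.crt.base64)
base64 -w 0 yourfile.crt > yourfile.crt.base64

# The workflow later reverses this to recreate the binary file before signing
base64 --decode yourfile.crt.base64 > roundtrip.crt

# The round trip must reproduce the original file exactly
cmp yourfile.crt roundtrip.crt && echo "round trip OK"
```

The same round trip applies to the .P12 file; the decode step at the end is what has to happen inside the workflow before the signing tools can read the certificate.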
<h2 id="step-3-store-the-other-digicert-settings-as-secrets-and-variables">Step 3: Store the other DigiCert settings as secrets and variables</h2>
<p>The other DigiCert settings can be stored as a mixture of secrets and variables. Use variables if reading the value in the log is not deemed a security risk.</p>
<p>The secrets:</p>
<ul>
<li>DIGICERT_API_KEY</li>
<li>DIGICERT_SIGNER_P12_FILE as a base64-encoded string (see above)</li>
<li>DIGICERT_CERT_CRT_FILE as a base64-encoded string (see above)</li>
<li>DIGICERT_CLIENT_CERT_PASSWORD</li>
</ul>
<p>and as a variable:</p>
<ul>
<li>DIGICERT_KEYPAIR_ALIAS</li>
</ul>
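Before any signing can happen, the base64 secret strings have to be decoded back into binary certificate files. A hedged sketch of what such a step inside the composite action might look like — the step name, output file names, and use of <code>inputs</code> here are illustrative, not the action&rsquo;s actual implementation:

```yaml
- name: Recreate certificate files from base64 secrets
  shell: bash
  run: |
    # Decode the base64 secret values back into binary certificate files
    echo "${{ inputs.cert_crt_file }}" | base64 --decode > cert.crt
    echo "${{ inputs.signer_p12_file }}" | base64 --decode > signer.p12
```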
<h2 id="step-4-using-the-composite-action-in-a-workflow">Step 4: Using the Composite Action in a Workflow</h2>
<p>Finally, pull it all together in your workflow:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="w"> </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Sign files with Digicert</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">blackmarble/Sign-Code-With-DigiCert@v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">digicert-api-key</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;${{ secrets.DIGICERT_API_KEY }}&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">signer_p12_file</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;${{ secrets.DIGICERT_SIGNER_P12_FILE }}&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">cert_crt_file</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;${{ secrets.DIGICERT_CERT_CRT_FILE }}&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">keypair_alias</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;${{ vars.DIGICERT_KEYPAIR_ALIAS }}&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">password</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;${{ secrets.DIGICERT_CLIENT_CERT_PASSWORD }}&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">file-path</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;./src/Sample/bin/**/**/*.dll&#39;</span><span class="w">
</span></span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>Re-authenticating Microsoft Authenticator after swapping your phone</title>
      <link>https://blog.richardfennell.net/posts/re-authenticating-mfa-after-swapping-your-phone/</link>
      <pubDate>Mon, 20 Jan 2025 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/re-authenticating-mfa-after-swapping-your-phone/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This is one of those posts that is more a note to self as I keep forgetting how to do this, but I hope it helps others.&lt;/p&gt;&lt;/blockquote&gt;
&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;
&lt;p&gt;I use &lt;a href=&#34;https://www.microsoft.com/en-gb/security/mobile-authenticator-app&#34;&gt;Microsoft&amp;rsquo;s Authenticator&lt;/a&gt; to provide MFA on a number of accounts. I recently swapped my Android phone and had to, after restoring a backup, re-authenticate some accounts on the new device.&lt;/p&gt;
&lt;p&gt;This was a simple process for most accounts, just a case of validating the code generated by the new device, but I had a problem with the entries where my Black Marble Entra ID account was &lt;a href=&#34;https://learn.microsoft.com/en-us/entra/external-id/b2b-quickstart-add-guest-users-portal&#34;&gt;a guest&lt;/a&gt; in other companies&amp;rsquo; Entra ID directories.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<blockquote>
<p>This is one of those posts that is more a note to self as I keep forgetting how to do this, but I hope it helps others.</p></blockquote>
<h1 id="background">Background</h1>
<p>I use <a href="https://www.microsoft.com/en-gb/security/mobile-authenticator-app">Microsoft&rsquo;s Authenticator</a> to provide MFA on a number of accounts. I recently swapped my Android phone and had to, after restoring a backup, re-authenticate some accounts on the new device.</p>
<p>This was a simple process for most accounts, just a case of validating the code generated by the new device, but I had a problem with the entries where my Black Marble Entra ID account was <a href="https://learn.microsoft.com/en-us/entra/external-id/b2b-quickstart-add-guest-users-portal">a guest</a> in other companies&rsquo; Entra ID directories.</p>
<p>I could not remember the process.</p>
<h1 id="the-solution">The Solution</h1>
<p>When the Authenticator app shows the message</p>
<blockquote>
<p>Action Required</p>
<p>Scan the QR code provided by your organization to finish recovering your account</p></blockquote>
<p>and the account is in the form <code>richard_domain.com#EXT#@anothercompany.onmicrosoft.com</code></p>
<p>The steps are as follows (assuming you still have access to the old MFA device):</p>
<ol>
<li>In a browser open <a href="https://mysignins.microsoft.com/security-info">https://mysignins.microsoft.com/security-info</a></li>
<li>Sign in with your own company&rsquo;s Entra ID account, i.e. in this example <code>richard@domain.com</code></li>
<li>Click the &lsquo;organisation&rsquo; button (an icon of a directory tree, in the top right near the help button).
<img alt="Organisation Button" loading="lazy" src="/images/rfennell/mfa-password-reset-1.png"></li>
<li>A list of organisations you are a member of will appear as a panel on the right of the page
<img alt="Organisation List" loading="lazy" src="/images/rfennell/mfa-password-reset-2.png"></li>
<li>Pick the required organisation from the menu on the right; in my <code>richard_domain.com#EXT#@anothercompany.onmicrosoft.com</code> example, this would be the organisation called <code>anothercompany</code></li>
<li>You will need to authenticate with the old MFA device</li>
<li>On the refreshed <a href="https://mysignins.microsoft.com/security-info">https://mysignins.microsoft.com/security-info</a> page for the selected organisation you can now use the <code>add a new device</code> process to generate the QR code</li>
<li>On the new MFA device, pick the account to be re-authenticated and scan the QR code. This is all that is required to add the new device as a sign-in method (note that you do not have to complete the new-device wizard in the browser, but it is probably a good idea as a final check).</li>
</ol>
<blockquote>
<p>If you don&rsquo;t have access to the old MFA device, you will need to contact the organisation&rsquo;s IT support to get them to reset the MFA for you.</p></blockquote>
]]></content:encoded>
    </item>
    <item>
      <title>Why am I getting no private key is available error when I try to digitally sign files in my Azure DevOps Pipeline?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-digitally-sign-files-in-my-pipeline/</link>
      <pubDate>Wed, 11 Dec 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-digitally-sign-files-in-my-pipeline/</guid>
      <description>&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;
&lt;p&gt;It is becoming increasingly important to sign files digitally to ensure that they have not been tampered with, to secure the software supply chain. This is something we have done for a good while as a step in our Azure DevOps pipelines. However, recent(ish) changes in the way certificates are issued have meant we have had to revise our approach.&lt;/p&gt;
&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;To sign our files, we used to use a .PFX file, stored as an Azure DevOps secure file, that contained the public and private keys and was accessed using a password.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="background">Background</h1>
<p>It is becoming increasingly important to sign files digitally to ensure that they have not been tampered with, to secure the software supply chain. This is something we have done for a good while as a step in our Azure DevOps pipelines. However, recent(ish) changes in the way certificates are issued have meant we have had to revise our approach.</p>
<h1 id="the-problem">The Problem</h1>
<p>To sign our files, we used to use a .PFX file, stored as an Azure DevOps secure file, that contained the public and private keys and was accessed using a password.</p>
<p>However, when we renewed our code signing certificate with DigiCert we found this approach was no longer viable due to <a href="https://knowledge.digicert.com/general-information/new-private-key-storage-requirement-for-standard-code-signing-certificates-november-2022">new private key storage requirements for Code Signing certificates</a>.</p>
<p>The basic issue is that now, when we sign a file, the signing tool needs to make a call back to a secure location to validate the certificate; in our case this is DigiCert&rsquo;s KeyLocker service.</p>
<h1 id="the-solution">The Solution</h1>
<p>This required some changes to our pipelines:</p>
<ol>
<li>Store our .CRT certificate file as a secure file in Azure DevOps</li>
<li>Store our .p12 signer&rsquo;s certificate file as a secure file in Azure DevOps</li>
<li>Store our DigiCert account settings as Azure DevOps pipeline variables (a mixture of standard and secret ones)</li>
<li>Update our pipeline as follows</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">SSMClientToolsSetup@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Install DigiCert Client Tools </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DownloadSecureFile@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Download DigiCert Code Signing Certificate File</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">secureFile</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;DigicertCodeSigningCert.crt&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DownloadSecureFile@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Download DigiCert Signer Certificate File</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">secureFile</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;DigiCertSignerCertificate.p12&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Code Sign Files</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inline&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Define the base path where signtool.exe is located
</span></span></span><span class="line"><span class="cl"><span class="sd">      $basePath = &#34;C:\Program Files (x86)\Windows Kits\10\bin&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Filtering via version and architecture, could use just one of these, depends on needs
</span></span></span><span class="line"><span class="cl"><span class="sd">      $preferredVersion = &#34;x64&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Get the matching signtool path (pick the last in the list if multiple returned)
</span></span></span><span class="line"><span class="cl"><span class="sd">      $signtoolPath = (Get-ChildItem -Path $basePath -Recurse -Filter &#34;signtool.exe&#34; -File | Where-Object { $_.FullName -like &#34;*\$preferredVersion\*&#34; })[-1] | Select-Object -ExpandProperty FullName
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      # Find the files that match filter that need to be signed
</span></span></span><span class="line"><span class="cl"><span class="sd">      write-host &#34;Finding files to sign&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Get-ChildItem -Path &#34;$(Build.SourcesDirectory)/myproject/**/*.exe&#34; | ForEach-Object {
</span></span></span><span class="line"><span class="cl"><span class="sd">         $filePath = $_.FullName
</span></span></span><span class="line"><span class="cl"><span class="sd">         write-host &#34;Signing file $filePath&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">         &amp; $signtoolPath sign /v /tr http://timestamp.digicert.com /td SHA256 /fd SHA256 /csp &#34;DigiCert Signing Manager KSP&#34; /kc &#34;$(SM_KEYPAIR_ALIAS)&#34; /f &#34;$(Agent.TempDirectory)\DigicertCodeSigningCert.crt&#34; $filePath
</span></span></span><span class="line"><span class="cl"><span class="sd">       }   </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">SM_HOST</span><span class="p">:</span><span class="w"> </span><span class="l">$(SM_HOST) </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">SM_API_KEY</span><span class="p">:</span><span class="w"> </span><span class="l">$(SM_API_KEY) </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">SM_CLIENT_CERT_PASSWORD</span><span class="p">:</span><span class="w"> </span><span class="l">$(SM_CLIENT_CERT_PASSWORD) </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">SM_CLIENT_CERT_FILE</span><span class="p">:</span><span class="w"> </span><span class="l">$(Agent.TempDirectory)\DigiCert Signer Certificate_pkcs12.p12</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="nt">SM_TLS_SKIP_VERIFY</span><span class="p">:</span><span class="w"> </span><span class="l">$(SM_TLS_SKIP_VERIFY)</span><span class="w">
</span></span></span></code></pre></div><h1 id="the-gotcha-with-variables">The gotcha with variables</h1>
<p>I have blogged a number of times before about the need to <a href="https://blogs.blackmarble.co.uk/rfennell/getting-confused-over-azure-devops-pipeline-variable-evaluation/">be careful with the syntax for Azure DevOps variables</a>. Guess what, I got caught out by this again!</p>
<p>I had initially used the <code>${{ variables.XXX }}</code> format for injecting the Azure DevOps variables as PowerShell environment variables e.g.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w"> </span><span class="nt">SM_HOST</span><span class="p">:</span><span class="w"> </span><span class="l">${{ variables.SM_HOST }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> </span><span class="nt">SM_API_KEY</span><span class="p">:</span><span class="w"> </span><span class="l">${{ variables.SM_API_KEY }}</span><span class="w">
</span></span></span></code></pre></div><p>This worked fine for the standard variables, but not for the secret ones. The secret ones were not being injected into the script as environment variables so when we tried to sign a file we got the error</p>
<blockquote>
<p>SignTool Error: No private key is available.</p></blockquote>
<p>This was a case of trying to be too clever with the syntax; the correct approach here is to use the standard macro syntax <code>$(XXX)</code> for all variables, especially the secret ones.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w"> </span><span class="nt">SM_HOST</span><span class="p">:</span><span class="w"> </span><span class="l">$(SM_HOST)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> </span><span class="nt">SM_API_KEY</span><span class="p">:</span><span class="w"> </span><span class="l">$(SM_API_KEY)</span><span class="w">
</span></span></span></code></pre></div><p>Once this was done, the signing process worked as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Inject a step into Web Deploy</title>
      <link>https://blog.richardfennell.net/posts/inject-a-step-into-msdeploy/</link>
      <pubDate>Wed, 04 Dec 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/inject-a-step-into-msdeploy/</guid>
      <description>&lt;p&gt;I really like &lt;a href=&#34;https://www.iis.net/downloads/microsoft/web-deploy&#34;&gt;Web Deploy&lt;/a&gt;, it is a powerful tool for &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/porting-my-visual-studio-parameters-xml-generator-tool-to-visual-studio-2022-preview/?query=parameters.&#34;&gt;injecting parameters&lt;/a&gt; whilst deploying web applications to both Azure or an on-premise IIS Server.&lt;/p&gt;
&lt;p&gt;Every project is different, and sometimes you need to be able to inject a step into the Web Deploy package creation process to complete some extra step. This can be done by adding a target to the &lt;code&gt;.csproj&lt;/code&gt; project file.&lt;/p&gt;
&lt;p&gt;The following example shows how you could sign the assemblies before the Web Deploy package is created.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I really like <a href="https://www.iis.net/downloads/microsoft/web-deploy">Web Deploy</a>, it is a powerful tool for <a href="https://blogs.blackmarble.co.uk/rfennell/porting-my-visual-studio-parameters-xml-generator-tool-to-visual-studio-2022-preview/?query=parameters.">injecting parameters</a> whilst deploying web applications to both Azure or an on-premise IIS Server.</p>
<p>Every project is different, and sometimes you need to be able to inject a step into the Web Deploy package creation process to complete some extra step. This can be done by adding a target to the <code>.csproj</code> project file.</p>
<p>The following example shows how you could sign the assemblies before the Web Deploy package is created.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"> <span class="nt">&lt;Target</span> <span class="na">Name=</span><span class="s">&#34;SignWebExe&#34;</span> <span class="na">AfterTargets=</span><span class="s">&#34;GenerateMsdeployManifestFiles&#34;</span> <span class="na">BeforeTargets=</span><span class="s">&#34;PackageUsingManifest&#34;</span> <span class="na">Condition=</span><span class="s">&#34;&#39;$(Configuration)&#39; == &#39;Release&#39;&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;PropertyGroup&gt;</span>
</span></span><span class="line"><span class="cl">      <span class="nt">&lt;Cmd&gt;</span>signtool.exe sign /debug /f &#34;$(certPath)&#34; /p &#34;$(certPassword)&#34; &#34;$(ProjectDir)obj\$(Configuration)\Package\PackageTmp\bin\MyNamespace*.dll&#34;<span class="nt">&lt;/Cmd&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;/PropertyGroup&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;Message</span> <span class="na">Text=</span><span class="s">&#34;Signing web deploy executable with command: $(Cmd)&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;Exec</span> <span class="na">Command=</span><span class="s">&#34;$(Cmd)&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/Target&gt;</span>
</span></span></code></pre></div><p>The required parameters for this extra step are then passed in as MSBuild arguments. For example, in an Azure DevOps pipeline, you could use the following task:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">VSBuild@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Build Core Services Solution&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">solution</span><span class="p">:</span><span class="w"> </span><span class="l">src/MySolution.sln</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">msbuildArgs</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;/p:DeployOnBuild=true;PublishProfile=Release /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DeployIisAppPath=&#34;__SITENAME__&#34; /p:certPassword=&#34;$(SigningPassword)&#34; /p:certPath=&#34;$(SigningCertFilePath)&#34;&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">platform</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(BuildPlatform)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">configuration</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(BuildConfiguration)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">clean</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>ProjectFileIndexer exceptions in SonarQube</title>
      <link>https://blog.richardfennell.net/posts/projectfileindexer-exceptions-in-sonarqube/</link>
      <pubDate>Thu, 31 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/projectfileindexer-exceptions-in-sonarqube/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;We are running our &lt;a href=&#34;https://www.sonarsource.com/products/sonarqube/&#34;&gt;SonarQube&lt;/a&gt; instance as an &lt;a href=&#34;https://devblogs.microsoft.com/premier-developer/sonarqube-hosted-on-azure-app-service/&#34;&gt;Azure hosted Docker container&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Over the past few weeks we have been seeing intermittent occurrences of the &lt;code&gt;ProjectFileIndexer&lt;/code&gt; exception during the SonarQube analysis step in our Azure DevOps pipelines.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;##[error]java.lang.IllegalStateException: Unable to load component class org.sonar.scanner.scan.filesystem.ProjectFileIndexer
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When I looked closer at the exception stack, I could see at the bottom there was always a timeout error when trying to access the &lt;code&gt;project.protobuf&lt;/code&gt; file from SonarQube.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>We are running our <a href="https://www.sonarsource.com/products/sonarqube/">SonarQube</a> instance as an <a href="https://devblogs.microsoft.com/premier-developer/sonarqube-hosted-on-azure-app-service/">Azure hosted Docker container</a>.</p>
<p>Over the past few weeks we have been seeing intermittent occurrences of the <code>ProjectFileIndexer</code> exception during the SonarQube analysis step in our Azure DevOps pipelines.</p>
<pre tabindex="0"><code>##[error]java.lang.IllegalStateException: Unable to load component class org.sonar.scanner.scan.filesystem.ProjectFileIndexer
</code></pre><p>When I looked closer at the exception stack, I could see at the bottom there was always a timeout error when trying to access the <code>project.protobuf</code> file from SonarQube.</p>
<pre tabindex="0"><code>Caused by: java.lang.IllegalStateException: Fail to request url: https://sonarqube.mydomain.co.uk/batch/project.protobuf?key=mykey
</code></pre><p>If I tried to open the URL in a browser, authenticated with SonarQube, most of the time the expected <code>project.protobuf</code> file was returned, but sometimes it was not.</p>
<p>Also, at random times, the SonarQube UI would take many seconds to refresh when I selected a project.</p>
<h2 id="the-solution">The Solution</h2>
<p>In the past I have usually found any issues with SonarQube are <a href="https://blogs.blackmarble.co.uk/rfennell/its-the-sonarqube-indexes-again/">ElasticSearch related</a>, but not so this time.</p>
<p>The issue turned out to be a lack of SQL resources.</p>
<p>Our SonarQube instance was running on</p>
<ul>
<li>Azure Web App (Premium V3 P1V3, 2 vCPU &amp; 8GB)</li>
<li>Azure SQL DB (Standard S2, 50 DTU)</li>
<li>Logs/config/ES indexes etc. on Azure (Standard General Purpose V2) file storage</li>
</ul>
<p>When I looked at the <a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/query-performance-insight-use?view=azuresql">Azure SQL Query Performance Insights</a>, I could see that the SQL DTU resources were maxing out at 100% for long periods of time, especially when the SonarQube analysis was restarted.</p>
<p>I increased the SQL resource to a Standard S3 (100 DTU); the scaling took 15 minutes or so. Once this was done, the issue seemed to be resolved, though it is always hard to be sure with intermittent issues.</p>
<p>My assumption is that there are some SQL queries that are slow to run and potentially block other queries. By increasing the SQL resources, these queries could be processed more quickly, and the blocking reduced to a level where it was not an issue.</p>
<p>You can see the impact of the change in the SQL DTU usage graph below. After the increase in SQL resources, we are no longer maxing out the server DTUs, so the SonarQube analysis can run without issues.</p>
<p><img alt="SQL DTU usage graph" loading="lazy" src="/images/rfennell/sonarpref1.png"></p>
<p>I think the key takeaway is to make sure you have enough SQL resources; don&rsquo;t max out your assigned DTUs, as this will cause issues.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why is my SnipeIT instance suddenly slow?</title>
      <link>https://blog.richardfennell.net/posts/why-is-my-snipeit-instance-suddenly-slow/</link>
      <pubDate>Mon, 28 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-is-my-snipeit-instance-suddenly-slow/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;As I have &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/setting-up-snipe-it-on-azure/&#34;&gt;blogged previously&lt;/a&gt;, we run a SnipeIT instance to manage our IT assets, hosted in Azure using Docker.&lt;/p&gt;
&lt;p&gt;This has been working well for us for the past year, but recently we have noticed that the system has become very slow to respond.&lt;/p&gt;
&lt;p&gt;Looking on the Azure portal, we can see that around the 15th of October the web app&amp;rsquo;s response times went from milliseconds to tens of seconds&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>As I have <a href="https://blogs.blackmarble.co.uk/rfennell/setting-up-snipe-it-on-azure/">blogged previously</a>, we run a SnipeIT instance to manage our IT assets, hosted in Azure using Docker.</p>
<p>This has been working well for us for the past year, but recently we have noticed that the system has become very slow to respond.</p>
<p>Looking on the Azure portal, we can see that around the 15th of October the web app&rsquo;s response times went from milliseconds to tens of seconds</p>
<p><img alt="Slow web response times" loading="lazy" src="/images/rfennell/snipe-slow1.png"></p>
<h2 id="investigation">Investigation</h2>
<p>The first question we asked, as you always should, was what had changed?</p>
<p>The answer was nothing obvious.</p>
<ul>
<li>We had not changed the Azure Web App or MySQL SKUs</li>
<li>It is true that we had not updated the SnipeIT version since July, but we were on the current major and minor version and only a few patch versions behind.</li>
<li>We had not added any significant new assets or users in the last few months.</li>
</ul>
<p>So it remained a mystery: what had changed?</p>
<p>We of course tried restarting the container instance and cleaning down the log files; we have seen slow performance in other systems when log files grow too large.</p>
<p>We also tried upping the SKU of the Azure Web App, but none of this made a difference.</p>
<p>We then set the SnipeIT log level to debug. This showed that the high response times were due not to the web app being slow to respond, but to the time taken to get data from the MySQL instance.</p>
<p>Looking again at the metrics in the Azure portal, I noticed that the MySQL instance started showing higher CPU usage and an increased number of DB connections around the same time as the web app started to slow down.</p>
<p><img alt="MySQL CPU" loading="lazy" src="/images/rfennell/snipe-slow2.png"></p>
<p>These new CPU and connection levels did not seem excessive, but they were higher than they had been previously. I then noticed a warning message in the Azure Portal that we had used up all the <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model#b-series-cpu-credit-model">MySQL Burstable SKU credits</a>.</p>
<p><img alt="MySQL warning" loading="lazy" src="/images/rfennell/snipe-slow3.png"></p>
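<p>If you want to confirm the same diagnosis from the command line rather than the portal, the remaining burstable credits can be queried with the Azure CLI. This is a minimal sketch; the subscription, resource group and server names are placeholders, and it assumes an Azure Database for MySQL Flexible Server on a Burstable SKU:</p>
<pre><code class="language-bash"># Show the remaining burstable CPU credits, hour by hour, as a table
az monitor metrics list \
  --resource "/subscriptions/&lt;sub-id&gt;/resourceGroups/&lt;rg&gt;/providers/Microsoft.DBforMySQL/flexibleServers/&lt;server&gt;" \
  --metric cpu_credits_remaining \
  --interval PT1H \
  --output table
</code></pre>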
<h2 id="solution">Solution</h2>
<p>The solution was in fact simple: move the MySQL instance to a higher SKU, from <code>Standard_B1s</code> to <code>Standard_B2ms</code>, increasing the running cost by only a few pence a month, from £2.17 to £2.25 PCM.</p>
<p>This SKU change doubles the resources available to MySQL but, more importantly, allows us to build up more <a href="https://learn.microsoft.com/en-us/azure/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model#b-series-cpu-credit-model">burstable CPU credits</a> for the times we need them in a month.</p>
<p>So now I have a working solution. It was all down to the fact that we had been running close to the limit of our MySQL SKU for a while, and it was only a matter of time before the issue occurred.</p>
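<p>For anyone wanting to script the fix rather than click through the portal, the SKU change can also be made with the Azure CLI. Again a sketch with placeholder names, assuming a MySQL Flexible Server:</p>
<pre><code class="language-bash"># Move the server to the larger Burstable SKU
az mysql flexible-server update \
  --resource-group &lt;rg&gt; \
  --name &lt;server&gt; \
  --sku-name Standard_B2ms \
  --tier Burstable
</code></pre>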
]]></content:encoded>
    </item>
    <item>
      <title>Using Azure Service Connection names that are stored in variables group in Azure DevOps Pipeline</title>
      <link>https://blog.richardfennell.net/posts/using-azure-service-connection-names-that-are-stored-in-variables-group-ado-pipeline/</link>
      <pubDate>Mon, 21 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-azure-service-connection-names-that-are-stored-in-variables-group-ado-pipeline/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;If you are using staged deployment in Azure DevOps, you will probably have multiple Azure Service Connections. So, it makes sense that you might want to use a Service Connection name that is stored in a variable group as a parameter to a templated YAML pipeline.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c&#34;&gt;# the build pipeline&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;stages&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;stage&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;UAT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;jobs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;deployment&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;ARM_Provisioning&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;timeoutInMinutes&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;environment&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Staging&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;variables&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;group&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;UAT&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;pool&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;vmImage&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;windows-latest&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;strategy&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;runOnce&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;          &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;deploy&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;steps&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;template&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;YAMLTemplates\ProvisionUsingARM.yml&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;              &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;parameters&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;                &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;AzureResourceGroup&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;$(AzureResourceGroup)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;                &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;AzureServiceConnection&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;$(AzureServiceConnection)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;stage&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;PROD&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;jobs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;deployment&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;ARM_Provisioning&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;timeoutInMinutes&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;0&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;environment&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Staging&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;variables&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;group&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;PROD&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;pool&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;vmImage&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;windows-latest&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;      &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;strategy&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;runOnce&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;          &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;deploy&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;steps&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;            &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;template&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;YAMLTemplates\ProvisionUsingARM.yml&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;              &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;parameters&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;                &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;AzureResourceGroup&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;$(AzureResourceGroup)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;                &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;AzureServiceConnection&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;$(AzureServiceConnection)&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;With a template &lt;code&gt;YAMLTemplates\ProvisionUsingARM.yml&lt;/code&gt; that uses the &lt;code&gt;AzureServiceConnection&lt;/code&gt; variable&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>If you are using staged deployment in Azure DevOps, you will probably have multiple Azure Service Connections. So, it makes sense that you might want to use a Service Connection name that is stored in a variable group as a parameter to a templated YAML pipeline.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># the build pipeline</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">stages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">deployment</span><span class="p">:</span><span class="w"> </span><span class="l">ARM_Provisioning</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">timeoutInMinutes</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Staging&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;windows-latest&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">strategy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">runOnce</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">deploy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">YAMLTemplates\ProvisionUsingARM.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureResourceGroup</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureResourceGroup)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureServiceConnection</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureServiceConnection)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">deployment</span><span class="p">:</span><span class="w"> </span><span class="l">ARM_Provisioning</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">timeoutInMinutes</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Staging&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;windows-latest&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">strategy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">runOnce</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">deploy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">YAMLTemplates\ProvisionUsingARM.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureResourceGroup</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureResourceGroup)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureServiceConnection</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureServiceConnection)</span><span class="w">
</span></span></span></code></pre></div><p>With a template <code>YAMLTemplates\ProvisionUsingARM.yml</code> that uses the <code>AzureServiceConnection</code> variable</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">AzureResourceGroup</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">string </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">AzureServiceConnection</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">string</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">AzureResourceManagerTemplateDeployment@3</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName: &#39;Azure Deployment</span><span class="p">:</span><span class="w"> </span><span class="l">Create Or Update Resource Group&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">deploymentScope</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Resource Group&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">azureResourceManagerConnection</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;${{ parameters.AzureServiceConnection}}&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">action</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Create Or Update Resource Group&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">resourceGroupName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;${{parameters.azureResourceGroup}}&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="l">...</span><span class="w">
</span></span></span></code></pre></div><h2 id="the-issue">The Issue</h2>
<p>If I declared a pair of variables in the <code>UAT</code> and <code>PROD</code> variable groups with the names <code>AzureServiceConnection</code> and <code>AzureResourceGroup</code> I would expect each stage to pick up the appropriate variable group and use the correct service connection.</p>
<p>However, this is not the case. The pipeline fails with the following error message when you validate it or try to queue a run.</p>
<blockquote>
<p>There was a resource authorization issue: &ldquo;The pipeline is not valid. Job ARM_Provisioning: Step input azureResourceManagerConnection references service connection $(AzureServiceConnection) which could not be found. The service connection does not exist, has been disabled or has not been authorized for use. For authorization details, refer to <a href="https://aka.ms/yamlauthz">https://aka.ms/yamlauthz</a>.&rdquo;</p></blockquote>
<p>Basically, the <code>$(AzureServiceConnection)</code> macro is a runtime variable, so it has not been expanded at the point where the pipeline is validated and service connection authorization is checked.</p>
<h2 id="analysis">Analysis</h2>
<h3 id="compile-and-runtime-variables">Compile and Runtime variables</h3>
<p>I tried all the <a href="https://blogs.blackmarble.co.uk/rfennell/getting-confused-over-azure-devops-pipeline-variable-evaluation/">options for the variable declaration in the YAML</a></p>
<ul>
<li>Macro syntax <code>$(AzureResourceGroup)</code></li>
<li>Runtime expression syntax <code>$[variables.AzureResourceGroup]</code></li>
<li>Template expression (compile-time) syntax <code>${{ variables.AzureResourceGroup }}</code></li>
</ul>
<p>None of these helped.</p>
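<p>To illustrate, these are the three forms as they appeared when passing the template parameter. This is an illustrative sketch rather than working pipeline code; whichever form was used, the service connection name was not resolved early enough for the authorization check:</p>
<pre><code class="language-yaml">parameters:
  # tried one of the following at a time:
  AzureServiceConnection: $(AzureServiceConnection)                  # macro syntax, expanded at runtime
  # AzureServiceConnection: $[variables.AzureServiceConnection]     # runtime expression, also runtime
  # AzureServiceConnection: ${{ variables.AzureServiceConnection }} # template expression, compile time
</code></pre>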
<h3 id="hardcoding-the-connection-name">Hardcoding the connection name</h3>
<p>If you hardcode the service connection name in the outer YAML pipeline file, the pipeline will work as expected.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># the build pipeline</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">deployment</span><span class="p">:</span><span class="w"> </span><span class="l">ARM_Provisioning</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">timeoutInMinutes</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Staging&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;windows-latest&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">strategy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">runOnce</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">deploy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">YAMLTemplates\ProvisionUsingARM.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureResourceGroup</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureResourceGroup)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureServiceConnection</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">deployment</span><span class="p">:</span><span class="w"> </span><span class="l">ARM_Provisioning</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">timeoutInMinutes</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Staging&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;windows-latest&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">strategy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">runOnce</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">deploy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">YAMLTemplates\ProvisionUsingARM.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureResourceGroup</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureResourceGroup)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureServiceConnection</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span></code></pre></div><p>That is not at all what we are after, but it did at least show that it is possible to pass in the service connection name as a string parameter.</p>
<h3 id="using-a-global-variable">Using a global variable</h3>
<p>If I declared a global pipeline variable at the top of the pipeline file, the pipeline could be validated and queued without any issues.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">AzureServiceConnection</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">stages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="l">....</span><span class="w">
</span></span></span></code></pre></div><p>But again, this was not what I was after.</p>
<h2 id="the-solution">The Solution</h2>
<p>In my project I have a number of variable groups, one for each stage of deployment. What I found I needed to do was declare one of them as a global variable for the whole pipeline. This allows the pipeline to be validated and queued without any issues, but at each deployment stage this initial value is overridden by the variable group associated with the stage.</p>
<p>This does rely on the fact that the variable groups for each stage contain the same variable names.</p>
<p>The revised YAML looks as follows</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="c"># the build pipeline</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># we have to globally declare this variable group, though it is overridden at the deployment job level</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># if we don&#39;t do this the $(AzureServiceConnection) variable used by the Azure Resource deployment fails queue time validation </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">stages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">deployment</span><span class="p">:</span><span class="w"> </span><span class="l">ARM_Provisioning</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">timeoutInMinutes</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Staging&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">UAT</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;windows-latest&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">strategy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">runOnce</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">deploy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">YAMLTemplates\ProvisionUsingARM.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureResourceGroup</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureResourceGroup)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureServiceConnection</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureServiceConnection)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">deployment</span><span class="p">:</span><span class="w"> </span><span class="l">ARM_Provisioning</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">timeoutInMinutes</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Staging&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">PROD</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;windows-latest&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">strategy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">runOnce</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">deploy</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">YAMLTemplates\ProvisionUsingARM.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureResourceGroup</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureResourceGroup)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">AzureServiceConnection</span><span class="p">:</span><span class="w"> </span><span class="l">$(AzureServiceConnection)</span><span class="w">
</span></span></span></code></pre></div><p>So I think this is a valid workaround for another strange YAML variable expansion issue in Azure DevOps.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Editing multiple files in the Azure DevOps UI and committing them in a single commit</title>
      <link>https://blog.richardfennell.net/posts/editing-multiple-files-in-azdo-ui-in-a-single-commit/</link>
      <pubDate>Fri, 18 Oct 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/editing-multiple-files-in-azdo-ui-in-a-single-commit/</guid>
      <description>&lt;p&gt;One of the most useful, and it seems relatively unknown, features in the GitHub web UI is the &lt;a href=&#34;https://github.com/github/dev&#34;&gt;ability to edit multiple files in the UI and commit them in a single commit&lt;/a&gt;. This is done by loading VS Code in the browser when in the code view by pressing &lt;strong&gt;.&lt;/strong&gt; (the full stop)&lt;/p&gt;
&lt;p&gt;The reason I find this so useful is that it allows me to make a series of small related changes to a project without having to clone the repository or using a &lt;a href=&#34;https://github.com/features/codespaces&#34;&gt;CodeSpace&lt;/a&gt;, very useful when editing the related YAML files of reusable workflows in GitHub Actions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One of the most useful, and it seems relatively unknown, features in the GitHub web UI is the <a href="https://github.com/github/dev">ability to edit multiple files in the UI and commit them in a single commit</a>. This is done by loading VS Code in the browser when in the code view by pressing <strong>.</strong> (the full stop)</p>
<p>The reason I find this so useful is that it allows me to make a series of small related changes to a project without having to clone the repository or using a <a href="https://github.com/features/codespaces">CodeSpace</a>, very useful when editing the related YAML files of reusable workflows in GitHub Actions.</p>
<p>I was recently working on a templated YAML Azure DevOps Pipeline project and pressed <strong>.</strong> out of habit, and was really pleased to find that this <a href="https://code.visualstudio.com/docs/editor/vscode-web">VS Code in the browser feature</a> is also available for Azure DevOps.</p>
<p>So next time you need to make a series of small changes to a project in Azure DevOps, why not give it a try?</p>
<p><img alt="Editing multiple files in the Azure DevOps UI and committing them in a single commit" loading="lazy" src="/images/rfennell/vscode-azdo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Generating Visual Studio SQL Database Projects from the command line</title>
      <link>https://blog.richardfennell.net/posts/generating-vs-sql-projects-from-the-command-line/</link>
      <pubDate>Fri, 27 Sep 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/generating-vs-sql-projects-from-the-command-line/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This is one of those posts I write so I remember how to do something in the future.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;I recently had a need to generate many Visual Studio SQL Database Projects from existing databases. Being a good &amp;lsquo;lazy developer&amp;rsquo; I wanted to do this from the command line so I could automate the process, but it took me far too long to work out how.&lt;/p&gt;
&lt;h2 id=&#34;the-manual-way&#34;&gt;The Manual Way&lt;/h2&gt;
&lt;p&gt;If you only have one database to import you can do this manually by using the &lt;strong&gt;Import&lt;/strong&gt; option in Visual Studio for an individual SQL Database Project.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<blockquote>
<p><em>This is one of those posts I write so I remember how to do something in the future.</em></p></blockquote>
<h2 id="background">Background</h2>
<p>I recently had a need to generate many Visual Studio SQL Database Projects from existing databases. Being a good &lsquo;lazy developer&rsquo; I wanted to do this from the command line so I could automate the process, but it took me far too long to work out how.</p>
<h2 id="the-manual-way">The Manual Way</h2>
<p>If you only have one database to import you can do this manually by using the <strong>Import</strong> option in Visual Studio for an individual SQL Database Project.</p>
<ol>
<li>Open Visual Studio</li>
<li>Create a new project using the <strong>SQL Server</strong> -&gt; <strong>SQL Server Database Project</strong> template</li>
<li>Right click on the project in the Solution Explorer and select <strong>Import</strong> -&gt; <strong>Database</strong> and follow the wizard.</li>
</ol>
<p><img alt="Import Database" loading="lazy" src="/images/rfennell/ssdtimport.png"></p>
<h2 id="the-command-line-way">The Command Line Way</h2>
<p>For some reason I really struggled to find the command line syntax to perform the same action,  so I am documenting it here for future reference.</p>
<p>The key command is <a href="https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver16#command-line-syntax">SqlPackage.exe</a> and the syntax needed will be something similar to</p>
<pre tabindex="0"><code>sqlpackage /Action:Extract /SourceConnectionString:&#34;Server=tcp:{instance},{port};Initial Catalog={databasename};TrustServerCertificate=True;integrated security=true;&#34; /TargetFile:{projectname} /p:ExtractTarget=SchemaObjectType
</code></pre><p>This command will generate a database project in a folder with the name specified by <code>/TargetFile</code>, where each SQL object has its own file containing the appropriate SQL CREATE script.</p>
<p>The &lsquo;magic&rsquo; that took me too long to find was that the <code>/p:ExtractTarget=SchemaObjectType</code> parameter is required. This instructs <strong>SqlPackage</strong> to generate a project structure and not extract to a single SQL file or DACPAC.</p>
<p>I am not sure whether my internet search powers were weak, or if this usage is just poorly documented, but I hope this post saves future me, and others, some time.</p>
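<p>Since the whole point was to automate this across many databases, the extract command can be wrapped in a simple loop. The PowerShell below is just a sketch: the server name, port and database list are hypothetical placeholder values, and it assumes <strong>SqlPackage</strong> is on the path.</p>
<pre tabindex="0"><code># hypothetical example: extract one project per database
$databases = @(&#34;SalesDb&#34;, &#34;HrDb&#34;, &#34;ReportingDb&#34;)   # replace with your own database names
foreach ($db in $databases) {
    # each database is extracted into its own project folder, named after the database
    sqlpackage /Action:Extract `
      /SourceConnectionString:&#34;Server=tcp:myserver,1433;Initial Catalog=$db;TrustServerCertificate=True;integrated security=true;&#34; `
      /TargetFile:$db `
      /p:ExtractTarget=SchemaObjectType
}
</code></pre><p>Run from a folder where you want the projects created, this gives one <code>SchemaObjectType</code>-structured project per database.</p>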
]]></content:encoded>
    </item>
    <item>
      <title>Showing Bicep Linting Issues as Errors and Warnings in Azure DevOps Pipelines</title>
      <link>https://blog.richardfennell.net/posts/showing-bicep-linting-issues-as-errors-and-warnings-in-azdo-pipelines/</link>
      <pubDate>Tue, 03 Sep 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/showing-bicep-linting-issues-as-errors-and-warnings-in-azdo-pipelines/</guid>
      <description>&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;Previously &lt;a href=&#34;https://rikhepworth.com/post/2024/02/2024-02-05-importing-bicep-lint-output-as-test-results-in-azure-devops-pipelines/&#34;&gt;Rik Hepworth has posted on &amp;lsquo;Importing bicep lint output as test results in Azure DevOps pipelines&amp;rsquo;&lt;/a&gt;. In his post he showed how you could move from using the &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/test-toolkit&#34;&gt;ARM-TTK&lt;/a&gt; to validate ARM templates to using the built-in &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter&#34;&gt;Bicep Linter&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Rik&amp;rsquo;s solution involved taking the Bicep Lint output and converting it via the .SARIF format to JUnit so that it could be published to an Azure DevOps pipeline run as a set of test results.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="introduction">Introduction</h1>
<p>Previously <a href="https://rikhepworth.com/post/2024/02/2024-02-05-importing-bicep-lint-output-as-test-results-in-azure-devops-pipelines/">Rik Hepworth has posted on &lsquo;Importing bicep lint output as test results in Azure DevOps pipelines&rsquo;</a>. In his post he showed how you could move from using the <a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/test-toolkit">ARM-TTK</a> to validate ARM templates to using the built-in <a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/linter">Bicep Linter</a>.</p>
<p>Rik&rsquo;s solution involved taking the Bicep Lint output and converting it via the .SARIF format to JUnit so that it could be published to an Azure DevOps pipeline run as a set of test results.</p>
<p><img alt="Linting as test results" loading="lazy" src="https://rikhepworth.com/post/2024/02/2024-02-05-importing-bicep-lint-output-as-test-results-in-azure-devops-pipelines/images/junit-test-results-incorrect.png"></p>
<p>The problem is: are linting issues really failed tests?</p>
<h1 id="an-alternative-approach">An Alternative Approach</h1>
<p>The Bicep linter, based on its <code>bicepconfig.json</code> defined configuration, will return issues as either an <code>error</code> or a <code>warning</code> when a build is done.</p>
<p>Is it not a better solution to return these as errors and warnings in the pipeline, just as you would for MSbuild?</p>
<p>Returning the linting issues in this manner actually involves fewer steps than Rik&rsquo;s solution. Basically we run a Bicep build from within an <code>AzureCLI@2</code> task and then process the output to return any issues as Azure DevOps errors and warnings. The advantage of using the <code>AzureCLI@2</code> task is that it is automatically kept up to date with the current Bicep version.</p>
<p>The critical step that is easy to forget is setting <code>powershellerrorActionPreference: 'continue'</code> so that the script does not fail on the first error.</p>
<blockquote>
<p><strong>Note:</strong> the <code>2&gt;&amp;1</code> at the end of the <code>az bicep build</code> command to ensure that the error stream is captured in the <code>$output</code> variable.</p></blockquote>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">AzureCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Build Bicep files&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">azureSubscription</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;My Azure Subscription&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">scriptType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;ps&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">scriptLocation</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inlineScript&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">useGlobalConfig</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">powershellerrorActionPreference</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;continue&#39;</span><span class="w"> </span><span class="c"># we handle the errors in the script</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">inlineScript</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        # create folder if it doesn&#39;t exist
</span></span></span><span class="line"><span class="cl"><span class="sd">        if (!(Test-Path -Path $(Build.SourcesDirectory)\Bicep\ARMOutput)) {
</span></span></span><span class="line"><span class="cl"><span class="sd">          New-Item -ItemType Directory -Path $(Build.SourcesDirectory)\Bicep\ARMOutput
</span></span></span><span class="line"><span class="cl"><span class="sd">        }
</span></span></span><span class="line"><span class="cl"><span class="sd">        write-host &#34;Build the Bicep file&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        $output = az bicep build --file $(Build.SourcesDirectory)\Bicep\RootTemplate-Main.bicep --outdir $(Build.SourcesDirectory)\Bicep\ARMOutput 2&gt;&amp;1
</span></span></span><span class="line"><span class="cl"><span class="sd">        write-host &#34;Process the output&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        $output | foreach-object {
</span></span></span><span class="line"><span class="cl"><span class="sd">           if ($_ -match &#39;Error&#39;) {
</span></span></span><span class="line"><span class="cl"><span class="sd">              Write-Host &#34;##vso[task.logissue type=error]$_&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">           } 
</span></span></span><span class="line"><span class="cl"><span class="sd">           if ($_ -match &#39;Warning&#39;) {
</span></span></span><span class="line"><span class="cl"><span class="sd">               Write-Host &#34;##vso[task.logissue type=warning]$_&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">           }
</span></span></span><span class="line"><span class="cl"><span class="sd">        }</span><span class="w">
</span></span></span></code></pre></div><h1 id="conclusion">Conclusion</h1>
<p>For many projects this solution will give a more actionable result than Rik&rsquo;s process.</p>
<p>The exception is if you wish to <a href="https://docs.sonarsource.com/sonarqube/latest/analyzing-source-code/importing-external-issues/importing-issues-from-sarif-reports/#:~:text=You%20can%20import%20Static%20Analysis%20Results">import the linting issues into a tool such as SonarQube</a>, where using the .SARIF format will be vital.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to run your own maintenance job on Azure DevOps pipelines (Revisited now using Workload Identity federation)</title>
      <link>https://blog.richardfennell.net/posts/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines-revisted/</link>
      <pubDate>Thu, 29 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines-revisted/</guid>
      <description>&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;Last year I &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/&#34;&gt;posted on how to create your own Azure DevOps maintenance jobs&lt;/a&gt;. This solution has been working well for me, until the Azure DevOps Service connection&amp;rsquo;s Entra ID Service Principal secret expired.&lt;/p&gt;
&lt;p&gt;So, I thought it well worth revisiting the creation of this maintenance job but this time using &lt;a href=&#34;https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation&#34;&gt;Workload Identity federation&lt;/a&gt; to authenticate, and hence never again have to worry about the secret expiring.&lt;/p&gt;
&lt;h1 id=&#34;updated-setup-process&#34;&gt;Updated Setup Process&lt;/h1&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This is a modification to the creation of the service connection, but the core the maintenance job setup remains the same as &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/&#34;&gt;in my original post&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="introduction">Introduction</h1>
<p>Last year I <a href="https://blogs.blackmarble.co.uk/rfennell/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/">posted on how to create your own Azure DevOps maintenance jobs</a>. This solution has been working well for me, until the Azure DevOps Service connection&rsquo;s Entra ID Service Principal secret expired.</p>
<p>So, I thought it well worth revisiting the creation of this maintenance job but this time using <a href="https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation">Workload Identity federation</a> to authenticate, and hence never again have to worry about the secret expiring.</p>
<h1 id="updated-setup-process">Updated Setup Process</h1>
<blockquote>
<p><strong>Note:</strong> This is a modification to the creation of the service connection, but the core the maintenance job setup remains the same as <a href="https://blogs.blackmarble.co.uk/rfennell/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/">in my original post</a></p></blockquote>
<ol>
<li>In Azure DevOps, select the Team Project that the maintenance job will run in, and create a new Service Connection of type <strong>Azure Resource Manager</strong> and the default type <strong>Workload Identity federation (automatic)</strong>. Pick the Azure Subscription that the Azure DevOps instance is linked to.</li>
</ol>
<p><img alt="Create Service Connection" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/azdomaintjob1.png"></p>
<ol start="2">
<li>
<p>Once the Service Connection is created, in the Azure Portal check the Entra ID directory for the newest App Registration (Service Principal) that you own. Unfortunately using this setup method, you have no control over the name of this App Registration, but you can probably assume it will be the newest one you own. It will be in the form <strong>[AzDo Org]-[Team Project]-[Subscription ID]</strong>, e.g. <strong>blackmarble-source-bm-3ed896d7-9999-9999-99999999999</strong>.</p>
<blockquote>
<p><strong>Important:</strong> You could have multiple App Registrations with the same name, so you need to make a note of the <strong>Client ID</strong> in the <strong>Overview</strong> blade of the App Registration as you need it later</p></blockquote>
</li>
<li>
<p>Back at the Azure DevOps organisation level you need to add the newly created Service Principal as a known user. This is where knowing the <strong>Client ID</strong> is important, as you can use it to identify the correct entry because the <strong>Client ID</strong> is shown as a subscript on each entry. I granted my new Service Principal <strong>Basic</strong> level access to be able to query the AZ CLI.</p>
<p><img alt="Add User to AzDo" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/azdomaintjob2.png"></p>
<blockquote>
<p><strong>Note:</strong> You might get away with <strong>Stakeholder</strong> access, but I knew my maintenance jobs needed source code access, so I had to have <strong>Basic</strong> level.</p></blockquote>
<blockquote>
<p><strong>Note:</strong></p>
<p>If you fail to perform this step, you will see the fairly unhelpful error</p>
<p><code>ERROR: TF401444: Please sign-in at least once as {GUID Identifier} in a web browser to enable access to the service.</code></p></blockquote>
</li>
<li>
<p>You are now ready to update your <a href="https://gist.github.com/rfennell/f2f691e130f5ecfab0ae006f4d6c3ae2">scheduled maintenance job trigger YAML</a> to use the new Service Connection, and then run it. You will probably get permission errors; these will vary depending on what you are trying to do. In my case I am trying to queue builds, so the permissions I need to grant the Service Principal in Azure DevOps are</p>
<ul>
<li>At the Org level, Reader access to the Agent Pools
<img alt="Org level permissions" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/azdomaintjob3.png"></li>
<li>At the Team Project level, Reader access</li>
<li>At the Team Project level, permissions to queue builds and edit build queue configuration
<img alt="Project level permissions" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/azdomaintjob4.png"></li>
</ul>
</li>
</ol>
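<p>As a hedged sketch of how the pieces above fit together (the organisation, project, pipeline, and Service Connection names are all illustrative examples, not taken from my actual setup), a scheduled maintenance pipeline consuming such a Service Connection via the <code>AzureCLI@2</code> task might look like:</p>
<pre tabindex="0"><code>schedules:
- cron: &#34;0 2 * * 0&#34;
  displayName: Weekly maintenance trigger
  branches:
    include:
    - main
  always: true

steps:
- task: AzureCLI@2
  inputs:
    # The Workload Identity federation Service Connection created above
    azureSubscription: &#34;my-federated-connection&#34;
    scriptType: pscore
    scriptLocation: inlineScript
    inlineScript: |
      # Queue a build using the Service Principal&#39;s Basic access
      az pipelines build queue --org &#34;https://dev.azure.com/myorg&#34; --project &#34;MyProject&#34; --definition-name &#34;Weekly-Maintenance&#34;
</code></pre>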
<h1 id="summary">Summary</h1>
<p>Moving to Workload Identity federation is a great way to avoid the issue of Service Connection secrets expiring. It is a little more complex to set up, but once done it provides a robust way to authenticate your maintenance jobs, or any other Azure DevOps to Azure interactions.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problem running Playwright UX tests on hosted Github Actions Runners</title>
      <link>https://blog.richardfennell.net/posts/problem-running-playwright-tests-with-github-actions/</link>
      <pubDate>Thu, 08 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problem-running-playwright-tests-with-github-actions/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;Whilst refreshing an end-to-end DevOps demo, one I use for both Azure DevOps and GitHub, I hit a problem. The new &lt;a href=&#34;https://playwright.dev/&#34;&gt;Playwright UX Tests&lt;/a&gt;, which were replacing the old &lt;a href=&#34;https://www.selenium.dev/&#34;&gt;Selenium&lt;/a&gt; ones, were failing on the GitHub hosted runner.&lt;/p&gt;
&lt;p&gt;The strange thing was the same tests worked perfectly on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;My local development machine&lt;/li&gt;
&lt;li&gt;The Azure DevOps hosted runner&lt;/li&gt;
&lt;li&gt;And strangest of all, a &lt;a href=&#34;https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners&#34;&gt;GitHub self hosted runner&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;the-solution&#34;&gt;The Solution&lt;/h2&gt;
&lt;p&gt;Adding some logging to the tests showed the actual issue was that on the GitHub hosted runner the code to count the rows in an HTML table was always returning 0.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>Whilst refreshing an end-to-end DevOps demo, one I use for both Azure DevOps and GitHub, I hit a problem. The new <a href="https://playwright.dev/">Playwright UX Tests</a>, which were replacing the old <a href="https://www.selenium.dev/">Selenium</a> ones, were failing on the GitHub hosted runner.</p>
<p>The strange thing was the same tests worked perfectly on:</p>
<ul>
<li>My local development machine</li>
<li>The Azure DevOps hosted runner</li>
<li>And strangest of all, a <a href="https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners">GitHub self hosted runner</a></li>
</ul>
<h2 id="the-solution">The Solution</h2>
<p>Adding some logging to the tests showed the actual issue was that on the GitHub hosted runner the code to count the rows in an HTML table was always returning 0.</p>
<p>I went down a few dead ends, looking at permissions and tooling versions, but the solution was simple. The Playwright tests were running too fast on the GitHub hosted runner.</p>
<p>So, at its simplest, just adding a wait before my table locator fixed the issue.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-csharp" data-lang="csharp"><span class="line"><span class="cl"><span class="k">await</span> <span class="n">Page</span><span class="p">.</span><span class="n">WaitForTimeoutAsync</span><span class="p">(</span><span class="m">5000</span><span class="p">);</span>
</span></span><span class="line"><span class="cl"><span class="kt">int</span> <span class="n">rowCount</span> <span class="p">=</span> <span class="k">await</span> <span class="n">Page</span><span class="p">.</span><span class="n">Locator</span><span class="p">(</span><span class="s">&#34;.dataTable&#34;</span><span class="p">).</span><span class="n">Locator</span><span class="p">(</span><span class="s">&#34;tr&#34;</span><span class="p">).</span><span class="n">CountAsync</span><span class="p">();</span>
</span></span></code></pre></div><p>Spraying waits across the test codebase is not a great solution, but it worked and gave me a starting point to refactor the tests to be more robust.</p>
<p>A better solution was to explicitly wait for the table to be visible before counting the rows.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-csharp" data-lang="csharp"><span class="line"><span class="cl"><span class="k">await</span> <span class="n">Expect</span><span class="p">(</span><span class="n">Page</span><span class="p">.</span><span class="n">Locator</span><span class="p">(</span><span class="s">&#34;.dataTable&#34;</span><span class="p">)).</span><span class="n">ToBeVisibleAsync</span><span class="p">();</span>
</span></span><span class="line"><span class="cl"><span class="kt">int</span> <span class="n">rowCount</span> <span class="p">=</span> <span class="k">await</span> <span class="n">Page</span><span class="p">.</span><span class="n">Locator</span><span class="p">(</span><span class="s">&#34;.dataTable&#34;</span><span class="p">).</span><span class="n">Locator</span><span class="p">(</span><span class="s">&#34;tr&#34;</span><span class="p">).</span><span class="n">CountAsync</span><span class="p">();</span>
</span></span></code></pre></div><h2 id="the-key-takeaway">The key takeaway</h2>
<p>I think the key here is not the quality of my UX tests, but that this issue only showed up on GitHub hosted agents even though they are <a href="https://github.com/actions/runner-images">built off the same image</a> as the Azure DevOps ones and use effectively the same agent.</p>
<p>The only difference is that Azure DevOps and GitHub hosted runners are provisioned with different virtual hardware specifications:</p>
<ul>
<li>Azure DevOps: 2 vCPUs, 7 GB RAM</li>
<li>GitHub: 4 vCPUs, 16 GB RAM</li>
</ul>
<p>As I found, this can be important for more than just build speed.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Powershell token replacement failing for MSDeploy in GitHub Action</title>
      <link>https://blog.richardfennell.net/posts/powershell-token-replacement-failing-for-msdeploy-in-github-action/</link>
      <pubDate>Tue, 06 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/powershell-token-replacement-failing-for-msdeploy-in-github-action/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;I have recently been refreshing a GitHub end-to-end demo I use for talks and workshops that I had not looked at for a while. It shows how legacy code bases can be deployed with GitHub Actions and Azure App Services. The demo uses MSDeploy to deploy an ASP.NET web application to Azure App Services. The MSDeploy package is created as part of the GitHub Actions workflow.&lt;/p&gt;
&lt;p&gt;The workflow uses a PowerShell script to do the deployment using the following:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>I have recently been refreshing a GitHub end-to-end demo I use for talks and workshops that I had not looked at for a while. It shows how legacy code bases can be deployed with GitHub Actions and Azure App Services. The demo uses MSDeploy to deploy an ASP.NET web application to Azure App Services. The MSDeploy package is created as part of the GitHub Actions workflow.</p>
<p>The workflow uses a PowerShell script to do the deployment using the following:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="p">-</span> <span class="n">name</span><span class="err">:</span> <span class="s1">&#39;Deploy web site with MSDeploy&#39;</span>
</span></span><span class="line"><span class="cl">  <span class="n">run</span><span class="err">:</span> <span class="p">|</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$publishProfile</span> <span class="p">=</span> <span class="n">az</span> <span class="n">webapp</span> <span class="n">deployment</span> <span class="nb">list-publishing</span><span class="n">-profiles</span> <span class="p">-</span><span class="n">-resource-group</span> <span class="p">${{</span> <span class="n">vars</span><span class="p">.</span><span class="n">AzureResourceGroup</span><span class="p">}}</span> <span class="p">-</span><span class="n">-name</span> <span class="p">${{</span> <span class="n">secrets</span><span class="p">.</span><span class="py">Sitename</span> <span class="p">}}</span> <span class="p">-</span><span class="n">-query</span> <span class="s2">&#34;[?publishMethod==&#39;MSDeploy&#39;]&#34;</span> <span class="p">-</span><span class="n">-subscription</span> <span class="s2">&#34;${{ secrets.AZURESUBSCRIPTION}}&#34;</span> <span class="p">|</span> <span class="nb">convertfrom-json</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$shortPath</span> <span class="p">=</span> <span class="nb">resolve-path</span> <span class="p">./</span><span class="n">webdeploy</span>
</span></span><span class="line"><span class="cl">    <span class="p">&amp;</span> <span class="s2">&#34;C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe&#34;</span> <span class="n">-verb:sync</span> <span class="n">-source:package</span><span class="p">=</span><span class="s2">&#34;</span><span class="nv">$shortpath</span><span class="s2">\FabrikamFiber.Web.zip&#34;</span> <span class="n">-setParamFile:</span><span class="s2">&#34;</span><span class="nv">$shortpath</span><span class="s2">\FabrikamFiber.Web.SetParameters.xml&#34;</span> <span class="n">-dest:auto</span><span class="p">,</span><span class="n">ComputerName</span><span class="p">=</span><span class="s2">&#34;https://</span><span class="p">$(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">msdeploySite</span><span class="p">)</span><span class="s2">.scm.azurewebsites.net/msdeploy.axd?site=</span><span class="p">$(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">msdeploySite</span><span class="p">)</span><span class="s2">&#34;</span><span class="p">,</span><span class="n">UserName</span><span class="p">=</span><span class="vm">$</span><span class="p">(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">userName</span><span class="p">),</span><span class="n">Password</span><span class="p">=</span><span class="vm">$</span><span class="p">(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">userPWD</span><span class="p">),</span><span class="n">AuthType</span><span class="p">=</span><span class="s1">&#39;Basic&#39;</span> <span class="n">-verbose</span> <span class="n">-debug</span> <span class="n">-disableLink:AppPoolExtension</span> <span class="n">-disableLink:ContentExtension</span> <span class="n">-disableLink:CertificateExtension</span>
</span></span><span class="line"><span class="cl">  <span class="n">shell</span><span class="err">:</span> <span class="n">pwsh</span>
</span></span></code></pre></div><p>When I last used this demo, over a year ago, it worked fine. However, when I tried to run it again, I found that MSDeploy was failing with the following error:</p>
<blockquote>
<p>Microsoft.Web.Deployment.DeploymentException: Unrecognized argument &lsquo;&quot;-dest:auto,ComputerName=&ldquo;https://$($publishProfile.msdeploySite).scm.azurewebsites.net/msdeploy.axd?site=$($publishProfile.msdeploySite)&quot;,UserName=$($publishProfile.userName),***&lsquo;Basic&rsquo;&rdquo;&rsquo;. All arguments must begin with &ldquo;-&rdquo;.</p></blockquote>
<h2 id="the-solution">The Solution</h2>
<p>The difference between when this workflow last ran and now is that the PowerShell Core version used in the GitHub Action has been updated from 6 to 7. It appears the token replacement in the <code>-dest</code> parameter does not work as expected in PowerShell 7, though it worked in PowerShell 6.</p>
<p>Once I altered it to the following, putting the whole <code>dest</code> parameter in quotes, it worked:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="p">-</span> <span class="n">name</span><span class="err">:</span> <span class="s1">&#39;Deploy web site with MSDeploy&#39;</span>
</span></span><span class="line"><span class="cl">  <span class="n">run</span><span class="err">:</span> <span class="p">|</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$publishProfile</span> <span class="p">=</span> <span class="n">az</span> <span class="n">webapp</span> <span class="n">deployment</span> <span class="nb">list-publishing</span><span class="n">-profiles</span> <span class="p">-</span><span class="n">-resource-group</span> <span class="p">${{</span> <span class="n">vars</span><span class="p">.</span><span class="n">AzureResourceGroup</span><span class="p">}}</span> <span class="p">-</span><span class="n">-name</span> <span class="p">${{</span> <span class="n">secrets</span><span class="p">.</span><span class="py">Sitename</span> <span class="p">}}</span> <span class="p">-</span><span class="n">-query</span> <span class="s2">&#34;[?publishMethod==&#39;MSDeploy&#39;]&#34;</span> <span class="p">-</span><span class="n">-subscription</span> <span class="s2">&#34;${{ secrets.AZURESUBSCRIPTION}}&#34;</span> <span class="p">|</span> <span class="nb">convertfrom-json</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$shortPath</span> <span class="p">=</span> <span class="nb">resolve-path</span> <span class="p">./</span><span class="n">webdeploy</span>
</span></span><span class="line"><span class="cl">    <span class="p">&amp;</span> <span class="s2">&#34;C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe&#34;</span> <span class="n">-verb:sync</span> <span class="n">-source:package</span><span class="p">=</span><span class="s2">&#34;</span><span class="nv">$shortpath</span><span class="s2">\FabrikamFiber.Web.zip&#34;</span> <span class="n">-setParamFile:</span><span class="s2">&#34;</span><span class="nv">$shortpath</span><span class="s2">\FabrikamFiber.Web.SetParameters.xml&#34;</span> <span class="n">-dest:</span><span class="s2">&#34;auto,ComputerName=https://</span><span class="p">$(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">msdeploySite</span><span class="p">)</span><span class="s2">.scm.azurewebsites.net/msdeploy.axd?site=</span><span class="p">$(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">msdeploySite</span><span class="p">)</span><span class="s2">,UserName=</span><span class="p">$(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">userName</span><span class="p">)</span><span class="s2">,Password=</span><span class="p">$(</span><span class="nv">$publishProfile</span><span class="p">.</span><span class="n">userPWD</span><span class="p">)</span><span class="s2">,AuthType=&#39;Basic&#39;&#34;</span> <span class="n">-verbose</span> <span class="n">-debug</span> <span class="n">-disableLink:AppPoolExtension</span> <span class="n">-disableLink:ContentExtension</span> <span class="n">-disableLink:CertificateExtension</span>
</span></span><span class="line"><span class="cl">  <span class="n">shell</span><span class="err">:</span> <span class="n">pwsh</span>
</span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>Fixes for issues moving from on-premise Azure DevOps agents to Azure Managed DevOps Pool agents</title>
      <link>https://blog.richardfennell.net/posts/fixing-issues-moving-from-on-premise-azdo-agent-to-mdp-hosted-agents/</link>
      <pubDate>Fri, 02 Aug 2024 01:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-issues-moving-from-on-premise-azdo-agent-to-mdp-hosted-agents/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Our on premise build agents, though using the same image as the Microsoft hosted agents (generated using &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/new-problem-when-generating-build-agents-using-packer/&#34;&gt;Packer, as I have previously posted about&lt;/a&gt;), have some extra setup done by &lt;a href=&#34;https://github.com/VirtualEngine/Lability&#34;&gt;Lability&lt;/a&gt; as they are deployed to a Hyper-V host.&lt;/p&gt;
&lt;p&gt;Most of this is specific to the needs of running on Hyper-V e.g. setting up networking, creating a 2nd disk to act as a working store for the build agent, and of course installing the Azure DevOps agent itself.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>Our on-premise build agents, though using the same image as the Microsoft hosted agents (generated using <a href="https://blogs.blackmarble.co.uk/rfennell/new-problem-when-generating-build-agents-using-packer/">Packer, as I have previously posted about</a>), have some extra setup done by <a href="https://github.com/VirtualEngine/Lability">Lability</a> as they are deployed to a Hyper-V host.</p>
<p>Most of this is specific to the needs of running on Hyper-V e.g. setting up networking, creating a 2nd disk to act as a working store for the build agent, and of course installing the Azure DevOps agent itself.</p>
<p>However, it turns out some of the extra steps it was doing were a barrier to running some of our major software project&rsquo;s Azure DevOps Pipelines on Microsoft hosted agents.</p>
<p>This was something we had not worried too much about in the past, because the performance of the hosted agents was too slow compared to what we can provide on-premise using Hyper-V. However, with the advent of <a href="https://learn.microsoft.com/en-us/azure/devops/managed-devops-pools/quickstart-azure-portal?view=azure-devops">Azure Managed DevOps Pools</a>, as discussed in <a href="https://blogs.blackmarble.co.uk/rfennell/a-first-look-at-using-azure-mdp/">my recent blog post</a>, and their greater options for scaling, this became an issue worth investigating.</p>
<p>So I looked to see how I could get all our pipelines to run on the Microsoft hosted agents and on Managed DevOps Pools.</p>
<h2 id="the-issues">The Issues</h2>
<h3 id="custom-capabilities">Custom Capabilities</h3>
<p>The first issue we hit was that the Microsoft hosted agents and Managed DevOps Pools do not support <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&amp;tabs=yaml%2Cbrowser#capabilities">custom capabilities</a>. With on-premise agents you can add any custom capability you wish, usually to help route jobs to a suitable agent within an agent pool, but this is not possible with either of the hosted options.</p>
<p>&lsquo;Under the hood&rsquo; a capability is just an environment variable on the agent VM. So, though not what they were really designed for, custom capabilities are a useful way to inject an environment variable into all VMs in a pool without having to alter the VM image or add startup scripts.</p>
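<p>For illustration, this is how a custom capability is normally consumed for job routing, as a hedged sketch with example pool and capability names:</p>
<pre tabindex="0"><code>pool:
  name: MyOnPremPool   # example on-premise agent pool name
  demands:
  # Only agents with this custom capability will pick up the job
  - BuildRole -equals Mobile
</code></pre>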
<p>Historically we have used this technique to add missing environment variables to the agents i.e. to set <code>JAVA</code> and <code>JDK</code> to match the already present <code>JAVA_HOME</code> value. We need to do this as some tools, such as the Azure DevOps <code>XamarinAndroid@1</code> task, only check <code>JAVA</code> and <code>JDK</code>, and not <code>JAVA_HOME</code>, to find the JDK location.</p>
<p>But, as I mentioned, there is no way to add a custom capability to a Microsoft hosted agent or a Managed DevOps Pool agent at the pool level. The only option is to set it as an environment variable within the VM image. Given that, to minimise maintenance costs, we wanted to use the Microsoft hosted build agents, which we have no control over, this technique was not an option.</p>
<p>However, it turns out there is a much better solution in the case of the <code>XamarinAndroid@1</code> task: just pass the JDK version as a parameter to the task. This is better as it is more explicit and does not rely on the agent having the correct environment variables set.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">XamarinAndroid@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;Build Xamarin.Android Project&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">projectFile</span><span class="p">:</span><span class="w"> </span><span class="l">src/Droid.csproj</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">target</span><span class="p">:</span><span class="w"> </span><span class="l">Build</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">configuration</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(BuildConfiguration)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">clean</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">msbuildVersionOption</span><span class="p">:</span><span class="w"> </span><span class="l">latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">msbuildArguments</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;-m&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">JDKVERSION</span><span class="p">:</span><span class="w"> </span><span class="m">1.11</span><span class="w">
</span></span></span></code></pre></div><p>So your first option should always be to check if you can use a task parameter as opposed to using custom capabilities (environment variables).</p>
<p>If you find that your task does not provide a parameter to set the value you need, then you can still &lsquo;inject&rsquo; an environment variable onto an agent by setting an Azure DevOps variable in your pipeline. Remember, this works because Azure DevOps variables are passed to agents as environment variables.</p>
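<p>Putting this together for the <code>JAVA</code>/<code>JDK</code> case above, a hedged sketch of such an injection step might be:</p>
<pre tabindex="0"><code>steps:
# Copy the agent&#39;s existing JAVA_HOME into JAVA and JDK so tasks
# that only check those variables can still find the JDK
- powershell: |
    echo &#34;##vso[task.setvariable variable=JAVA]$env:JAVA_HOME&#34;
    echo &#34;##vso[task.setvariable variable=JDK]$env:JAVA_HOME&#34;
  displayName: Inject JAVA and JDK environment variables
</code></pre>
<p>Subsequent tasks in the same job then see <code>JAVA</code> and <code>JDK</code> as ordinary environment variables.</p>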
<p>To <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/set-variables-scripts?view=azure-devops&amp;tabs=bash#set-an-output-variable-for-use-in-the-same-job">create a variable</a> the following command can be used in a script task.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w"> </span><span class="l">echo &#34;##vso[task.setvariable variable=MyEnvVar]MyValue&#34;</span><span class="w">
</span></span></span></code></pre></div><h3 id="missing-folders-in-path">Missing folders in PATH</h3>
<p>For historic reasons we add the <code>C:\Program Files (x86)\Windows Kits\10\bin</code> folder to the PATH on our on-premise agents. This is because we use the <code>signtool.exe</code> to sign our projects, and this is where it is located by default.</p>
<p>We could of course alter all our pipelines to pass the explicit path to <code>signtool.exe</code> to each task, but this is a lot of work and would make the pipelines less portable. So we wanted a means to add the path dynamically.</p>
<p>You might think you could use <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/setx">SETX</a> or the <a href="https://stackoverflow.com/questions/32405213/replace-set-statements-to-setx-in-a-batch-script-using-powershell#:~:text=To%20just%20do%20the%20set%20for%20setx%20replacement%2C,ForEach-Object%20%7B%24_%20-replace%20%22set%22%2C%20%22setx%22%7D%20%7C%20Set-Content%20%24new_filename">PowerShell equivalent</a> to add a folder to the PATH, but this does not work. The path is added to the current CMD/PS session, but not to the next task run by the Azure DevOps agent.</p>
<p>The answer is to use some Azure DevOps magic I was not previously aware of:</p>
<pre tabindex="0"><code>echo &#34;##vso[task.prependpath]&lt;path to add&gt;&#34;
</code></pre><p>I used the following PowerShell to check if the required path was present and, if not, add it. You could of course greatly simplify the script if you only ever run on hosted agents and know the actual path will always be missing, but I might use the script against an on-premise agent where the path is already present, hence the guard code.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">powershell</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    # Define the base path where signtool.exe is located
</span></span></span><span class="line"><span class="cl"><span class="sd">    $basePath = &#34;C:\Program Files (x86)\Windows Kits\10\bin&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">    # Filtering via version and architecture, could use just one of the these, depends on needs
</span></span></span><span class="line"><span class="cl"><span class="sd">    $preferredVersion = &#34;10.0.19041.0\x86&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    $envPath = [Environment]::GetEnvironmentVariable(&#34;PATH&#34;)
</span></span></span><span class="line"><span class="cl"><span class="sd">    $targetPath = join-path -path $basepath -childpath $preferredVersion
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    # Check to see if the folder is already in the path
</span></span></span><span class="line"><span class="cl"><span class="sd">    if ($envPath -split &#34;;&#34; -contains $targetPath) {
</span></span></span><span class="line"><span class="cl"><span class="sd">        Write-Host &#34;The path $targetPath is already present in the PATH environment variable.&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">    } else {
</span></span></span><span class="line"><span class="cl"><span class="sd">        Write-Host &#34;The path $targetPath is NOT present in the PATH environment variable.&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Get the matching signtool path (pick the last in the list if multiple returned)
</span></span></span><span class="line"><span class="cl"><span class="sd">        $signtoolPath = (Get-ChildItem -Path $basePath -Recurse -Filter &#34;signtool.exe&#34; -File | Where-Object { $_.FullName -like &#34;*\$preferredVersion\*&#34; })[-1] | Select-Object -ExpandProperty FullName
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Double check the access to the file 
</span></span></span><span class="line"><span class="cl"><span class="sd">        if (Test-Path -Path $signtoolPath) {
</span></span></span><span class="line"><span class="cl"><span class="sd">            write-host &#34;SignTool path is $signtoolPath&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">            write-host &#34;Updating PATH to include SignTool folder&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">            echo &#34;##vso[task.prependpath]$(Split-path -path $signtoolPath)&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        } else {
</span></span></span><span class="line"><span class="cl"><span class="sd">            write-error &#34;Cannot find a SignTool that matches the filter $preferredVersion&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        }
</span></span></span><span class="line"><span class="cl"><span class="sd">    }</span><span class="w">
</span></span></span></code></pre></div><p>Using this technique we can add any path required dynamically at the start of the pipeline, and it will be available to all tasks in the pipeline.</p>
<h2 id="timezone-and-language">Timezone and Language</h2>
<p>The Microsoft hosted agent images are set to the <code>UTC</code> timezone and <code>en-US</code> language. For us this is a problem for some unit tests which depend on <code>GMT Standard Time</code> and <code>en-GB</code>.</p>
<p>Maybe we should alter our tests to make them more robust, but we can also address the issue by setting the correct values as part of the pipeline:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">powershell</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    $zone = Get-TimeZone
</span></span></span><span class="line"><span class="cl"><span class="sd">    if ($zone.Id -eq &#34;GMT Standard Time&#34;) {
</span></span></span><span class="line"><span class="cl"><span class="sd">    write-host &#34;Timezone is correctly set to $($zone.Id)&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">    } else {
</span></span></span><span class="line"><span class="cl"><span class="sd">        write-host &#34;Timezone is incorrectly set to $($zone.Id), setting to GMT Standard Time&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        Set-TimeZone -id &#34;GMT Standard Time&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">    }
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    $list = Get-WinUserLanguageList
</span></span></span><span class="line"><span class="cl"><span class="sd">    if ($list[0].LanguageTag -eq &#34;en-GB&#34;) {
</span></span></span><span class="line"><span class="cl"><span class="sd">      write-host &#34;Language is correctly set to $($list[0].LanguageTag)&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">    } else {
</span></span></span><span class="line"><span class="cl"><span class="sd">      write-host &#34;Language is incorrectly set to $($list[0].LanguageTag), setting to en-GB&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      $list[0] = &#34;en-GB&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-WinUserLanguageList -LanguageList $list -Confirm:$false -Force
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-ItemProperty -Path &#34;HKCU:\Control Panel\International&#34; -Name sCountry -Value &#34;United Kingdom&#34;;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-ItemProperty -Path &#34;HKCU:\Control Panel\International&#34; -Name sLongDate -Value &#34;dd MMMM yyyy&#34;;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-ItemProperty -Path &#34;HKCU:\Control Panel\International&#34; -Name sShortDate -Value &#34;dd/MM/yyyy&#34;;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-ItemProperty -Path &#34;HKCU:\Control Panel\International&#34; -Name sShortTime -Value &#34;HH:mm&#34;;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-ItemProperty -Path &#34;HKCU:\Control Panel\International&#34; -Name sTimeFormat -Value &#34;HH:mm:ss&#34;;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-ItemProperty -Path &#34;HKCU:\Control Panel\International&#34; -Name sYearMonth -Value &#34;MMMM yyyy&#34;;
</span></span></span><span class="line"><span class="cl"><span class="sd">      Set-ItemProperty -Path &#34;HKCU:\Control Panel\International&#34; -Name iFirstDayOfWeek -Value 0;
</span></span></span><span class="line"><span class="cl"><span class="sd">    }</span><span class="w">
</span></span></span></code></pre></div><p>This pattern can be used more than once in a single pipeline run to test against multiple timezones and languages, but for us it is just a case of setting the GMT/UK values once.</p>
<h2 id="summary">Summary</h2>
<p>It can be seen that these fixes are minor, so there should be no technical barrier to us using standard hosted agents, or MDP agents, in the future.</p>
<p>I think that many of these &lsquo;fixes&rsquo; will be needed in most of our pipelines. So, it is probably a good idea to have a YAML template, included at the start of all builds, that applies the settings above, in effect making hosted agents identical to our on-premise ones.</p>
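<p>As a sketch, such a shared steps template might look like the following. The file name and parameter names here are illustrative, not our actual template:</p>

```yaml
# agent-setup.yml - illustrative shared steps template (names are examples)
parameters:
  - name: timeZoneId
    type: string
    default: 'GMT Standard Time'
  - name: languageTag
    type: string
    default: 'en-GB'

steps:
  - powershell: |
      # Align the hosted agent's locale settings with our on-premise defaults
      if ((Get-TimeZone).Id -ne '${{ parameters.timeZoneId }}') {
        Set-TimeZone -Id '${{ parameters.timeZoneId }}'
      }
      $list = Get-WinUserLanguageList
      if ($list[0].LanguageTag -ne '${{ parameters.languageTag }}') {
        $list[0] = '${{ parameters.languageTag }}'
        Set-WinUserLanguageList -LanguageList $list -Force
      }
    displayName: Normalise agent locale settings
```

<p>Each pipeline would then just include it as its first step, via <code>- template: agent-setup.yml</code>, before any build or test work runs.</p>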
<p>So moving to hosted agents or MDPs becomes a question of cost, not technology. That is a much harder question to answer, balancing capital and servicing costs for on-premise agents against operational costs for the Azure hosted agents.</p>
<p>The reality is everyone&rsquo;s &lsquo;mileage will vary&rsquo;, but I have a feeling that in our case the dynamic scaling of <a href="https://learn.microsoft.com/en-us/azure/devops/managed-devops-pools/quickstart-azure-portal?view=azure-devops">MDP agents</a>, and the management cost reductions this can bring, will make all the difference.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A first look at using Azure Managed DevOps Pools</title>
      <link>https://blog.richardfennell.net/posts/a-first-look-at-using-azure-mdp/</link>
      <pubDate>Thu, 01 Aug 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-first-look-at-using-azure-mdp/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;We have for many years based our Azure DevOps build and release pipelines on on-premise agents. This was done to provide a means to scale up each agent using &amp;lsquo;bigger &amp;amp; faster&amp;rsquo; local VMs than the Microsoft Hosted agents provided, and to utilise spare Hyper-V hosts we had after moving services such as SharePoint to Office 365.&lt;/p&gt;
&lt;p&gt;To provide consistency of build agent experience to our developers, we decided we wished to use the same VM images on our on-premise agents as used by Microsoft&amp;rsquo;s hosted agent pools. I have &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/new-problem-when-generating-build-agents-using-packer/&#34;&gt;blogged previously&lt;/a&gt; on the process of creating these images with Packer and deploying them to Hyper-V with Lability. This is a process we still follow every couple of months to keep the images reasonably up to date.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>We have for many years based our Azure DevOps build and release pipelines on on-premise agents. This was done to provide a means to scale up each agent using &lsquo;bigger &amp; faster&rsquo; local VMs than the Microsoft Hosted agents provided, and to utilise spare Hyper-V hosts we had after moving services such as SharePoint to Office 365.</p>
<p>To provide consistency of build agent experience to our developers, we decided we wished to use the same VM images on our on-premise agents as used by Microsoft&rsquo;s hosted agent pools. I have <a href="https://blogs.blackmarble.co.uk/rfennell/new-problem-when-generating-build-agents-using-packer/">blogged previously</a> on the process of creating these images with Packer and deploying them to Hyper-V with Lability. This is a process we still follow every couple of months to keep the images reasonably up to date.</p>
<h2 id="enter-azure-managed-devops-pools">Enter Azure Managed DevOps Pools</h2>
<p><a href="https://learn.microsoft.com/en-us/azure/devops/managed-devops-pools/?view=azure-devops">Azure Managed DevOps Pools (MDP)</a> offer an alternative to our on premise agents and have just gone into public preview.</p>
<p>MDP are the logical successor to <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops">Azure Virtual Machine Scale Set Agent Pools (VMSS)</a>. VMSS are still an option as they are not going away, but MDP are easier to set up and have enhanced features. The key ones being:</p>
<ul>
<li>You can pick your Agent Images
<ul>
<li>You can use the same image as the Microsoft Hosted Agent, but with no need to create them yourselves as we do via Packer (<strong>a feature not available with VMSS</strong>)</li>
<li>You can use standard Azure VM images.</li>
<li>Or any VM image you create yourself and place in an Azure Image Gallery.</li>
</ul>
</li>
<li>You can pick the VM size and performance; you are not limited to the fixed performance of the Microsoft hosted pool agents</li>
<li>You can choose how your pool scales
<ul>
<li>Maximum number of agent VMs</li>
<li>Minimum number of agent VMs ready to accept work (can be zero to minimise cost, more VMs are created on-demand with a short start up delay)</li>
</ul>
</li>
<li><strong>Maybe most important</strong> - You can join your MDP to your own Azure VNET, so your VMs can access your private Azure resources (<strong>avoiding the need to deploy self managed agents for deployment jobs</strong>)</li>
<li>You only pay for the cost of the running agent VMs</li>
<li>They are licensed using your on-premise agent parallel job licenses</li>
</ul>
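<p>Once a pool is created, consuming it from a pipeline is no different to any other agent pool; the pool name below is purely illustrative:</p>

```yaml
# Target a Managed DevOps Pool by name, just like any other agent pool
pool:
  name: 'MyManagedDevOpsPool'   # illustrative - use your own MDP's name

steps:
  - script: echo "Running on a Managed DevOps Pool agent"
```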
<p>We have been lucky enough to be part of the private preview and have been able to test them out, with great success.</p>
<h2 id="who-are-mdps-for">Who are MDPs for?</h2>
<h3 id="people-who-need-to-connect-to-private-resources-on-a-private-vnet">People who need to connect to private resources on a private VNET</h3>
<p>There are many use cases for MDPs, but the one I think we will see most is people who want the ease of hosted agents, but need to connect to private resources on a private VNET to do part of their CI/CD process.</p>
<p>This has been a big win for us, as we, and our clients, have a lot of private resources in secure hybrid Azure architectures.</p>
<h3 id="people-who-need-to-scale-up-their-agents">People who need to scale up their agents</h3>
<p>MDPs will also allow us to consider how we will replace our on-premise agents when their host Hyper-V servers reach their end of life.</p>
<p>It will be much easier to manage and scale MDP based agents, as there is no need to rebuild the on-premise images every couple of months; each new MDP agent can get an up to date Microsoft hosted agent image upon startup.</p>
<p>For this use case, any decision will be based on the cost of the MDP VMs vs the cost of the time to maintain the images and run them on-premises. From initial estimates we think it will be a close call; we need to do some more detailed analysis of the actual utilisation of the agents: how many are active at any given time, and how long they are active for. However, a key factor will be that MDPs mean we can scale down the number of VMs to zero, so we only pay for what we use.</p>
<p>But we have to remember that everyone&rsquo;s mileage will vary as to whether MDPs are cheaper than running on-premise agents over a multi-year purchase cycle; my suspicion is that for many the MDPs will be the cheaper option.</p>
<h2 id="summing-up">Summing Up</h2>
<p>So, irrespective of how you use Azure DevOps build agents, I urge you to take a look at <a href="https://learn.microsoft.com/en-us/azure/devops/managed-devops-pools/quickstart-azure-portal?view=azure-devops">Managed DevOps Pools</a>. They could be a game changer for you in both functionality and cost management.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Watch out that Azure DevOps Server 2022.2 drops support for SQL2017</title>
      <link>https://blog.richardfennell.net/posts/watch-out-that-azdo2022.2-drops-support-for-sql2017/</link>
      <pubDate>Fri, 26 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/watch-out-that-azdo2022.2-drops-support-for-sql2017/</guid>
<description>&lt;p&gt;&lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2022u2?view=azure-devops&#34;&gt;Azure DevOps Server 2022.2 has recently been released&lt;/a&gt;. Unexpectedly, this minor 2022.2 release includes a change in the supported versions of SQL Server: SQL 2017 support has been dropped. So, if you attempt an upgrade from 2022.1 to 2022.2 you will get the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;TF255146: The SQL Server instance you specified (nnnnn) is version ‘SQL Server 2017 RTM’, which is not supported by this version of Azure DevOps Server. For more information about supported versions of SQL Server, visit &lt;a href=&#34;https://www.visualstudio.com/docs/setup-admin/requirements&#34;&gt;https://www.visualstudio.com/docs/setup-admin/requirements&lt;/a&gt; &amp;quot;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><a href="https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2022u2?view=azure-devops">Azure DevOps Server 2022.2 has recently been released</a>. Unexpectedly, this minor 2022.2 release includes a change in the supported versions of SQL Server: SQL 2017 support has been dropped. So, if you attempt an upgrade from 2022.1 to 2022.2 you will get the error</p>
<blockquote>
<p>&ldquo;TF255146: The SQL Server instance you specified (nnnnn) is version ‘SQL Server 2017 RTM’, which is not supported by this version of Azure DevOps Server. For more information about supported versions of SQL Server, visit <a href="https://www.visualstudio.com/docs/setup-admin/requirements">https://www.visualstudio.com/docs/setup-admin/requirements</a> &quot;</p></blockquote>
<p>This issue has been detailed on the <a href="https://developercommunity.visualstudio.com/">Visual Studio Developer Community site</a>, and Microsoft have retrospectively updated the <a href="https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2022u2?view=azure-devops">product documentation.</a></p>
<p>However, it is fair to say Microsoft have not done a great job with the messaging here:</p>
<ul>
<li>Unusually, there was no prior announcement that the system requirements were changing with a minor .X update. In the past supported version changes have been made only with a major update.</li>
<li>There was also nothing in the 2022.2 official release notes about the SQL change. Only, belatedly, in the primary product documentation, a place easily missed when doing an upgrade.</li>
<li>The information in the primary documentation is only partially correct. It states SQL 2017 does not work with any 2022.X release, but it does work with 2022 RTM and 2022 Update 1. This is confusing.</li>
</ul>
<p>So the key point to take away is that any future updates from Azure DevOps 2022.1 will require a migration to a newer SQL version than 2017.</p>
<p>You will also want to consider this SQL change if you are planning to migrate to Azure DevOps Services, as only the current and current -1 versions of Azure DevOps Server are supported for migration, so this SQL version limitation will be important.</p>
<p>Also remember that the clock is ticking: once the migration tools for 2022.1 are retired you will be forced to move to 2022.2 or later prior to your migration. Based on past update history, my guess is you have around 6 months before this occurs.</p>
<p>So if a migration is on your horizon, and you are on Azure DevOps Server 2022.1 you might want to start planning it now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Yet more Azure DevOps pipeline variable expansion strangeness</title>
      <link>https://blog.richardfennell.net/posts/yet-more-azure-devops-pipeline-variable-expansion-strangeness/</link>
      <pubDate>Tue, 23 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/yet-more-azure-devops-pipeline-variable-expansion-strangeness/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;Yesterday I posted about &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/getting-parameters-out-of-arm-deloyments/&#34;&gt;converting ARM output variables to Azure DevOps pipeline variables&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Whilst using the pattern I discussed, we hit an interesting problem. On my test pipeline I had the following YAML and it was working as expected.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;PowerShell@2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;Obtain Azure Deployment outputs&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;targetType&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;inline&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;script&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;sd&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;      if (![string]::IsNullOrEmpty( $env:deploymentOutputs )) {
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        $DeploymentOutputs = convertfrom-json $env:deploymentOutputs
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        $DeploymentOutputs.PSObject.Properties | ForEach-Object {
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;            $keyname = $_.Name
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;            $value = $_.Value.value
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;            Write-Host &amp;#34;The value of [$keyName] is [$value]&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;            Write-Host &amp;#34;##vso[task.setvariable variable=$keyname]$value&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;        }
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;      }      &lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;However, on the first production project I tried this on, the script ran but did not create the expected variables. The issue was that the variable &lt;code&gt;$env:deploymentOutputs&lt;/code&gt; was empty, even though the ARM deployment had completed successfully and the outputs were available in the pipeline debug logs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>Yesterday I posted about <a href="https://blogs.blackmarble.co.uk/rfennell/getting-parameters-out-of-arm-deloyments/">converting ARM output variables to Azure DevOps pipeline variables</a>.</p>
<p>Whilst using the pattern I discussed, we hit an interesting problem. On my test pipeline I had the following YAML and it was working as expected.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Obtain Azure Deployment outputs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inline&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      if (![string]::IsNullOrEmpty( $env:deploymentOutputs )) {
</span></span></span><span class="line"><span class="cl"><span class="sd">        $DeploymentOutputs = convertfrom-json $env:deploymentOutputs
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        $DeploymentOutputs.PSObject.Properties | ForEach-Object {
</span></span></span><span class="line"><span class="cl"><span class="sd">            $keyname = $_.Name
</span></span></span><span class="line"><span class="cl"><span class="sd">            $value = $_.Value.value
</span></span></span><span class="line"><span class="cl"><span class="sd">            Write-Host &#34;The value of [$keyName] is [$value]&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">            Write-Host &#34;##vso[task.setvariable variable=$keyname]$value&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        }
</span></span></span><span class="line"><span class="cl"><span class="sd">      }      </span><span class="w">
</span></span></span></code></pre></div><p>However, on the first production project I tried this on, the script ran but did not create the expected variables. The issue was that the variable <code>$env:deploymentOutputs</code> was empty, even though the ARM deployment had completed successfully and the outputs were available in the pipeline debug logs.</p>
<h1 id="the-solution">The Solution</h1>
<p>It turned out the problem was the type of pipeline agent being used: the test pipeline was using a Windows agent, the production pipeline an Ubuntu agent.</p>
<p>The fix was to swap to use the <code>$(deploymentOutputs)</code> syntax, as opposed to the <code>$env:deploymentOutputs</code> syntax, as shown below</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Obtain Azure Deployment outputs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inline&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      if (![string]::IsNullOrEmpty( &#39;$(deploymentOutputs)&#39; )) {
</span></span></span><span class="line"><span class="cl"><span class="sd">        $DeploymentOutputs = convertfrom-json &#39;$(deploymentOutputs)&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        $DeploymentOutputs.PSObject.Properties | ForEach-Object {
</span></span></span><span class="line"><span class="cl"><span class="sd">            $keyname = $_.Name
</span></span></span><span class="line"><span class="cl"><span class="sd">            $value = $_.Value.value
</span></span></span><span class="line"><span class="cl"><span class="sd">            Write-Host &#34;The value of [$keyName] is [$value]&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">            Write-Host &#34;##vso[task.setvariable variable=$keyname]$value&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        }
</span></span></span><span class="line"><span class="cl"><span class="sd">      }</span><span class="w">
</span></span></span></code></pre></div><p>Interestingly, I had assumed the issue was actually whether PowerShell or PowerShell Core was being used, but this was not the case. Both versions of PowerShell correctly resolved the <code>$env:deploymentOutputs</code> syntax on a Windows agent.</p>
<p>So it was the agent OS type that was the issue. When a pipeline variable is mapped into the agent&rsquo;s environment its name is upper-cased, and on Linux, unlike Windows, environment variable lookups are case-sensitive, so <code>$env:deploymentOutputs</code> finds nothing on an Ubuntu agent. Yet another strange quirk of Azure DevOps pipeline variable expansion.</p>
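<p>If you prefer to keep the <code>$env:</code> style in scripts, an alternative approach, in my experience more portable, is to map the variable explicitly on the task via its <code>env</code> property, so the environment variable name is under your control on every agent OS. A sketch (the mapped name here is my own choice):</p>

```yaml
- task: PowerShell@2
  displayName: Obtain Azure Deployment outputs
  env:
    # Explicit mapping - the same name is seen on Windows and Linux agents
    DEPLOYMENT_OUTPUTS: $(deploymentOutputs)
  inputs:
    targetType: 'inline'
    script: |
      if (![string]::IsNullOrEmpty( $env:DEPLOYMENT_OUTPUTS )) {
        $DeploymentOutputs = ConvertFrom-Json $env:DEPLOYMENT_OUTPUTS
        $DeploymentOutputs.PSObject.Properties | ForEach-Object {
            Write-Host "##vso[task.setvariable variable=$($_.Name)]$($_.Value.value)"
        }
      }
```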
]]></content:encoded>
    </item>
    <item>
      <title>Getting parameters out of ARM/BICEP Deployments</title>
      <link>https://blog.richardfennell.net/posts/getting-parameters-out-of-arm-deloyments/</link>
      <pubDate>Mon, 22 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-parameters-out-of-arm-deloyments/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;Historically, we have used &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=keesschollaart.arm-outputs&#34;&gt;Kees Schollaart&amp;rsquo;s ARM Outputs Azure DevOps task&lt;/a&gt; to convert the output from an ARM template deployment into a variable that can be used in a subsequent Azure DevOps pipeline task, using the general form&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;AzureResourceManagerTemplateDeployment@3&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;Deploy the main template&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;deploymentScope&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Resource Group&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;azureResourceManagerConnection&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;ARMConnEndpoint&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;subscriptionId&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;$(SubscriptionId)&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;action&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Create Or Update Resource Group&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;resourceGroupName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;$(ResourceGroup)&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;location&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;$(AzureRegion)&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;templateLocation&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Linked artifact&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;csmFile&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;$(Pipeline.Workspace)/ARMtemplates/azuredeploy.json&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;overrideParameters&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;&amp;gt;-&lt;/span&gt;&lt;span class=&#34;sd&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;      -staticSitelocation &amp;#34;westeurope&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;      -projectName &amp;#34;$(projectName)&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;      -env &amp;#34;$(environment)&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;deploymentMode&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Incremental&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;ARM Outputs@6&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;Obtain outputs from the deployment of the Main Deploy&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;name&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;MainDeployOutput&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;ConnectedServiceNameSelector&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;ConnectedServiceNameARM&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;ConnectedServiceNameARM&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;ARMConnEndpoint&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;resourceGroupName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;$(ResourceGroup)&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;whenLastDeploymentIsFailed&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;fail&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;This process had been working well until we &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-an-azure-resource-manager-service-connection-using-workload-identity-federation&#34;&gt;upgraded our service connections to workload identity federation&lt;/a&gt;. As soon as we did this, the &lt;code&gt;ARM Outputs@6&lt;/code&gt; task started failing with the error message:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>Historically, we have used <a href="https://marketplace.visualstudio.com/items?itemName=keesschollaart.arm-outputs">Kees Schollaart&rsquo;s ARM Outputs Azure DevOps task</a> to convert the output from an ARM template deployment into a variable that can be used in a subsequent Azure DevOps pipeline task, using the general form:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">AzureResourceManagerTemplateDeployment@3</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Deploy the main template</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">deploymentScope</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Resource Group&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">azureResourceManagerConnection</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;ARMConnEndpoint&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">subscriptionId</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(SubscriptionId)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">action</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Create Or Update Resource Group&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">resourceGroupName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(ResourceGroup)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">location</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(AzureRegion)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">templateLocation</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Linked artifact&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">csmFile</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(Pipeline.Workspace)/ARMtemplates/azuredeploy.json&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">overrideParameters</span><span class="p">:</span><span class="w"> </span><span class="p">&gt;-</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      -staticSitelocation &#34;westeurope&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      -projectName &#34;$(projectName)&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      -env &#34;$(environment)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">deploymentMode</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Incremental&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">ARM Outputs@6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Obtain outputs from the deployment of the Main Deploy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;MainDeployOutput&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">ConnectedServiceNameSelector</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;ConnectedServiceNameARM&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">ConnectedServiceNameARM</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;ARMConnEndpoint&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">resourceGroupName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(ResourceGroup)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">whenLastDeploymentIsFailed</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;fail&#39;</span><span class="w">
</span></span></span></code></pre></div><p>This process had been working well until we <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-an-azure-resource-manager-service-connection-using-workload-identity-federation">upgraded our service connections to workload identity federation</a>. As soon as we did this, the <code>ARM Outputs@6</code> task started failing with the error message:</p>
<blockquote>
<p>Logging in using ApplicationTokenCredentials, authScheme is &lsquo;WorkloadIdentityFederation&rsquo;<br>
Unhandled exception during ARM Outputs Task Error: secret must be a non empty string.</p></blockquote>
<p>A quick look at the <a href="https://github.com/keesschollaart81/vsts-arm-outputs">task&rsquo;s GitHub repository</a> showed that we were not alone in seeing this error, but also that the task had not been updated in a good while.</p>
<h1 id="to-fork-or-not-to-fork">To Fork or not to Fork</h1>
<p>This task, like so many others in the Azure DevOps Marketplace, is no longer under active support.</p>
<p>Our first thought was to fork the repository and fix the issue ourselves. However, after a bit of research, we found the fix was non-trivial: the existing authentication libraries were deprecated, so they would probably need a complete swap for ones that supported <code>WorkloadIdentityFederation</code>.</p>
<p>However, more than the work involved, there was also the issue of what to do with any fix. We could see that, as the task was no longer under support, no PRs had been merged into the original repo for a number of years. So we had no mechanism to get our fix out to the community via the original GitHub repo and the Azure DevOps Marketplace listing. Our only option was republishing the task under a new ID, which is neither easily discoverable nor desirable, as it clutters the marketplace and makes it harder for users to find the right task.</p>
<p>Also, we did not really have the appetite to maintain a fork of the task and keep it up to date with the latest changes in the Azure DevOps API. A lack of support commitment is exactly the problem that got us into the current position.</p>
<h1 id="the-solution">The Solution</h1>
<p>The solution was a pattern we find ourselves using more and more: don&rsquo;t create an Azure DevOps task and publish it via the Azure DevOps Marketplace; instead, use a generic script task and some Bash/PowerShell to get the job done within code the team has direct access to in their own repos, whether that be a project repo or a shared internal repo used by many projects.</p>
<p>So the above pipeline was replaced with the following</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">AzureResourceManagerTemplateDeployment@3</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Deploy the main template</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">deploymentScope</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Resource Group&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">azureResourceManagerConnection</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;ARMConnEndpoint&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">subscriptionId</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(SubscriptionId)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">action</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Create Or Update Resource Group&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">resourceGroupName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(ResourceGroup)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">location</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(AzureRegion)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">templateLocation</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Linked artifact&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">csmFile</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(Pipeline.Workspace)/ARMtemplates/azuredeploy.json&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">overrideParameters</span><span class="p">:</span><span class="w"> </span><span class="p">&gt;-</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      -staticSitelocation &#34;westeurope&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      -projectName &#34;$(projectName)&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      -env &#34;$(environment)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">deploymentMode</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Incremental&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">deploymentOutputs</span><span class="p">:</span><span class="w"> </span><span class="l">deploymentOutputs</span><span class="w"> </span><span class="c"># this parameter is needed to get the outputs as a JSON string in an environment variable</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Then process the outputs from the ARM deployment and promote them to pipeline variables</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Obtain Azure Deployment outputs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inline&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      if (![string]::IsNullOrEmpty( &#39;$(deploymentOutputs)&#39; )) {
</span></span></span><span class="line"><span class="cl"><span class="sd">        $DeploymentOutputs = convertfrom-json &#39;$(deploymentOutputs)&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        $DeploymentOutputs.PSObject.Properties | ForEach-Object {
</span></span></span><span class="line"><span class="cl"><span class="sd">            $keyname = $_.Name
</span></span></span><span class="line"><span class="cl"><span class="sd">            $value = $_.Value.value
</span></span></span><span class="line"><span class="cl"><span class="sd">            Write-Host &#34;The value of [$keyName] is [$value]&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">            Write-Host &#34;##vso[task.setvariable variable=$keyname]$value&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        }
</span></span></span><span class="line"><span class="cl"><span class="sd">      }</span><span class="w">
</span></span></span></code></pre></div><blockquote>
<p><strong>Updated 23rd July</strong> In the inline PowerShell the <code>$(deploymentOutputs)</code> variable syntax is required, as opposed to the <code>$env:deploymentOutputs</code> syntax, if you are using an Ubuntu agent. So it is probably best to always use the <code>$(deploymentOutputs)</code> syntax to ensure cross-platform compatibility.</p></blockquote>
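<p>As a minimal sketch of the difference between the two syntaxes (this step is illustrative only, assuming the same <code>deploymentOutputs</code> variable name as above):</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml">- task: PowerShell@2
  displayName: Illustrate the two variable syntaxes
  inputs:
    targetType: 'inline'
    script: |
      # macro syntax: expanded by Azure DevOps before the script runs,
      # so it behaves the same on Windows and Ubuntu agents
      Write-Host 'Macro syntax value: $(deploymentOutputs)'
      # environment variable syntax: depends on the agent mapping the
      # pipeline variable to an environment variable, which is what
      # proved unreliable on Ubuntu agents
      Write-Host "Env syntax value: $env:DEPLOYMENTOUTPUTS"
</code></pre></div>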
<p>This solution works, is maintainable, and is under our control. I hope the same YAML unblocks some other teams out there.</p>
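<p>To make the end-to-end flow concrete, a later task can then consume one of the promoted variables by name. The output name <code>webAppName</code> here is hypothetical, standing in for whatever outputs your own ARM template declares:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"># a subsequent step can read any promoted output as a normal pipeline variable
- script: echo 'The deployed site is $(webAppName)'
  displayName: Use a promoted deployment output
</code></pre></div>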
]]></content:encoded>
    </item>
    <item>
      <title>More on when Azure DevOps variables are available in pipeline runs</title>
      <link>https://blog.richardfennell.net/posts/more-on-when-azure-devops-variables-available/</link>
      <pubDate>Thu, 11 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-when-azure-devops-variables-available/</guid>
      <description>&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;I have previously blogged a good deal on Azure DevOps variable evaluation, see &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/getting-confused-over-azure-devops-pipeline-variable-evaluation/&#34;&gt;here&lt;/a&gt; and &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/&#34;&gt;here&lt;/a&gt;, but the saga continues&amp;hellip;&lt;/p&gt;
&lt;h1 id=&#34;when-variables-exist&#34;&gt;When variables exist&lt;/h1&gt;
&lt;p&gt;Today I realised, something I guess should have been obvious, that when you manually queue a run, pre-defined variables such as &lt;code&gt;$(Build.SourceBranchName)&lt;/code&gt; are not available until the pipeline is compiled and starts running. This is because, though there is a value in the UI branch combo, that value is not in &lt;code&gt;$(Build.SourceBranchName)&lt;/code&gt; until the pipeline starts running.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="introduction">Introduction</h1>
<p>I have previously blogged a good deal on Azure DevOps variable evaluation, see <a href="https://blogs.blackmarble.co.uk/rfennell/getting-confused-over-azure-devops-pipeline-variable-evaluation/">here</a> and <a href="https://blogs.blackmarble.co.uk/rfennell/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/">here</a>, but the saga continues&hellip;</p>
<h1 id="when-variables-exist">When variables exist</h1>
<p>Today I realised, something I guess should have been obvious, that when you manually queue a run, pre-defined variables such as <code>$(Build.SourceBranchName)</code> are not available until the pipeline is compiled and starts running. This is because, though there is a value in the UI branch combo, that value is not in <code>$(Build.SourceBranchName)</code> until the pipeline starts running.</p>
<p>This would seem unimportant, and it is, except in the case where you are manually picking which stages to run via the run pipeline dialog. Only stages that have no conditional expressions, or ones whose expressions pass when <code>$(Build.SourceBranchName)</code> is empty, are shown in the list of stages to run.</p>
<blockquote>
<p><strong>Updated 22 July 2024</strong> - <a href="https://blogs.blackmarble.co.uk/rhepworth/">Rik Hepworth</a> pointed out to me that a good use of the Stage Picker UI is to do a &lsquo;richer&rsquo; validation of the YAML pipeline.</p>
<p>This is useful because, though the Azure DevOps UI validates any YAML edits, it only shows compile-time issues, mostly typos and indenting problems; it does not validate any runtime values.</p>
<p>Rik suggested that he had found loading the Stage Picker UI to be a good way to do further validation of &ldquo;some of the runtime values&rdquo;. This is a good suggestion; it is not perfect validation, but it allows you to do a bit more checking before queuing a run.</p></blockquote>
<p><img alt="Run Pipeline Dialog" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/branchconditions1.png"></p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">trigger</span><span class="p">:</span><span class="w"> </span><span class="l">none</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">stages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">No_Conditions</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">Job1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="l">echo &#39;Running on $(Build.SourceBranchName)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">${{ if eq(variables[&#39;Build.SourceBranchName&#39;], &#39;main&#39;) }}:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">BranchName_is_main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">Job2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="l">echo &#39;Running on $(Build.SourceBranchName)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">${{ if or(eq(variables[&#39;Build.SourceBranchName&#39;], &#39;main&#39;), eq(variables[&#39;Build.SourceBranchName&#39;], &#39;&#39;)) }}:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">BranchName_is_main_or_empty</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">Job3</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="l">echo &#39;Running on $(Build.SourceBranchName)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">Using_Conditions</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">condition</span><span class="p">:</span><span class="w"> </span><span class="l">eq(variables[&#39;Build.SourceBranchName&#39;], &#39;main&#39;)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">Job4</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="l">echo &#39;Running on $(Build.SourceBranchName)&#39;</span><span class="w">
</span></span></span></code></pre></div><blockquote>
<p>Remember, if you wish to use a <code>${{ if }}</code> expression to actually filter on a branch name, you need to use an <code>or</code> to handle both the UI and runtime conditions.</p></blockquote>
<p>Basically, you cannot rely on the value of <code>Build.SourceBranchName</code> and <code>${{ if }}</code> expressions to trim the list of available stages in the run build UI. In my opinion, it is best to avoid even trying to do this.</p>
<h1 id="what-can-i-do-with-conditions">What can I do with conditions?</h1>
<p>If you want to be able to pick the stages to run in the UI, and need checks against items like branch names, the simple answer is to use conditions on the stages. These are only evaluated at the moment the stage is started, unlike <code>${{ if }}</code> expressions, which are evaluated when the run is queued.</p>
<p>But how can we give users an indication of what will be run, or not, as soon as the run is started as opposed to only finding out as it completes?</p>
<p>Variables and expressions are the answer here. Using expressions, we can set a variable based on the branch name at queue time. This can then be used in the <code>displayName</code> of the stage, so the UI shows whether a stage will be run or not.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">${{ if eq(variables[&#39;Build.SourceBranchName&#39;], &#39;main&#39;) }}:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">willdeployToProdString</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="kc">Yes</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="l">${{ if ne(variables[&#39;Build.SourceBranchName&#39;], &#39;main&#39;) }}:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">willdeployToProdString</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="kc">No</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">stages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="c"># other stages shown above left out for clarity of the example</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="l">Using_Conditions</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Will deploy ${{variables.willdeployToProdString}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">condition</span><span class="p">:</span><span class="w"> </span><span class="l">eq(variables[&#39;Build.SourceBranchName&#39;], &#39;main&#39;)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="l">Job4</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="l">echo &#39;Running on $(Build.SourceBranchName)&#39;</span><span class="w">
</span></span></span></code></pre></div><p><img alt="Run Pipeline Dialog" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/branchconditions2.png"></p>
<blockquote>
<p>Remember if you are skipping steps with conditions, you may need to set the <code>condition</code> on subsequent stages to run. The default (if no condition is set) is to only run on success of the previous stage. See <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops">the official docs for more details on your options</a></p></blockquote>
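<p>For example, here is a minimal sketch (the stage and job names are illustrative) of a follow-on stage that runs whether the conditional stage above succeeded or was skipped:</p>
<pre tabindex="0"><code class="language-yaml">  - stage: After_Using_Conditions
    # run even if the previous stage was skipped, but not if it failed
    condition: in(dependencies.Using_Conditions.result, 'Succeeded', 'Skipped')
    jobs:
      - job: Job5
        steps:
        - script: echo 'This stage is not silently skipped'
</code></pre>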
<h1 id="conclusion">Conclusion</h1>
<p>As I have written about before, use compile-time expressions in Azure DevOps Pipelines with care; their evaluation can get complex.</p>
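<p>As a quick recap (a minimal sketch, the variable names are illustrative), the three expression syntaxes are evaluated at different times, which is where much of that complexity comes from:</p>
<pre tabindex="0"><code class="language-yaml">variables:
  foo: 'bar'
  # runtime expression, evaluated when the variable is used
  baz: $[ variables.foo ]

steps:
# compile time (template) expression, expanded before the run starts
- script: echo '${{ variables.foo }}'
# macro syntax, substituted at runtime
- script: echo '$(baz)'
</code></pre>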
<p>So, a bit of a niche blog post, but I hope it is of use to someone.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Checking out Git submodules when Azure DevOps Protected Access to repos is enabled</title>
      <link>https://blog.richardfennell.net/posts/checking-out-git-submodules-when-protected-access-enabled/</link>
      <pubDate>Wed, 10 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/checking-out-git-submodules-when-protected-access-enabled/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;Whilst working on an Azure DevOps YAML pipeline for a solution that used &lt;a href=&#34;https://git-scm.com/book/en/v2/Git-Tools-Submodules&#34;&gt;Git Submodules&lt;/a&gt;, we hit a problem with the checkout of the repo and submodule using the following YAML:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nt&#34;&gt;jobs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;job&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Build&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;steps&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;checkout&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;submodules&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;It got the main Git repo, but failed with the following error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;git submodule sync
git --config-env=http.https://myorg@dev.azure.com.extraheader=env_var_http.https://myorg@dev.azure.com.extraheader submodule update --init --force
Submodule &amp;#39;Library&amp;#39; (https://myorg@dev.azure.com/myorg/myproject/_git/Library) registered for path &amp;#39;Library&amp;#39;
Cloning into &amp;#39;D:/a/1/s/Library&amp;#39;...
remote: TF401019: The Git repository with name or identifier Library does not exist or you do not have permissions for the operation you are attempting.
&lt;/code&gt;&lt;/pre&gt;&lt;h1 id=&#34;the-analysis&#34;&gt;The Analysis&lt;/h1&gt;
&lt;p&gt;The issue is that the build agent access token was scoped to only the repo containing the YAML pipeline and not the submodule repo, even though they are in the same Azure DevOps Team Project.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>Whilst working on an Azure DevOps YAML pipeline for a solution that used <a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules">Git Submodules</a>, we hit a problem with the checkout of the repo and submodule using the following YAML:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Build&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">checkout</span><span class="p">:</span><span class="w"> </span><span class="l">self</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">submodules</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span></code></pre></div><p>It got the main Git repo, but failed with the following error:</p>
<pre tabindex="0"><code>git submodule sync
git --config-env=http.https://myorg@dev.azure.com.extraheader=env_var_http.https://myorg@dev.azure.com.extraheader submodule update --init --force
Submodule &#39;Library&#39; (https://myorg@dev.azure.com/myorg/myproject/_git/Library) registered for path &#39;Library&#39;
Cloning into &#39;D:/a/1/s/Library&#39;...
remote: TF401019: The Git repository with name or identifier Library does not exist or you do not have permissions for the operation you are attempting.
</code></pre><h1 id="the-analysis">The Analysis</h1>
<p>The issue is that the build agent access token was scoped to only the repo containing the YAML pipeline and not the submodule repo, even though they are in the same Azure DevOps Team Project.</p>
<p>This was not because of a lack of permissions for the build service account, but that the Azure DevOps Team Project pipelines setting <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/security/secure-access-to-repos?view=azure-devops&amp;tabs=yaml">&lsquo;Protect access to repositories in YAML pipelines&rsquo;</a> was enabled. This setting restricts the access of the build agent to only the repo containing the YAML pipeline, or ones referenced explicitly in the pipeline.</p>
<h1 id="the-solution">The Solution</h1>
<p>The obvious solution is to disable the &lsquo;Protect access to repositories in YAML pipelines&rsquo; setting for the Team Project. However, this may not be possible due to security requirements. In our case this setting was being enforced at the Team Project Collection level and we could not alter it.</p>
<p>But luckily there is a workaround.</p>
<ol>
<li>
<p>In the YAML, reference the Git submodule repo as a repository resource and use a <code>checkout</code> step to clone it into some folder (the folder is unimportant as we will never actually use it).</p>
</li>
<li>
<p>You can then call the main <code>checkout</code> with the <code>submodules</code> parameter set to <code>true</code>. This will now work because the agent access token is scoped to both the repo the YAML is in and the submodule repo you manually referenced.</p>
</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">$(Build.DefinitionName)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">trigger</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">branches</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">include</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w"> </span><span class="l">main ] </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">resources</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">repositories</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">repository</span><span class="p">:</span><span class="w"> </span><span class="l">Library</span><span class="w"> </span><span class="c"># the submodule</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">git</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Library</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="l">windows-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">stages</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">stage</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Build_Packages&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">job</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Build&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">checkout</span><span class="p">:</span><span class="w"> </span><span class="l">Library</span><span class="w"> </span><span class="c"># the throwaway checkout</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;./s/SomeTmpPath/Library&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">checkout</span><span class="p">:</span><span class="w"> </span><span class="l">self</span><span class="w"> </span><span class="c"># the main checkout</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">submodules</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span></code></pre></div><blockquote>
<p><strong>Note:</strong> You can&rsquo;t seem to get away with just adding the reference; the explicit checkout of the referenced submodule repo is required, else you still get the permissions error.</p></blockquote>
<p>So, a hacky solution, but it works!</p>
]]></content:encoded>
    </item>
    <item>
      <title>So my Azure DevOps TF30063 error was down to DNS again</title>
      <link>https://blog.richardfennell.net/posts/so-my-tf30063-error-is-just-dns-again/</link>
      <pubDate>Tue, 09 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/so-my-tf30063-error-is-just-dns-again/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;I recently upgraded a client&amp;rsquo;s Azure DevOps Server from 2019 to 2022. This required a new application tier VM due to the change in supported versions of Windows Server between the two versions.&lt;/p&gt;
&lt;p&gt;Unfortunately, the client&amp;rsquo;s developers had always accessed the old Azure DevOps Server using the VM&amp;rsquo;s FQDN, as opposed to a DNS-managed alias. Hence, given they wanted to minimise change, the plan was to create a DNS Alias so they could continue to use the same URLs and TFVC workspace mappings.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>I recently upgraded a client&rsquo;s Azure DevOps Server from 2019 to 2022. This required a new application tier VM due to the change in supported versions of Windows Server between the two versions.</p>
<p>Unfortunately, the client&rsquo;s developers had always accessed the old Azure DevOps Server using the VM&rsquo;s FQDN, as opposed to a DNS-managed alias. Hence, given they wanted to minimise change, the plan was to create a DNS Alias so they could continue to use the same URLs and TFVC workspace mappings.</p>
<p>The upgrade went well, and once complete I could access the upgraded server using the URLs http://localhost:8080/tfs or http://newserver:8080/tfs.</p>
<p>However, once the DNS Alias (an A record) was added and we tried to access http://oldserver:8080/tfs, the developers were shown a login dialog, and when they entered their valid credentials they got a TF30063 error.</p>
<h1 id="the-solution-well-sort-of">The Solution (well sort of)</h1>
<p>I went down a rabbit hole of <a href="https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/accessing-server-locally-with-fqdn-cname-alias-denied">loopback check settings</a>, but to no avail.</p>
<p>I then tried adding an entry in the local hosts file for my new server VM on my test client. Strangely, this worked.</p>
<p>So my working assumption was one of the following:</p>
<ol>
<li>The DNS Alias was not working correctly; maybe a CNAME record is required as opposed to an A record?</li>
<li>As the DNS Alias is replacing what was a &lsquo;real&rsquo; server name, had we missed some special settings?</li>
</ol>
<p>To work through these ideas, I got a new CNAME record &lsquo;devops&rsquo; added to the DNS that pointed to the new server (arguably a CNAME that should have been in use all along). For some reason this DNS change took much longer to propagate than the A record, but once it did, the TF30063 error had completely gone. I could now access the new server using the following URLs:</p>
<ul>
<li>http://localhost:8080/tfs</li>
<li>http://newserver:8080/tfs</li>
<li>http://oldserver:8080/tfs (the DNS A record)</li>
<li>and http://devops:8080/tfs (the DNS CNAME)</li>
</ul>
<p>So, as usual, the problem was DNS; I assume due to DNS propagation delays and cache timeouts.</p>
<p>Why is it always DNS?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where has the staging URL PR comment generated by my GitHub Actions workflow gone?</title>
      <link>https://blog.richardfennell.net/posts/where-has-the-staging-url-gone-from-my-github-actions-workflow/</link>
      <pubDate>Tue, 02 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-has-the-staging-url-gone-from-my-github-actions-workflow/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;Last week I noticed that the staging URL that is normally output as a comment was missing from new GitHub PRs. Previously, this URL was added automatically by the &lt;code&gt;Azure/static-web-apps-deploy&lt;/code&gt; GitHub Action for PRs in our &lt;a href=&#34;https://gohugo.io/&#34;&gt;Hugo based websites&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;PR Comment&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/images/rfennell/pr-url-comment.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;After a bit of digging, I noticed a warning message in the logs of the Action that said:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;hellip;&lt;br&gt;
Done Zipping App Artifacts&lt;br&gt;
Uploading build artifacts.&lt;br&gt;
Finished Upload. Polling on deployment.&lt;br&gt;
Status: InProgress. Time: 0.178533(s)&lt;br&gt;
Status: Succeeded. Time: 15.3731517(s)&lt;br&gt;
Deployment Complete :)&lt;br&gt;
Visit your site at: &lt;a href=&#34;https://white-glacier-0d2380f03-300.westeurope.2.azurestaticapps.net&#34;&gt;https://white-glacier-0d2380f03-300.westeurope.2.azurestaticapps.net&lt;/a&gt;
&lt;strong&gt;Unexectedly failed to add GitHub comment.&lt;/strong&gt;&lt;br&gt;
Thanks for using Azure Static Web Apps!&lt;br&gt;
Exiting&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>Last week I noticed that the staging URL that is normally output as a comment was missing from new GitHub PRs. Previously, this URL was added automatically by the <code>Azure/static-web-apps-deploy</code> GitHub Action for PRs in our <a href="https://gohugo.io/">Hugo based websites</a>.</p>
<p><img alt="PR Comment" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/pr-url-comment.png"></p>
<p>After a bit of digging, I noticed a warning message in the logs of the Action that said:</p>
<blockquote>
<p>&hellip;<br>
Done Zipping App Artifacts<br>
Uploading build artifacts.<br>
Finished Upload. Polling on deployment.<br>
Status: InProgress. Time: 0.178533(s)<br>
Status: Succeeded. Time: 15.3731517(s)<br>
Deployment Complete :)<br>
Visit your site at: <a href="https://white-glacier-0d2380f03-300.westeurope.2.azurestaticapps.net">https://white-glacier-0d2380f03-300.westeurope.2.azurestaticapps.net</a>
<strong>Unexectedly failed to add GitHub comment.</strong><br>
Thanks for using Azure Static Web Apps!<br>
Exiting</p></blockquote>
<h2 id="the-solution">The Solution</h2>
<p>Initially I thought the problem might be a change in functionality of the <a href="https://github.com/Azure/static-web-apps-deploy"><code>Azure/static-web-apps-deploy</code> action</a>. However, it turns out it has not altered since May 2021.</p>
<p>So next I tried to add my own PR comment using the <a href="https://github.com/actions/github-script">actions/github-script action</a>:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">actions/github-script@v6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">if</span><span class="p">:</span><span class="w"> </span><span class="l">github.event_name == &#39;pull_request&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      github.rest.issues.createComment({
</span></span></span><span class="line"><span class="cl"><span class="sd">        issue_number: context.issue.number,
</span></span></span><span class="line"><span class="cl"><span class="sd">        owner: context.repo.owner,
</span></span></span><span class="line"><span class="cl"><span class="sd">        repo: context.repo.repo,
</span></span></span><span class="line"><span class="cl"><span class="sd">        body: &#39;Azure Static Web Apps: Your staging site is ready at: ${{ steps.builddeploy.outputs.static_web_app_url }}&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">      })</span><span class="w">
</span></span></span></code></pre></div><p>This failed with a 403 error, so I realised my problem was missing permissions. So I added a permissions block to the job:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">build_and_deploy_job</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">if</span><span class="p">:</span><span class="w"> </span><span class="l">github.event_name == &#39;schedule&#39; || github.event_name == &#39;push&#39; || (github.event_name == &#39;pull_request&#39; &amp;&amp; github.event.action != &#39;closed&#39;)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">permissions</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">contents</span><span class="p">:</span><span class="w"> </span><span class="l">read  </span><span class="w"> </span><span class="c"># This is required read the repo</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">pull-requests</span><span class="p">:</span><span class="w"> </span><span class="l">write </span><span class="w"> </span><span class="c"># This is required to comment on the PR</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="l">...</span><span class="w">
</span></span></span></code></pre></div><blockquote>
<p><strong>Note:</strong> As soon as you set any permissions you have to set all the ones you need, as setting a permission removes the defaults. So in this case, if you just set the <code>pull-requests: write</code> permission but not the <code>contents: read</code> permission, the workflow would not be able to clone the repo</p></blockquote>
<p>This worked, but then it occurred to me: was the original error just permissions related?</p>
<p>So I removed the <code>actions/github-script</code> action but left the permissions block, and, as I had hoped, the staging URL appeared in the PR comment.</p>
<p>So my assumption is that the default permissions have recently changed. It just shows that it is always a good idea to be explicit about permissions in your GitHub Actions workflows.</p>
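<p>As a final illustration (a minimal sketch, the workflow name, trigger, and job name are illustrative), you can also declare the permissions once at the workflow level so that every job gets the same explicit set:</p>
<pre tabindex="0"><code class="language-yaml">name: Deploy Static Web App

on:
  pull_request:
    branches: [main]

# explicit permissions applied to all jobs in the workflow
permissions:
  contents: read       # needed to clone the repo
  pull-requests: write # needed to add the staging URL comment

jobs:
  build_and_deploy_job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
</code></pre>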
]]></content:encoded>
    </item>
    <item>
      <title>Don&#39;t call your Azure DevOps YAML Deployment stage &#39;deployment&#39; - strange things happen</title>
      <link>https://blog.richardfennell.net/posts/dont-call-your-azure-devops-deployment-stage-deploy/</link>
      <pubDate>Mon, 01 Jul 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dont-call-your-azure-devops-deployment-stage-deploy/</guid>
      <description>&lt;p&gt;Just a little reminder, probably to my future self, to not call your Azure DevOps YAML Deployment stage &amp;lsquo;deployment&amp;rsquo;.&lt;/p&gt;
&lt;p&gt;If you forget, you can expect to waste plenty of time, like we did last week, with environment agents not picking up queued jobs and no diagnostic logging messages to give you a clue as to what is going on.&lt;/p&gt;
&lt;p&gt;This &lt;a href=&#34;https://stackoverflow.com/questions/61977588/how-can-you-target-environments-in-a-azure-yaml-pipeline-via-deployment-job&#34;&gt;issue has been reported on StackOverflow&lt;/a&gt; where it was pointed out that the official documentation used &amp;lsquo;deployment&amp;rsquo; as the stage name. The good news is that at least the documentation is now fixed, but there is still the chance you can make this naming mistake all on your own.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just a little reminder, probably to my future self, to not call your Azure DevOps YAML Deployment stage &lsquo;deployment&rsquo;.</p>
<p>If you forget, you can expect to waste plenty of time, like we did last week, with environment agents not picking up queued jobs and no diagnostic logging messages to give you a clue as to what is going on.</p>
<p>This <a href="https://stackoverflow.com/questions/61977588/how-can-you-target-environments-in-a-azure-yaml-pipeline-via-deployment-job">issue has been reported on StackOverflow</a> where it was pointed out that the official documentation used &lsquo;deployment&rsquo; as the stage name. The good news is that at least the documentation is now fixed, but there is still the chance you can make this naming mistake all on your own.</p>
<p>So, future me, I hope this post saves you some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>It&#39;s the SonarQube Elasticsearch indexes again</title>
      <link>https://blog.richardfennell.net/posts/its-the-sonarqube-indexes-again/</link>
      <pubDate>Fri, 28 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/its-the-sonarqube-indexes-again/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;I just upgraded our &lt;a href=&#34;https://devblogs.microsoft.com/premier-developer/sonarqube-hosted-on-azure-app-service/&#34;&gt;Azure/Docker container hosted SonarQube instance&lt;/a&gt; from 10.5.1 to 10.6.0. This was partly due to our usual upgrade process (we try to upgrade within a week or so of a new release), but also to address a specific Java issue we started to see when using the SonarQube@6 Azure DevOps tasks.&lt;/p&gt;
&lt;p&gt;The error was that the SonarQube analysis completed successfully, but the clean-up process failed with this JVM error:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>I just upgraded our <a href="https://devblogs.microsoft.com/premier-developer/sonarqube-hosted-on-azure-app-service/">Azure/Docker container hosted SonarQube instance</a> from 10.5.1 to 10.6.0. This was partly due to our usual upgrade process (we try to upgrade within a week or so of a new release), but also to address a specific Java issue we started to see when using the SonarQube@6 Azure DevOps tasks.</p>
<p>The error was that the SonarQube analysis completed successfully, but the clean-up process failed with this JVM error:</p>
<pre tabindex="0"><code>INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 4:20.150s
INFO: Final Memory: 55M/194M
INFO: ------------------------------------------------------------------------
##[error]ERROR: Cleanup during JVM shutdown failed
ERROR: Cleanup during JVM shutdown failed
##[error]java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError
</code></pre><h2 id="the-issue">The Issue</h2>
<p>The upgrade appeared to go well:</p>
<ul>
<li>The container restarted</li>
<li>The DB schema was updated</li>
<li>The SonarQube instance started and became available.</li>
<li>As expected, a background re-index started on our 89 projects</li>
</ul>
<p>88 were re-indexed without issue, but one failed with the following error.</p>
<pre tabindex="0"><code>Error Details: A Customer Function [Project Data Reload]
Error Details
java.lang.IllegalStateException: Unrecoverable indexing failures: 7 errors among 7 requests. Check Elasticsearch logs for further details.
</code></pre><p>The Elasticsearch logs showed nothing more.</p>
<h2 id="solution">Solution</h2>
<p>A search of the ever-useful <a href="https://community.sonarsource.com/">SonarSource Community forums</a> suggested this error can be seen when there is a lack of disk space. However, given we are running on Azure Storage and are using a tiny fraction of our available 100TB of disk space, that did not seem to be the likely issue.</p>
<p>So I adopted my usual approach for SonarQube issues involving Elasticsearch: I stopped the container, deleted the <code>sonarqube-data/es6</code> index folder, restarted the container, and let SonarQube rebuild the index to see if this fixed the problem.</p>
<p>And, as is so often the case, this fixed the issue; all the projects were re-indexed successfully.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Passing dynamically sized object parameters to Azure DevOps Pipeline templates</title>
      <link>https://blog.richardfennell.net/posts/passing-object-parameters-to-azure-devops-pipeline-template/</link>
      <pubDate>Mon, 24 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/passing-object-parameters-to-azure-devops-pipeline-template/</guid>
      <description>&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;I have an Azure DevOps YAML template that does some deployment actions using PowerShell. This template is used in multiple locations. The problem is that the PowerShell step within the template needs a variable number of environment variables, set from values stored in an Azure DevOps Variable Group. The number of variables is not fixed because it depends on the underlying commands the PowerShell script is triggering to do the deployment.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-problem">The Problem</h1>
<p>I have an Azure DevOps YAML template that does some deployment actions using PowerShell. This template is used in multiple locations. The problem is that the PowerShell step within the template needs a variable number of environment variables, set from values stored in an Azure DevOps Variable Group. The number of variables is not fixed because it depends on the underlying commands the PowerShell script is triggering to do the deployment.</p>
<p>Hence, the problem was how to pass the values of these variables to the template in a way that was easy to maintain and understand, as this is not something that is well documented.</p>
<h1 id="the-solution">The Solution</h1>
<p>The solution, as you might expect if you have used YAML templates, is to use the <code>object</code> type for the parameter in the template. This allows you to pass a single object to the template that contains all the values you need.</p>
<p>This can be seen in this simple example:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">envparams</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">object</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">default</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inline&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      Get-ChildItem -Path Env:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">env</span><span class="p">:</span><span class="w">  </span><span class="l">${{ parameters.envparams }}</span><span class="w">
</span></span></span></code></pre></div><p>The question then becomes, how do you wire this up in the calling pipeline?</p>
<p>The answer is to pass a JSON block to the template. Just make sure you get the escaping correct; I have found incorrect escaping is the most common reason for a pipeline to fail to run.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">trigger</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="l">main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">vmImage</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># Link to the variable group</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">group</span><span class="p">:</span><span class="w"> </span><span class="l">ExpandTest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="l">base.yml</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># match the variable group values with name of the environment variables you wish to create</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">envparams</span><span class="p">:</span><span class="w"> </span>{<span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">&#39;MY-PARAM1&#39;</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(MY-PARAM1)&#39;</span><span class="p">,</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">&#39;MY-PARAM2&#39;</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(MY-PARAM2)&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>}<span class="w">
</span></span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>Why are my Azure DevOps Pipeline cache hits missing</title>
      <link>https://blog.richardfennell.net/posts/why-is-my-azure-devops-pipeline-cache-missing/</link>
      <pubDate>Wed, 05 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-is-my-azure-devops-pipeline-cache-missing/</guid>
      <description>&lt;p&gt;I have blogged in the past about &lt;a href=&#34;https://blog.richardfennell.net/posts/caching-nvd-dependancies/&#34;&gt;Caching NVD Vulnerability Dependency data on hosted Azure DevOps Pipeline agents&lt;/a&gt;. Using the cache is a great way to speed up slow builds.&lt;/p&gt;
&lt;p&gt;However, today I was surprised to find I was getting cache misses on my pipeline, even though I was sure the cache should have been hit.&lt;/p&gt;
&lt;p&gt;There are rules over how the cache is used:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The cache is specific to a pipeline definition, so there is no sharing of the cache between pipeline definitions&lt;/li&gt;
&lt;li&gt;The cache is only created if the pipeline is successful (the cache is saved by the post-run tasks)&lt;/li&gt;
&lt;li&gt;The cache only lasts 7 days&lt;/li&gt;
&lt;li&gt;What I had not realised is that the cache is also specific to the branch, in a not so obvious way&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;My pipeline was triggered off a PR, so the cache was being created on the &amp;lsquo;branch&amp;rsquo; PR #123. This was working as expected; all runs of the PR-triggered build used the cache after the initial run. However, if I manually triggered a pipeline run of the same branch as the PR was using, there was a cache miss.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have blogged in the past about <a href="https://blog.richardfennell.net/posts/caching-nvd-dependancies/">Caching NVD Vulnerability Dependency data on hosted Azure DevOps Pipeline agents</a>. Using the cache is a great way to speed up slow builds.</p>
<p>However, today I was surprised to find I was getting cache misses on my pipeline, even though I was sure the cache should have been hit.</p>
<p>There are rules over how the cache is used:</p>
<ul>
<li>The cache is specific to a pipeline definition, so there is no sharing of the cache between pipeline definitions</li>
<li>The cache is only created if the pipeline is successful (the cache is saved by the post-run tasks)</li>
<li>The cache only lasts 7 days</li>
<li>What I had not realised is that the cache is also specific to the branch, in a not so obvious way</li>
</ul>
<p>My pipeline was triggered off a PR, so the cache was being created on the &lsquo;branch&rsquo; PR #123. This was working as expected; all runs of the PR-triggered build used the cache after the initial run. However, if I manually triggered a pipeline run of the same branch as the PR was using, there was a cache miss.</p>
<p>As far as the Azure DevOps cache task is concerned, the branch used for PR #123 and a manual run off the underlying branch used in the PR are different things, so the cache created by the PR was not being used by the manual branch build.</p>
<p>That explains a few slow builds I have had.</p>
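<p>For reference, the branch scoping discussed above applies however the cache key itself is composed. A minimal sketch of a Cache task (the key and path here are illustrative examples, not my actual pipeline) looks like this:</p>

```yaml
steps:
# The saved cache is scoped to the pipeline definition, the key below, AND the
# logical branch of the run - e.g. the PR merge branch for a PR-triggered build
# versus the underlying branch for a manual run of that same branch.
- task: Cache@2
  inputs:
    key: 'nvd-data | "$(Agent.OS)"'       # illustrative cache key
    path: $(Pipeline.Workspace)/nvd-data  # illustrative folder to cache/restore
  displayName: Cache NVD vulnerability data
```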
]]></content:encoded>
    </item>
    <item>
      <title>Still time to register for the Global DevOps Experience</title>
      <link>https://blog.richardfennell.net/posts/still-time-to-register-for-the-global-devops-experience/</link>
      <pubDate>Mon, 03 Jun 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/still-time-to-register-for-the-global-devops-experience/</guid>
      <description>&lt;p&gt;There is still time to register for the free &lt;a href=&#34;https://www.globaldevopsx.com/&#34;&gt;Global DevOps Experience&lt;/a&gt; which is being run on the 15th of June at many &lt;a href=&#34;https://www.globaldevopsx.com/#participants&#34;&gt;venues around the world&lt;/a&gt;, including Black Marble&amp;rsquo;s offices.&lt;/p&gt;
&lt;p&gt;The Global DevOps Experience is a day full of learning and fun, as we immerse you in the world of DevOps and AI. You will learn about the latest trends and technologies, and work in a team to solve challenging exercises based around a realistic business scenario.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is still time to register for the free <a href="https://www.globaldevopsx.com/">Global DevOps Experience</a> which is being run on the 15th of June at many <a href="https://www.globaldevopsx.com/#participants">venues around the world</a>, including Black Marble&rsquo;s offices.</p>
<p>The Global DevOps Experience is a day full of learning and fun, as we immerse you in the world of DevOps and AI. You will learn about the latest trends and technologies, and work in a team to solve challenging exercises based around a realistic business scenario.</p>
<p>You can <a href="https://www.globaldevopsx.com/#participants">register for free on the GDEX site</a>. Hope to see you on the 15th!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Azure DevOps pipeline jobs failing to start</title>
      <link>https://blog.richardfennell.net/posts/azure-devops-pipeline-jobs-failing-to-start/</link>
      <pubDate>Wed, 29 May 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/azure-devops-pipeline-jobs-failing-to-start/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;Whilst migrating some Azure DevOps classic pipelines to templated multi-stage YAML, I hit a problem where a job running on a self-hosted agent would not start.&lt;/p&gt;
&lt;p&gt;The YAML stage, which required approval, would be queued and approved, but the agent would just sit there with the message &lt;code&gt;Starting job&lt;/code&gt; and never actually start. The strange thing was that even though the job had not started, the pipeline instantly showed as failed, with no error message.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>Whilst migrating some Azure DevOps classic pipelines to templated multi-stage YAML, I hit a problem where a job running on a self-hosted agent would not start.</p>
<p>The YAML stage, which required approval, would be queued and approved, but the agent would just sit there with the message <code>Starting job</code> and never actually start. The strange thing was that even though the job had not started, the pipeline instantly showed as failed, with no error message.</p>
<h2 id="the-solution">The Solution</h2>
<p>After much fiddling, as there was no diagnostic information in the logs, I found the issue was a YAML syntax error that was not picked up at queue time or run time.</p>
<p>I had written in a template file</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">download</span><span class="p">:</span><span class="w"> </span><span class="l">self</span><span class="w">
</span></span></span></code></pre></div><p>When it should have been</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">download</span><span class="p">:</span><span class="w"> </span><span class="l">current</span><span class="w">
</span></span></span></code></pre></div><p>Once I fixed this, the jobs started as expected. I am not sure why this issue did not give a clearer error message, but I hope this post saves someone else a bit of time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Failing client connections with SQL OPENJSON Incorrect Syntax error after upgrading Identity Server 6 to 7</title>
      <link>https://blog.richardfennell.net/posts/failing-client-connections-after-upgrading-identity-server-6-to-7/</link>
      <pubDate>Wed, 29 May 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/failing-client-connections-after-upgrading-identity-server-6-to-7/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;Whilst updating a client&amp;rsquo;s &lt;a href=&#34;https://duendesoftware.com/products/identityserver&#34;&gt;Duende Identity Server&lt;/a&gt; from version 6 to 7, we experienced a problem. We followed the &lt;a href=&#34;https://docs.duendesoftware.com/identityserver/v7/upgrades/&#34;&gt;upgrade steps&lt;/a&gt; and all was working fine against our development instance, i.e. the Identity Server DB was upgraded to the current schema and we could log in from our test MVC web client without issues.&lt;/p&gt;
&lt;p&gt;The problem occurred when we started to test using our UAT (a production replica) DB. On loading the Identity Server, it ran the expected EF migrations and appeared to start, but when a client tried to connect we got an exception in the Identity Server logs of the form&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>Whilst updating a client&rsquo;s <a href="https://duendesoftware.com/products/identityserver">Duende Identity Server</a> from version 6 to 7, we experienced a problem. We followed the <a href="https://docs.duendesoftware.com/identityserver/v7/upgrades/">upgrade steps</a> and all was working fine against our development instance, i.e. the Identity Server DB was upgraded to the current schema and we could log in from our test MVC web client without issues.</p>
<p>The problem occurred when we started to test using our UAT (a production replica) DB. On loading the Identity Server, it ran the expected EF migrations and appeared to start, but when a client tried to connect we got an exception in the Identity Server logs of the form</p>
<pre tabindex="0"><code>OPENJSON - Incorrect syntax near the keyword &#39;$&#39;
</code></pre><h2 id="the-solution">The Solution</h2>
<p>The issue, it turns out, was the SQL DB compatibility level, as <a href="https://learn.microsoft.com/en-us/sql/relational-databases/json/convert-json-data-to-rows-and-columns-with-openjson-sql-server?view=sql-server-2017#openjson-requires-compatibility-level-130">OPENJSON requires Compatibility Level 130</a>. The UAT (and production) DB&rsquo;s compatibility level had not been updated when the SQL Server instance was upgraded, but this had not been an issue until now.</p>
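<p>If you want to confirm the current level before changing it, the standard <code>sys.databases</code> catalog view shows it for every database on the instance:</p>

```sql
-- Show each database and its compatibility level; look for values below 130
SELECT name, compatibility_level
FROM sys.databases;
```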
<p>Once the DB compatibility level was updated, the Identity Server started working as expected. We did this via SQL Management Studio, by right-clicking on the DB, selecting Properties, and then setting the compatibility level to 150 (SQL 2019), but you could instead use the command</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-sql" data-lang="sql"><span class="line"><span class="cl"><span class="k">ALTER</span><span class="w"> </span><span class="k">DATABASE</span><span class="w"> </span><span class="n">IdentityServer</span><span class="w"> </span><span class="k">SET</span><span class="w"> </span><span class="n">COMPATIBILITY_LEVEL</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="mi">150</span><span class="w">
</span></span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>Fix for SonarQube recoverable indexing failures error</title>
      <link>https://blog.richardfennell.net/posts/fix-for-sonarqube-recoverable-indexing-failures/</link>
      <pubDate>Thu, 16 May 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-sonarqube-recoverable-indexing-failures/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;Within one of our Azure DevOps builds we today started to see the following error when running the SonarQube analysis step:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-plaintext&#34; data-lang=&#34;plaintext&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;##[error][SQ] Task failed with status FAILED, Error message: Unrecoverable indexing failures: 1 errors among 1 requests. Check Elasticsearch logs for further details.
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Nothing obvious pointed to why this should have started to occur; the SonarQube logs showed nothing more than a longer version of the same message.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>Within one of our Azure DevOps builds we today started to see the following error when running the SonarQube analysis step:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-plaintext" data-lang="plaintext"><span class="line"><span class="cl">##[error][SQ] Task failed with status FAILED, Error message: Unrecoverable indexing failures: 1 errors among 1 requests. Check Elasticsearch logs for further details.
</span></span></code></pre></div><p>Nothing obvious pointed to why this should have started to occur; the SonarQube logs showed nothing more than a longer version of the same message.</p>
<h2 id="the-solution">The Solution</h2>
<p>The solution, <a href="https://blog.richardfennell.net/posts/sonarqube-container-will-not-start/">as it has been previously</a>, was to clear the SonarQube index.</p>
<p>This can be done by deleting the <code>ES8</code> folder from the <code>data</code> folder of the SonarQube installation. In my case this was within an Azure storage account.</p>
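<p>As a rough sketch, when the <code>data</code> folder is on a locally accessible path the clean-up is a single delete (for an Azure storage account, do the equivalent via Storage Explorer or the mounted file share); the path below is an assumption, so point it at your own instance:</p>

```shell
# Stop SonarQube first, then remove the Elasticsearch index folder;
# it is rebuilt automatically on the next start.
# SONARQUBE_DATA is an assumed location - set it to your instance's data folder.
SONARQUBE_DATA="${SONARQUBE_DATA:-/opt/sonarqube/data}"
rm -rf "${SONARQUBE_DATA}/es8"
```

Note the folder casing can vary by install; match whatever your <code>data</code> folder actually contains.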
]]></content:encoded>
    </item>
    <item>
      <title>Announcing a Global DevOps Experience venue at Black Marble</title>
      <link>https://blog.richardfennell.net/posts/announcing-global-devops-experience-at-blackmarble/</link>
      <pubDate>Mon, 13 May 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcing-global-devops-experience-at-blackmarble/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;In past years, pre-Covid, Black Marble hosted a venue for the Global DevOps Bootcamp, a community-run, in-person hackathon event.&lt;/p&gt;
&lt;p&gt;After a short hiatus, I am pleased to be able to say this event has a successor, the &lt;a href=&#34;https://www.globaldevopsx.com/&#34;&gt;Global DevOps Experience&lt;/a&gt; which is being run on the 15th of June at many &lt;a href=&#34;https://www.globaldevopsx.com/#participants&#34;&gt;venues around the world&lt;/a&gt;, including Black Marble&amp;rsquo;s offices.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Global DevOps Experience&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/images/rfennell/gdex.png&#34;&gt;&lt;/p&gt;
&lt;h2 id=&#34;what-will-be-involved&#34;&gt;What will be involved?&lt;/h2&gt;
&lt;p&gt;The Global DevOps Experience is a day full of learning and fun, as we immerse you in the world of DevOps and AI. You will learn about the latest trends and technologies, and work in a team to solve challenging exercises based around a realistic business scenario.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>In past years, pre-Covid, Black Marble hosted a venue for the Global DevOps Bootcamp, a community-run, in-person hackathon event.</p>
<p>After a short hiatus, I am pleased to be able to say this event has a successor, the <a href="https://www.globaldevopsx.com/">Global DevOps Experience</a> which is being run on the 15th of June at many <a href="https://www.globaldevopsx.com/#participants">venues around the world</a>, including Black Marble&rsquo;s offices.</p>
<p><img alt="Global DevOps Experience" loading="lazy" src="https://blog.richardfennell.net/images/rfennell/gdex.png"></p>
<h2 id="what-will-be-involved">What will be involved?</h2>
<p>The Global DevOps Experience is a day full of learning and fun, as we immerse you in the world of DevOps and AI. You will learn about the latest trends and technologies, and work in a team to solve challenging exercises based around a realistic business scenario.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/tai97aYnm1o?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<ul>
<li><strong>Learn about the latest trends.</strong> We will offer you an environment to learn about the latest technologies and trends.</li>
<li><strong>Get help.</strong> We make sure you don’t get stuck. It is all about learning and fun, so proctors will be around and the system will guide you.</li>
<li><strong>Work in a team.</strong> We will make sure you are able to work together in a team of peers.</li>
<li><strong>Learn about GitHub.</strong> Experience a fully provisioned GitHub environment and see how you can benefit from all its features.</li>
<li><strong>Explore the use of AI.</strong> Both within the development tools, and from a developer&rsquo;s point of view.</li>
<li><strong>Have fun.</strong> Most importantly, have fun by working together on challenging exercises.</li>
</ul>
<h3 id="how-do-you-get-involved">How do you get involved?</h3>
<p>As this is a free event, all you have to do is <a href="https://www.globaldevopsx.com/venues/black-marble">register for the Black Marble venue</a> and turn up on the day.</p>
<p>Or, of course, you can register for any other <a href="https://www.globaldevopsx.com/#participants">venue around the world</a>.</p>
<p>Hope to see you on the 15th!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Building Reporting Service .RPTProj files in Visual Studio 2022 from the command line</title>
      <link>https://blog.richardfennell.net/posts/building-reporting-service-rptproj-files-on-vs2022/</link>
      <pubDate>Fri, 03 May 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/building-reporting-service-rptproj-files-on-vs2022/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;We have a number of projects that use the old style SQL Server Reporting Services (SSRS) &lt;code&gt;.RPTProj&lt;/code&gt; project format. These projects are not supported in Visual Studio 2022 out of the box, but there is an &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=ProBITools.MicrosoftReportProjectsforVisualStudio2022&#34;&gt;extension in the Marketplace&lt;/a&gt; that adds the functionality back so you can build them in the IDE.&lt;/p&gt;
&lt;p&gt;However, we want to build our RDL files as part of our CI process, and this is where we hit a problem. When we attempt a build with MSBuild it fails with an error about a missing .NET 4.0 SDK/Targeting Pack.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>We have a number of projects that use the old style SQL Server Reporting Services (SSRS) <code>.RPTProj</code> project format. These projects are not supported in Visual Studio 2022 out of the box, but there is an <a href="https://marketplace.visualstudio.com/items?itemName=ProBITools.MicrosoftReportProjectsforVisualStudio2022">extension in the Marketplace</a> that adds the functionality back so you can build them in the IDE.</p>
<p>However, we want to build our RDL files as part of our CI process, and this is where we hit a problem. When we attempt a build with MSBuild it fails with an error about a missing .NET 4.0 SDK/Targeting Pack.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-plaintext" data-lang="plaintext"><span class="line"><span class="cl">&#34;C:\projects\src\Reports.sln&#34; (default target) (1) -&gt;
</span></span><span class="line"><span class="cl">&#34;C:\projects\src\Reporting\Reporting.rptproj&#34; (default target) (2) -&gt;
</span></span><span class="line"><span class="cl">(GetReferenceAssemblyPaths target) -&gt;
</span></span><span class="line"><span class="cl">  C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Current\Bin\Microsoft.Common.CurrentVersion.targets(12
</span></span><span class="line"><span class="cl">29,5): error MSB3644: The reference assemblies for .NETFramework,Version=v4.0 were not found. To resolve this, install th
</span></span><span class="line"><span class="cl">e Developer Pack (SDK/Targeting Pack) for this framework version or retarget your application. You can download .NET Fram
</span></span><span class="line"><span class="cl">ework Developer Packs at https://aka.ms/msbuild/developerpacks [C:\projects\src\Reporting\Reporting.rptproj]
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    0 Warning(s)
</span></span><span class="line"><span class="cl">    1 Error(s)
</span></span></code></pre></div><p>Interestingly, if we try to build from the command line on a developer PC we get the same error; so the problem is not that the Reporting Services VS extension is missing from the build agent, it is that the required MSBuild targets are not installed.</p>
<h2 id="the-solution">The Solution</h2>
<p>You would assume the solution would be to install the missing SDK/Targeting Pack on the agent. However, there is no C# compilation in our <code>.RPTProj</code> files, so why do we even need the SDK/Targeting Pack?</p>
<p>The fix was to just update the .RPTProj file with a <code>&lt;TargetFrameworkVersion&gt;v4.8&lt;/TargetFrameworkVersion&gt;</code> block for each <code>&lt;PropertyGroup Condition= ....</code>, i.e. to tell MSBuild to use a version of .NET that is installed.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl">  <span class="nt">&lt;PropertyGroup</span> <span class="na">Condition=</span><span class="s">&#34; &#39;$(Configuration)&#39; == &#39;Release&#39; &#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;FullPath&gt;</span>Release<span class="nt">&lt;/FullPath&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;OutputPath&gt;</span>bin\Release<span class="nt">&lt;/OutputPath&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;ErrorLevel&gt;</span>2<span class="nt">&lt;/ErrorLevel&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;OverwriteDatasets&gt;</span>False<span class="nt">&lt;/OverwriteDatasets&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;OverwriteDataSources&gt;</span>False<span class="nt">&lt;/OverwriteDataSources&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;TargetServerVersion&gt;</span>SSRS2008R2<span class="nt">&lt;/TargetServerVersion&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;Platform&gt;</span>Win32<span class="nt">&lt;/Platform&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;TargetReportFolder&gt;</span>Reporting<span class="nt">&lt;/TargetReportFolder&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;TargetDatasetFolder&gt;</span>Datasets<span class="nt">&lt;/TargetDatasetFolder&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;TargetDatasourceFolder&gt;</span>Data Sources<span class="nt">&lt;/TargetDatasourceFolder&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;TargetReportPartFolder&gt;</span>Report Parts<span class="nt">&lt;/TargetReportPartFolder&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;TargetFrameworkVersion&gt;</span>v4.8<span class="nt">&lt;/TargetFrameworkVersion&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/PropertyGroup&gt;</span>
</span></span></code></pre></div><p>Once this was done, the command-line build worked fine on both a developer&rsquo;s PC and a Microsoft-hosted build agent, avoiding the need to install an out-of-support version of .NET.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creating an undo PowerShell Script for batch updates of Azure DevOps Work Items</title>
      <link>https://blog.richardfennell.net/posts/creating-an-undo-script-for-azure-devops-workitems/</link>
      <pubDate>Tue, 16 Apr 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/creating-an-undo-script-for-azure-devops-workitems/</guid>
      <description>&lt;h2 id=&#34;problem&#34;&gt;Problem&lt;/h2&gt;
&lt;p&gt;One of my clients recently had a problem: a large number of &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/boards/backlogs/office/bulk-add-modify-work-items-excel?view=azure-devops&#34;&gt;Azure DevOps work items had been updated via Excel&lt;/a&gt; in error.&lt;/p&gt;
&lt;p&gt;They asked if there was a means to undo these edits. Unfortunately, this is a feature Azure DevOps does not provide.&lt;/p&gt;
&lt;h2 id=&#34;solution&#34;&gt;Solution&lt;/h2&gt;
&lt;p&gt;So, I wrote a PowerShell script to do it. The script&amp;hellip;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Gets a list of work items edited today by a specific user, the one who did the problematic bulk update, using a Work Item Query&lt;/li&gt;
&lt;li&gt;Gets the last update of each work item and checks it was made by the user who did the bulk edit, in case someone had already manually fixed the problematic update&lt;/li&gt;
&lt;li&gt;For a limited list of fields, reverts the change to the value prior to the last update&lt;/li&gt;
&lt;li&gt;Saves the updated work item, or, if the &lt;code&gt;-whatif&lt;/code&gt; flag is set, just validates the update against the Azure DevOps instance&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For my client, the script worked well enough, reverting over 1000 work items in about 5 minutes. The few work items it could not revert were fixed manually.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="problem">Problem</h2>
<p>One of my clients recently had a problem: a large number of <a href="https://learn.microsoft.com/en-us/azure/devops/boards/backlogs/office/bulk-add-modify-work-items-excel?view=azure-devops">Azure DevOps work items had been updated via Excel</a> in error.</p>
<p>They asked if there was a means to undo these edits. Unfortunately, this is a feature Azure DevOps does not provide.</p>
<h2 id="solution">Solution</h2>
<p>So, I wrote a PowerShell script to do it. The script&hellip;</p>
<ol>
<li>Gets a list of work items edited today by a specific user, the one who did the problematic bulk update, using a Work Item Query</li>
<li>Gets the last update of each work item and checks it was made by the user who did the bulk edit, in case someone had already manually fixed the problematic update</li>
<li>For a limited list of fields, reverts the change to the value prior to the last update</li>
<li>Saves the updated work item, or, if the <code>-whatif</code> flag is set, just validates the update against the Azure DevOps instance</li>
</ol>
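<p>The core of the revert step can be sketched as follows. This is an illustrative Python outline, not the actual PowerShell script; the field allow-list is an example, and the revision objects are assumed to have the shape returned by the Azure DevOps work item revisions REST API (a dict with a <code>fields</code> map):</p>

```python
# Illustrative sketch of the revert step (the real script is PowerShell).
# `previous_rev` and `last_rev` are assumed to be two consecutive revisions
# of a work item, each a dict with a "fields" map, as the Azure DevOps
# work item revisions REST API returns them.

# Example allow-list of fields the script is permitted to revert
REVERT_FIELDS = ["System.Title", "System.Description", "System.AssignedTo"]

def build_revert_patch(previous_rev: dict, last_rev: dict, fields=REVERT_FIELDS) -> list:
    """Build JSON Patch operations that restore `fields` to their prior values."""
    ops = []
    for field in fields:
        before = previous_rev.get("fields", {}).get(field)
        after = last_rev.get("fields", {}).get(field)
        if before == after:
            continue  # field was not touched by the bulk edit
        if before is None:
            # field was added by the bulk edit, so remove it again
            ops.append({"op": "remove", "path": f"/fields/{field}"})
        else:
            ops.append({"op": "replace", "path": f"/fields/{field}", "value": before})
    return ops
```

<p>The resulting list would then be sent as the body of a PATCH to the work item&rsquo;s REST endpoint, or simply validated and logged when the <code>-whatif</code> flag is set.</p>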
<p>For my client, the script worked well enough, reverting over 1000 work items in about 5 minutes. The few work items it could not revert were fixed manually.</p>
<p>The common factor in the work items that it could not revert was that they all had rich text/HTML-based descriptions, though so did many that were reverted successfully.</p>
<p>I suspect there is an edge case related to the encoding of certain characters in the content. However, I have not yet been able to reproduce the problem on my test rig.</p>
<p>The good news is that if the revert of a work item does fail, the target work item is left unchanged, allowing either repeated revert attempts with an updated version of the script or a manual fix.</p>
<p>You can find the PowerShell script here as a <a href="https://gist.github.com/rfennell/3f4dc4c3ac6cb5d5f1d2032211e1c0ae">Gist</a>.</p>
<script src="https://gist.github.com/rfennell/3f4dc4c3ac6cb5d5f1d2032211e1c0ae.js"></script>
]]></content:encoded>
    </item>
    <item>
      <title>Personal Access Tokens (PATs) are not your friends</title>
      <link>https://blog.richardfennell.net/posts/pats-are-not-your-friends/</link>
      <pubDate>Fri, 22 Mar 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pats-are-not-your-friends/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Programmatic connection to Azure DevOps cannot be done with your Active Directory credentials. This is because interactive sign-in involves a dialog being shown, and these days usually an MFA check too.&lt;/p&gt;
&lt;p&gt;Historically, the solution to this problem was to enable &lt;a href=&#34;https://aka.ms/vstspolicyaltauth&#34;&gt;Alternate Credentials&lt;/a&gt;, which could be passed as username and password, without the dialog being shown. However, the use of these has been deprecated since 2020, and &lt;a href=&#34;https://devblogs.microsoft.com/devops/final-notice-of-alternate-credentials-deprecation/&#34;&gt;they have been completely removed since Jan 2024&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>Programmatic connection to Azure DevOps cannot be done with your Active Directory credentials. This is because interactive sign-in involves a dialog being shown, and these days usually an MFA check too.</p>
<p>Historically, the solution to this problem was to enable <a href="https://aka.ms/vstspolicyaltauth">Alternate Credentials</a>, which could be passed as username and password, without the dialog being shown. However, the use of these has been deprecated since 2020, and <a href="https://devblogs.microsoft.com/devops/final-notice-of-alternate-credentials-deprecation/">they have been completely removed since Jan 2024</a>.</p>
<p>So for a number of years the recommended way to programmatically authenticate with Azure DevOps has been to use a <a href="https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&amp;tabs=Windows">Personal Access Token (PAT)</a>. However, PATs are not without their own issues.</p>
<h2 id="the-problem-with-pats">The problem with PATs</h2>
<p>The most critical problem is that PATs expire. By default they expire after 30 days, and the longest expiry you can set is 1 year. This means you have to remember to renew them, and if you forget, any automation that relies on them will stop working.</p>
<p>Obviously it is very tempting to just set the expiry to 1 year, but this is not the best idea. The longer the expiry, the more time there is for the PAT to be compromised. Just like SSL certificates, or any access token, the shorter the period before expiry, the better.</p>
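<p>If you must issue PATs, at least automate the renewal reminder. A minimal sketch, assuming you record each PAT&rsquo;s expiry date yourself (the token names and warning threshold here are illustrative):</p>

```python
from datetime import date, timedelta

def pats_needing_renewal(pats: dict, today: date, warn_days: int = 7) -> list:
    """Return names of PATs that expire within `warn_days` of `today`."""
    cutoff = today + timedelta(days=warn_days)
    # A PAT needs attention if its recorded expiry falls on or before the cutoff
    return sorted(name for name, expires in pats.items() if expires <= cutoff)
```

<p>A scheduled job running a check like this gives you warning before automation silently stops working.</p>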
<p>Of course, like any authentication and permission model, you can grant too many permissions to a PAT, and if it is compromised you then have a problem; but that is on you for not following the principle of least privilege, not a fundamental issue with PATs.</p>
<h2 id="but-what-is-the-alternative">But what is the alternative?</h2>
<p>Azure DevOps now offers support for <a href="https://learn.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/service-principal-managed-identity?view=azure-devops">Managed Identity &amp; Service Principals</a>. These are a much better way to authenticate programmatically. Critically, they do not expire, the tokens they generate for the actual connections are short-lived, and you can of course grant them privileges in line with the principle of least privilege.</p>
<p>When used to access Azure DevOps resources, the only downside is that each consumes a basic user license.</p>
<p>This form of authentication is perfect for use in automation from locations like <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-identity-based-connections-tutorial">Azure Functions</a>, or from the <a href="https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli-managed-identity">Az CLI</a>.</p>
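<p>As a sketch of what this looks like from the Az CLI (the organisation name is a placeholder; <code>499b84ac-1321-427f-aa17-267ca6975798</code> is the well-known Azure DevOps resource ID):</p>

```shell
# Sign in using the host's managed identity - no stored secret involved
az login --identity

# Request a short-lived access token scoped to Azure DevOps
TOKEN=$(az account get-access-token \
  --resource 499b84ac-1321-427f-aa17-267ca6975798 \
  --query accessToken --output tsv)

# Use the token as a Bearer token against the REST API
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://dev.azure.com/myorg/_apis/projects?api-version=7.0"
```

<p>The token expires after a short period, so there is nothing long-lived to leak and nothing to remember to renew.</p>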
<h2 id="what-about-github">What about GitHub?</h2>
<p>The issue of PATs is not just one for Azure DevOps; the same problems are true for GitHub. There the solution is to use a <a href="https://devopsjournal.io/blog/2022/01/03/GitHub-Tokens">GitHub App</a>, and then use the short-lived token it can generate to authenticate with the GitHub API.</p>
<h2 id="what-about-other-outgoing-services-from-azure-devops">What about other outgoing services from Azure DevOps?</h2>
<p>Another place you will see a move away from expiring secrets in Azure DevOps is Service Connections to services such as Azure. You can now use <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-an-azure-resource-manager-service-connection-using-workload-identity-federation">Workload Identity Federation</a> to authenticate with Azure.</p>
<p>Converting to use Workload Identity Federation is simple: you just press a button in the Azure DevOps UI, and it will create a Service Principal in Azure AD and then update the Service Connection to use this to authenticate with Azure.</p>
<p><img alt="Swap to Workload Identity Federation" loading="lazy" src="/images/rfennell/pat-serviceprinciple.png"></p>
<p>Once this is done there is no need to remember to renew the Service Connection, as it authenticates using the Service Principal, which does not expire.</p>
<h2 id="summary">Summary</h2>
<p>So, when using Azure DevOps, in many cases you can now avoid the use of PATs, and hence the need to remember to renew them, giving you one less thing to remember to do and one less thing to go wrong.</p>
]]></content:encoded>
    </item>
    <item>
      <title>It is really time to get off Azure DevOps TFVC source control</title>
      <link>https://blog.richardfennell.net/posts/it-is-really-time-to-get-off-tfvc/</link>
      <pubDate>Thu, 21 Mar 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/it-is-really-time-to-get-off-tfvc/</guid>
      <description>&lt;h2 id=&#34;a-history-lesson&#34;&gt;A History Lesson&lt;/h2&gt;
&lt;p&gt;Team Foundation Version Control (TFVC) has been around since the first release of Team Foundation Server (TFS) in 2005. In 2013, over 10 years ago, Microsoft added Git support to TFS (later renamed Azure DevOps); Git had already been around for 8 years at that point.&lt;/p&gt;
&lt;p&gt;10 years is a long time in the software industry. I always think of &amp;lsquo;IT years&amp;rsquo; as being like &amp;lsquo;dog years&amp;rsquo;, i.e. 7 to 1, so over 70 years have passed, a lifetime. Over this period Git has become the de facto standard for source control. So if you are still using TFVC as your source control system, you really need to ask yourself why.&lt;/p&gt;
      <content:encoded><![CDATA[<h2 id="a-history-lesson">A History Lesson</h2>
<p>Team Foundation Version Control (TFVC) has been around since the first release of Team Foundation Server (TFS) in 2005. In 2013, over 10 years ago, Microsoft added Git support to TFS (later renamed Azure DevOps); Git had already been around for 8 years at that point.</p>
<p>10 years is a long time in the software industry. I always think of &lsquo;IT years&rsquo; as being like &lsquo;dog years&rsquo;, i.e. 7 to 1, so over 70 years have passed, a lifetime. Over this period Git has become the de facto standard for source control. So if you are still using TFVC as your source control system, you really need to ask yourself why.</p>
<h2 id="the-advantages-of-git-over-tfvc">The advantages of Git over TFVC</h2>
<p>The advantages of Git over TFVC are numerous, but here are a few:</p>
<ul>
<li>Git is distributed, meaning you can work offline and commit changes locally</li>
<li>Git has better branching and merging (pull request) support</li>
<li>Many newer features of Azure DevOps are Git only, e.g. YAML-based pipelines</li>
<li>Cloud services like Azure Functions, Azure App Service, Azure Logic Apps, Azure API Management etc. all have built-in support for Git</li>
<li>Git repositories are portable; they are easy to migrate to new systems, e.g. GitHub, GitLab, Bitbucket etc.</li>
<li>Any new hires/students will probably have Git experience</li>
</ul>
<p>From talking to clients who are still on TFVC, the main reason they give is simply that they have always used it. There is a fear of change, something you might not expect to see in the technology sector, but it is all too common.</p>
<p>This fear might be wrapped in the perceived complexity of migrating from TFVC to Git, or in a claim that an audit or regulatory requirement means they can&rsquo;t change processes. But these are just excuses. Updates to process and tooling cannot be ignored; you don&rsquo;t have to be the first to move, but the longer you leave it the harder the change will be.</p>
<p>As an example, one of the most obvious limitations of TFVC for me is how it works with Azure DevOps Pipelines. It is true you have to use the &lsquo;classic&rsquo; pipelines, which are not as flexible as the YAML-based pipelines and have reached <a href="https://devblogs.microsoft.com/devops/disable-creation-of-classic-pipelines/">a &lsquo;done&rsquo; state</a>, so will get no further development. However, for me the big limitations are not the editor experience but that:</p>
<ul>
<li>For each TFVC branch you have to create/clone a new pipeline</li>
<li>You cannot make CI/CD part of your branch protection</li>
<li>You cannot source control your pipeline definitions and review them via a PR</li>
</ul>
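<p>By contrast, with Git a single source-controlled YAML definition can build every branch; a minimal illustrative sketch:</p>

```yaml
# azure-pipelines.yml - lives in the repository, so it is versioned,
# reviewed via pull requests, and one definition covers all branches
trigger:
  branches:
    include:
      - main
      - release/*
      - feature/*

steps:
  - script: echo "Building $(Build.SourceBranchName)"
    displayName: Build
```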
<p>These reasons alone should be enough to make you want to move to Git from TFVC.</p>
<h2 id="so-how-can-i-migrate">So how can I migrate?</h2>
<p>You have three basic options to migrate from TFVC to Git:</p>
<ul>
<li>&lsquo;Tip&rsquo; migration: This is the simplest, you just migrate the latest version of your code to a new Git repository. This is the quickest and easiest, but you lose all the history of your code.</li>
<li><a href="https://learn.microsoft.com/en-us/azure/devops/repos/git/import-git-repository?view=azure-devops">Azure DevOps Importer</a>: This is a tool that can be used to import a TFVC repository into a new Git repository, with the last 180 days of history. Why you would want something between none of the history and all of it has always been beyond me!</li>
<li><a href="https://github.com/git-tfs/git-tfs">Git TFS</a>: A command line extension for Git that can be used to migrate a TFVC repository to a new Git repository, with all the history. A powerful and flexible tool, but it can be complex to use, and is &lsquo;brittle&rsquo; in that it can fail if the TFVC repository has a complex history of branching and renames.</li>
</ul>
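<p>As an illustrative sketch of a &lsquo;tip&rsquo; migration (the local path, organisation, project and repository names are all placeholders):</p>

```shell
# Starting from a working folder containing the latest TFVC version,
# begin a brand new Git history - no TFVC history is carried over
cd ~/tfvc-workspace/MyProduct
git init -b main
git add .
git commit -m "Initial commit: tip migration from TFVC"

# Push to an empty Git repository created in Azure DevOps
git remote add origin https://dev.azure.com/myorg/MyProject/_git/MyProduct
git push -u origin main
```

<p>Remember to add a <code>.gitignore</code> before the first commit, so build output that TFVC mappings may have excluded does not end up in the new repository.</p>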
<p>I would always favour the first option. It is the simplest and quickest, and you can always keep the old TFVC repository around for historical purposes. Also remember Git is not TFVC, so what is a good structure in TFVC might not be the best in Git, so you might want to take the opportunity to restructure your codebase.</p>
<p>The other important factor to consider in any migration is that of training. You need to make sure your team is comfortable with Git <strong>before</strong> you start the migration. This is not just about the commands, but also the concepts of distributed source control, and the different branching and merging strategies that Git enables.</p>
<p>A TFVC to Git migration will be a major change and should be considered as a project in its own right, with planning, training, pilot studies as well as the main migration itself. But remember you don&rsquo;t have to move all your TFVC source control to Git at once, you can do it incrementally, maybe as you start a new version of a product.</p>
<h2 id="summary">Summary</h2>
<p>So if you are still using TFVC I urge you to consider migrating to Git. It is not as hard as you might think, and the benefits are numerous.</p>
<p>Happy to discuss this further; please reach out to me via the usual channels.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Don&#39;t forget to commit your configuration file</title>
      <link>https://blog.richardfennell.net/posts/dont-forget-to-commit-your-configuration-file/</link>
      <pubDate>Wed, 20 Mar 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dont-forget-to-commit-your-configuration-file/</guid>
      <description>&lt;p&gt;It is a major effort, often unfortunately ignored, keeping the dependencies in an open source project up to date. This was highlighted in &lt;a href=&#34;https://dev.to/jessehouwing/security-state-of-the-azure-devops-marketplace-5bil&#34;&gt;Jesse Houwing&amp;rsquo;s post on the state of the Azure DevOps Marketplace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Since reading this I have made much more of an effort to keep &lt;a href=&#34;https://marketplace.visualstudio.com/search?term=fennell&amp;amp;target=AzureDevOps&amp;amp;category=All%20categories&amp;amp;sortBy=Relevance&#34;&gt;my Azure DevOps Extensions&lt;/a&gt; up to date. &lt;a href=&#34;https://docs.github.com/en/code-security/dependabot/working-with-dependabot/managing-pull-requests-for-dependency-updates&#34;&gt;Dependabot&lt;/a&gt; generated PRs have been a great help in this regard, creating PRs for vulnerabilities and out-of-date dependencies.&lt;/p&gt;
      <content:encoded><![CDATA[<p>It is a major effort, often unfortunately ignored, keeping the dependencies in an open source project up to date. This was highlighted in <a href="https://dev.to/jessehouwing/security-state-of-the-azure-devops-marketplace-5bil">Jesse Houwing&rsquo;s post on the state of the Azure DevOps Marketplace</a>.</p>
<p>Since reading this I have made much more of an effort to keep <a href="https://marketplace.visualstudio.com/search?term=fennell&amp;target=AzureDevOps&amp;category=All%20categories&amp;sortBy=Relevance">my Azure DevOps Extensions</a> up to date. <a href="https://docs.github.com/en/code-security/dependabot/working-with-dependabot/managing-pull-requests-for-dependency-updates">Dependabot</a> generated PRs have been a great help in this regard, creating PRs for vulnerabilities and out-of-date dependencies.</p>
<p>However, no level of AI can protect you from stupidity. Recently, as part of addressing a vulnerability via a refactoring to change testing framework, I wasted far too long trying to work out why all my tests passed in my Codespace but failed in the CI build. I had forgotten to commit the <code>jest.config.js</code> file, as my <code>.gitignore</code> was set to ignore all <code>.js</code> files because I was working in TypeScript.</p>
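<p>The fix is a negation pattern, which re-includes a specific file that an earlier pattern excluded (illustrative <code>.gitignore</code> fragment):</p>

```gitignore
# Ignore compiled JavaScript output from the TypeScript build...
*.js
# ...but keep the hand-written Jest configuration
!jest.config.js
```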
<p>Very frustrating, but a good reminder to always check that the files you are committing are the ones you expect.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Parsing error with Azure Bicep files in SonarQube</title>
      <link>https://blog.richardfennell.net/posts/parsing-error-with-azure-bicep-files-insonarqube/</link>
      <pubDate>Tue, 19 Mar 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/parsing-error-with-azure-bicep-files-insonarqube/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;We saw an issue with our SonarQube 10.3 Developer Edition (that is running as a Docker image hosted in Azure) when it was doing the analysis of a project that included Azure Bicep files.&lt;/p&gt;
&lt;p&gt;The Azure DevOps pipeline that triggered the SonarQube analysis was not failing, but within the SonarQube analysis step an error was reported in the task log:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;INFO: Sensor IaC AzureResourceManager Sensor is restricted to changed files only
INFO: 1 source file to be analyzed
##[error]ERROR: Cannot parse &amp;#39;AzureServices/QueryPack.bicep:89:1&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;the-solution&#34;&gt;The Solution&lt;/h2&gt;
&lt;p&gt;Turns out the problem was related to parsing Bicep files for App Insights Query packs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>We saw an issue with our SonarQube 10.3 Developer Edition (that is running as a Docker image hosted in Azure) when it was doing the analysis of a project that included Azure Bicep files.</p>
<p>The Azure DevOps pipeline that triggered the SonarQube analysis was not failing, but within the SonarQube analysis step an error was reported in the task log:</p>
<pre tabindex="0"><code>INFO: Sensor IaC AzureResourceManager Sensor is restricted to changed files only
INFO: 1 source file to be analyzed
##[error]ERROR: Cannot parse &#39;AzureServices/QueryPack.bicep:89:1&#39;
</code></pre><h2 id="the-solution">The Solution</h2>
<p>Turns out the problem was related to parsing Bicep files for App Insights Query packs.</p>
<p>If the Bicep resource for the query contains a <code>body</code> property that starts with a comment e.g.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bicep" data-lang="bicep"><span class="line"><span class="cl"><span class="kd">resource</span><span class="w"> </span><span class="nv">querypacks_DefaultQueryPack</span><span class="w"> </span><span class="s">&#39;microsoft.operationalInsights/querypacks/queries@2019-09-01-preview&#39;</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nv">parent</span><span class="p">:</span><span class="w"> </span><span class="nv">QueryPack</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nv">name</span><span class="p">:</span><span class="w"> </span><span class="p">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nv">properties</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nv">displayName</span><span class="p">:</span><span class="w"> </span><span class="p">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nv">description</span><span class="p">:</span><span class="w"> </span><span class="p">..</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nv">body</span><span class="p">:</span><span class="w"> </span><span class="s">&#39;// 35 is ABC\r\n// 40 is XYZ \r\nrequests\r\n| where name has &#34;myfacade.svc&#34;\r\n| order by timestamp desc\r\n| where name !has &#34;GET&#34;\r\n| summarize count() by name, resultCode\r\n| render columnchart&#39;</span><span class="w">
</span></span></span></code></pre></div><p>We get the error <code>##[error]ERROR: Cannot parse 'AzureServices/QueryPack.bicep:89:1'</code>.</p>
<p>We can fix this by not starting the <code>body</code> with a comment, just moving the comment to the end of the <code>body</code> i.e.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bicep" data-lang="bicep"><span class="line"><span class="cl"><span class="kd">resource</span><span class="w"> </span><span class="nv">querypacks_DefaultQueryPack</span><span class="w"> </span><span class="s">&#39;microsoft.operationalInsights/querypacks/queries@2019-09-01-preview&#39;</span><span class="w"> </span><span class="p">=</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nv">parent</span><span class="p">:</span><span class="w"> </span><span class="nv">QueryPack</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nv">name</span><span class="p">:</span><span class="w"> </span><span class="p">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nv">properties</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nv">displayName</span><span class="p">:</span><span class="w"> </span><span class="p">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nv">description</span><span class="p">:</span><span class="w"> </span><span class="p">..</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nv">body</span><span class="p">:</span><span class="w"> </span><span class="s">&#39;requests\r\n| where name has &#34;myfacade.svc&#34;\r\n| order by timestamp desc\r\n| where name !has &#34;GET&#34;\r\n| summarize count() by name, resultCode\r\n| render columnchart\r\n// 35 is ABC\r\n// 40 is XYZ&#39;</span><span class="w">
</span></span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>GitHub Events not being triggered for auto-merged Dependabot PRs</title>
      <link>https://blog.richardfennell.net/posts/github-events-not-being-triggered-for-auto-merged-dependabot-prs/</link>
      <pubDate>Mon, 18 Mar 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/github-events-not-being-triggered-for-auto-merged-dependabot-prs/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;I have an &lt;a href=&#34;https://www.microsoft.com/en-gb/industry/blog/technetuk/2023/02/01/using-github-actions-to-deploy-an-azure-static-web-app/&#34;&gt;Azure Static Website that is built from a GitHub hosted repo using the default Action Workflow automation created by Azure when setting up the static site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A really nice feature of this configuration is that when a PR is created in GitHub a test static website environment site is built in Azure to review the changes. When the PR is closed the test environment site is deleted.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>I have an <a href="https://www.microsoft.com/en-gb/industry/blog/technetuk/2023/02/01/using-github-actions-to-deploy-an-azure-static-web-app/">Azure Static Website that is built from a GitHub hosted repo using the default Action Workflow automation created by Azure when setting up the static site</a>.</p>
<p>A really nice feature of this configuration is that when a PR is created in GitHub a test static website environment site is built in Azure to review the changes. When the PR is closed the test environment site is deleted.</p>
<p>This functionality works fine for manually created PRs, irrespective of whether the PR is manually merged and closed, or completed using the GitHub auto-merge feature.</p>
<h2 id="issue">Issue</h2>
<p>The problem occurs if <a href="https://docs.github.com/en/code-security/dependabot/working-with-dependabot">GitHub Dependabot</a> and Actions-based automation are used.</p>
<p>If a PR is created by Dependabot and manually merged, all the workflows trigger correctly to deploy the updated site and delete the test environment, as expected.</p>
<p>However, if I use a version of the Github documented <a href="https://docs.github.com/en/code-security/dependabot/working-with-dependabot">&lsquo;approve a Dependabot pull request&rsquo; action workflow</a> to approve any Dependabot created PRs, as follows</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Dependabot auto-merge</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">on</span><span class="p">:</span><span class="w"> </span><span class="l">pull_request</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">permissions</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">pull-requests</span><span class="p">:</span><span class="w"> </span><span class="l">write</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">contents</span><span class="p">:</span><span class="w"> </span><span class="l">write</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">dependabot</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">if</span><span class="p">:</span><span class="w"> </span><span class="l">${{ github.actor == &#39;dependabot[bot]&#39; }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Dependabot metadata</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">id</span><span class="p">:</span><span class="w"> </span><span class="l">metadata</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">dependabot/fetch-metadata@v1.1.1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">github-token</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${{ secrets.GITHUB_TOKEN }}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Enable auto-merge for Dependabot PRs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">if</span><span class="p">:</span><span class="w"> </span><span class="l">${{steps.metadata.outputs.update-type == &#39;version-update:semver-patch&#39;}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l">| </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="l">gh pr review --approve &#34;$PR_URL&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="l">gh pr merge --auto --merge &#34;$PR_URL&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">PR_URL</span><span class="p">:</span><span class="w"> </span><span class="l">${{github.event.pull_request.html_url}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">GITHUB_TOKEN</span><span class="p">:</span><span class="w"> </span><span class="l">${{secrets.GITHUB_TOKEN}}</span><span class="w">
</span></span></span></code></pre></div><p>it is not working as expected. What is happening is that the PR is merged, but other workflows are not triggered: the site is not redeployed off main, and the test environment is not deleted. I therefore soon reach the limit on the number of test environments I can have, and future PRs fail through lack of available Azure environment slots.</p>
<h2 id="analysis">Analysis</h2>
<p>The issue was the use of the <code>GITHUB_TOKEN</code>, the automatically generated Action session token. I had not considered that GitHub Actions triggers do not fire when an operation is performed with the <code>GITHUB_TOKEN</code>, <a href="https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow">as documented by GitHub</a>.</p>
<p>This is still true when the action is triggered by the <code>gh</code> command line tool, as in the above example; it is the use of the <code>GITHUB_TOKEN</code> that is the key factor.</p>
<blockquote>
<p>Thanks to <a href="https://www.linkedin.com/in/bosrob/">Rob Bos</a> for reminding me of this constraint, and also for letting me know I was not the only one to have recently been caught out by this, as can be seen in <a href="https://wbrawner.com/2024/03/15/github-actions-not-triggering-on-pr-merge/">William Brawner&rsquo;s recent blog post</a>.</p>
<h2 id="solution">Solution</h2>
<p>Armed with this information and using <a href="https://devopsjournal.io/blog/2022/01/03/GitHub-Tokens">Rob&rsquo;s blog post on using Github Apps for authentication</a>, I performed the following steps:</p>
<ol>
<li>
<p>Created a Github App with the following permissions</p>
<ul>
<li>Repo - Contents: Read &amp; Write</li>
<li>Repo - Metadata: Read-only (automatically selected)</li>
<li>Repo - Pull Requests: Read &amp; Write</li>
</ul>
</li>
<li>
<p>Created secrets (Settings &gt; Secrets &amp; Variables &gt; Dependabot) to provide the Dependabot-triggered workflow with the App ID and key needed to get a short-lived token for the workflow to use.</p>
<blockquote>
<p>Remember that Dependabot cannot access Actions secrets or variables; it has its own set of secrets to minimise any potential secret leakage.</p></blockquote>
</li>
<li>
<p>Updated the workflow to use a token generated from a GitHub App</p>
</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yml" data-lang="yml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Dependabot auto-merge</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">on</span><span class="p">:</span><span class="w"> </span><span class="l">pull_request</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">permissions</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">pull-requests</span><span class="p">:</span><span class="w"> </span><span class="l">write</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">contents</span><span class="p">:</span><span class="w"> </span><span class="l">write</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">dependabot</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">if</span><span class="p">:</span><span class="w"> </span><span class="l">${{ github.actor == &#39;dependabot[bot]&#39; }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Get Token</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">id</span><span class="p">:</span><span class="w"> </span><span class="l">get_workflow_token</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">peter-murray/workflow-application-token-action@v3</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">application_id</span><span class="p">:</span><span class="w"> </span><span class="l">${{ secrets.GH_APP_ID }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">application_private_key</span><span class="p">:</span><span class="w"> </span><span class="l">${{ secrets.GH_APP_KEY }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Dependabot metadata</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">id</span><span class="p">:</span><span class="w"> </span><span class="l">metadata</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">dependabot/fetch-metadata@v1.1.1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">github-token</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${{ steps.get_workflow_token.outputs.token }}&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Enable auto-merge for Dependabot PRs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">if</span><span class="p">:</span><span class="w"> </span><span class="l">${{steps.metadata.outputs.update-type == &#39;version-update:semver-patch&#39;}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l">| </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="l">gh pr review --approve &#34;$PR_URL&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="l">gh pr merge --auto --merge &#34;$PR_URL&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">PR_URL</span><span class="p">:</span><span class="w"> </span><span class="l">${{github.event.pull_request.html_url}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">GITHUB_TOKEN</span><span class="p">:</span><span class="w"> </span><span class="l">${{ steps.get_workflow_token.outputs.token }}</span><span class="w">
</span></span></span></code></pre></div><p>Once all these changes were made, the workflows operated as expected: the test environment was deleted and the site redeployed off main as soon as the PR was closed, whether it was merged or not.</p>
<p>So an interesting little learning experience. I hope the steps in this post help others who have been caught out by this issue.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Site fails to render when updating Hugo version</title>
      <link>https://blog.richardfennell.net/posts/site-fails-to-render-when-updating-hugo-version/</link>
      <pubDate>Fri, 16 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/site-fails-to-render-when-updating-hugo-version/</guid>
      <description>&lt;h1 id=&#34;the-issue&#34;&gt;The Issue&lt;/h1&gt;
&lt;p&gt;This site was built using &lt;a href=&#34;https://gohugo.io/&#34;&gt;Hugo, a static site generator&lt;/a&gt;. I recently tried to do a long-overdue update of Hugo from version 0.108 to the current 0.122.&lt;/p&gt;
&lt;p&gt;I had not expected any problems, but the site failed to render, and with no error message all I saw was&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;hugo.exe server --logLevel info
Start building sites …
hugo v0.122.0-b9a03bd59d5f71a529acb3e33f995e0ef332b3aa+extended windows/amd64 BuildDate=2024-01-26T15:54:24Z VendorInfo=gohugoio

INFO  copy static: syncing static files to \
INFO  build: running step &amp;#34;process&amp;#34; duration &amp;#34;97.3263ms&amp;#34;
INFO  build: running step &amp;#34;assemble&amp;#34; duration &amp;#34;335.1476ms&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I normally would have expected to see a &lt;code&gt;INFO  build: running step &amp;quot;render&amp;quot;&lt;/code&gt; line or an error, but got nothing, irrespective of the log level I set.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-issue">The Issue</h1>
<p>This site was built using <a href="https://gohugo.io/">Hugo, a static site generator</a>. I recently tried to do a long-overdue update of Hugo from version 0.108 to the current 0.122.</p>
<p>I had not expected any problems, but the site failed to render, and with no error message all I saw was</p>
<pre tabindex="0"><code>hugo.exe server --logLevel info
Start building sites …
hugo v0.122.0-b9a03bd59d5f71a529acb3e33f995e0ef332b3aa+extended windows/amd64 BuildDate=2024-01-26T15:54:24Z VendorInfo=gohugoio

INFO  copy static: syncing static files to \
INFO  build: running step &#34;process&#34; duration &#34;97.3263ms&#34;
INFO  build: running step &#34;assemble&#34; duration &#34;335.1476ms&#34;
</code></pre><p>I normally would have expected to see a <code>INFO  build: running step &quot;render&quot;</code> line or an error, but got nothing, irrespective of the log level I set.</p>
<h1 id="the-solution">The Solution</h1>
<p>As I had no error message, and found nothing of use in the Hugo resources, I decided my only option was to roll back to a previous version of Hugo until I could get the site to render.</p>
<p>So I downloaded various versions and in the end found that this site rendered on 0.116 but not 0.117.</p>
<p>At this point I knew that some check introduced in 0.117 was the trigger, and that the underlying issue had to be in my page content.</p>
<p>So the next step was to use a bisect test pattern to find the problem pages i.e. delete half the pages; if the problem goes away, it was in the half you deleted, and if it is still present, it is in the half you kept.</p>
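<p>The bisect pattern can be sketched in a few lines of Python (illustrative only; here <code>renders</code> stands in for whatever check you use, e.g. running <code>hugo</code> against a copy of the site containing only the given pages):</p>

```python
def find_bad_page(pages, renders):
    """Binary-search for the single page that breaks the build.

    `renders(subset)` returns True if the site builds cleanly with only
    that subset of pages present.
    """
    candidates = list(pages)
    while len(candidates) > 1:
        half = len(candidates) // 2
        kept = candidates[:half]  # simulate deleting the other half
        if renders(kept):
            # problem vanished: it was in the deleted half
            candidates = candidates[half:]
        else:
            # problem still present: it is in the half we kept
            candidates = kept
    return candidates[0]

# Usage: pretend "posts/old.md" references a missing remote image
pages = ["posts/a.md", "posts/b.md", "posts/old.md", "posts/c.md"]
bad = find_bad_page(pages, lambda subset: "posts/old.md" not in subset)
```

<p>Each round halves the candidate set, so even a few hundred posts need only a handful of trial builds.</p>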
<p>By repeating this process I was able to find the problem pages. The issue turned out to be references in older blog posts to remotely hosted images that could no longer be found. It seems that the Hugo renderer now checks for the existence of images in the content, and if they are missing the site fails to render.</p>
<p>Anyway, after editing the pages to remove the markdown for the missing images, the site rendered correctly in both 0.117 and 0.122.</p>
<p>It is just a shame the logging was not more helpful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting a BadGateway error when trying to create Azure DevOps Work items using Power Automate</title>
      <link>https://blog.richardfennell.net/posts/getting-a-badgateway-error-when-trying-to-create-azure-devops-work-items-using-power-automated/</link>
      <pubDate>Tue, 13 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-a-badgateway-error-when-trying-to-create-azure-devops-work-items-using-power-automated/</guid>
      <description>&lt;p&gt;I was recently trying to create an Azure DevOps work item when an email is received using the &lt;a href=&#34;https://powerautomate.microsoft.com/en-us/templates/details/29ef92b0630d11e6ac13ff9c624ade25/create-an-azure-devops-work-item-when-email-arrives-with-bug-in-subject/&#34;&gt;Power Automate &amp;lsquo;Create an Azure DevOps work item when email arrives with &amp;lsquo;Bug&amp;rsquo; in subject&amp;rsquo; template&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The flow created without issue, and all the drop downs were correctly populated with O365 and Azure DevOps values as expected.&lt;/p&gt;
&lt;p&gt;However, when the flow ran, on receiving an email to the correct inbox, it failed with a BadGateway error.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was recently trying to create an Azure DevOps work item when an email is received using the <a href="https://powerautomate.microsoft.com/en-us/templates/details/29ef92b0630d11e6ac13ff9c624ade25/create-an-azure-devops-work-item-when-email-arrives-with-bug-in-subject/">Power Automate &lsquo;Create an Azure DevOps work item when email arrives with &lsquo;Bug&rsquo; in subject&rsquo; template</a>.</p>
<p>The flow created without issue, and all the drop downs were correctly populated with O365 and Azure DevOps values as expected.</p>
<p>However, when the flow ran, on receiving an email to the correct inbox, it failed with a BadGateway error.</p>
<p>The problem turned out to be that the Work Item type I had set the flow to create was not available in the project I was trying to create it in (it was disabled in the process template).</p>
<p>So, if you get a BadGateway (502) error in a Power Automate flow, do not assume Azure DevOps is down; more likely you have a bad/invalid payload in the call being made to the Azure DevOps API. I suspect the Azure DevOps API is returning a 4xx bad-data error (which may contain a little more detail, though I doubt it from my past REST API experience), but you cannot see it because Power Automate hides that error behind its own 502 BadGateway error.</p>
<p>So the tip is: check the details of your payload and ensure it is valid for the API you are calling.</p>
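<p>One way to do that check is to reproduce the call outside Power Automate. The Azure DevOps create-work-item REST API (<code>POST .../_apis/wit/workitems/$Bug</code>) takes a JSON Patch document; below is a minimal illustrative sketch of building that payload (the field values are made up):</p>

```python
import json

def build_workitem_payload(title, description):
    # JSON Patch body for the Azure DevOps create-work-item API.
    # If the work item type in the URL (e.g. Bug) is disabled in the
    # project's process template, the API rejects the call, which
    # Power Automate then surfaces as a 502 BadGateway.
    return [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
    ]

payload = build_workitem_payload("Login fails", "Reported by email")
body = json.dumps(payload)  # send with Content-Type: application/json-patch+json
```

<p>Posting the same body with a tool such as curl or Postman shows the real API response, rather than the masked 502.</p>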
]]></content:encoded>
    </item>
    <item>
      <title>I can&#39;t get git.exe installed on my corporate PC</title>
      <link>https://blog.richardfennell.net/posts/i-cant-get-git.exe-installed-on-my-pc/</link>
      <pubDate>Fri, 02 Feb 2024 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-cant-get-git.exe-installed-on-my-pc/</guid>
      <description>&lt;h1 id=&#34;issue&#34;&gt;Issue&lt;/h1&gt;
&lt;p&gt;Many development tools, e.g. &lt;a href=&#34;https://code.visualstudio.com/&#34;&gt;VSCode&lt;/a&gt;, rely on git.exe being installed to perform source control operations. However, a common problem I have seen is that security settings on many corporate Windows devices do not allow the user to install the &lt;a href=&#34;https://git-scm.com/download/win&#34;&gt;git CLI using an MSI file&lt;/a&gt;. VSCode is an approved application, or is installed as a user application as opposed to a system one, so it is available but limited by its lack of source control features.&lt;/p&gt;
      <content:encoded><![CDATA[<h1 id="issue">Issue</h1>
<p>Many development tools, e.g. <a href="https://code.visualstudio.com/">VSCode</a>, rely on git.exe being installed to perform source control operations. However, a common problem I have seen is that security settings on many corporate Windows devices do not allow the user to install the <a href="https://git-scm.com/download/win">git CLI using an MSI file</a>. VSCode is an approved application, or is installed as a user application as opposed to a system one, so it is available but limited by its lack of source control features.</p>
<p>In some cases, you can get around this problem if you have a means to install a ‘corporate approved application’ from a location such as the Microsoft Store that includes git.exe as part of its bundle. A good candidate is <a href="https://desktop.github.com/">GitHub Desktop</a>.</p>
<h1 id="process">Process</h1>
<h2 id="find-the-gitexe">Find the git.exe</h2>
<p>In this scenario GitHub Desktop will probably be installed as a user application, not a system application i.e. the files are in the <code>c:\users</code> folder structure.</p>
<p>To find the actual location do the following</p>
<ol>
<li>Open Windows Explorer</li>
<li>In the address bar type <code>%appdata%</code>; this is the shortcut to the current user&rsquo;s application data storage</li>
<li>Windows Explorer will open the folder <code>C:\Users\&lt;yourname&gt;\AppData\Roaming</code></li>
<li>There is a GitHub Desktop subfolder in this folder, but it is just the local cache. For the executables you need to change folder to <code>C:\Users\&lt;yourname&gt;\AppData\Local\GitHubDesktop</code></li>
<li>Change into the newest version folder and keep going down to the <code>git.exe</code> version folder <code>C:\Users\&lt;yourname&gt;\AppData\Local\GitHubDesktop\app-3.3.8\resources\app\git\mingw64\bin</code> (you may have more than one version installed, so pick the newest)</li>
</ol>
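<p>If several <code>app-x.y.z</code> folders are present, note that a plain alphabetical sort can mislead (e.g. <code>app-3.10.0</code> sorts before <code>app-3.3.8</code>). A small illustrative Python sketch of picking the newest folder by numeric version (folder names are assumed to follow GitHub Desktop&rsquo;s <code>app-</code> prefix pattern):</p>

```python
def newest_app_folder(folder_names):
    # Compare versions numerically: "app-3.10.0" is newer than "app-3.3.8",
    # even though it sorts earlier as a plain string.
    def version_key(name):
        return tuple(int(part) for part in name.removeprefix("app-").split("."))

    apps = [n for n in folder_names if n.startswith("app-")]
    return max(apps, key=version_key)

# e.g. the version folders found under the GitHubDesktop folder
print(newest_app_folder(["app-3.3.8", "app-3.10.0", "app-3.9.1"]))  # app-3.10.0
```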
<h2 id="adding-git-support-to-vscode">Adding Git Support to VSCode</h2>
<p>VSCode finds its copy of git.exe (to provide source control) using a value in its settings file. To update this value</p>
<ol>
<li>Open VSCode</li>
<li>Open the settings (menu File → Preferences → Settings)</li>
<li>Use the search option to search for <code>git.path</code></li>
<li>Edit the settings file to add a path to the git.exe</li>
</ol>
<p>Note: This path must use double backslashes (\\) and end with the name of the executable, git.exe</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl">    <span class="p">{</span>
</span></span><span class="line"><span class="cl">    
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;git.enabled&#34;</span><span class="p">:</span> <span class="kc">true</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;git.path&#34;</span><span class="p">:</span> <span class="s2">&#34;C:\\Users\\&lt;yourname&gt;\\AppData\\Local\\GitHubDesktop\\app-3.3.8\\resources\\app\\git\\mingw64\\bin\\git.exe&#34;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">        <span class="err">…</span> <span class="err">Other</span> <span class="err">settings</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span></code></pre></div><ol start="5">
<li>Save the changes to the settings file and restart VSCode</li>
<li>If you open VSCode on a folder that is a git repo the Source Control options should be enabled</li>
</ol>
<h2 id="adding-git-support-to-the-command-line">Adding Git support to the command line</h2>
<p>In many cases being able to run git commands at the command line, in a command prompt, PowerShell window or VSCode terminal is advantageous.</p>
<p>All command prompt types in Windows find executables by searching the <code>PATH</code> value. To add a user-defined path to the git.exe do the following</p>
<ol>
<li>
<p>Press the Windows key to start a search and type settings to load the settings panel</p>
</li>
<li>
<p>In the search box type <code>environment</code> and pick <code>Edit environment variables for your account</code></p>
</li>
<li>
<p>Select the Path entry in the user variables, an editor will open</p>
</li>
<li>
<p>In the editor add a new row with the value</p>
<p><code>C:\Users\&lt;yourname&gt;\AppData\Local\GitHubDesktop\app-3.3.8\resources\app\git\mingw64\bin</code></p>
<blockquote>
<p><strong>Note:</strong> This is a path to the folder, not to the git.exe, and needs only single backslashes (\)</p></blockquote>
</li>
<li>
<p>Save the changes</p>
</li>
<li>
<p>Close and reopen any command prompts. You should now be able to issue git commands</p>
</li>
</ol>
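<p>The mechanism being relied on here is simply that command prompts resolve a bare command name by searching each folder listed in <code>PATH</code> in order. A small cross-platform sketch of that lookup using Python&rsquo;s standard library (the fake executable is purely illustrative):</p>

```python
import os
import shutil
import stat
import tempfile

# Create a fake 'git' executable in a temporary folder.
folder = tempfile.mkdtemp()
fake_git = os.path.join(folder, "git")
with open(fake_git, "w") as f:
    f.write("#!/bin/sh\necho fake git\n")
os.chmod(fake_git, os.stat(fake_git).st_mode | stat.S_IEXEC)

# which() only finds the command once its folder is on the search path,
# which is exactly what adding the GitHub Desktop bin folder to PATH does.
found = shutil.which("git", path=folder)
```

<p>This is also why the change only takes effect in command prompts opened after the PATH edit: each prompt captures the environment when it starts.</p>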
<h2 id="posh-git">POSH Git</h2>
<p><a href="https://github.com/dahlbyk/posh-git">PowerShell Git (POSHGit)</a> is a tool I find very useful that provides information on the state of a git repo within the command prompt.</p>
<p>Usually it is added to a system using a tool such as Chocolatey or PowerShellGet. How to run these commands (as a non-administrator) is detailed on the <a href="https://github.com/dahlbyk/posh-git">project homepage</a>.</p>
<p>However, again corporate security may block these options. Luckily, as POSHGit is just a PowerShell module, there are also instructions for a manual install i.e.</p>
<ol>
<li>
<p>Download the current POSHGit PowerShell module from its GitHub release page and place it in a folder on your device</p>
</li>
<li>
<p>Open a PowerShell prompt</p>
</li>
<li>
<p>Edit the PowerShell profile; to do this, at the prompt type <code>notepad $profile</code></p>
</li>
<li>
<p>Add the following line with the correct path to where you saved the <code>posh-git.psd1</code> file</p>
<p><code>Import-Module 'C:\tools\posh-git\src\posh-git.psd1'</code></p>
</li>
<li>
<p>Save the file and restart the PowerShell prompt (this will load the module)</p>
</li>
<li>
<p>If you change directory to a git repo folder you should see the prompt change based on the state of the git repo</p>
</li>
</ol>
<h1 id="summary">Summary</h1>
<p>So, by using the git.exe from a corporate approved application such as GitHub Desktop, and setting the PATH and VSCode settings to point to this version, you can get around the problem of not being able to install git.exe on your corporate device using an MSI.</p>
<p>Thus getting the advantages of the CLI whilst remaining within the corporate security guidelines.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Change in Edge/Chromium profile settings broke my Azure Entra ID SSO</title>
      <link>https://blog.richardfennell.net/posts/change-in-browser-profile-settings-broke-my-sso/</link>
      <pubDate>Thu, 21 Dec 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/change-in-browser-profile-settings-broke-my-sso/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;I have recently hit a problem whilst developing some training material on Single Sign on (SSO) in GitHub Enterprise. This training is to be delivered in a training instance of GHE that is configured to use a training instance of Azure Entra ID as the SAML identity provider.&lt;/p&gt;
&lt;p&gt;To make my life easier, so I am not logging in and out of my work and test Azure Entra ID directories, I have been using &lt;a href=&#34;https://support.microsoft.com/en-us/topic/sign-in-and-create-multiple-profiles-in-microsoft-edge-df94e622-2061-49ae-ad1d-6f0e43ce6435&#34;&gt;profiles in my Chromium based Edge browser&lt;/a&gt;. This means I have two copies of Edge running, one for my &amp;lsquo;real&amp;rsquo; work directory and the other for the training directory.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>I have recently hit a problem whilst developing some training material on Single Sign on (SSO) in GitHub Enterprise. This training is to be delivered in a training instance of GHE that is configured to use a training instance of Azure Entra ID as the SAML identity provider.</p>
<p>To make my life easier, so I am not logging in and out of my work and test Azure Entra ID directories, I have been using <a href="https://support.microsoft.com/en-us/topic/sign-in-and-create-multiple-profiles-in-microsoft-edge-df94e622-2061-49ae-ad1d-6f0e43ce6435">profiles in my Chromium based Edge browser</a>. This means I have two copies of Edge running, one for my &lsquo;real&rsquo; work directory and the other for the training directory.</p>
<p>I thought I had left my training GHE instance SSO configured and working, but when I came to test it I found that I could not login. I found the following:</p>
<ol>
<li>In my Edge browser (training profile) I would go to the GHE login page and login</li>
<li>I would be prompted for the normal GitHub 2FA login</li>
<li>I would be asked if I wished to do the SSO Azure Entra ID login, which I confirmed</li>
<li>I would expect to be redirected to the Azure Entra ID login page in the current Edge profile. However, it opened in my &lsquo;real&rsquo; work Edge profile instance with a 404 error</li>
</ol>
<h2 id="the-solution">The Solution</h2>
<p>The solution was to change the Edge profile settings</p>
<ol>
<li>In the training Edge instance open the settings</li>
<li>Select Settings &gt; Profile &gt; Profile Preferences</li>
<li>Turn off the &lsquo;Account based profile switching&rsquo; option</li>
</ol>
<p>If I then retried my SSO login, the Entra ID tab opened in the correct profile and I was able to login.</p>
<p>As I am sure this was working in the recent past, I assume this is a change in the Edge profile default settings. Hopefully this post will mean other people don&rsquo;t waste time trying to work out what is going on.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Caching NVD Vulnerability Dependency data on hosted Azure DevOps Pipeline agents</title>
      <link>https://blog.richardfennell.net/posts/caching-nvd-dependancies/</link>
      <pubDate>Wed, 06 Dec 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/caching-nvd-dependancies/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;On some projects we use &lt;a href=&#34;https://github.com/jeremylong/DependencyCheck&#34;&gt;Jeremy Long&amp;rsquo;s DependencyCheck&lt;/a&gt; tool, via the &lt;a href=&#34;https://github.com/dependency-check/azuredevops&#34;&gt;Azure DevOps task&lt;/a&gt;, to scan our code for known vulnerabilities. This tool uses the &lt;a href=&#34;https://nvd.nist.gov/&#34;&gt;National Vulnerability Database (NVD)&lt;/a&gt; to get its data. This data is downloaded on demand from the NVD site by the DependencyCheck tool.&lt;/p&gt;
&lt;p&gt;Since the recent API changes on the NVD site, &lt;a href=&#34;https://github.com/jeremylong/DependencyCheck#900-upgrade-notice&#34;&gt;as supported by DependencyCheck 9.0.x&lt;/a&gt;, the downloading of the current vulnerability data has slowed from about 3 minutes to around 15 minutes, even with a valid &lt;a href=&#34;https://nvd.nist.gov/developers/request-an-api-key&#34;&gt;NVD API Key&lt;/a&gt;. So effectively slowing all our pipeline builds by 15 minutes, a very significant change if the rest of the build only takes a few seconds!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>On some projects we use <a href="https://github.com/jeremylong/DependencyCheck">Jeremy Long&rsquo;s DependencyCheck</a> tool, via the <a href="https://github.com/dependency-check/azuredevops">Azure DevOps task</a>, to scan our code for known vulnerabilities. This tool uses the <a href="https://nvd.nist.gov/">National Vulnerability Database (NVD)</a> to get its data. This data is downloaded on demand from the NVD site by the DependencyCheck tool.</p>
<p>Since the recent API changes on the NVD site, <a href="https://github.com/jeremylong/DependencyCheck#900-upgrade-notice">as supported by DependencyCheck 9.0.x</a>, the downloading of the current vulnerability data has slowed from about 3 minutes to around 15 minutes, even with a valid <a href="https://nvd.nist.gov/developers/request-an-api-key">NVD API Key</a>. So effectively slowing all our pipeline builds by 15 minutes, a very significant change if the rest of the build only takes a few seconds!</p>
<h2 id="solution-for-on-premise-agents">Solution for on premise agents</h2>
<p>For our on premise pipeline agents, this is not as big a problem as it initially sounds. Each agent caches the NVD data locally, so once the first build has downloaded the data, subsequent builds use the local cache, updating it as needed.</p>
<p>To further reduce the impact to developers of getting the updated NVD data, we have a scheduled build running on each on premises agent to make sure we update the cache at least once a day. I <a href="https://blogs.blackmarble.co.uk/rfennell/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/">previously posted</a> on how to create your own Azure DevOps maintenance jobs for just this type of requirement.</p>
<h2 id="but-what-about-hosted-agents">But what about hosted agents?</h2>
<p>Unfortunately we cannot use the same approach for hosted agents. The problem is that after each pipeline run the hosted agent is destroyed, so the cache is lost. This means that each pipeline run has to download the NVD data.</p>
<p>But there is a solution, to use the <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops">Cache Pipeline task</a>. This task allows any user defined agent data to be cached between pipeline runs, even on hosted agents. The limitations of the cache are that:</p>
<ul>
<li>The cache is specific to a pipeline definition, so there is no sharing of the cache between pipeline definitions</li>
<li>The cache only lasts 7 days.</li>
</ul>
<p>But even with these limitations it is still a big improvement over downloading the data on each and every build. For a given pipeline definition that is run regularly, i.e. multiple times a week, the cache will be used for all but the first run.</p>
<p>The YAML to set up the cache is as follows. Note that the key step is to find the location of the NVD cache using some PowerShell, as we don&rsquo;t know the exact path in advance: it depends on the version of the DependencyCheck task being used.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># find the current location of the NVD cache (it is task version specific)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="w"> </span><span class="nt">powershell</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">     $nvdcachepath = $(get-childitem &#34;$(Agent.WorkFolder)\_tasks\dependency-check-build-task*\*.*.*\dependency-check\data&#34;).FullName
</span></span></span><span class="line"><span class="cl"><span class="sd">     echo &#34;##vso[task.setvariable variable=nvdcachepath;]$nvdcachepath&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Find the NVD Cache path</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># create the cache</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="w"> </span><span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">Cache@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">key</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;&#34;NVDCache&#34; | &#34;$(Agent.OS)&#34;&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">restoreKeys</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">         NVDCache | &#34;$(Agent.OS)&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">         NVDCache</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l">$(nvdcachepath)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">NVD Cache</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># No changes required from the standard task</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="w"> </span><span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">dependency-check-build-task@6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;Vulnerability Scan Exploited Vulnerabilities update check&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">projectName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Maintainance&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">scanPath</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;.&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">format</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;HTML&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">additionalArguments</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;--nvdApiKey $(nvdapikey)&#39;</span><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># a special end of run task is automatically added at runtime to save the cache</span><span class="w">
</span></span></span></code></pre></div><blockquote>
<p><strong>Note added 31-Oct-2025</strong></p>
<p>This solution can have unexpected side effects if the <strong>nvdcachepath</strong> is not set, see <a href="/posts/interesting-side-effect-with-azdo-cache/">this post</a> for more details and mitigation you may wish to consider.</p></blockquote>
<p>So now we have a solution that works for both on-premises and hosted agents, hopefully saving around 15 minutes on all but the first run of a given pipeline definition.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New problem when generating build agents using Packer</title>
      <link>https://blog.richardfennell.net/posts/new-problem-when-generating-build-agents-using-packer/</link>
      <pubDate>Thu, 19 Oct 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-problem-when-generating-build-agents-using-packer/</guid>
      <description>&lt;h2 id=&#34;the-problem&#34;&gt;The Problem&lt;/h2&gt;
&lt;p&gt;I have been using Packer to generate our Azure DevOps Build agent VHD images for a while now, but when I came to regenerate them this time I hit a problem.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/you-need-to-pass-a-github-pat-to-create-azure-devops-agent-images-using-packer/&#34;&gt;As I have documented previously&lt;/a&gt;, our process is that we update our fork of the &lt;a href=&#34;https://github.com/actions/runner-images&#34;&gt;Microsoft repository&lt;/a&gt; and then merge the newest changes into our long lived branch that contains our customisations. I then use Packer to generate a new generalised VHD which has all the same features as the Microsoft hosted agents. I then use this VHD to create our new Hyper-V based self hosted Azure DevOps agent VMs using &lt;a href=&#34;https://github.com/VirtualEngine/Lability&#34;&gt;Lability&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The Problem</h2>
<p>I have been using Packer to generate our Azure DevOps Build agent VHD images for a while now, but when I came to regenerate them this time I hit a problem.</p>
<p><a href="https://blogs.blackmarble.co.uk/rfennell/you-need-to-pass-a-github-pat-to-create-azure-devops-agent-images-using-packer/">As I have documented previously</a>, our process is that we update our fork of the <a href="https://github.com/actions/runner-images">Microsoft repository</a> and then merge the newest changes into our long lived branch that contains our customisations. I then use Packer to generate a new generalised VHD which has all the same features as the Microsoft hosted agents. I then use this VHD to create our new Hyper-V based self hosted Azure DevOps agent VMs using <a href="https://github.com/VirtualEngine/Lability">Lability</a>.</p>
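<p>The fork update part of that process can be sketched with a few git commands; the remote and branch names here are illustrative, not necessarily the ones we use:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"># pull the latest changes from the Microsoft repository into our fork
git fetch upstream
# merge them into the long lived branch holding our customisations
git checkout our-customisations
git merge upstream/main
git push origin our-customisations
</code></pre></div>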
<p>The problem is since the last time we ran this process a couple of months ago, Microsoft have swapped the Packer Builder option they are using in their definitions.</p>
<ul>
<li>They used to use the <code>vhd</code> option in the <code>azure-arm</code> builder. This generated a generalised VHD in Azure Blob Storage.</li>
<li>They are now using the <code>image</code> option, also found in the <code>azure-arm</code> builder, which generates a VM Image in Azure.</li>
</ul>
<p>I can see that this is a better option for Microsoft, or anyone using Azure hosted agents, as it means the generated image is ready and waiting to be used to create an Azure hosted agent VM or VM Scale Set.</p>
<p>However, it is not what we need. We need a local VHD that we can use with Hyper-V/Lability.</p>
<p>For us, the key problem is that there does not appear to be a way to directly download the generalised VHD from the VM Image. You either have to:</p>
<ul>
<li>Create an Azure hosted VM from the image, then re-SysPrep the VM and then download the now generalised VHD.</li>
<li>Or export the image into an Azure Compute Gallery, from which you can download the VHD.</li>
</ul>
<p>Both options add more slow steps, so initially I looked for another option: a way to keep doing what I had done before.</p>
<h2 id="a-partial-solution---doing-it-the-old-way">A Partial Solution - doing it the old way</h2>
<p>The key point to note is that all the critical Packer definition changes are in the builder block and associated variables. The end target for the built image is changed, not what is installed on the image, i.e. the Packer provisioners that install the features/tools are unchanged.</p>
<p>So I thought, what happens if we just revert the builder block of JSON?</p>
<p>So I did the following</p>
<ol>
<li>Copied the <code>{..}</code> variables block from the previous version of the Packer JSON file, the one that generated a VHD, to a file</li>
<li>Copied the <code>{..}</code> builder block (from inside the <code>[..]</code> array) from the same previous version of the Packer JSON file to a file</li>
<li>Updated our fork/branch of the repo to get the updated Packer definitions that build an image</li>
<li>Ran a script that replaces the variables block and builders block in the array with the historic ones from our previously saved files</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="c"># Load the images\win\windows20xx.json</span>
</span></span><span class="line"><span class="cl"><span class="nv">$target</span> <span class="p">=</span> <span class="nb">get-content</span> <span class="nv">$targetFile</span> <span class="p">|</span> <span class="nb">Convertfrom-Json</span>
</span></span><span class="line"><span class="cl"><span class="c"># Load the variables json block and builders blocks</span>
</span></span><span class="line"><span class="cl"><span class="nv">$variables</span> <span class="p">=</span> <span class="nb">get-content</span> <span class="nv">$variablesFile</span> <span class="p">|</span> <span class="nb">ConvertFrom-Json</span>
</span></span><span class="line"><span class="cl"><span class="nv">$builders</span> <span class="p">=</span> <span class="nb">get-content</span> <span class="nv">$buildersFile</span> <span class="p">|</span> <span class="nb">Convertfrom-Json</span>
</span></span><span class="line"><span class="cl"><span class="c"># Replacing Microsoft Variables block with our values</span>
</span></span><span class="line"><span class="cl"><span class="nv">$target</span><span class="p">.</span><span class="py">variables</span> <span class="p">=</span> <span class="nv">$variables</span>
</span></span><span class="line"><span class="cl"><span class="c"># Replacing Microsoft Builders jsonblock with our values</span>
</span></span><span class="line"><span class="cl"><span class="nv">$target</span><span class="p">.</span><span class="py">builders</span> <span class="p">=</span> <span class="vm">@</span><span class="p">(</span><span class="nv">$builders</span><span class="p">)</span> <span class="c"># make sure we force to be an array</span>
</span></span><span class="line"><span class="cl"><span class="c"># Writing Out the modified Windows 20xx JSON file</span>
</span></span><span class="line"><span class="cl"><span class="nv">$target</span> <span class="p">|</span> <span class="nb">ConvertTo-Json</span> <span class="n">-depth</span> <span class="mf">100</span> <span class="p">|</span> <span class="nb">out-file</span> <span class="nv">$targetFile</span> <span class="n">-encoding</span> <span class="n">ascii</span>
</span></span></code></pre></div><p>I then ran Packer as normal, and for the Windows 2022 definition, after the usual multi-hour wait, got the expected VHD.</p>
<p>However, for a Windows 2019 based image I got an error installing the .NET Framework 4.5 feature. This problem did not occur when we built the Packer definition as an <code>image</code> as opposed to a <code>vhd</code>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">vhd: Provisioning with powershell script: C:\azure-pipelines-virtual-environments\images\win&gt;/scripts/Installers/Install-WindowsFeatures.ps1 
</span></span><span class="line"><span class="cl">vhd: Activating Windows Feature &#39;NET-Framework-Features&#39;...
</span></span><span class="line"><span class="cl">vhd: Windows Feature &#39;NET-Framework-Features&#39; was activated successfully
</span></span><span class="line"><span class="cl">vhd: Activating Windows Feature &#39;NET-Framework-45-Features&#39;...
</span></span><span class="line"><span class="cl">vhd: Install-WindowsFeature : The request to add or remove features on the specified server failed.
</span></span><span class="line"><span class="cl">vhd: The operation cannot be completed, because the server that you specified requires a restart.
</span></span></code></pre></div><p>Try as I might by re-ordering steps in the build, I could not get this to work. As soon as I fixed one issue, another appeared.</p>
<p>I have no idea why changing the builder should cause such issues, but I needed a solution that did not require so much editing, so had to look for another option.</p>
<h2 id="a-better-solution---via-an-azure-compute-gallery">A Better Solution - via an Azure Compute Gallery</h2>
<p>It was obvious I had to adopt the new way of using Packer. I quickly discarded the idea of creating an Azure hosted VM from the Packer-generated VM image and then re-SysPrep&rsquo;ing it; this would have been a slow process that required a lot of manual steps.</p>
<p>The best of the alternatives I could find was to build a VM image, using the new Packer definition, then clone it to an Azure Compute Gallery, from where I could download it as a generalised VHD.</p>
<p>The complete process is as follows:</p>
<ol>
<li>
<p><strong>[Done once]</strong> Create an Azure Compute Gallery instance in your subscription.</p>
</li>
<li>
<p>Run Packer to generate your generalised VM Image</p>
</li>
<li>
<p>In the Azure Portal view the newly created VM Image and select &lsquo;Clone to a VM Image&rsquo;</p>
<ul>
<li>Select the previously created Azure Compute Gallery</li>
<li>Provide a version number; we are using one based on the OS of the image, e.g. 2022.0.1</li>
<li>If it is the first time you are cloning a VM image, create a new &lsquo;Target VM Image definition&rsquo; with a suitable name; for all subsequent clones just select the existing definition target</li>
<li>Pick the replication rules that meet your needs; I used local replication on premium storage.</li>
</ul>
<p>The cloning of the image version takes around 30 minutes for the 250GB image.</p>
<blockquote>
<p>If you don&rsquo;t want to use the portal, you could use the <a href="https://learn.microsoft.com/en-us/cli/azure/sig/image-version?view=azure-cli-latest">Azure CLI</a> <code>az sig image-version</code> commands, e.g.</p>
<p><code>az sig image-version create --gallery-name MyPackerGallery --resource-group myrg --gallery-image-definition BuildAgent2022 --gallery-image-version 2.0.1 --managed-image /subscriptions/&lt;GUID&gt;/resourceGroups/BMPACKER/providers/Microsoft.Compute/images/buildagent --target-regions westeurope=1=premium_lrs --location westeurope</code></p>
<p>You need the <code>--location</code> parameter, else your subscription&rsquo;s default location is used, which may not be where your VM Image is. This results in the following error, which is somewhat confusing given that you have provided a full resource ID for the image and your source and target are in the same region and resource group.</p>
<p><code>(InvalidParameter) Gallery image version publishing profile regions 'westeurope' must contain the location of image version 'North Europe'.</code></p></blockquote>
</li>
<li>
<p>Once the replication has completed, consider deleting the VM Image that was created by Packer, as it is no longer needed.</p>
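<blockquote>
<p>This cleanup could also be scripted with the Azure CLI; a sketch using the illustrative image name and resource group from the earlier command:</p>
<p><code>az image delete --resource-group BMPACKER --name buildagent</code></p>
</blockquote>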
</li>
<li>
<p>You can then download the generalised VHD using PowerShell.</p>
</li>
</ol>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell">param (
    $subscription,           # e.g. &#34;My Subscription&#34;
    $rgName,                 # e.g. &#34;packer&#34;
    $galleryName,            # e.g. &#34;PackerGallery&#34;
    $galleryDefinitionName,  # e.g. &#34;BuildAgent2022&#34;
    $galleryImageVersion,    # e.g. &#34;2022.0.1&#34;
    $targetDir               # e.g. &#34;c:\download&#34;
)
write-host &#34;This script uses the Az PowerShell module&#34;
write-host &#34;    Install-Module -Name Az&#34;
write-host &#34;It also assumes that AZCOPY.EXE is in the current folder`n`n&#34;

write-host &#34;Connecting to Azure subscription &#39;$subscription&#39;&#34;
Connect-AzAccount -Subscription $subscription
Select-AzSubscription $subscription

write-host &#34;Downloading VHD for $galleryDefinitionName $galleryImageVersion&#34;
$imgver = Get-AzGalleryImageVersion -ResourceGroupName $rgName -GalleryName $galleryName -GalleryImageDefinitionName $galleryDefinitionName -Name $galleryImageVersion
$imgver
$galleryImageVersionID = $imgver.Id

write-host &#34;Creating temporary disk&#34;
$diskName = &#34;tmpOSDisk&#34;
$imageOSDisk = @{Id = $galleryImageVersionID}
$OSDiskConfig = New-AzDiskConfig -Location $imgver.Location -CreateOption &#34;FromImage&#34; -GalleryImageReference $imageOSDisk
$osd = New-AzDisk -ResourceGroupName $rgName -DiskName $diskName -Disk $OSDiskConfig

$downloadPath = $targetDir + &#34;\&#34; + $galleryImageVersion + &#34;.vhd&#34;
write-host &#34;Granting access to temporary disk&#34;
# The SAS needs a long duration, else the download times out with only 35Gb of the 250Gb VHD copied
$sas = Grant-AzDiskAccess -ResourceGroupName $rgName -DiskName $osd.Name -Access &#34;Read&#34; -DurationInSecond 18000

write-host &#34;Downloading VHD to $downloadPath - this will take some time&#34;
.\azcopy cp $sas.AccessSAS $downloadPath
</code></pre></div><p>Once the VHD was on our local network, I was able to build my self hosted agents as normal using Lability on Hyper-V.</p>
<h2 id="conclusion">Conclusion</h2>
<p>In the end the change of Packer target was not as big a problem as I first thought. The extra clone step adds about 30 minutes to the process, but given that the whole process of VHD creation and copying takes the best part of 24 hours, another 30 minutes is neither here nor there.</p>
<p>It will be interesting to see how I can use the Azure Compute Gallery in the future. It could make managing self-hosted agent images for Azure much easier.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Practical DevOps in the Enterprise - Event &amp; Whitepaper</title>
      <link>https://blog.richardfennell.net/posts/practical-devops-in-the-enterprise/</link>
      <pubDate>Sun, 08 Oct 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/practical-devops-in-the-enterprise/</guid>
      <description>&lt;p&gt;In advance of my upcoming Black Marble Webinar on the 18th October entitled &amp;lsquo;Practical DevOps in the Enterprise&amp;rsquo;, I have written a whitepaper of the same name.&lt;/p&gt;
&lt;p&gt;You can find the whitepaper on &lt;a href=&#34;https://www.linkedin.com/pulse/practical-devops-enterprise-black-marble/&#34;&gt;LinkedIn&lt;/a&gt; and register for this, and other free Black Marble webinars and in-person events, via &lt;a href=&#34;https://www.blackmarble.com/events&#34;&gt;the Black Marble website&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In advance of my upcoming Black Marble Webinar on the 18th October entitled &lsquo;Practical DevOps in the Enterprise&rsquo;, I have written a whitepaper of the same name.</p>
<p>You can find the whitepaper on <a href="https://www.linkedin.com/pulse/practical-devops-enterprise-black-marble/">LinkedIn</a> and register for this, and other free Black Marble webinars and in-person events, via <a href="https://www.blackmarble.com/events">the Black Marble website</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SonarQube Docker Container will not start</title>
      <link>https://blog.richardfennell.net/posts/sonarqube-container-will-not-start/</link>
      <pubDate>Wed, 27 Sep 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sonarqube-container-will-not-start/</guid>
      <description>&lt;h2 id=&#34;the-problem&#34;&gt;The Problem&lt;/h2&gt;
&lt;p&gt;We run our SonarQube instance in a &lt;a href=&#34;https://devblogs.microsoft.com/premier-developer/sonarqube-hosted-on-azure-app-service/&#34;&gt;Docker container hosted in an Azure Web App Service&lt;/a&gt;. Today, with no notice, it failed. We did the obvious, just tried to restart it and the startup process failed.&lt;/p&gt;
&lt;p&gt;Looking at the Azure Web App&amp;rsquo;s Log Stream we could see the following error repeated on each restart attempt&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:00.797Z INFO - Starting multi-container app..
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:01.024Z INFO - Pulling image: sonarqube:10.1-developer
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:02.100Z INFO - 10.1-developer Pulling from library/sonarqube
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:02.291Z INFO - Digest: sha256:45e7cf02e037b00028d20556a91111f8ae8ae2b2803e516cb0665dd605a6d8b2
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:02.292Z INFO - Status: Image is up to date for sonarqube:10.1-developer
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:02.326Z INFO - Pull Image successful, Time taken: 0 Minutes and 1 Seconds
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:02.344Z INFO - Starting container for site
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:02.346Z INFO - docker run -d -p 8289:9000 --name bmsonarqubeprodwebsite_sonarqube_0_ec31111b -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=bmsonarqubeprodwebsite -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=bmsonarqubeprodwebsite.azurewebsites.net -e WEBSITE_INSTANCE_ID=6d27706a6b4eb56feec6ef57ab9b360923c5761cabb3fb52eb6fc5f4cdfbace3 -e WEBSITE_USE_DIAGNOSTIC_SERVER=False sonarqube:10.1-developer -Dsonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:30:02.347Z INFO - Logging is not enabled for this container.
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:31:09 No new trace in the past 1 min(s).
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:32:09 No new trace in the past 2 min(s).
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:33:09 No new trace in the past 3 min(s).
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;2023-09-27T15:33:52.465Z ERROR - multi-container unit was not started successfully
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;the-solution&#34;&gt;The Solution&lt;/h2&gt;
&lt;p&gt;Something in the back of my mind, from when we ran an on-premises SonarQube instance, made me think of a corrupt ElasticSearch index.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The Problem</h2>
<p>We run our SonarQube instance in a <a href="https://devblogs.microsoft.com/premier-developer/sonarqube-hosted-on-azure-app-service/">Docker container hosted in an Azure Web App Service</a>. Today, with no notice, it failed. We did the obvious, just tried to restart it and the startup process failed.</p>
<p>Looking at the Azure Web App&rsquo;s Log Stream we could see the following error repeated on each restart attempt</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">2023-09-27T15:30:00.797Z INFO - Starting multi-container app..
</span></span><span class="line"><span class="cl">2023-09-27T15:30:01.024Z INFO - Pulling image: sonarqube:10.1-developer
</span></span><span class="line"><span class="cl">2023-09-27T15:30:02.100Z INFO - 10.1-developer Pulling from library/sonarqube
</span></span><span class="line"><span class="cl">2023-09-27T15:30:02.291Z INFO - Digest: sha256:45e7cf02e037b00028d20556a91111f8ae8ae2b2803e516cb0665dd605a6d8b2
</span></span><span class="line"><span class="cl">2023-09-27T15:30:02.292Z INFO - Status: Image is up to date for sonarqube:10.1-developer
</span></span><span class="line"><span class="cl">2023-09-27T15:30:02.326Z INFO - Pull Image successful, Time taken: 0 Minutes and 1 Seconds
</span></span><span class="line"><span class="cl">2023-09-27T15:30:02.344Z INFO - Starting container for site
</span></span><span class="line"><span class="cl">2023-09-27T15:30:02.346Z INFO - docker run -d -p 8289:9000 --name bmsonarqubeprodwebsite_sonarqube_0_ec31111b -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=bmsonarqubeprodwebsite -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=bmsonarqubeprodwebsite.azurewebsites.net -e WEBSITE_INSTANCE_ID=6d27706a6b4eb56feec6ef57ab9b360923c5761cabb3fb52eb6fc5f4cdfbace3 -e WEBSITE_USE_DIAGNOSTIC_SERVER=False sonarqube:10.1-developer -Dsonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false
</span></span><span class="line"><span class="cl">2023-09-27T15:30:02.347Z INFO - Logging is not enabled for this container.
</span></span><span class="line"><span class="cl">Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
</span></span><span class="line"><span class="cl">2023-09-27T15:31:09 No new trace in the past 1 min(s).
</span></span><span class="line"><span class="cl">2023-09-27T15:32:09 No new trace in the past 2 min(s).
</span></span><span class="line"><span class="cl">2023-09-27T15:33:09 No new trace in the past 3 min(s).
</span></span><span class="line"><span class="cl">2023-09-27T15:33:52.465Z ERROR - multi-container unit was not started successfully
</span></span></code></pre></div><h2 id="the-solution">The Solution</h2>
<p>Something in the back of my mind, from when we ran an on-premises SonarQube instance, made me think of a corrupt ElasticSearch index.</p>
<p>Using Azure Storage Explorer, I connected to the <code>sonarqube-data</code> file share and deleted the <code>ES8</code> data folder.</p>
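<blockquote>
<p>If you don&rsquo;t have Storage Explorer to hand, the same cleanup can be sketched with the Azure CLI; the storage account name below is illustrative, and nested folders may need the pattern adjusting:</p>
<p><code>az storage file delete-batch --account-name mystorageaccount --source sonarqube-data --pattern &#39;ES8/*&#39;</code></p>
</blockquote>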
<p>I then restarted the container and it started OK, recreating the <code>ES8</code> folder and re-indexing the SonarQube content.</p>
<p>Thus far all appears to be OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Setting Azure DevOps Agent Pool Descriptions via the Azure DevOps API</title>
      <link>https://blog.richardfennell.net/posts/setting-azure-devops-agent-pool_descriptions/</link>
      <pubDate>Thu, 24 Aug 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-azure-devops-agent-pool_descriptions/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;If you are using &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops&#34;&gt;Azure Scale Set based Azure DevOps Agent Pools&lt;/a&gt; (VMSS) to provide dynamically scalable agent pools, unlike with self hosted agent pools, there is no capabilities tab for the individual agents.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Agent Capabilities tab&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/images/rfennell/vmss1.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;My understanding is that this UX design choice was made as the capabilities of all the agents in a VMSS pool are identical, as they are the same disk image, so why show the same data multiple times.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>If you are using <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops">Azure Scale Set based Azure DevOps Agent Pools</a> (VMSS) to provide dynamically scalable agent pools, you will find that, unlike with self-hosted agent pools, there is no capabilities tab for the individual agents.</p>
<p><img alt="Agent Capabilities tab" loading="lazy" src="/images/rfennell/vmss1.png"></p>
<p>My understanding is that this UX design choice was made because the capabilities of all the agents in a VMSS pool are identical (they are built from the same disk image), so there is no point in showing the same data multiple times.</p>
<p>However, this can raise a discoverability problem for teams who are not involved in the VMSS disk image creation process. They have no way to see what capabilities are available on various pools, or if versions of tools have been updated between disk image versions.</p>
<h2 id="a-solution">A Solution</h2>
<p>Though the individual agents do not have a capabilities tab, the VMSS pool does have an editable &lsquo;Details&rsquo; tab. So, why not publish the capabilities of the agents in the pool&rsquo;s details tab?</p>
<p>This can be achieved with a call to the Azure DevOps API, though admittedly one that is not well documented.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="c">##-----------------------------------------------------------------------</span>
</span></span><span class="line"><span class="cl"><span class="c">## &lt;copyright file=&#34;Update-VMSSPoolDetails.ps1&#34;&gt;(c) Richard Fennell. &lt;/copyright&gt;</span>
</span></span><span class="line"><span class="cl"><span class="c">##-----------------------------------------------------------------------</span>
</span></span><span class="line"><span class="cl"><span class="c"># Updates the description of a VMSS based pool in Azure DevOps</span>
</span></span><span class="line"><span class="cl"><span class="k">param</span>
</span></span><span class="line"><span class="cl"><span class="p">(</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="p">[</span><span class="nb">parameter</span><span class="p">(</span><span class="na">Mandatory</span> <span class="p">=</span> <span class="vm">$true</span><span class="p">,</span> <span class="na">HelpMessage</span> <span class="p">=</span> <span class="s2">&#34;URL of the Azure DevOps Organisation e.g. &#39;https://dev.azure.com/myorg&#39;&#34;</span><span class="p">)]</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$orgUri</span> <span class="p">,</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="p">[</span><span class="nb">parameter</span><span class="p">(</span><span class="na">Mandatory</span> <span class="p">=</span> <span class="vm">$false</span><span class="p">,</span> <span class="na">HelpMessage</span> <span class="p">=</span> <span class="s2">&#34;Personal Access Token&#34;</span><span class="p">)]</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$pat</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="p">[</span><span class="nb">parameter</span><span class="p">(</span><span class="na">Mandatory</span> <span class="p">=</span> <span class="vm">$true</span><span class="p">,</span> <span class="na">HelpMessage</span> <span class="p">=</span> <span class="s2">&#34;The name of the agent pool to update&#34;</span><span class="p">)]</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$poolName</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="p">[</span><span class="nb">parameter</span><span class="p">(</span><span class="na">Mandatory</span> <span class="p">=</span> <span class="vm">$true</span><span class="p">,</span> <span class="na">HelpMessage</span> <span class="p">=</span> <span class="s2">&#34;The new description for the VMSS based pool&#34;</span><span class="p">)]</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$description</span> 
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="p">)</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="kd">function</span><span class="w"> </span><span class="nb">Get-WebClient</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="k">param</span>
</span></span><span class="line"><span class="cl">    <span class="p">(</span>
</span></span><span class="line"><span class="cl">        <span class="p">[</span><span class="no">string</span><span class="p">]</span><span class="nv">$pat</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="p">[</span><span class="no">string</span><span class="p">]</span><span class="nv">$ContentType</span> <span class="p">=</span> <span class="s2">&#34;application/json&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">)</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nv">$wc</span> <span class="p">=</span> <span class="nb">New-Object</span> <span class="n">System</span><span class="p">.</span><span class="py">Net</span><span class="p">.</span><span class="py">WebClient</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$wc</span><span class="p">.</span><span class="n">Headers</span><span class="p">[</span><span class="s2">&#34;Content-Type&#34;</span><span class="p">]</span> <span class="p">=</span> <span class="nv">$ContentType</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nv">$pair</span> <span class="p">=</span> <span class="s2">&#34;:</span><span class="nv">${pat}</span><span class="s2">&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$bytes</span> <span class="p">=</span> <span class="p">[</span><span class="no">System.Text.Encoding</span><span class="p">]::</span><span class="n">ASCII</span><span class="p">.</span><span class="py">GetBytes</span><span class="p">(</span><span class="nv">$pair</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$base64</span> <span class="p">=</span> <span class="p">[</span><span class="no">System.Convert</span><span class="p">]::</span><span class="n">ToBase64String</span><span class="p">(</span><span class="nv">$bytes</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$wc</span><span class="p">.</span><span class="py">Headers</span><span class="p">.</span><span class="py">Add</span><span class="p">(</span><span class="s2">&#34;Authorization&#34;</span><span class="p">,</span> <span class="s2">&#34;Basic </span><span class="nv">$base64</span><span class="s2">&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nv">$wc</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nv">$wc</span> <span class="p">=</span> <span class="nb">Get-WebClient</span> <span class="n">-pat</span> <span class="nv">$pat</span> 
</span></span><span class="line"><span class="cl"><span class="nb">write-host</span> <span class="s2">&#34;Finding the Agent Pool &#39;</span><span class="nv">$poolName</span><span class="s2">&#39; in &#39;</span><span class="nv">$orgUri</span><span class="s2">&#39; &#34;</span> <span class="n">-ForegroundColor</span> <span class="n">Green</span>
</span></span><span class="line"><span class="cl"><span class="nv">$uri</span> <span class="p">=</span> <span class="s2">&#34;</span><span class="p">$(</span><span class="nv">$orgUri</span><span class="p">)</span><span class="s2">/_apis/distributedtask/pools?api-version=5.0-preview.1&#34;</span>
</span></span><span class="line"><span class="cl"><span class="nv">$jsondata</span> <span class="p">=</span> <span class="nv">$wc</span><span class="p">.</span><span class="py">DownloadString</span><span class="p">(</span><span class="nv">$uri</span><span class="p">)</span> <span class="p">|</span> <span class="nb">ConvertFrom-Json</span>
</span></span><span class="line"><span class="cl"><span class="nv">$poolid</span> <span class="p">=</span> <span class="p">(</span><span class="nv">$jsondata</span><span class="p">.</span><span class="py">value</span> <span class="p">|</span> <span class="nb">where </span><span class="p">{</span> <span class="nv">$_</span><span class="p">.</span><span class="py">name</span> <span class="o">-eq</span> <span class="nv">$poolName</span> <span class="p">}).</span><span class="py">id</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nv">$wc</span> <span class="p">=</span> <span class="nb">Get-WebClient</span> <span class="n">-pat</span> <span class="nv">$pat</span> <span class="n">-ContentType</span> <span class="s2">&#34;application/octet-stream&#34;</span>
</span></span><span class="line"><span class="cl"><span class="nb">write-host</span> <span class="s2">&#34;Updating details of pool ID &#39;</span><span class="nv">$poolid</span><span class="s2">&#39; on &#39;</span><span class="nv">$orgUri</span><span class="s2">&#39; with &#39;</span><span class="nv">$description</span><span class="s2">&#39;&#34;</span> <span class="n">-ForegroundColor</span> <span class="n">Green</span>
</span></span><span class="line"><span class="cl"><span class="nv">$uri</span> <span class="p">=</span> <span class="s2">&#34;</span><span class="p">$(</span><span class="nv">$orgUri</span><span class="p">)</span><span class="s2">/_apis/distributedtask/pools/</span><span class="nv">$poolid</span><span class="s2">/poolmetadata?api-version=5.0-preview.1&#34;</span>
</span></span><span class="line"><span class="cl"><span class="nv">$wc</span><span class="p">.</span><span class="py">UploadString</span><span class="p">(</span><span class="nv">$uri</span><span class="p">,</span> <span class="s2">&#34;PUT&#34;</span><span class="p">,</span> <span class="nv">$description</span><span class="p">)</span> 
</span></span></code></pre></div><p>This script can be called passing in either a simple string description or something more complex such as markdown. For example:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="nb">Update-VMSSPoolDetails</span><span class="p">.</span><span class="py">ps1</span> <span class="n">-orgUri</span> <span class="n">https</span><span class="err">:</span><span class="p">//</span><span class="n">dev</span><span class="p">.</span><span class="py">azure</span><span class="p">.</span><span class="n">com</span><span class="p">/</span><span class="n">myorg</span> <span class="n">-pat</span> <span class="nv">$pat</span> <span class="n">-poolName</span> <span class="s1">&#39;Azure VM Scale Set&#39;</span> <span class="n">-description</span> <span class="s2">&#34;# Agent Details </span><span class="se">`n</span><span class="s2"> This is a description of the agents in this pool </span><span class="se">`n</span><span class="s2"> |Capability| Setting| </span><span class="se">`n</span><span class="s2"> |-|-| </span><span class="se">`n</span><span class="s2"> |Agent.OS | Windows_NT| </span><span class="se">`n</span><span class="s2"> |AgentOSVersion | 10.0| &#34;</span>
</span></span></code></pre></div><p><img alt="Agent Pool Details tab" loading="lazy" src="/images/rfennell/vmss2.png"></p>
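<p>For anyone adapting the script to another language or tool, the authentication it uses is just HTTP Basic with an empty username and the PAT as the password. A minimal sketch of the same header construction in Python, for illustration (the PAT value is a placeholder):</p>
<pre tabindex="0"><code class="language-python" data-lang="python">import base64

def basic_auth_header(pat):
    # Azure DevOps PAT auth: Basic scheme, empty username, PAT as the password
    token = base64.b64encode((&#34;:&#34; + pat).encode(&#34;ascii&#34;)).decode(&#34;ascii&#34;)
    return &#34;Basic &#34; + token

# The resulting value is what goes in the &#39;Authorization&#39; request header
print(basic_auth_header(&#34;my-pat&#34;))
</code></pre>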
<p>The question then becomes: how do you build the description parameter? There is no end of options; here are a few that come to mind:</p>
<ul>
<li>Generate the string as part of your base image generation process, maybe using <a href="https://blogs.blackmarble.co.uk/rfennell/you-need-to-pass-a-github-pat-to-create-azure-devops-agent-images-using-packer/">Packer</a></li>
<li>Or as you deploy your base images to Azure, maybe with Terraform or an Azure DevOps pipeline</li>
<li>Or even as part of a <a href="https://blogs.blackmarble.co.uk/rfennell/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/">custom maintenance job</a>, where you build the details based on the environment variables of the agent (remember an Azure DevOps capability is just an environment variable at the OS level)</li>
</ul>
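<p>As an illustration of that last option, the description can be generated from the agent&#39;s environment variables. A minimal Python sketch (the capability names shown are just examples; on a real agent they would be read from <code>os.environ</code>):</p>
<pre tabindex="0"><code class="language-python" data-lang="python">def build_description(capabilities):
    # Render the capabilities as the markdown table shown in the pool&#39;s Details tab
    lines = [&#34;# Agent Details&#34;, &#34;|Capability|Setting|&#34;, &#34;|-|-|&#34;]
    for name, value in sorted(capabilities.items()):
        lines.append(&#34;|&#34; + name + &#34;|&#34; + value + &#34;|&#34;)
    return &#34;\n&#34;.join(lines)

# Hard-coded sample values for illustration
caps = {&#34;Agent.OS&#34;: &#34;Windows_NT&#34;, &#34;AgentOSVersion&#34;: &#34;10.0&#34;}
print(build_description(caps))
</code></pre>
<p>The resulting string can then be passed as the <code>-description</code> parameter of the script above.</p>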
<p>So, hopefully, that gives you plenty of ideas to work with.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Setting up Snipe IT on Azure using Docker</title>
      <link>https://blog.richardfennell.net/posts/setting-up-snipe-it-on-azure/</link>
      <pubDate>Wed, 09 Aug 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-up-snipe-it-on-azure/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;I have recently been looking at getting &lt;a href=&#34;https://github.com/snipe/snipe-it&#34;&gt;Snipe-IT&lt;/a&gt; running on Azure using a Docker container. Though the documentation for this project is good, the detail for the Azure setup is a little lacking. So I thought I would document the steps &lt;a href=&#34;https://rikhepworth.com/&#34;&gt;Rik Hepworth&lt;/a&gt; and I took to get it working.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This is a simple configuration to get it working, it can obviously get a lot more complex with the setup of VNETs etc.&lt;/li&gt;
&lt;li&gt;This post documents the manual process; best practice, and the next step, will be to get it all automated with a Bicep/ARM template. For an example of this see &lt;a href=&#34;https://gist.github.com/rfennell/c0aca11e656486b3fdb57845b18b9e3f&#34;&gt;this Gist&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;
&lt;h2 id=&#34;create-an-azure-mysql-paas-instance&#34;&gt;Create an Azure MySQL PaaS instance&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Open the Azure Portal&lt;/li&gt;
&lt;li&gt;Create a new &amp;lsquo;Azure Database for MySQL flexible server&amp;rsquo; in a new resource group
&lt;ul&gt;
&lt;li&gt;Provide a name for the instance&lt;/li&gt;
&lt;li&gt;Set your region&lt;/li&gt;
&lt;li&gt;Workload Type - for this test I used the lowest, &amp;lsquo;for development or hobby projects&amp;rsquo;&lt;/li&gt;
&lt;li&gt;Set the MySQL username and password&lt;/li&gt;
&lt;li&gt;For networking pick &amp;lsquo;allow public access&amp;rsquo; and &amp;lsquo;allow public access for any Azure service&amp;rsquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;You can add your Client IP address to the firewall rules if you want to be able to connect to the DB from your local machine, but this is not essential. I had enabled this to do some testing with a locally hosted Docker instance.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>I have recently been looking at getting <a href="https://github.com/snipe/snipe-it">Snipe-IT</a> running on Azure using a Docker container. Though the documentation for this project is good, the detail for the Azure setup is a little lacking. So I thought I would document the steps <a href="https://rikhepworth.com/">Rik Hepworth</a> and I took to get it working.</p>
<blockquote>
<p><strong>Notes:</strong></p>
<ul>
<li>This is a simple configuration to get it working, it can obviously get a lot more complex with the setup of VNETs etc.</li>
<li>This post documents the manual process; best practice, and the next step, will be to get it all automated with a Bicep/ARM template. For an example of this see <a href="https://gist.github.com/rfennell/c0aca11e656486b3fdb57845b18b9e3f">this Gist</a></li>
</ul></blockquote>
<h2 id="create-an-azure-mysql-paas-instance">Create an Azure MySQL PaaS instance</h2>
<ol>
<li>Open the Azure Portal</li>
<li>Create a new &lsquo;Azure Database for MySQL flexible server&rsquo; in a new resource group
<ul>
<li>Provide a name for the instance</li>
<li>Set your region</li>
<li>Workload Type - for this test I used the lowest, &lsquo;for development or hobby projects&rsquo;</li>
<li>Set the MySQL username and password</li>
<li>For networking pick &lsquo;allow public access&rsquo; and &lsquo;allow public access for any Azure service&rsquo;</li>
</ul>
<blockquote>
<p>You can add your Client IP address to the firewall rules if you want to be able to connect to the DB from your local machine, but this is not essential. I had enabled this to do some testing with a locally hosted Docker instance.</p></blockquote>
</li>
<li>When the instance is created open it in the Azure Portal</li>
<li>Go to the Networking tab and download the SSL certificate <code>DigiCertGlobalRootCA.crt.pem</code></li>
<li>Go to the Database tab and create a new empty DB <code>snipeit</code></li>
</ol>
<h2 id="create-an-azure-storage-account">Create an Azure Storage Account</h2>
<ol>
<li>Open the Azure Portal</li>
<li>Create a new &lsquo;Storage Account&rsquo; in same resource group as used for the MySQL
<ul>
<li>Provide a name for the instance</li>
<li>Set your region</li>
<li>For networking pick &lsquo;allow public access&rsquo;</li>
</ul>
</li>
<li>When the resource is created open it in the Azure Portal</li>
<li>In storage explorer create a new <code>File Share</code> called <code>snipeit</code></li>
<li>Upload the SSL Cert <code>DigiCertGlobalRootCA.crt.pem</code> to this share</li>
<li>In storage explorer create a new <code>File Share</code> called <code>snipeit-logs</code></li>
</ol>
<h2 id="create-an-azure-web-app">Create an Azure Web App</h2>
<ol>
<li>
<p>Open the Azure Portal</p>
</li>
<li>
<p>Create a new &lsquo;Web App&rsquo; in same resource group as used for the MySQL</p>
<ul>
<li>Provide a name for the instance</li>
<li>Pick the publish type to be <code>Docker Container</code></li>
<li>Set your region</li>
<li>Create a pricing tier, I used a Linux <code>Basic B1</code> for this test</li>
<li>For the Docker settings I picked the following (though we override these later with a compose file)
<ul>
<li>Single Container</li>
<li>Docker Hub</li>
<li>With the image name <code>snipe/snipe-it:latest</code></li>
</ul>
</li>
</ul>
</li>
<li>
<p>When the resource is created open it in the Azure Portal</p>
</li>
<li>
<p>In the configuration <code>Path Mappings</code> I added a new Azure Storage Mount for the cert and other local storage</p>
<ul>
<li>Name - <code>snipeit</code></li>
<li>Mount Path - <code>/var/lib/snipeit</code></li>
<li>Type - <code>Azure Files</code> using the previously created file share</li>
</ul>
</li>
<li>
<p>In the configuration <code>Path Mappings</code> I added a new Azure Storage Mount for the logs</p>
<ul>
<li>Name - <code>snipeit-logs</code></li>
<li>Mount Path - <code>/var/www/html/storage/logs</code></li>
<li>Type - <code>Azure Files</code> using the previously created file share</li>
</ul>
</li>
<li>
<p>In the configuration <code>Application Settings</code> I added the following new <code>Application Settings</code></p>
<ul>
<li><code>MYSQL_DATABASE</code> to <code>snipeit</code>, matching the MySQL DB name</li>
<li><code>MYSQL_USER</code> to the username for the MySQL instance</li>
<li><code>MYSQL_PASSWORD</code> to the password for the MySQL instance</li>
<li><code>DB_CONNECTION</code> to <code>mysql</code></li>
<li><code>MYSQL_PORT_3306_TCP_ADDR</code> to the name of the MySQL instance <code>&lt;my-instance&gt;.mysql.database.azure.com</code></li>
<li><code>MYSQL_PORT_3306_TCP_PORT</code> to <code>3306</code></li>
<li><code>DB_SSL_IS_PAAS</code> to <code>true</code></li>
<li><code>DB_SSL</code> to <code>true</code></li>
<li><code>DB_SSL_CA_PATH</code> to <code>/var/lib/snipeit/DigiCertGlobalRootCA.crt.pem</code> matching the path to the SSL cert</li>
<li><code>APP_URL</code> to the URL of the Web App <code>https://&lt;my-instance&gt;.azurewebsites.net</code></li>
<li><code>APP_KEY</code> to a unique ID in the form <code>base64:6M3RwWh4re1FQGMTent3hON9D7ZJJDHxW1123456789=</code>. If you don&rsquo;t set this and then start the container whilst watching the log stream, you will see a new key generated which you can use</li>
<li><code>MAIL_DRIVER</code> to <code>smtp</code></li>
<li><code>MAIL_ENV_ENCRYPTION</code> to <code>tcp</code></li>
<li><code>MAIL_PORT_587_TCP_ADDR</code> to <code>smtp.sendgrid.net</code></li>
<li><code>MAIL_PORT_587_TCP_PORT</code> to <code>587</code></li>
<li><code>MAIL_ENV_USERNAME</code> to <code>apikey</code></li>
<li><code>MAIL_ENV_PASSWORD</code> your SendGrid API Key</li>
<li><code>MAIL_ENV_FROM_ADDR</code> to the email address Snipe-IT notifications should come from</li>
<li><code>MAIL_ENV_FROM_NAME</code> to <code>Snipe IT</code> or whatever you want the email to be from</li>
<li>You can also set the <code>APP_DEBUG</code> to <code>true</code> or <code>false</code>. If <code>true</code> this means more detailed error messages are shown in the Snipe-IT UI that do not appear in the log stream</li>
</ul>
</li>
<li>
<p>In the deployment center I picked <code>Docker Compose</code> and provided the following config to mount the storage</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">version</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;3&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">services</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">snipe-it</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">snipe/snipe-it:latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">snipeit:/var/lib/snipeit</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="l">snipeit-logs:/var/www/html/storage/logs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">snipeit</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">external</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">snipeit-logs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">external</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span></code></pre></div></li>
<li>
<p>Restart your Web App</p>
</li>
<li>
<p>And that should be it: the container should start and you should be able to access the Snipe-IT UI-based setup wizard via the URL of the Web App, as per the <a href="https://snipe-it.readme.io/docs/getting-started#:~:text=When%20your%20Snipe-IT,your%20Admin%20Settings.">product documentation</a></p>
</li>
</ol>
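<p>As an aside, if you would rather script the application settings from step 6 than set them in the portal, the same values can be applied with the Azure CLI&#39;s <code>az webapp config appsettings set</code> command. Here is a Python sketch that renders such a command (the resource names are placeholders, and only a subset of the settings is shown):</p>
<pre tabindex="0"><code class="language-python" data-lang="python">import shlex

def appsettings_command(resource_group, app_name, settings):
    # Render an &#39;az webapp config appsettings set&#39; call for the given settings
    pairs = &#34; &#34;.join(shlex.quote(k + &#34;=&#34; + v) for k, v in settings.items())
    return (&#34;az webapp config appsettings set --resource-group &#34; + resource_group +
            &#34; --name &#34; + app_name + &#34; --settings &#34; + pairs)

# Placeholder subset of the settings listed above
settings = {
    &#34;DB_CONNECTION&#34;: &#34;mysql&#34;,
    &#34;MYSQL_DATABASE&#34;: &#34;snipeit&#34;,
    &#34;DB_SSL&#34;: &#34;true&#34;,
    &#34;DB_SSL_CA_PATH&#34;: &#34;/var/lib/snipeit/DigiCertGlobalRootCA.crt.pem&#34;,
}
print(appsettings_command(&#34;snipeit-rg&#34;, &#34;my-snipeit-app&#34;, settings))
</code></pre>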
<h2 id="comments--tips">Comments &amp; Tips</h2>
<h3 id="mysql-ssl-certificate">MySQL SSL Certificate</h3>
<p>In my case my initial problems were down to the MySQL certificate: a mixture of initially not setting the environment variable, then setting the wrong one, and finally not having correctly mounted the storage to present the file. The problem was that in all cases you get the same unhelpful error message in the Snipe-IT UI</p>
<pre tabindex="0"><code>SQLSTATE[HY000] [2002]  (trying to connect via (null)) (SQL: select * from information_schema.tables where table_schema = snipeit and table_name = migrations and table_type = &#39;BASE TABLE&#39;)
</code></pre><p>&hellip; and there was nothing more useful in the Web App Log Stream (the container output). So I had to work out what was wrong by trial and error, until I tried setting <code>APP_DEBUG</code> to <code>true</code>. After that I started to see more useful error messages about invalid file paths in the UI.</p>
<h3 id="mysql-initial-migration">MySQL Initial Migration</h3>
<p>I also wasted time trying to get the initial DB creation migrations to work. I was getting the following error in the UI when the container was trying to create the DB Tables</p>
<blockquote>
<p><strong>Note:</strong> It appears that the tables are actually being created when the container starts, not when the button is pressed. The create tables button seems more of a checking tool.</p></blockquote>
<pre tabindex="0"><code>SQLSTATE[42000] Syntax error or access violation 1068 Multiple primary key defined
</code></pre><p>If I used <a href="https://www.mysql.com/products/workbench/">MySQL Workbench</a> to connect to the MySQL instance I could see that a few tables had been created, but not the complete set.</p>
<p>After much trial and error, and manually comparing the setup of two MySQL instances, the fix was to set the following MySQL Server Parameter in the Azure Portal to <code>OFF</code></p>
<ul>
<li><code>sql_generate_invisible_primary_key</code></li>
</ul>
<blockquote>
<p>After setting this parameter to <code>OFF</code> you need to delete the incorrectly created database, create a new empty database of the same name and rerun DB migration.</p></blockquote>
<p>I have no idea why my first Azure MySQL instance had this value set to <code>OFF</code> and my other instances had it set to <code>ON</code>. I guess I am lucky that at least one was set to <code>OFF</code> so I could work out what was wrong.</p>
<h3 id="logs-files">Logs Files</h3>
<p>The best place to check for logs is the Web App Log Stream, as this is where you will see the container output.</p>
<p>The same information is also available in the <code>laravel.log</code> file created in the Azure File Share, so you can check that for any errors.</p>
<h2 id="and-to-finish">And to Finish</h2>
<p>So, I hope these brief notes help someone else get Snipe-IT running on Azure a bit quicker than I did.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SonarCloud Azure DevOps PR Analysis fails with a 404 error</title>
      <link>https://blog.richardfennell.net/posts/sonar-cloud-pr-analysis-fails/</link>
      <pubDate>Thu, 03 Aug 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sonar-cloud-pr-analysis-fails/</guid>
      <description>&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;We recently had an issue on a project that had been paused for a few months.&lt;/p&gt;
&lt;p&gt;When we restarted the project we found that the SonarCloud PR analysis, running via an Azure DevOps YAML pipeline, was failing with a 404 error. The strange thing was that the same pipeline running analysis of the main trunk or the branch the PR related to worked without error.&lt;/p&gt;
&lt;h1 id=&#34;the-solution&#34;&gt;The Solution&lt;/h1&gt;
&lt;p&gt;The issue was fixed by regenerating the &lt;a href=&#34;https://docs.sonarcloud.io/getting-started/azure-devops/&#34;&gt;SonarCloud PAT that was registered in the Azure DevOps project&amp;rsquo;s Service Connection&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="the-problem">The Problem</h1>
<p>We recently had an issue on a project that had been paused for a few months.</p>
<p>When we restarted the project we found that the SonarCloud PR analysis, running via an Azure DevOps YAML pipeline, was failing with a 404 error. The strange thing was that the same pipeline running analysis of the main trunk or the branch the PR related to worked without error.</p>
<h1 id="the-solution">The Solution</h1>
<p>The issue was fixed by regenerating the <a href="https://docs.sonarcloud.io/getting-started/azure-devops/">SonarCloud PAT that was registered in the Azure DevOps project&rsquo;s Service Connection</a>.</p>
<p>For good measure we also replaced the Azure DevOps PAT registered in SonarCloud to allow it to decorate the Azure DevOps PRs.</p>
<p>We don&rsquo;t think either PAT had expired, as they both worked for the main and branch SonarCloud analysis, so we have no idea why this fixed things. But it did, and that is the important thing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for metadata generation failed exit code: 2147450750 loading wrong version of DLLs when building Azure Functions</title>
      <link>https://blog.richardfennell.net/posts/fix-for-metadata-generation-failed-exit-code-2147450750-building-azure-functions/</link>
      <pubDate>Wed, 19 Jul 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-metadata-generation-failed-exit-code-2147450750-building-azure-functions/</guid>
      <description>&lt;h2 id=&#34;the-problem&#34;&gt;The Problem&lt;/h2&gt;
&lt;p&gt;Recently an Azure DevOps Pipeline for a .NET 6 based Azure Functions started to fail on some of our self-hosted build agents with the error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;##[error]C:\Users\Administrator\.nuget\packages\microsoft.azure.webjobs.script.extensionsmetadatagenerator\4.0.1\build\Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator.targets(37,5): Error : Metadata generation failed. Exit code: &amp;#39;-2147450750&amp;#39; Error: &amp;#39;Failed to load the dll from [C:\hostedtoolcache\windows\dotnet\shared\Microsoft.NETCore.App\3.1.32\hostpolicy.dll], HRESULT: 0x800700C1An error occurred while loading required library hostpolicy.dll from [C:\hostedtoolcache\windows\dotnet\shared\Microsoft.NETCore.App\3.1.32]&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The pipeline itself was simple, just repeating the steps a developer would use locally&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;UseDotNet@2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;Use .NET 6&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;packageType&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;sdk&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;version&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;m&#34;&gt;6.&lt;/span&gt;&lt;span class=&#34;l&#34;&gt;x&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;performMultiLevelLookup&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;DotNetCoreCLI@2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;dotnet restore&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;command&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;restore&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;projects&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;$(Build.SourcesDirectory)/src/Api.sln&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;feedsToUse&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;select&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;vstsFeed&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;aaa33827-92e2-45a0-924a-925b0d6344677&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;c&#34;&gt;# organisation-level feed&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;DotNetCoreCLI@2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;.NET Build&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;command&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;build&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;projects&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;$(Build.SourcesDirectory)/src/Api.sln&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;arguments&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s2&#34;&gt;&amp;#34;--configuration ${{ parameters.buildConfiguration }} --no-restore&amp;#34;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;h2 id=&#34;the-cause&#34;&gt;The Cause&lt;/h2&gt;
&lt;p&gt;The issue was that the &lt;code&gt;dotnet build&lt;/code&gt; was picking up a .NET 3.1 version of the &lt;code&gt;hostpolicy.dll&lt;/code&gt; from the cache. This was even though the pipeline was set to use .NET 6, and I could see both .NET 3.1 and .NET 6 SDKs in the cache folder.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The Problem</h2>
<p>Recently an Azure DevOps Pipeline for a .NET 6 based Azure Function started to fail on some of our self-hosted build agents with the error:</p>
<pre tabindex="0"><code>##[error]C:\Users\Administrator\.nuget\packages\microsoft.azure.webjobs.script.extensionsmetadatagenerator\4.0.1\build\Microsoft.Azure.WebJobs.Script.ExtensionsMetadataGenerator.targets(37,5): Error : Metadata generation failed. Exit code: &#39;-2147450750&#39; Error: &#39;Failed to load the dll from [C:\hostedtoolcache\windows\dotnet\shared\Microsoft.NETCore.App\3.1.32\hostpolicy.dll], HRESULT: 0x800700C1An error occurred while loading required library hostpolicy.dll from [C:\hostedtoolcache\windows\dotnet\shared\Microsoft.NETCore.App\3.1.32]&#39;
</code></pre><p>The pipeline itself was simple, just repeating the steps a developer would use locally:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">UseDotNet@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;Use .NET 6&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">packageType</span><span class="p">:</span><span class="w"> </span><span class="l">sdk</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">version</span><span class="p">:</span><span class="w"> </span><span class="m">6.</span><span class="l">x</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">performMultiLevelLookup</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DotNetCoreCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;dotnet restore&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="l">restore</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">projects</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.SourcesDirectory)/src/Api.sln&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">feedsToUse</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;select&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">vstsFeed</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;aaa33827-92e2-45a0-924a-925b0d6344677&#34;</span><span class="w"> </span><span class="c"># organisation-level feed</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">DotNetCoreCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;.NET Build&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;build&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">projects</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(Build.SourcesDirectory)/src/Api.sln&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">arguments</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;--configuration ${{ parameters.buildConfiguration }} --no-restore&#34;</span><span class="w">
</span></span></span></code></pre></div><h2 id="the-cause">The Cause</h2>
<p>The issue was that the <code>dotnet build</code> was picking up a .NET 3.1 version of the <code>hostpolicy.dll</code> from the cache. This was even though the pipeline was set to use .NET 6, and I could see both .NET 3.1 and .NET 6 SDKs in the cache folder.</p>
<p>Note that the .NET 3.1 version was present on the build agent because this self-hosted agent had been used for .NET 3.1 builds in the past.</p>
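<p>As a quick check, you can ask the cached copy of <code>dotnet</code> itself what it will resolve. This is just a diagnostic sketch (the path is the tool cache location from the error above):</p>
<pre tabindex="0"><code># List the SDKs and runtimes the cached dotnet install can see
C:\hostedtoolcache\windows\dotnet\dotnet.exe --list-sdks
C:\hostedtoolcache\windows\dotnet\dotnet.exe --list-runtimes
</code></pre>
<p>If a 3.1 runtime shows up here when you expected only .NET 6, the agent&rsquo;s cache is the likely culprit.</p>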
<p>This would not have occurred with a Microsoft hosted agent as they are rebuilt between each run. Even though we build our agent VM images using the same <a href="https://blogs.blackmarble.co.uk/rfennell/creating-hyper-v-hosted-azure-devops-private-agents-based-on-the-same-vm-images-as-used-by-microsoft-for-their-hosted-agents/">Packer process as used for the Microsoft hosted agents</a>, we do not rebuild between runs, so the cache can contain a variety of tools and SDKs from past runs. An advantage of this approach is that it can speed up your build times, as you don&rsquo;t have to download all the tools each time, but it can lead to issues like this.</p>
<h2 id="the-solution">The Solution</h2>
<p>The solution was simple: I deleted the cache on the build agent by removing the contents of the folder <code>C:\hostedtoolcache\windows\dotnet</code> on all our build agents.</p>
<p>Once this was done, the build worked as expected.</p>
<p>Given we don&rsquo;t create .NET 3.1 projects any more, deleting the old cache should be enough. However, I can see scenarios, e.g. building legacy projects, where a more complex solution to keep the cache &lsquo;cleaner&rsquo; might be needed, but I will leave that for another day.</p>
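<p>In such a scenario, a more selective prune could remove only the out-of-support 3.1 folders rather than the whole cache. The following PowerShell is only a sketch of that idea, not part of the fix we actually used; the paths and the version pattern are assumptions:</p>
<pre tabindex="0"><code># Sketch: remove only cached .NET 3.1 SDKs and runtimes, leaving newer versions alone
$roots = @(
    "C:\hostedtoolcache\windows\dotnet\sdk",
    "C:\hostedtoolcache\windows\dotnet\shared\Microsoft.NETCore.App"
)
foreach ($root in $roots) {
    if (Test-Path -Path $root) {
        Get-ChildItem -Path $root -Directory |
            Where-Object { $_.Name -like "3.1.*" } |
            ForEach-Object {
                Write-Host "Deleting $($_.FullName)"
                Remove-Item -Path $_.FullName -Recurse -Force -ErrorAction Ignore
            }
    }
}
</code></pre>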
<p><strong>Update: 17 Aug 2023 was that other day&hellip;</strong></p>
<p>We automated this process with this YAML fragment at the start of the job before any version of .NET is installed</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl">  <span class="p">-</span> <span class="n">powershell</span><span class="err">:</span> <span class="p">|</span>
</span></span><span class="line"><span class="cl">      <span class="c"># Delete the contents of the hostedtoolcache\windows\dotnet cache</span>
</span></span><span class="line"><span class="cl">      <span class="nv">$folder</span> <span class="p">=</span> <span class="nb">test-path</span> <span class="n">-path</span> <span class="s2">&#34;c:\hostedtoolcache\windows\dotnet&#34;</span>
</span></span><span class="line"><span class="cl">      <span class="k">if</span> <span class="p">(</span><span class="nv">$folder</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nb">write-host</span> <span class="s2">&#34;Deleting c:\hostedtoolcache\windows\dotnet&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="nb">Remove-Item</span> <span class="n">-Path</span> <span class="s2">&#34;c:\hostedtoolcache\windows\dotnet\*&#34;</span> <span class="n">-Recurse</span> <span class="n">-Force</span> <span class="n">-ErrorAction</span> <span class="n">Ignore</span>
</span></span><span class="line"><span class="cl">      <span class="p">}</span>
</span></span></code></pre></div>]]></content:encoded>
    </item>
    <item>
      <title>Update on sending social media posts from Hugo based static site</title>
      <link>https://blog.richardfennell.net/posts/update-on-sending-social-media-posts-from-hugo-based-static-site/</link>
      <pubDate>Thu, 13 Jul 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-on-sending-social-media-posts-from-hugo-based-static-site/</guid>
      <description>&lt;p&gt;Around a year ago I posted on &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/social-media-posts-after-migrating-from-wordpress-to-hugo/&#34;&gt;&amp;lsquo;Social Media Posts after Migrating from WordPress to Hugo Static Pages&amp;rsquo;&lt;/a&gt;. Recently I have found that the Twitter functionality in my Logic App was failing.&lt;/p&gt;
&lt;p&gt;Turns out this was due to the changes in the &lt;a href=&#34;https://developer.twitter.com/en/products/twitter-api&#34;&gt;Twitter Free API&lt;/a&gt;, with them moving from V1 to V2, which requires OAuth authentication as opposed to a Bearer token.&lt;/p&gt;
&lt;p&gt;In essence, the problem is that the built-in &lt;a href=&#34;https://learn.microsoft.com/en-us/samples/azure-samples/azure-serverless-twitter-subscription/azure-serverless-twitter-subscription/&#34;&gt;Logic Apps Twitter Connector&lt;/a&gt; only supports the V1 Twitter API. So the only option was to create my own custom solution.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Around a year ago I posted on <a href="https://blogs.blackmarble.co.uk/rfennell/social-media-posts-after-migrating-from-wordpress-to-hugo/">&lsquo;Social Media Posts after Migrating from WordPress to Hugo Static Pages&rsquo;</a>. Recently I found that the Twitter functionality in my Logic App was failing.</p>
<p>Turns out this was due to the changes in the <a href="https://developer.twitter.com/en/products/twitter-api">Twitter Free API</a>, with them moving from V1 to V2, which requires OAuth authentication as opposed to a Bearer token.</p>
<p>In essence, the problem is that the built-in <a href="https://learn.microsoft.com/en-us/samples/azure-samples/azure-serverless-twitter-subscription/azure-serverless-twitter-subscription/">Logic Apps Twitter Connector</a> only supports the V1 Twitter API. So the only option was to create my own custom solution.</p>
<p>This I have done using an Azure Function called from my existing Logic App. The code in the Function is shown below.</p>
<script src="https://gist.github.com/rfennell/d9d89f65725f2a74a21845ae90468e27.js"></script>
<blockquote>
<p><strong>Note:</strong> For this function code to work you also need to upload a <code>function.proj</code> file that adds a reference to the OAuth.net package:</p>
<pre tabindex="0"><code>  &lt;Project Sdk=&#34;Microsoft.NET.Sdk&#34;&gt;
   &lt;PropertyGroup&gt;
       &lt;TargetFramework&gt;netstandard2.0&lt;/TargetFramework&gt;
     &lt;/PropertyGroup&gt;
     &lt;ItemGroup&gt;
       &lt;PackageReference Include=&#34;OAuth.net&#34; Version=&#34;1.7.0&#34; /&gt;
     &lt;/ItemGroup&gt;
   &lt;/Project&gt;
</code></pre></blockquote>
<p>The function is called from my Logic App using the built-in Azure Function connector, which nicely encapsulates all the custom code away from my Logic App.</p>
<p><img alt="Logic App" loading="lazy" src="/images/rfennell/LogicAppScreenShot2.png"></p>
<p>I guess it is time to consider adding Threads and Bluesky to the Logic App, but I will leave that for another day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to run your own maintenance job on Azure DevOps pipelines</title>
      <link>https://blog.richardfennell.net/posts/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/</link>
      <pubDate>Wed, 12 Jul 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines/</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Updated:&lt;/strong&gt; 19 Jul 2023 - Revised the post to use Az CLI Task as opposed to a PowerShell Task&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Updated:&lt;/strong&gt; 29 Aug 2024 - Revised the PowerShell as the original version was not working. Also see the &lt;a href=&#34;https://blog.richardfennell.net/posts/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines-revisted/&#34;&gt;follow-up post on using Workload Identity federation&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Azure DevOps Pipelines have a built-in mechanism to run &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops&amp;amp;tabs=yaml%2Cbrowser#what-is-a-maintenance-job&#34;&gt;maintenance jobs&lt;/a&gt; on a schedule. This is great for cleaning out old temporary data, but what if you want to run your own maintenance job?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<blockquote>
<p><strong>Updated:</strong> 19 Jul 2023 - Revised the post to use Az CLI Task as opposed to a PowerShell Task</p>
<p><strong>Updated:</strong> 29 Aug 2024 - Revised the PowerShell as the original version was not working. Also see the <a href="https://blog.richardfennell.net/posts/how-to-run-your-own-maintainance-job-on-azure-devops-pipelines-revisted/">follow-up post on using Workload Identity federation</a></p></blockquote>
<h2 id="background">Background</h2>
<p>Azure DevOps Pipelines have a built-in mechanism to run <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops&amp;tabs=yaml%2Cbrowser#what-is-a-maintenance-job">maintenance jobs</a> on a schedule. This is great for cleaning out old temporary data, but what if you want to run your own maintenance job?</p>
<p>Writing a pipeline to run on a schedule is not in itself difficult. The problem is how to schedule it so that it runs on every available agent in every available agent pool.</p>
<h2 id="implementation">Implementation</h2>
<p>There are a number of ways to tackle this, but the way I chose was to use two YAML pipelines and the <a href="https://learn.microsoft.com/en-us/azure/devops/cli/?view=azure-devops">Az DevOps CLI</a>.</p>
<h3 id="the-maintenance-job-pipeline">The Maintenance Job Pipeline</h3>
<p>The contents of the pipeline to run on each agent are down to your requirements. In my case I wanted to update the cached vulnerability databases for the OWASP Dependency Check tool.</p>
<p>The key point to note is that I expose a pair of parameters, aliased into variables, to target the pool and agent, so that they can be passed in from the calling pipeline.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-YAML" data-lang="YAML"><span class="line"><span class="cl"><span class="c"># This is the pipeline that runs any scheduled maintenance jobs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># we wish to run in addition to the built in Azure DevOps Maintenance jobs</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># The parameters to target each pool and agent</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">pool</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">agent</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># We cannot use the parameters directly else we get a &#39;A template expression is not allowed in this context&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># However, if we alias them with a variable they work</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">pool</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="l">${{parameters.pool}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">agent</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="l">${{parameters.agent}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">trigger</span><span class="p">:</span><span class="w"> </span><span class="l">none</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;$(pool)&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">demands</span><span class="p">:</span><span class="w"> </span><span class="l">Agent.Name -equals $(agent)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">dependency-check-build-task@6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;Vulnerability Scan Exploited Vulnerabilities update check&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">projectName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Maintainance&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">scanPath</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;.&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">format</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;HTML&#39;</span><span class="w">
</span></span></span></code></pre></div><h3 id="the-scheduler-job-pipeline">The Scheduler Job Pipeline</h3>
<blockquote>
<p><strong>Note:</strong> Previously I had run the script using an Azure DevOps PowerShell task, but this had the limitation that I had to pass a PAT of a user with enough permissions to access the organisation-level agent pools using the Az CLI. This was done using an environment variable injected into the PowerShell.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-YAML" data-lang="YAML"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">PowerShell@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">targetType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inline&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">script</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">     # all the script lines</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Trigger maintainance builds on all active agents&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"> </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   </span><span class="nt">AZURE_DEVOPS_EXT_PAT</span><span class="p">:</span><span class="w"> </span><span class="l">$(PAT)</span><span class="w">
</span></span></span></code></pre></div><p>I had hoped to use the <code>$(System.AccessToken)</code>, but though this works for some <code>az pipeline</code> commands it returns an empty set when querying the agent pools. I had assumed it was a permissions issue, but couldn&rsquo;t find the solution. Hence the use of a PAT, until I swapped to this solution using the Az CLI task.</p></blockquote>
<p>The maintenance pipeline, shown above, is called from the scheduled pipeline, shown below, which uses the Azure CLI task to run a script that calls the Az DevOps CLI to find all the agents to target.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-YAML" data-lang="YAML"><span class="line"><span class="cl"><span class="c"># Scheduler Pipeline</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">variables</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># filter to limit the scope of agent pools to consider</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">PoolNamePrefix</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;BM-&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="c"># the pipeline to run on each agent</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">BuildDefintion </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">value</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;ScheduledMaintenanceBuild&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">trigger</span><span class="p">:</span><span class="w"> </span><span class="l">none</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">schedules</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">cron</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;0 0 * * *&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Daily midnight build</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">branches</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">include</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="l">main</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">pool</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">MyAgentPool</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">AzureCLI@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># The Azure Subscription is the link to a Service Principal</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># with permission to access the agent pools and queue builds</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">azureSubscription</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;AzureDevOpsAgentAutomation&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">scriptType</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;pscore&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">scriptLocation</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;inlineScript&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inlineScript</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      write-host &#34;Find the agent pools with the prefix &#39;$(PoolNamePrefix)&#39;&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      $Pools = $(az pipelines pool list --organization $(System.TeamFoundationCollectionUri) --query &#34;[? (starts_with(name,&#39;$(PoolNamePrefix)&#39;))].[id,name]&#34; --output tsv)
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">      write-host &#34;$($Pools.count) found&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">      foreach ($pool in $pools) {
</span></span></span><span class="line"><span class="cl"><span class="sd">          # ugly but works with tsv format data
</span></span></span><span class="line"><span class="cl"><span class="sd">          $poolSplit = $pool.Split(&#34;`t&#34;)
</span></span></span><span class="line"><span class="cl"><span class="sd">          $poolID = $poolSplit[0]
</span></span></span><span class="line"><span class="cl"><span class="sd">          $poolName = $poolSplit[1]
</span></span></span><span class="line"><span class="cl"><span class="sd">          
</span></span></span><span class="line"><span class="cl"><span class="sd">          write-host &#34;Find the agents in the pool &#39;$poolName&#39;&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">          $Agents = az pipelines agent list --organization $(System.TeamFoundationCollectionUri) --pool-id $PoolID --query &#34;[?enabled].name&#34;  --output tsv 
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">          foreach ($Agent in $Agents) {
</span></span></span><span class="line"><span class="cl"><span class="sd">              $buildNameid = $(az pipelines run --organization $(System.TeamFoundationCollectionUri) --name $(BuildDefintion) --project $(System.TeamProject) --parameters &#34;pool=$PoolName&#34; &#34;agent=$Agent&#34; &#34;nvdapikey=$(nvdapikey)&#34; --query &#34;name&#34;  --output tsv )
</span></span></span><span class="line"><span class="cl"><span class="sd">              Write-host &#34;Queued build $buildNameid on $agent&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">          }
</span></span></span><span class="line"><span class="cl"><span class="sd">      }</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Trigger maintenance builds on all active agents&#39;</span><span class="w">
</span></span></span></code></pre></div><blockquote>
<p><strong>Note:</strong> One change is required when running this script in the Az CLI task, as opposed to the PowerShell task: you need to specify the <code>--organization</code> parameter, as it is not picked up automatically. If you omit it, you get the somewhat confusing error</p>
<p><code>ERROR: TF401019: The Git repository with name or identifier &lt;name of the  Git repo&gt; does not exist or you do not have permissions for the operation you are attempting.  Operation returned a 404 status code.</code></p></blockquote>
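<p>Rather than repeating the parameter on every call, the CLI defaults can instead be set once at the start of the inline script. A minimal sketch, assuming the same predefined pipeline variables; this fragment is illustrative, not taken from the pipeline above:</p>

```yaml
# Hypothetical inline script fragment: set the az devops defaults once,
# so later 'az pipelines' calls do not each need --organization
inlineScript: |
  az devops configure --defaults `
    organization=$(System.TeamFoundationCollectionUri) `
    project=$(System.TeamProject)
  # subsequent calls can now omit --organization and --project
  az pipelines pool list --output table
```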
<p>For this pipeline to run, you need to set up a Service Principal and a Service Connection for the Az CLI task to use. This is done as follows:</p>
<ol>
<li><a href="https://azuredevopslabs.com/labs/devopsserver/azureserviceprincipal/">Create a Service Principal in Azure AD and add it to the Azure DevOps organisation as an Azure DevOps Service connection</a></li>
</ol>
<blockquote>
<p><strong>Note:</strong> I used the <code>az ad sp create-for-rbac</code> command to create the service principal, but when I tried to add it as a service connection I got a 404 error about a lack of permissions in the subscription. It seems the issue was that the new service principal must be granted at least <code>read</code> permissions in the subscription being used to register it in Azure DevOps; this is not done by default.</p></blockquote>
<ol start="2">
<li>Add the new service principal as a user to the Azure DevOps organisation</li>
<li>Grant the new service principal <code>reader</code> permissions at the organisation <code>Agent Pool</code> level</li>
<li>In the Team Project that contains your maintenance pipelines, under Pipelines &gt; manage permissions, grant the new service principal the <code>queue builds</code> permission</li>
</ol>
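<p>Assuming the Az CLI, the first of these steps (together with the subscription <code>read</code> permission mentioned in the note above) can be sketched from the command line. The names and IDs below are placeholders, not from a real setup:</p>

```shell
# Sketch only: placeholder names/IDs; assumes the Az CLI and an account
# that can create service principals and assign roles in the subscription

# 1. Create the service principal (name is a placeholder)
az ad sp create-for-rbac --name AgentMaintenanceAutomation

# Grant it at least Reader on the subscription used to register the
# service connection -- without this the 404 error in the note occurs
az role assignment create --assignee <appId> --role Reader \
  --scope "/subscriptions/<subscriptionId>"

# Steps 2-4 (adding it as an organisation user, and granting the
# Agent Pool 'reader' and 'queue builds' permissions) are then done
# in the Azure DevOps web UI
```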
<h2 id="summary">Summary</h2>
<p>This is a simple way to run your own maintenance jobs on all agents in all pools. It is not the only way, but it is one that works for my current needs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Bit rot is killing my pipelines</title>
      <link>https://blog.richardfennell.net/posts/bit-rot-is-killing-my-pipelines/</link>
      <pubDate>Wed, 26 Apr 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bit-rot-is-killing-my-pipelines/</guid>
      <description>&lt;p&gt;In a modern hybrid cloud world we have to accept constant change as the norm. You can&amp;rsquo;t just build something and forget about it. You have to keep it up to date as newer tools/libraries appear. This is to at least address security issues, even if you don&amp;rsquo;t want to adopt the new features.&lt;/p&gt;
&lt;p&gt;So I am expecting a degree of maintenance work on my Azure DevOps pipelines. However, this is becoming more awkward as many of the tasks used by Azure Pipelines, even ones from Microsoft, are themselves not being maintained.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In a modern hybrid cloud world we have to accept constant change as the norm. You can&rsquo;t just build something and forget about it. You have to keep it up to date as newer tools/libraries appear. This is to at least address security issues, even if you don&rsquo;t want to adopt the new features.</p>
<p>So I am expecting a degree of maintenance work on my Azure DevOps pipelines. However, this is becoming more awkward as many of the tasks used by Azure Pipelines, even ones from Microsoft, are themselves not being maintained.</p>
<p>As well as tasks that run out-of-date versions of their underlying tools, there seems to be no end of tasks that report the following warning when run. This is a classic sign of a lack of maintenance, as the guidance it links to is a few years old:</p>
<pre tabindex="0"><code>##[warning]Task &#39;Some task&#39; (1.2.3) is using deprecated task execution handler. The task should use the supported task-lib: https://aka.ms/tasklib
</code></pre><p>Because of this lack of maintenance, I am becoming more reluctant to use tasks from the Azure DevOps Marketplace, especially if they are not from maintainers I trust. Rather, I am starting to favour simple PowerShell or Bash scripts to do the work, so that I can control the versioning of the underlying tools.</p>
<p>In the past I would have written my own custom tasks to provide these wrapper functions, but of late it does not seem worth the effort.</p>
<p>So, of late, writing scripts and including them, along with the tools they run, via a <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&amp;tabs=schema#define-a-repositories-resource">YAML Pipeline Repository Resource</a> just seems easier.</p>
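<p>As a sketch, this approach looks something like the following; the repository and script names are illustrative, not from a real project:</p>

```yaml
# Hypothetical pipeline fragment: scripts (and any pinned tools) live in
# a separate version-controlled repo, pulled in as a repository resource
resources:
  repositories:
  - repository: buildScripts          # alias used by the checkout step
    type: git                         # an Azure Repos Git repo
    name: MyProject/PipelineScripts   # placeholder project/repo name

steps:
- checkout: self
- checkout: buildScripts  # lands in $(Build.SourcesDirectory)/PipelineScripts
# run a version-controlled script instead of a Marketplace task
- pwsh: $(Build.SourcesDirectory)/PipelineScripts/run-scan.ps1
  displayName: 'Run maintenance script from the resource repo'
```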
<p>Am I alone in this change of approach?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving my Azure DevOps Pipeline generated social posts to Azure Logic Apps</title>
      <link>https://blog.richardfennell.net/posts/moving-my-azure-devops-pipeline-social-posts-to-logic-apps/</link>
      <pubDate>Tue, 25 Apr 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-my-azure-devops-pipeline-social-posts-to-logic-apps/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/social-media-posts-after-migrating-from-wordpress-to-hugo/&#34;&gt;posted&lt;/a&gt; a while ago about how I had automated the generation of social media posts for my static Hugo based website using Azure Logic Apps.&lt;/p&gt;
&lt;p&gt;The other place I auto-generate social media posts is from releases via my project&amp;rsquo;s Azure DevOps Pipeline builds. These use a YAML Pipeline Template that calls a &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Twitter&#34;&gt;Marketplace task to post to Twitter&lt;/a&gt; and a PowerShell task to  &lt;code&gt;Invoke-WebRequest&lt;/code&gt; to post to Mastodon.&lt;/p&gt;
&lt;p&gt;Recently the Twitter task started to fail, and given the recent changes to the Twitter API with the move to the &lt;a href=&#34;https://developer.twitter.com/en/support/twitter-api/v2&#34;&gt;V2 API&lt;/a&gt;, I decided a new solution was required.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="https://blogs.blackmarble.co.uk/rfennell/social-media-posts-after-migrating-from-wordpress-to-hugo/">posted</a> a while ago about how I had automated the generation of social media posts for my static Hugo based website using Azure Logic Apps.</p>
<p>The other place I auto-generate social media posts is from releases via my project&rsquo;s Azure DevOps Pipeline builds. These use a YAML Pipeline Template that calls a <a href="https://marketplace.visualstudio.com/items?itemName=petergroenewegen.PeterGroenewegen-Xpirit-Vsts-Release-Twitter">Marketplace task to post to Twitter</a> and a PowerShell task to  <code>Invoke-WebRequest</code> to post to Mastodon.</p>
<p>Recently the Twitter task started to fail, and given the recent changes to the Twitter API with the move to the <a href="https://developer.twitter.com/en/support/twitter-api/v2">V2 API</a>, I decided a new solution was required.</p>
<p>I realised the simplest solution was to use another Azure Logic App sharing the connectors I had already created for my website posts. To make this change, my post build YAML template was greatly simplified to:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">parameters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">buildNumber</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">string</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">extensionName</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">string</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">socialmediaLogicAppURL</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">string </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">steps</span><span class="p">:</span><span class="w">  
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Update the build number variable so the next build will be the next minor version</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">richardfennellBM.BM-VSTS-BuildUpdating-Tasks-DEV.BuildVariableTask-Task.BuildVariableTask@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Update Build Variable&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">variable</span><span class="p">:</span><span class="w"> </span><span class="l">Minor</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">mode</span><span class="p">:</span><span class="w"> </span><span class="l">Autoincrement</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">usedefaultcreds</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Get the PR title and hence the reason for the release</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">richardfennellBM.BM-VSTS-ArtifactDescription-Tasks-DEV.ArtifactDescriptionTask.ArtifactDescriptionTask@1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Get Git Artifact PR Reason&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">OutputText</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;OutputedText&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="c"># Post to the Logic App to create various social media posts</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">pwsh</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">   $msg = &#34;I have just released Version ${{parameters.buildNumber}} of my Azure DevOps Pipeline ${{parameters.extensionName}} http://bit.ly/VSTS-RF $(OutputedText) &#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">   write-host &#34;Posting message: $msg&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">   $uri = &#34;${{parameters.socialmediaLogicAppURL}}&#34; 
</span></span></span><span class="line"><span class="cl"><span class="sd">   $body = &#34;{ `&#34;Message`&#34;: `&#34;$msg`&#34;}&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">   
</span></span></span><span class="line"><span class="cl"><span class="sd">   Invoke-WebRequest -Uri $uri -Method POST -Body $body -Headers $headers -ContentType &#34;application/json&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Create social media posts about new release&#39;</span><span class="w">
</span></span></span></code></pre></div><p>I can now just update the Logic App to change to which social media platforms posts are generated.</p>
<p><img alt="Logic App Design" loading="lazy" src="/images/rfennell/LogicAppScreenShot1.png"></p>
<p>So, this is arguably a better solution, as I am using each tool for the job it was designed for, i.e. handing off the orchestration of external systems to Logic Apps as opposed to managing it in the build pipeline.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A more secure alternative to PAT tokens for accessing Azure DevOps Programmatically</title>
      <link>https://blog.richardfennell.net/posts/a-more-secure-alternative-to-pat-tokens-for-azure-devops/</link>
      <pubDate>Fri, 21 Apr 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-more-secure-alternative-to-pat-tokens-for-azure-devops/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;When working with Azure DevOps, you may need to access the &lt;a href=&#34;https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1&#34;&gt;REST API&lt;/a&gt; if you wish to perform scripted tasks such as creating work items, or generating reports. Historically, you had to use a &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&amp;amp;tabs=Windows&#34;&gt;Personal Access Token (PAT)&lt;/a&gt; to do this.&lt;/p&gt;
&lt;p&gt;If you look in my &lt;a href=&#34;https://github.com/rfennell/AzureDevOpsPowershell&#34;&gt;repo of useful Azure DevOps PowerShell scripts&lt;/a&gt; you will find all the scripts make use of a function that creates an authenticated &lt;code&gt;WebClient&lt;/code&gt; object using a passed in PAT token.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>When working with Azure DevOps, you may need to access the <a href="https://learn.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-7.1">REST API</a> if you wish to perform scripted tasks such as creating work items, or generating reports. Historically, you had to use a <a href="https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&amp;tabs=Windows">Personal Access Token (PAT)</a> to do this.</p>
<p>If you look in my <a href="https://github.com/rfennell/AzureDevOpsPowershell">repo of useful Azure DevOps PowerShell scripts</a> you will find all the scripts make use of a function that creates an authenticated <code>WebClient</code> object using a passed in PAT token.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="kd">function</span><span class="w"> </span><span class="nb">Get-WebClient</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="p">[</span><span class="nb">CmdletBinding</span><span class="p">()]</span>
</span></span><span class="line"><span class="cl">    <span class="k">param</span>
</span></span><span class="line"><span class="cl">    <span class="p">(</span>
</span></span><span class="line"><span class="cl">        <span class="nv">$pat</span>
</span></span><span class="line"><span class="cl">    <span class="p">)</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nv">$webclient</span> <span class="p">=</span> <span class="nb">new-object</span> <span class="n">System</span><span class="p">.</span><span class="py">Net</span><span class="p">.</span><span class="py">WebClient</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$webclient</span><span class="p">.</span><span class="py">Encoding</span> <span class="p">=</span> <span class="p">[</span><span class="no">System.Text.Encoding</span><span class="p">]::</span><span class="n">UTF8</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$encodedPat</span> <span class="p">=</span> <span class="p">[</span><span class="no">System.Convert</span><span class="p">]::</span><span class="n">ToBase64String</span><span class="p">([</span><span class="no">System.Text.Encoding</span><span class="p">]::</span><span class="n">UTF8</span><span class="p">.</span><span class="py">GetBytes</span><span class="p">(</span><span class="s2">&#34;:</span><span class="nv">$pat</span><span class="s2">&#34;</span><span class="p">))</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$webclient</span><span class="p">.</span><span class="py">Headers</span><span class="p">.</span><span class="py">Add</span><span class="p">(</span><span class="s2">&#34;Authorization&#34;</span><span class="p">,</span> <span class="s2">&#34;Basic </span><span class="nv">$encodedPat</span><span class="s2">&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">    <span class="k">return</span> <span class="nv">$webclient</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>which is used thus</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="nv">$wc</span> <span class="p">=</span> <span class="nb">Get-WebClient</span> <span class="n">-pat</span> <span class="s2">&#34;a-pat-string&#34;</span>
</span></span><span class="line"><span class="cl"><span class="nv">$result</span><span class="p">=</span> <span class="nv">$wc</span><span class="p">.</span><span class="py">DownloadString</span><span class="p">(</span><span class="s2">&#34;https://dev.azure.com/MyOrg/MyProject/_apis/build/builds&#34;</span><span class="p">)</span> <span class="p">|</span> <span class="nb">ConvertFrom-Json</span>
</span></span><span class="line"><span class="cl"><span class="nv">$result</span><span class="p">.</span><span class="py">value</span>
</span></span></code></pre></div><p>The problem with this approach is that the PAT tokens have to be managed. They expire after a period of time, so have to be regenerated and also, as they are in effect passwords, need to be stored securely.</p>
<h2 id="a-better-approach">A better approach</h2>
<p>A newly available and better approach is to use an <a href="https://learn.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/service-principal-managed-identity?view=azure-devops">Azure AD App Service Principal</a> to authenticate to Azure DevOps. This addresses the issues with PAT tokens, as App Service Principals do not expire and are defined securely in Azure AD.</p>
<p>The basic setup is as follows</p>
<ol>
<li>Create a new Azure AD App</li>
<li>Add the new App Service Principal to the Azure DevOps organisation as a user</li>
<li>Grant the App Service Principal the required permissions in Azure DevOps</li>
<li>Use the App Service Principal to authenticate programmatically to Azure DevOps, e.g. via the REST API</li>
</ol>
<p>The sample script hence becomes</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="kd">function</span><span class="w"> </span><span class="nb">Get-WebClient</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="p">[</span><span class="nb">CmdletBinding</span><span class="p">()]</span>
</span></span><span class="line"><span class="cl">    <span class="k">param</span>
</span></span><span class="line"><span class="cl">    <span class="p">(</span>
</span></span><span class="line"><span class="cl">        <span class="nv">$ClientID</span> <span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nv">$Secret</span>   <span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nv">$TenantID</span> 
</span></span><span class="line"><span class="cl">    <span class="p">)</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c"># This is a static value</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$AdoAppClientID</span> <span class="p">=</span> <span class="s2">&#34;499b84ac-1321-427f-aa17-267ca6975798/.default&#34;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nv">$loginUrl</span> <span class="p">=</span> <span class="s2">&#34;https://login.microsoftonline.com/</span><span class="nv">$tenantId</span><span class="s2">/oauth2/token&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$body</span> <span class="p">=</span> <span class="vm">@</span><span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="n">grant_type</span>    <span class="p">=</span> <span class="s2">&#34;client_credentials&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="n">client_id</span>     <span class="p">=</span> <span class="nv">$ClientID</span>
</span></span><span class="line"><span class="cl">        <span class="n">client_secret</span> <span class="p">=</span> <span class="nv">$Secret</span> 
</span></span><span class="line"><span class="cl">        <span class="n">resource</span>      <span class="p">=</span> <span class="nv">$AdoAppClientID</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$token</span> <span class="p">=</span> <span class="nb">Invoke-RestMethod</span> <span class="n">-Uri</span> <span class="nv">$loginUrl</span> <span class="n">-Method</span> <span class="n">POST</span> <span class="n">-Body</span> <span class="nv">$body</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    
</span></span><span class="line"><span class="cl">    <span class="nv">$webclient</span> <span class="p">=</span> <span class="nb">new-object</span> <span class="n">System</span><span class="p">.</span><span class="py">Net</span><span class="p">.</span><span class="py">WebClient</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$webclient</span><span class="p">.</span><span class="py">Encoding</span> <span class="p">=</span> <span class="p">[</span><span class="no">System.Text.Encoding</span><span class="p">]::</span><span class="n">UTF8</span>
</span></span><span class="line"><span class="cl">    <span class="nv">$webclient</span><span class="p">.</span><span class="py">Headers</span><span class="p">.</span><span class="py">Add</span><span class="p">(</span><span class="s2">&#34;Authorization&#34;</span><span class="p">,</span> <span class="s2">&#34;Bearer </span><span class="p">$(</span><span class="nv">$token</span><span class="p">.</span><span class="n">access_token</span><span class="p">)</span><span class="s2">&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">    <span class="k">return</span> <span class="nv">$webclient</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="nv">$wc</span> <span class="p">=</span> <span class="nb">Get-WebClient</span> <span class="n">-ClientID</span> <span class="s2">&#34;a string&#34;</span> <span class="n">-Secret</span> <span class="s2">&#34;a secret&#34;</span> <span class="n">-TenantID</span> <span class="s2">&#34;a tenant id&#34;</span>
</span></span><span class="line"><span class="cl"><span class="nv">$result</span><span class="p">=</span> <span class="nv">$wc</span><span class="p">.</span><span class="py">DownloadString</span><span class="p">(</span><span class="s2">&#34;https://dev.azure.com/MyOrg/MyProject/_apis/build/builds&#34;</span><span class="p">)</span> <span class="p">|</span> <span class="nb">ConvertFrom-Json</span>
</span></span><span class="line"><span class="cl"><span class="nv">$result</span><span class="p">.</span><span class="py">value</span>
</span></span></code></pre></div><p>Now, the observant amongst you will have noticed in this sample the <code>Get-WebClient</code> function still takes a secret, which is less than optimal. So, in most use-cases it is recommended that the token be retrieved using a certificate rather than a secret, but the process is basically the same. See the worked <a href="https://learn.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/service-principal-managed-identity?view=azure-devops">Microsoft example for details</a>.</p>
<p>The one potential downside of this approach is that the App Service Principal may require <a href="https://learn.microsoft.com/en-us/azure/devops/organizations/security/access-levels?view=azure-devops">a paid-for Azure DevOps license</a>, but this is not always the case; it depends on the API calls you will be making.</p>
<p>Broadly speaking, calls to get Work Item details can be done with free stakeholder licenses, but calls to get build or code details will probably require a basic license. However, remember that you get 5 free basic licenses, so there is a good chance you have one spare; at worst, they are only $6 a month.</p>
<p>So is this something that may make your programmatic access to Azure DevOps easier and more secure?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Downloading NuGet packages with &#39;System.Net.WebClient&#39; from an Azure DevOps Artifact feed</title>
      <link>https://blog.richardfennell.net/posts/downloading-nuget-packages-with-system.net.webclient/</link>
      <pubDate>Sat, 01 Apr 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/downloading-nuget-packages-with-system.net.webclient/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;We use &lt;a href=&#34;https://github.com/VirtualEngine/Lability&#34;&gt;Lability&lt;/a&gt; to build Windows Server images for our test labs. Lability makes use of &lt;a href=&#34;https://docs.microsoft.com/en-us/powershell/dsc/overview&#34;&gt;Desired State Configuration&lt;/a&gt; (DSC) to build the VM images. Part of this process is for Lability to download DSC modules, as ZIP files, from a NuGet feed such as &lt;a href=&#34;https://www.powershellgallery.com/&#34;&gt;PowerShell Gallery&lt;/a&gt; to inject into the created VM image.&lt;/p&gt;
&lt;p&gt;Historically, we have stored our own private DSC modules on an internally hosted NuGet server. However, we wanted to move these modules to a private &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/artifacts/quickstarts/nuget?view=azure-devops&#34;&gt;Azure DevOps Artifacts feed&lt;/a&gt;. The problem was that Lability does not support downloading of DSC modules from Azure DevOps Artifact feeds, whether they are public or private, because of the way the package URLs are constructed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>We use <a href="https://github.com/VirtualEngine/Lability">Lability</a> to build Windows Server images for our test labs. Lability makes use of <a href="https://docs.microsoft.com/en-us/powershell/dsc/overview">Desired State Configuration</a> (DSC) to build the VM images. Part of this process is for Lability to download DSC modules, as ZIP files, from a NuGet feed such as <a href="https://www.powershellgallery.com/">PowerShell Gallery</a> to inject into the created VM image.</p>
<p>Historically, we have stored our own private DSC modules on an internally hosted NuGet server. However, we wanted to move these modules to a private <a href="https://docs.microsoft.com/en-us/azure/devops/artifacts/quickstarts/nuget?view=azure-devops">Azure DevOps Artifacts feed</a>. The problem was that Lability does not support downloading of DSC modules from Azure DevOps Artifact feeds, whether they are public or private, because of the way the package URLs are constructed.</p>
<p>I am in the process of creating a PR for Lability to add support for Azure DevOps Artifact feeds, but I thought it worth this quick blog post on package URL formats as it might be useful to others.</p>
<h2 id="urls-for-powershell-gallery">URLs for PowerShell Gallery</h2>
<p>The URL format for downloading a package from the PowerShell Gallery is well known, being</p>
<pre tabindex="0"><code>https://www.powershellgallery.com/packages/&lt;package name&gt;/&lt;version&gt;
</code></pre><p>Enter this URL in a browser and the requested package will be downloaded.</p>
<h2 id="urls-for-azure-devops-artifacts">URLs for Azure DevOps Artifacts</h2>
<p>You might well expect the format for downloading a package from Azure DevOps Artifacts to be</p>
<pre tabindex="0"><code>https://pkgs.dev.azure.com/&lt;AzDo Org&gt;/_packaging/&lt;Feed Name&gt;/nuget/v2/Packages/&lt;package name&gt;/&lt;version&gt;
</code></pre><p>but it is not.</p>
<p>Stack Overflow suggests the URL format is</p>
<pre tabindex="0"><code>https://pkgs.dev.azure.com/&lt;AzDo Org&gt;/_packaging/&lt;Feed Name&gt;/nuget/v2/Packages(id=&lt;Package name&gt;,version=&lt;version&gt;)
</code></pre><p>However, this does not download a package, but rather an XML manifest file. From this manifest we can see a <code>&lt;content&gt;</code> element that contains the URL to the package, and it is an interesting format.</p>
<p>So for the manifest URL</p>
<pre tabindex="0"><code>https://pkgs.dev.azure.com/myorg/_packaging/myfeed/nuget/v2/Packages(id=xNetworking,version=5.7.0.0)
</code></pre><p>the content URL is</p>
<pre tabindex="0"><code>https://pkgs.dev.azure.com/myorg/_packaging/myfeed/nuget/v2?id=xnetworking&amp;version=5.7.0
</code></pre><p>Note that</p>
<ul>
<li>the package name is lower case</li>
<li>the four part version number is replaced with a three part version number</li>
<li>the XML encoded <code>&amp;amp;</code> value in the manifest is replaced with a simple <code>&amp;</code></li>
</ul>
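<p>These rules are simple enough to capture in a small PowerShell helper. The following is only an illustrative sketch (the function name, parameters, organisation and feed names are all placeholders); the gist below gives the full version.</p>

```powershell
# Hedged sketch: build the Azure DevOps Artifacts v2 download URL for a
# package, applying the three rules noted above. All names are illustrative.
function Get-AzDoPackageUrl {
    param(
        [string]$Organisation,
        [string]$Feed,
        [string]$PackageName,
        [string]$Version
    )
    # The package name must be lower case
    $name = $PackageName.ToLower()
    # A four-part version (e.g. 5.7.0.0) is reduced to three parts
    $parts = $Version.Split('.')
    if ($parts.Count -gt 3) {
        $Version = $parts[0..2] -join '.'
    }
    # Use a plain '&', not the XML-encoded '&amp;' seen in the manifest
    "https://pkgs.dev.azure.com/$Organisation/_packaging/$Feed/nuget/v2?id=$name&version=$Version"
}

Get-AzDoPackageUrl -Organisation 'myorg' -Feed 'myfeed' -PackageName 'xNetworking' -Version '5.7.0.0'
# https://pkgs.dev.azure.com/myorg/_packaging/myfeed/nuget/v2?id=xnetworking&version=5.7.0
```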
<p>The following gist is a function to build the required URL from the package name and version number and download the package as a ZIP file</p>
<script src="https://gist.github.com/rfennell/835575c96f54e27a3ba0816f2f8c2317.js"></script>
<p>Hope this helps someone and saves them some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Pinning specific Azure DevOps task versions</title>
      <link>https://blog.richardfennell.net/posts/pinning-specific-azure-devops-task-versions/</link>
      <pubDate>Wed, 08 Mar 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pinning-specific-azure-devops-task-versions/</guid>
      <description>&lt;p&gt;I make every effort to keep all my &lt;a href=&#34;https://marketplace.visualstudio.com/search?term=fennell&amp;amp;target=AzureDevOps&amp;amp;category=All%20categories&amp;amp;sortBy=Relevance&#34;&gt;Azure DevOps Pipeline extensions&lt;/a&gt; reliable, as I know they are used by many people, but mistakes happen.&lt;/p&gt;
&lt;p&gt;Yesterday I released an updated version of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;ReleaseNotes task&lt;/a&gt; that introduced a bug if the pipeline produced no artifacts. I am pleased to say I have fixed the bug, and addressed this gap in my test coverage.&lt;/p&gt;
&lt;p&gt;However, this did mean that for about 12 hours, if you were using this task in a pipeline that did not produce artifacts, perhaps one that just deployed artifacts consumed from other pipelines, you had a failing pipeline.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I make every effort to keep all my <a href="https://marketplace.visualstudio.com/search?term=fennell&amp;target=AzureDevOps&amp;category=All%20categories&amp;sortBy=Relevance">Azure DevOps Pipeline extensions</a> reliable, as I know they are used by many people, but mistakes happen.</p>
<p>Yesterday I released an updated version of my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">ReleaseNotes task</a> that introduced a bug if the pipeline produced no artifacts. I am pleased to say I have fixed the bug, and addressed this gap in my test coverage.</p>
<p>However, this did mean that for about 12 hours, if you were using this task in a pipeline that did not produce artifacts, perhaps one that just deployed artifacts consumed from other pipelines, you had a failing pipeline.</p>
<p>In this case all is not lost, as there is a feature of YAML pipelines I only recently discovered.</p>
<p>I knew you could select a specific major version of a task (as you can with Classic Builds and Releases) e.g.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">XplatGenerateReleaseNotes@3</span><span class="w">
</span></span></span></code></pre></div><p>or</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">XplatGenerateReleaseNotes@4</span><span class="w">
</span></span></span></code></pre></div><p>However, did you know you can also pin a specific version of a task e.g.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">XplatGenerateReleaseNotes@4.6.2</span><span class="w">
</span></span></span></code></pre></div><p>Thus allowing you to pick any version you wish, and not just the major version. A great way to lock down your pipelines to a known good version of a task, whether as a short term fix or a long term audit control.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting x86 .NET 3.x tests running on the latest Azure Devops hosted agents</title>
      <link>https://blog.richardfennell.net/posts/getting-x86-test-running-on-the-latest-azure-devops-hosted-agents/</link>
      <pubDate>Mon, 06 Mar 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-x86-test-running-on-the-latest-azure-devops-hosted-agents/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;At Black Marble we have our own private build agents, but they are built using the same &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/creating-hyper-v-hosted-azure-devops-private-agents-based-on-the-same-vm-images-as-used-by-microsoft-for-their-hosted-agents/&#34;&gt;Packer process as the Microsoft hosted ones&lt;/a&gt;. I recently rebuilt our agents to match the latest version of the hosted agents, and I ran into an issue with some .NET 3.1 based x86 MSTests. The tests were failing with the following error:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-text&#34; data-lang=&#34;text&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;A total of 34 test files matched the specified pattern.
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;##[error]Testhost process exited with error: A fatal error occurred. The required library hostfxr.dll could not be found.
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;##[error]If this is a self-contained application, that library should exist in [E:\Agent\_work\1\s\src\Ux.Common.UnitTests\bin\x86\Release\netcoreapp3.1\].
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;##[error]If this is a framework-dependent application, install the runtime in the global location [C:\Program Files (x86)\dotnet] or use the DOTNET_ROOT(x86) environment variable to specify the runtime location or register the runtime location in [HKLM\SOFTWARE\dotnet\Setup\InstalledVersions\x86\InstallLocation].
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;When I checked the folder &lt;code&gt;C:\Program Files (x86)\dotnet&lt;/code&gt; it was not there, .NET 3.1 x86 was no longer present on the agent.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>At Black Marble we have our own private build agents, but they are built using the same <a href="https://blogs.blackmarble.co.uk/rfennell/creating-hyper-v-hosted-azure-devops-private-agents-based-on-the-same-vm-images-as-used-by-microsoft-for-their-hosted-agents/">Packer process as the Microsoft hosted ones</a>. I recently rebuilt our agents to match the latest version of the hosted agents, and I ran into an issue with some .NET 3.1 based x86 MSTests. The tests were failing with the following error:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">A total of 34 test files matched the specified pattern.
</span></span><span class="line"><span class="cl">##[error]Testhost process exited with error: A fatal error occurred. The required library hostfxr.dll could not be found.
</span></span><span class="line"><span class="cl">##[error]If this is a self-contained application, that library should exist in [E:\Agent\_work\1\s\src\Ux.Common.UnitTests\bin\x86\Release\netcoreapp3.1\].
</span></span><span class="line"><span class="cl">##[error]If this is a framework-dependent application, install the runtime in the global location [C:\Program Files (x86)\dotnet] or use the DOTNET_ROOT(x86) environment variable to specify the runtime location or register the runtime location in [HKLM\SOFTWARE\dotnet\Setup\InstalledVersions\x86\InstallLocation].
</span></span></code></pre></div><p>When I checked the folder <code>C:\Program Files (x86)\dotnet</code> it was not there, .NET 3.1 x86 was no longer present on the agent.</p>
<p>This removal of the x86 .NET SDK from the Packer build appears to have occurred around the start of the year, as when I last rebuilt our agents in December 2022 they still got the x86 .NET SDK installed.</p>
<h2 id="solution">Solution</h2>
<p>I found the solution in <a href="https://github.com/microsoft/azure-pipelines-tasks/issues/16501">this GitHub issue</a>, to use an Azure DevOps task to install the .NET 3.1 x86 SDK as part of my build pipeline</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">UseDotNet@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="l">Install .NET 3.1 x86 to support vstest.console.exe</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">packageType</span><span class="p">:</span><span class="w"> </span><span class="l">sdk</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">version</span><span class="p">:</span><span class="w"> </span><span class="m">3.</span><span class="l">x</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">PROCESSOR_ARCHITECTURE</span><span class="p">:</span><span class="w"> </span><span class="l">x86</span><span class="w">
</span></span></span></code></pre></div><p>However, I found that it was not enough to just install the x86 SDK in this manner.</p>
<p>The <code>UseDotNet@2</code> task installs the SDK into <code>C:\hostedtoolcache\windows\dotnet</code> and sets the environment variable <code>DOTNET_ROOT</code> to point to this folder, so the SDK can be found by tools that need it.</p>
<p>The problem is that <code>vstest.console.exe</code> does not seem to be aware of this environment variable, and even if <code>DOTNET_ROOT</code> is set it still looks in the default <code>C:\Program Files (x86)\dotnet</code> folder.</p>
<p>So I also needed to add the following PowerShell inline script to my build pipeline to also set the <code>DOTNET_ROOT(x86)</code> environment variable. This extra step was also detailed towards the start of <a href="https://github.com/microsoft/azure-pipelines-tasks/issues/16501">the same GitHub issue when discussing manual install based workarounds</a>, but I had initially missed it:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">powershell</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        # Set the DOTNET_ROOT(x86) so vstest.console.exe can find the SDK
</span></span></span><span class="line"><span class="cl"><span class="sd">        # The UseDotNet@2 task only sets the platform-independent DOTNET_ROOT, which is not read by vstest.console.exe
</span></span></span><span class="line"><span class="cl"><span class="sd">        Write-Host &#34;Setting environment variable DOTNET_ROOT(x86)=$(Agent.ToolsDirectory)\dotnet&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        Write-Host &#34;##vso[task.setvariable variable=DOTNET_ROOT(x86)]$(Agent.ToolsDirectory)\dotnet&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Set `DOTNET_ROOT(x86)` environment variable&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">   
</span></span></span></code></pre></div><p>So together these two steps allowed my x86 MSTest tests to run successfully again.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting &#34;cannot find path&#34; error using Install-Package</title>
      <link>https://blog.richardfennell.net/posts/getting-cannot-find-path-error-using-install-package/</link>
      <pubDate>Mon, 27 Feb 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-cannot-find-path-error-using-install-package/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;I was recently trying to use PowerShellGet &lt;code&gt;Install-Package&lt;/code&gt; to install a module from an Azure DevOps Artifacts hosted PowerShell Gallery using the following script&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-powershell&#34; data-lang=&#34;powershell&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;c&#34;&gt;# For authentication use a PAT as the password, UID can be anything&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nv&#34;&gt;$PATcreds&lt;/span&gt; &lt;span class=&#34;p&#34;&gt;=&lt;/span&gt; &lt;span class=&#34;nb&#34;&gt;Get-Credential&lt;/span&gt; 
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;Register-PSRepository&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;-Name&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;BM&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;-SourceLocation&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;https://pkgs.dev.azure.com/&amp;lt;org&amp;gt;/_packaging/PowerShell/nuget/v2&amp;#39;&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;-PublishLocation&lt;/span&gt; &lt;span class=&#34;s1&#34;&gt;&amp;#39;https://pkgs.dev.azure.com/&amp;lt;org&amp;gt;/_packaging/PowerShell/nuget/v2&amp;#39;&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;-InstallationPolicy&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;Trusted&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;nb&#34;&gt;Install-Package&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;BlackMarble&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;.&lt;/span&gt;&lt;span class=&#34;py&#34;&gt;Package&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;-Source&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;BM&lt;/span&gt; &lt;span class=&#34;n&#34;&gt;-Credential&lt;/span&gt; &lt;span class=&#34;nv&#34;&gt;$PATcreds&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The script did not work; I was getting the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Install-Package : Cannot find the path &amp;lsquo;C:\Users\&amp;lt;user&amp;gt;\AppData\Local\Temp\936930114\BlackMarble.Package\BlackMarble.Package.0.3.79\BlackMarble.Package.psd1&amp;rsquo; because it does not exist.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;You could see a download progress bar that suggested the download had occurred, but no module was installed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>I was recently trying to use PowerShellGet <code>Install-Package</code> to install a module from an Azure DevOps Artifacts hosted PowerShell Gallery using the following script</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="c"># For authentication use a PAT as the password, UID can be anything</span>
</span></span><span class="line"><span class="cl"><span class="nv">$PATcreds</span> <span class="p">=</span> <span class="nb">Get-Credential</span> 
</span></span><span class="line"><span class="cl"><span class="nb">Register-PSRepository</span> <span class="n">-Name</span> <span class="n">BM</span> <span class="n">-SourceLocation</span> <span class="s1">&#39;https://pkgs.dev.azure.com/&lt;org&gt;/_packaging/PowerShell/nuget/v2&#39;</span> <span class="n">-PublishLocation</span> <span class="s1">&#39;https://pkgs.dev.azure.com/&lt;org&gt;/_packaging/PowerShell/nuget/v2&#39;</span> <span class="n">-InstallationPolicy</span> <span class="n">Trusted</span>
</span></span><span class="line"><span class="cl"><span class="nb">Install-Package</span> <span class="n">BlackMarble</span><span class="p">.</span><span class="py">Package</span> <span class="n">-Source</span> <span class="n">BM</span> <span class="n">-Credential</span> <span class="nv">$PATcreds</span>
</span></span></code></pre></div><p>The script did not work; I was getting the error</p>
<blockquote>
<p>Install-Package : Cannot find the path &lsquo;C:\Users\&lt;user&gt;\AppData\Local\Temp\936930114\BlackMarble.Package\BlackMarble.Package.0.3.79\BlackMarble.Package.psd1&rsquo; because it does not exist.</p></blockquote>
<p>You could see a download progress bar that suggested the download had occurred, but no module was installed.</p>
<h2 id="solution">Solution</h2>
<p>Turns out the answer was to not register the repository, but to use a URL in the <code>-Source</code> parameter to <code>Install-Package</code>.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-powershell" data-lang="powershell"><span class="line"><span class="cl"><span class="nb">Install-Package</span> <span class="n">BlackMarble</span><span class="p">.</span><span class="py">Package</span>  <span class="n">-Source</span> <span class="n">https</span><span class="err">:</span><span class="p">//</span><span class="n">pkgs</span><span class="p">.</span><span class="py">dev</span><span class="p">.</span><span class="py">azure</span><span class="p">.</span><span class="n">com</span><span class="p">/&lt;</span><span class="n">org</span><span class="p">&gt;/</span><span class="n">_packaging</span><span class="p">/</span><span class="n">PowerShell</span><span class="p">/</span><span class="n">nuget</span><span class="p">/</span><span class="n">v2</span> <span class="n">-Credential</span> <span class="nv">$PATcreds</span>
</span></span></code></pre></div><p>It sort of makes sense that URLs could work while aliases, added via <code>Register-PSRepository</code>, do not. Maybe a bug?</p>
<p>However, more strangely, I have found that if the alias is registered then the <code>Install-Package</code> fails even if a URL is used. The complete solution is therefore to first <code>Unregister-PSRepository</code> any alias that matches the URL you wish to use before running the <code>Install-Package</code> command.</p>
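<p>Putting both findings together, the complete workaround can be sketched as follows. This is only a sketch; the feed URL, package name and credential handling are the placeholders used above.</p>

```powershell
# Sketch of the complete workaround: unregister any repository alias that
# points at the feed, then install using the URL directly as the -Source.
$feedUrl = 'https://pkgs.dev.azure.com/<org>/_packaging/PowerShell/nuget/v2'

# Remove any registered repository that matches the feed URL
Get-PSRepository |
    Where-Object { $_.SourceLocation -eq $feedUrl } |
    ForEach-Object { Unregister-PSRepository -Name $_.Name }

# For authentication use a PAT as the password; the UID can be anything
$PATcreds = Get-Credential
Install-Package BlackMarble.Package -Source $feedUrl -Credential $PATcreds
```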
<p>All very strange.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What happens when you link an Azure DevOps Variable Group to an Azure Key Vault?</title>
      <link>https://blog.richardfennell.net/posts/what-happens-when-you-link-a-variable-group-to-key-vault/</link>
      <pubDate>Mon, 13 Feb 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-happens-when-you-link-a-variable-group-to-key-vault/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;It is a really useful feature that you can expose &lt;a href=&#34;https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&amp;amp;tabs=yaml#link-secrets-from-an-azure-key-vault&#34;&gt;Key Vault stored secrets as Azure DevOps pipeline variables via a variable group&lt;/a&gt;, but what happens when you do this? And what can you do if you try to expose too many variables?&lt;/p&gt;
&lt;p&gt;I was recently working on a system where there was an increasing number of Key Vault secrets that were being exposed as variables via a variable group. This was working fine, until I started getting warnings in the following form on Windows based Azure DevOps agents:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>It is a really useful feature that you can expose <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=azure-devops&amp;tabs=yaml#link-secrets-from-an-azure-key-vault">Key Vault stored secrets as Azure DevOps pipeline variables via a variable group</a>, but what happens when you do this? And what can you do if you try to expose too many variables?</p>
<p>I was recently working on a system where there was an increasing number of Key Vault secrets that were being exposed as variables via a variable group. This was working fine, until I started getting warnings in the following form on Windows based Azure DevOps agents:</p>
<blockquote>
<p>Environment variable &lsquo;VSTS_SECRET_VARIABLES&rsquo; exceeds the maximum supported length. Environment variable length: 32993 , Maximum supported length: 32766</p></blockquote>
<p>It was flagged as a warning, but in reality the pipeline failed as none of the secrets were being exposed as variables.</p>
<h2 id="analysis">Analysis</h2>
<p>When you link a variable group to a Key Vault, the secrets are exposed as environment variables. This is done automatically by the pipeline agent running the <code>AzureKeyVault@1</code> task (interestingly, not the newer <code>AzureKeyVault@2</code> task) prior to any other steps in the containing job. This task takes parameters for the Key Vault name and a comma-separated list filtering the secrets to expose. Both of these are derived from the settings of the variable group.</p>
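<p>In YAML terms, this implicit step behaves much like adding a task such as the following at the start of the job (the service connection, vault and secret names here are illustrative):</p>

```yaml
- task: AzureKeyVault@1
  inputs:
    azureSubscription: 'MyServiceConnection' # illustrative service connection name
    KeyVaultName: 'MyVault'
    SecretsFilter: 'Secret1,Secret2,Secret3' # derived from the variable group mappings
```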
<h2 id="workarounds">Workarounds</h2>
<p>The simplest workarounds are to either:</p>
<ul>
<li>Switch to a Linux based agent which has a much higher limit on the length of environment variables. A move that is not always possible depending on the other tasks in the job.</li>
<li>Or to reduce the number of secrets exposed as variables. This can be done by either removing secrets from the Key Vault or by removing secret mapping within the variable group. A valid solution, but one that requires manual management.</li>
</ul>
<p>This got me thinking, is there another way to build the secret filter list so it is more flexible than a manually managed comma separated list?</p>
<p>The answer is yes, you can use a PowerShell script to build the list of secrets to expose and then call the <code>AzureKeyVault</code> task directly.</p>
<p>Instead of exposing the Key Vault secrets via the variable group, you could use the following tasks.</p>
<ul>
<li>A PowerShell script is used to convert a wildcard-based filter into a comma-separated list of secrets (this does assume you have a consistent secret naming convention)</li>
<li>Then, use the same <code>AzureKeyVault</code> task to expose the secrets as environment variables.</li>
</ul>
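<p>The first of these steps might look something like the following sketch. It assumes the <code>Az.KeyVault</code> module and an authenticated Azure context, and the vault name and wildcard are illustrative; the gist below contains the full version.</p>

```powershell
# Sketch: expand a wildcard into the comma-separated SecretsFilter value
# expected by the AzureKeyVault task, then publish it as a pipeline variable.
$secretNames = Get-AzKeyVaultSecret -VaultName 'MyVault' |
    Where-Object { $_.Name -like 'MyApp-*' } |
    Select-Object -ExpandProperty Name

$filter = $secretNames -join ','
Write-Host "##vso[task.setvariable variable=SecretsFilter]$filter"
```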
<script src="https://gist.github.com/rfennell/6edb0ea400f23fdeb0c188b4011caf29.js"></script>
]]></content:encoded>
    </item>
    <item>
      <title>Handling return values from Azure Functions in Hugo static website</title>
      <link>https://blog.richardfennell.net/posts/hugo-static-website-azure-functions-and-return-values/</link>
      <pubDate>Fri, 20 Jan 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/hugo-static-website-azure-functions-and-return-values/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;I am using an Azure Function as a backend for processing form submissions, specifically a simple contact form, from a Hugo static website.&lt;/p&gt;
&lt;p&gt;I wanted to add reCAPTCHA support, as the site was generating too many spam emails. I also wanted to show different confirmation pages depending on whether the reCAPTCHA check passed or failed.&lt;/p&gt;
&lt;p&gt;There are a good few posts about using an Azure Function as a backend for a static website form. But what I could not find was how to handle the return value from the Azure Function.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>I am using an Azure Function as a backend for processing form submissions, specifically a simple contact form, from a Hugo static website.</p>
<p>I wanted to add reCAPTCHA support, as the site was generating too many spam emails. I also wanted to show different confirmation pages depending on whether the reCAPTCHA check passed or failed.</p>
<p>There are a good few posts about using an Azure Function as a backend for a static website form. But what I could not find was how to handle the return value from the Azure Function.</p>
<p>So as I have a solution, I thought a blog post would be a good idea to share it.</p>
<h2 id="the-solution">The solution</h2>
<p>On my contact form&rsquo;s Hugo layout page, I have a hidden iframe; this is used as the target for the HTML form, i.e. where the Azure Function posts its response back to. The Azure Function returns a simple string, either &ldquo;OK&rdquo; or an error message, depending on the outcome of the processing. The hidden target iframe has an onload event handler that checks the return value and redirects the user to a different page depending on the outcome.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-html" data-lang="html"><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">script</span> <span class="na">type</span><span class="o">=</span><span class="s">&#34;text/javascript&#34;</span><span class="p">&gt;</span><span class="kd">var</span> <span class="nx">submitted</span> <span class="o">=</span> <span class="kc">false</span><span class="p">;&lt;/</span><span class="nt">script</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">iframe</span> <span class="na">name</span><span class="o">=</span><span class="s">&#34;hidden_iframe&#34;</span> <span class="na">id</span><span class="o">=</span><span class="s">&#34;hidden_iframe&#34;</span> <span class="na">style</span><span class="o">=</span><span class="s">&#34;display:none;&#34;</span> <span class="na">onload</span><span class="o">=</span><span class="s">&#34;
</span></span></span><span class="line"><span class="cl"><span class="s">    if(submitted) {
</span></span></span><span class="line"><span class="cl"><span class="s">        const res = document.getElementById( &#39;hidden_iframe&#39; ).contentWindow.document.body.innerText;
</span></span></span><span class="line"><span class="cl"><span class="s">        if (res.includes(&#39;OK&#39;)) {
</span></span></span><span class="line"><span class="cl"><span class="s">          window.location=&#39;/confirmation/enquiry&#39;;
</span></span></span><span class="line"><span class="cl"><span class="s">        } else {
</span></span></span><span class="line"><span class="cl"><span class="s">          window.location=&#39;/confirmation/error&#39;;
</span></span></span><span class="line"><span class="cl"><span class="s">        }
</span></span></span><span class="line"><span class="cl"><span class="s">    }
</span></span></span><span class="line"><span class="cl"><span class="s">    &#34;</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl"><span class="p">&lt;/</span><span class="nt">iframe</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">form</span> <span class="na">name</span><span class="o">=</span><span class="s">&#34;contact&#34;</span> <span class="na">action</span><span class="o">=</span><span class="s">&#34;/api/GenericFormsHandler&#34;</span> <span class="na">method</span><span class="o">=</span><span class="s">&#34;POST&#34;</span> <span class="na">target</span><span class="o">=</span><span class="s">&#34;hidden_iframe&#34;</span> <span class="na">onsubmit</span><span class="o">=</span><span class="s">&#34;submitted=true;&#34;</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="p">&lt;</span><span class="nt">input</span> <span class="na">id</span><span class="o">=</span><span class="s">&#34;g-recaptcha-response&#34;</span> <span class="na">name</span><span class="o">=</span><span class="s">&#34;g-recaptcha-response&#34;</span> <span class="na">type</span><span class="o">=</span><span class="s">&#34;hidden&#34;</span> <span class="na">value</span><span class="o">=</span><span class="s">&#34;&#34;</span> <span class="p">/&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="p">&lt;</span><span class="nt">script</span> <span class="na">src</span><span class="o">=</span><span class="s">&#34;https://www.google.com/recaptcha/api.js?render={{.Site.Data.reCAPCHA.key}}&amp;hl=en&#34;</span>  <span class="p">&gt;&lt;/</span><span class="nt">script</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="p">&lt;</span><span class="nt">script</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">        <span class="k">if</span> <span class="p">(</span><span class="k">typeof</span> <span class="nx">grecaptcha</span> <span class="o">!==</span> <span class="s1">&#39;undefined&#39;</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nx">grecaptcha</span><span class="p">.</span><span class="nx">ready</span><span class="p">(</span><span class="kd">function</span> <span class="p">()</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">                <span class="nx">grecaptcha</span><span class="p">.</span><span class="nx">execute</span><span class="p">(</span><span class="s1">&#39;{{.Site.Data.reCAPCHA.key}}&#39;</span><span class="p">,</span> <span class="p">{</span> <span class="s1">&#39;action&#39;</span><span class="o">:</span> <span class="s1">&#39;submit&#39;</span> <span class="p">}).</span><span class="nx">then</span><span class="p">(</span><span class="kd">function</span> <span class="p">(</span><span class="nx">token</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">                    <span class="nb">document</span><span class="p">.</span><span class="nx">getElementById</span><span class="p">(</span><span class="s1">&#39;g-recaptcha-response&#39;</span><span class="p">).</span><span class="nx">value</span> <span class="o">=</span> <span class="nx">token</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">                <span class="p">});</span>
</span></span><span class="line"><span class="cl">            <span class="p">});</span>
</span></span><span class="line"><span class="cl">        <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="p">&lt;/</span><span class="nt">script</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="c">&lt;!-- all my input fields --&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="p">&lt;</span><span class="nt">input</span> <span class="na">type</span><span class="o">=</span><span class="s">&#34;submit&#34;</span> <span class="na">id</span><span class="o">=</span><span class="s">&#34;submitButton&#34;</span> <span class="na">value</span><span class="o">=</span><span class="s">&#34;Submit&#34;</span> <span class="p">/&gt;</span>
</span></span><span class="line"><span class="cl"><span class="p">&lt;/</span><span class="nt">form</span><span class="p">&gt;</span>
</span></span></code></pre></div><p>My <a href="https://learn.microsoft.com/en-us/azure/static-web-apps/add-api?tabs=vanilla-javascript">Azure Static WebSite is configured to contain a managed Azure Function</a> with an HTTP trigger &lsquo;/api/GenericFormsHandler&rsquo; to handle the forms processing.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-typescript" data-lang="typescript"><span class="line"><span class="cl"><span class="kr">import</span> <span class="p">{</span> <span class="nx">AzureFunction</span><span class="p">,</span> <span class="nx">Context</span><span class="p">,</span> <span class="nx">HttpRequest</span> <span class="p">}</span> <span class="kr">from</span> <span class="s2">&#34;@azure/functions&#34;</span>
</span></span><span class="line"><span class="cl"><span class="kr">import</span> <span class="nx">fetch</span> <span class="kr">from</span> <span class="s2">&#34;node-fetch&#34;</span><span class="p">;</span> <span class="c1">// needs to be installed with npm i node-fetch@2.6.1 
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>
</span></span><span class="line"><span class="cl"><span class="kr">const</span> <span class="nx">httpTrigger</span>: <span class="kt">AzureFunction</span> <span class="o">=</span> <span class="kr">async</span> <span class="kd">function</span> <span class="p">(</span><span class="nx">context</span>: <span class="kt">Context</span><span class="p">,</span> <span class="nx">req</span>: <span class="kt">HttpRequest</span><span class="p">)</span><span class="o">:</span> <span class="nx">Promise</span><span class="p">&lt;</span><span class="nt">void</span><span class="p">&gt;</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s1">&#39;An HTTP POST trigger function to process enquiry forms&#39;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">    <span class="kd">var</span> <span class="kt">object</span> <span class="o">=</span> <span class="p">{};</span>
</span></span><span class="line"><span class="cl">    <span class="kd">var</span> <span class="nx">returnValue</span> <span class="o">=</span> <span class="s2">&#34;OK&#34;</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">    <span class="kt">object</span><span class="p">[</span><span class="s2">&#34;events&#34;</span><span class="p">]</span> <span class="o">=</span> <span class="p">[];</span>
</span></span><span class="line"><span class="cl">    <span class="kd">var</span> <span class="nx">status</span> <span class="o">=</span> <span class="mi">200</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c1">// a very basic HTML form to object parser
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>    <span class="nx">req</span><span class="p">.</span><span class="nx">body</span><span class="p">.</span><span class="nx">split</span><span class="p">(</span><span class="s1">&#39;&amp;&#39;</span><span class="p">).</span><span class="nx">forEach</span><span class="p">(</span><span class="nx">field</span> <span class="o">=&gt;</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="kd">var</span> <span class="nx">pair</span> <span class="o">=</span> <span class="nx">field</span><span class="p">.</span><span class="nx">split</span><span class="p">(</span><span class="s2">&#34;=&#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">        <span class="kt">object</span><span class="p">[</span><span class="nx">pair</span><span class="p">[</span><span class="mi">0</span><span class="p">]]</span> <span class="o">=</span> <span class="nb">decodeURIComponent</span><span class="p">(</span><span class="nx">pair</span><span class="p">[</span><span class="mi">1</span><span class="p">]).</span><span class="nx">replace</span><span class="p">(</span><span class="sr">/\+/g</span><span class="p">,</span> <span class="s2">&#34; &#34;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">    <span class="p">})</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">&#34;Validating recaptcha token&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">    
</span></span><span class="line"><span class="cl">    <span class="c1">// the secret key is stored in the Azure Function App settings
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>    <span class="kd">var</span> <span class="nx">postData</span> <span class="o">=</span> <span class="sb">`secret=</span><span class="si">${</span><span class="nx">process</span><span class="p">.</span><span class="nx">env</span><span class="p">.</span><span class="nx">RECAPTCHA_SECRETKEY</span><span class="si">}</span><span class="sb">&amp;response=</span><span class="si">${</span><span class="kt">object</span><span class="p">[</span><span class="s2">&#34;g-recaptcha-response&#34;</span><span class="p">]</span><span class="si">}</span><span class="sb">`</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="sb">`reCAPTCHA request payload: </span><span class="si">${</span><span class="nx">JSON</span><span class="p">.</span><span class="nx">stringify</span><span class="p">(</span><span class="nx">postData</span><span class="p">)</span><span class="si">}</span><span class="sb">`</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="c1">// recaptcha validation only accepts POST requests using &#39;application/x-www-form-urlencoded&#39; content type
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>    <span class="kr">const</span> <span class="nx">response</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">fetch</span><span class="p">(</span><span class="s2">&#34;https://www.google.com/recaptcha/api/siteverify&#34;</span><span class="p">,</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nx">method</span><span class="o">:</span> <span class="s1">&#39;POST&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nx">body</span>: <span class="kt">postData</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="nx">headers</span><span class="o">:</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="s1">&#39;Content-Type&#39;</span><span class="o">:</span> <span class="s1">&#39;application/x-www-form-urlencoded&#39;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">            <span class="s1">&#39;Content-Length&#39;</span><span class="o">:</span> <span class="sb">`</span><span class="si">${</span><span class="nx">JSON</span><span class="p">.</span><span class="nx">stringify</span><span class="p">(</span><span class="nx">postData</span><span class="p">).</span><span class="nx">length</span><span class="si">}</span><span class="sb">`</span>
</span></span><span class="line"><span class="cl">        <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="p">});</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="nx">response</span><span class="p">.</span><span class="nx">ok</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">&#34;Error calling reCAPTCHA&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">        <span class="nx">returnValue</span> <span class="o">=</span> <span class="s2">&#34;Error&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="k">else</span> <span class="k">if</span> <span class="p">(</span><span class="nx">response</span><span class="p">.</span><span class="nx">status</span> <span class="o">&gt;=</span> <span class="mi">400</span><span class="p">)</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s1">&#39;HTTP Error from to reCAPTCHA: &#39;</span> <span class="o">+</span> <span class="nx">response</span><span class="p">.</span><span class="nx">status</span> <span class="o">+</span> <span class="s1">&#39; - &#39;</span> <span class="o">+</span> <span class="nx">response</span><span class="p">.</span><span class="nx">statusText</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">        <span class="nx">returnValue</span> <span class="o">=</span> <span class="s2">&#34;HTTPError&#34;</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="k">else</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">&#34;Successful call to reCAPTCHA&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">        <span class="kr">const</span> <span class="nx">data</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">response</span><span class="p">.</span><span class="nx">json</span><span class="p">();</span>
</span></span><span class="line"><span class="cl">        <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="sb">`reCAPTCHA response: </span><span class="si">${</span><span class="nx">JSON</span><span class="p">.</span><span class="nx">stringify</span><span class="p">(</span><span class="nx">data</span><span class="p">)</span><span class="si">}</span><span class="sb">`</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">        <span class="c1">// if the score is less than the minimum score (App settings) then we don&#39;t process the form
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>        <span class="k">if</span> <span class="p">(</span><span class="nx">data</span><span class="p">.</span><span class="nx">success</span> <span class="o">&amp;&amp;</span> <span class="nx">data</span><span class="p">.</span><span class="nx">score</span> <span class="o">&gt;=</span> <span class="nb">parseFloat</span><span class="p">(</span><span class="nx">process</span><span class="p">.</span><span class="nx">env</span><span class="p">.</span><span class="nx">RECAPTCHA_MINSCORE</span><span class="p">))</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">            <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">&#34;Sending email as reCAPTCHA detected a human&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">        
</span></span><span class="line"><span class="cl">            <span class="kr">const</span> <span class="nx">sgMail</span> <span class="o">=</span> <span class="kr">require</span><span class="p">(</span><span class="s1">&#39;@sendgrid/mail&#39;</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">            <span class="nx">sgMail</span><span class="p">.</span><span class="nx">setApiKey</span><span class="p">(</span><span class="nx">process</span><span class="p">.</span><span class="nx">env</span><span class="p">.</span><span class="nx">SENDGRID_API_KEY</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">            <span class="kr">const</span> <span class="nx">msg</span> <span class="o">=</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">                <span class="c1">// App Settings used for the from and to addresses
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>                <span class="nx">to</span>: <span class="kt">process.env.ENQUIRY_TOADDRESS</span><span class="p">,</span> 
</span></span><span class="line"><span class="cl">                <span class="kr">from</span><span class="o">:</span> <span class="nx">process</span><span class="p">.</span><span class="nx">env</span><span class="p">.</span><span class="nx">ENQUIRY_FROMADDRESS</span><span class="p">,</span> 
</span></span><span class="line"><span class="cl">                <span class="nx">html</span><span class="o">:</span> <span class="s2">&#34;&#34;</span> <span class="c1">// placeholder - add the content generated from the form fields
</span></span></span><span class="line"><span class="cl"><span class="c1"></span>            <span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">            <span class="k">await</span> <span class="nx">sgMail</span>
</span></span><span class="line"><span class="cl">                <span class="p">.</span><span class="nx">send</span><span class="p">(</span><span class="nx">msg</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">                <span class="p">.</span><span class="nx">then</span><span class="p">((</span><span class="nx">response</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">                    <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="sb">`Send Email returned  </span><span class="si">${</span><span class="nx">response</span><span class="p">[</span><span class="mi">0</span><span class="p">].</span><span class="nx">statusCode</span><span class="si">}</span><span class="sb">`</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">                    <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="nx">response</span><span class="p">[</span><span class="mi">0</span><span class="p">].</span><span class="nx">headers</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">                    <span class="nx">returnValue</span> <span class="o">=</span> <span class="s2">&#34;OK&#34;</span>
</span></span><span class="line"><span class="cl">                <span class="p">})</span>
</span></span><span class="line"><span class="cl">                <span class="p">.</span><span class="k">catch</span><span class="p">((</span><span class="nx">error</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">                    <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="sb">`ERROR sending email </span><span class="si">${</span><span class="nx">error</span><span class="si">}</span><span class="sb">`</span><span class="p">)</span>
</span></span><span class="line"><span class="cl">                    <span class="nx">returnValue</span> <span class="o">=</span> <span class="nx">error</span><span class="p">;</span>
</span></span><span class="line"><span class="cl">                <span class="p">})</span>
</span></span><span class="line"><span class="cl">		
</span></span><span class="line"><span class="cl">		<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">           <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">&#34;Not sending email as reCAPTCHA detected a bot&#34;</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">            <span class="nx">returnValue</span> <span class="o">=</span> <span class="s2">&#34;reCAPTCHA Error&#34;</span>
</span></span><span class="line"><span class="cl">        <span class="p">}</span>
</span></span><span class="line"><span class="cl">    <span class="p">}</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    <span class="nx">context</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="sb">`Setting return value as </span><span class="si">${</span><span class="nx">returnValue</span><span class="si">}</span><span class="sb">`</span><span class="p">);</span>
</span></span><span class="line"><span class="cl">    <span class="nx">context</span><span class="p">.</span><span class="nx">res</span> <span class="o">=</span> <span class="p">{</span>
</span></span><span class="line"><span class="cl">        <span class="nx">body</span>: <span class="kt">returnValue</span>
</span></span><span class="line"><span class="cl">    <span class="p">};</span>
</span></span><span class="line"><span class="cl"><span class="p">};</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="kr">export</span> <span class="k">default</span> <span class="nx">httpTrigger</span><span class="p">;</span>
</span></span></code></pre></div><h2 id="important">Important</h2>
<p>The key thing to note with this solution is that you get the Azure Function to write its return value into the hidden iframe on the calling page.</p>
<p>The only reason that this iframe content (the return value) can be read, using JavaScript on the form page, is that the Hugo static pages and the managed Azure Function are served from the same domain. So the browser&rsquo;s same-origin policy does not block the reading of the iframe contents, i.e. you get no console error messages of the form:</p>
<blockquote>
<p>SecurityError: Blocked a frame with origin &ldquo;<a href="http://www.example.com">http://www.example.com</a>&rdquo; from accessing a cross-origin frame.</p></blockquote>
<p>Hence, the logic in the <code>onload</code> handler can pick the correct confirmation page based on the value returned by the Azure Function.</p>
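<p>As an aside, the branching in the <code>onload</code> handler can be factored into a small pure function, which makes the redirect logic easy to unit test. A sketch only; the name <code>confirmationPath</code> is mine, not from the original page:</p>

```javascript
// Hypothetical helper mirroring the onload logic: given the text the
// Azure Function wrote into the hidden iframe, pick the confirmation page.
function confirmationPath(responseText) {
  // The function returns the literal string "OK" on success,
  // so a simple includes() check is enough.
  return responseText.includes('OK')
    ? '/confirmation/enquiry'
    : '/confirmation/error';
}

// In the page, the iframe's onload handler would then just do:
//   window.location = confirmationPath(iframe.contentWindow.document.body.innerText);
```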
<h2 id="conclusion">Conclusion</h2>
<p>So, not the most elegant solution, but it works.</p>
<p>I hope this post saves some other people some time.</p>
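<p>Incidentally, the &lsquo;very basic HTML form to object parser&rsquo; in the function above could be replaced with Node&rsquo;s built-in <code>URLSearchParams</code>, which handles both the percent-decoding and the <code>+</code>-to-space conversion of <code>application/x-www-form-urlencoded</code> bodies. A sketch, with an illustrative function name:</p>

```javascript
// Sketch: parse an application/x-www-form-urlencoded body into a plain
// object using the built-in URLSearchParams, instead of splitting on
// '&' and '=' by hand.
function parseFormBody(body) {
  // URLSearchParams is iterable as [key, value] pairs,
  // so Object.fromEntries turns it straight into an object.
  return Object.fromEntries(new URLSearchParams(body));
}
```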
]]></content:encoded>
    </item>
    <item>
      <title>Could not find assembly deploying a dotnet 6 console app</title>
      <link>https://blog.richardfennell.net/posts/could-not-find-assembly-deploying-dotnet6-app/</link>
      <pubDate>Fri, 13 Jan 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/could-not-find-assembly-deploying-dotnet6-app/</guid>
      <description>&lt;h2 id=&#34;problem&#34;&gt;Problem&lt;/h2&gt;
&lt;p&gt;After deploying a dotnet 6 console app to a production server, I got the following error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Exception: [Could not load file or assembly &amp;lsquo;System.Data.Odbc, Version=6.0.0.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51&amp;rsquo;. The system cannot find the file specified.]&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The strange thing was that the same EXE, built on a Microsoft-hosted Azure DevOps build agent and deployed to both systems using the same Azure DevOps Pipeline, was working on the test server.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="problem">Problem</h2>
<p>After deploying a dotnet 6 console app to a production server, I got the following error:</p>
<blockquote>
<p>Exception: [Could not load file or assembly &lsquo;System.Data.Odbc, Version=6.0.0.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51&rsquo;. The system cannot find the file specified.]</p></blockquote>
<p>The strange thing was that the same EXE, built on a Microsoft-hosted Azure DevOps build agent and deployed to both systems using the same Azure DevOps Pipeline, was working on the test server.</p>
<p>I tried many things to work out what the problem was. My first thought was <a href="https://learn.microsoft.com/en-us/dotnet/framework/tools/fuslogvw-exe-assembly-binding-log-viewerdot-trace">Fuslogvw</a>, but I found out I should be using <a href="https://github.com/dotnet/diagnostics/blob/main/documentation/dotnet-trace-instructions.md">dotnet-trace</a>. However, this showed nothing other than that the assembly was not found, i.e. no downstream dependencies were missing.</p>
<p>I then manually pulled the same deployment ZIP onto the server, unpacked it and ran the EXE from the command line. It worked, so what was the difference?</p>
<h2 id="solution">Solution</h2>
<p>Eventually I spotted the problem: the <code>runtimes</code> folder was missing from the copy that was deployed via the pipeline.</p>
<p>My automated deployment looked like this:</p>
<p><img alt="Bad folder structure" loading="lazy" src="/images/rfennell/noruntime.png"></p>
<p>But my manual deployment had a <code>runtimes</code> folder containing the platform-specific folders:</p>
<p><img alt="Good folder structure" loading="lazy" src="/images/rfennell/runtime.png"></p>
<p>Remember, the same deployment pipeline was deploying the same artifact to both the test and production servers, and I could see the folder structure was correct in the build agent staging folder (where the ZIP was expanded prior to the copy). So why the difference?</p>
<p>And it got stranger: I redeployed the package using the same pipeline (just a re-run of the stage, not a complete new run) and this time the <code>runtimes</code> folder was there, and the EXE worked!</p>
<h2 id="conclusion">Conclusion</h2>
<p>I am just going to put this one down to a glitch in the matrix, perhaps related to debris in the target folder.</p>
<p>But the learning is: when you get missing assembly errors with a .NET 6 (or .NET Core) deployment, make sure the <code>runtimes</code> folder is present and contains the platform-specific folders.</p>
<p>Also, to be on the safe side, make sure your target folder is empty before copying in the new code, to avoid any debris from previous deployments.</p>
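<p>For example, if the copy is done with the Azure Pipelines <code>CopyFiles</code> task, its <code>CleanTargetFolder</code> input will empty the destination before the copy. A sketch only; the paths and variable are placeholders, not taken from my actual pipeline:</p>

```yaml
- task: CopyFiles@2
  displayName: 'Copy app to target (clean first)'
  inputs:
    SourceFolder: '$(Pipeline.Workspace)/drop'  # placeholder path
    Contents: '**'
    TargetFolder: '$(DeployFolder)'             # placeholder variable
    CleanTargetFolder: true  # empties the folder, avoiding debris from old deployments
```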
]]></content:encoded>
    </item>
    <item>
      <title>Book Review - Code: The Hidden Language of Computer Hardware and Software</title>
      <link>https://blog.richardfennell.net/posts/code-the-hidden-language-of-computer-hardware-and-software/</link>
      <pubDate>Wed, 11 Jan 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/code-the-hidden-language-of-computer-hardware-and-software/</guid>
<description>&lt;p&gt;Late last year at &lt;a href=&#34;https://twitter.com/dddnorth&#34;&gt;DDDNorth&lt;/a&gt; my session had the title &lt;a href=&#34;https://github.com/rfennell/Presentations&#34;&gt;&amp;lsquo;Why don&amp;rsquo;t people seem to be able to diagnose problems these days?&amp;rsquo;&lt;/a&gt;. Between anecdotes, a key theme was that people too often don&amp;rsquo;t take sensible diagnostic steps. This is a problem caused, in my opinion, by the fact they have not been exposed to the fundamentals of how a computer works, due to the way computing is taught today, as opposed to how it was taught in my youth in the 80s.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Late last year at <a href="https://twitter.com/dddnorth">DDDNorth</a> my session had the title <a href="https://github.com/rfennell/Presentations">&lsquo;Why don&rsquo;t people seem to be able to diagnose problems these days?&rsquo;</a>. Between anecdotes, a key theme was that people too often don&rsquo;t take sensible diagnostic steps. This is a problem caused, in my opinion, by the fact they have not been exposed to the fundamentals of how a computer works, due to the way computing is taught today, as opposed to how it was taught in my youth in the 80s.</p>
<p>So, I was really pleased to discover the 2nd Edition of Charles Petzold&rsquo;s book <a href="https://amzn.to/3XptzMs">Code: The Hidden Language of Computer Hardware and Software</a>.</p>
<p><img alt="Code: The Hidden Language of Computer Hardware and Software" loading="lazy" src="/images/rfennell/Petzold-Code.png"></p>
<p>This book covers exactly the journey I discussed. Starting with simple codes and electronics, building up to how a computer actually functions.</p>
<p>Highly recommended for anyone who wants to understand how a computer works, so as to better diagnose problems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Duplicate Test DLLs with vstest.console.exe causes failures</title>
      <link>https://blog.richardfennell.net/posts/duplicate-test-dlls-with-vstest.console.exe-causes-failures/</link>
      <pubDate>Mon, 09 Jan 2023 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/duplicate-test-dlls-with-vstest.console.exe-causes-failures/</guid>
      <description>&lt;h2 id=&#34;the-problem&#34;&gt;The Problem&lt;/h2&gt;
&lt;p&gt;I recently did our regular &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/building-private-vsts-build-agents-using-the-microsoft-packer-based-agent-image-creation-model/?query=build%20agents&#34;&gt;update of our Azure DevOps Private build agents&lt;/a&gt;. It is rare we see problems when we do this, but this time one of our very regularly run builds started to fail when running unit tests.&lt;/p&gt;
&lt;p&gt;We had not changed the project source code, all the tests ran locally in Visual Studio, and we had not changed the build pipeline YAML:&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-YAML&#34; data-lang=&#34;YAML&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;  &lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;VSTest@2&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Unit Tests - Services&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;          &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;testAssemblyVer2&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;p&#34;&gt;|&lt;/span&gt;&lt;span class=&#34;sd&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;            **\*.unittests.dll
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;sd&#34;&gt;            !**\obj\**&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;          &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;searchFolder&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;&amp;#39;$(System.DefaultWorkingDirectory)/src&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;And from the pipeline logs, we could see that the CLI command being generated by the &lt;code&gt;VSTest@2&lt;/code&gt; task was also unchanged, finding 39 DLLs that matched the filter.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The Problem</h2>
<p>I recently did our regular <a href="https://blogs.blackmarble.co.uk/rfennell/building-private-vsts-build-agents-using-the-microsoft-packer-based-agent-image-creation-model/?query=build%20agents">update of our Azure DevOps Private build agents</a>. It is rare we see problems when we do this, but this time one of our very regularly run builds started to fail when running unit tests.</p>
<p>We had not changed the project source code, all the tests ran locally in Visual Studio, and we had not changed the build pipeline YAML:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-YAML" data-lang="YAML"><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">VSTest@2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Unit Tests - Services&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">testAssemblyVer2</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">            **\*.unittests.dll
</span></span></span><span class="line"><span class="cl"><span class="sd">            !**\obj\**</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">searchFolder</span><span class="p">:</span><span class="w"> </span><span class="l">&#39;$(System.DefaultWorkingDirectory)/src&#39;</span><span class="w">
</span></span></span></code></pre></div><p>And from the pipeline logs, we could see that the CLI command being generated by the <code>VSTest@2</code> task was also unchanged, finding 39 DLLs that matched the filter.</p>
<p>So the only change was the version of the Visual Studio tools installed on the build agent. Specifically, <code>vstest.console.exe</code> had been updated from Version 17.3.2 (x64) to Version 17.4.1 (x64).</p>
<h2 id="solution">Solution</h2>
<p>On checking the list of DLLs containing tests to be run, we saw that some unit test DLLs were duplicated. They were appearing in their expected folders, but also in other project folders, e.g.</p>
<blockquote>
<p>vstest.console.exe
&ldquo;E:\Agent_work\3\s\src\BM.Services\BM.Service1.UnitTests\bin\Release\BM.Service1.UnitTests.dll&rdquo;
&ldquo;E:\Agent_work\3\s\src\BM.Services\BM.Service2.UnitTests\bin\Release\BM.Service2.UnitTests.dll&rdquo;
&ldquo;E:\Agent_work\3\s\src\BM.Services\BM.Service2.UnitTests\bin\Release\BM.Service1.UnitTests.dll&rdquo;
&hellip;</p></blockquote>
<p>This was due to reference entries in the <code>.csproj</code> files that should not have been there.</p>
<p>So, the fix was simply to remove the erroneous references from the various <code>.csproj</code> files.</p>
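<p>As an illustration, the kind of stray entry we removed looked something like the following (a sketch only; the exact form of the reference will vary between projects):</p>
<pre tabindex="0"><code>&lt;!-- This reference belongs in the BM.Service1.UnitTests project only; its
     presence in another project&#39;s .csproj causes BM.Service1.UnitTests.dll
     to be copied into that project&#39;s bin folder as well --&gt;
&lt;Reference Include=&#34;BM.Service1.UnitTests&#34;&gt;
  &lt;HintPath&gt;..\BM.Service1.UnitTests\bin\Release\BM.Service1.UnitTests.dll&lt;/HintPath&gt;
&lt;/Reference&gt;
</code></pre>
<p>Once an entry like this is removed, the test DLL is only found once by the wildcard filter.</p>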
<p>This is one of those problems that makes you ask &lsquo;why has this not been an issue before?&rsquo;, but at least it is fixed now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>GitHub agent Node version stops Hugo site build</title>
      <link>https://blog.richardfennell.net/posts/agent-node-version-stops-hugo-site-build/</link>
      <pubDate>Fri, 16 Dec 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agent-node-version-stops-hugo-site-build/</guid>
      <description>&lt;h2 id=&#34;the-problem&#34;&gt;The Problem&lt;/h2&gt;
&lt;p&gt;I have &lt;a href=&#34;https://blogs.blackmarble.co.uk/search/?query=hugo&amp;amp;scope=rfennell&#34;&gt;blogged previously&lt;/a&gt; about moving various web sites over to become Hugo Static Sites.&lt;/p&gt;
&lt;p&gt;Recently the GitHub Build and Deployment Workflow for one of my sites, one using the &lt;a href=&#34;https://tailwindcss.com/&#34;&gt;tailwindcss module&lt;/a&gt;, started failing with the following error:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Start building sites … 
hugo v0.108.0-a0d64a46e36dd2f503bfd5ba1a5807b900df231d+extended linux/amd64 BuildDate=2022-12-06T13:37:56Z VendorInfo=gohugoio
Error: Error building site: POSTCSS: failed to transform &amp;#34;css/style.css&amp;#34; (text/css): node:internal/errors:478
    ErrorCaptureStackTrace(err);
    ^

SystemError [ERR_SYSTEM_ERROR]: A system error occurred: uv_os_homedir returned ENOENT (no such file or directory)
    at Object.&amp;lt;anonymous&amp;gt; (/opt/nodejs/16.18.0/lib/node_modules/npm/node_modules/clean-stack/index.js:6:61)
    at Module._compile (node:internal/modules/cjs/loader:1155:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1209:10)
    at Module.load (node:internal/modules/cjs/loader:1033:32)
    at Function.Module._load (node:internal/modules/cjs/loader:868:12)
    at Module.require (node:internal/modules/cjs/loader:1057:19)
    at require (node:internal/modules/cjs/helpers:103:18)
    at Object.&amp;lt;anonymous&amp;gt; (/opt/nodejs/16.18.0/lib/node_modules/npm/node_modules/aggregate-error/index.js:3:20)
    at Module._compile (node:internal/modules/cjs/loader:1155:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1209:10) {
  code: &amp;#39;ERR_SYSTEM_ERROR&amp;#39;,
  info: {
    errno: -2,
    code: &amp;#39;ENOENT&amp;#39;,
    message: &amp;#39;no such file or directory&amp;#39;,
    syscall: &amp;#39;uv_os_homedir&amp;#39;
  },
  errno: [Getter/Setter],
  syscall: [Getter/Setter]
}
Total in 1653 ms
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;solution&#34;&gt;Solution&lt;/h2&gt;
&lt;p&gt;The problem it turned out was that the default version of Node used by the GitHub Actions runner had changed from Node 14 to Node 16.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The Problem</h2>
<p>I have <a href="https://blogs.blackmarble.co.uk/search/?query=hugo&amp;scope=rfennell">blogged previously</a> about moving various web sites over to become Hugo Static Sites.</p>
<p>Recently the GitHub Build and Deployment Workflow for one of my sites, one using the <a href="https://tailwindcss.com/">tailwindcss module</a>, started failing with the following error:</p>
<pre tabindex="0"><code>Start building sites … 
hugo v0.108.0-a0d64a46e36dd2f503bfd5ba1a5807b900df231d+extended linux/amd64 BuildDate=2022-12-06T13:37:56Z VendorInfo=gohugoio
Error: Error building site: POSTCSS: failed to transform &#34;css/style.css&#34; (text/css): node:internal/errors:478
    ErrorCaptureStackTrace(err);
    ^

SystemError [ERR_SYSTEM_ERROR]: A system error occurred: uv_os_homedir returned ENOENT (no such file or directory)
    at Object.&lt;anonymous&gt; (/opt/nodejs/16.18.0/lib/node_modules/npm/node_modules/clean-stack/index.js:6:61)
    at Module._compile (node:internal/modules/cjs/loader:1155:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1209:10)
    at Module.load (node:internal/modules/cjs/loader:1033:32)
    at Function.Module._load (node:internal/modules/cjs/loader:868:12)
    at Module.require (node:internal/modules/cjs/loader:1057:19)
    at require (node:internal/modules/cjs/helpers:103:18)
    at Object.&lt;anonymous&gt; (/opt/nodejs/16.18.0/lib/node_modules/npm/node_modules/aggregate-error/index.js:3:20)
    at Module._compile (node:internal/modules/cjs/loader:1155:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1209:10) {
  code: &#39;ERR_SYSTEM_ERROR&#39;,
  info: {
    errno: -2,
    code: &#39;ENOENT&#39;,
    message: &#39;no such file or directory&#39;,
    syscall: &#39;uv_os_homedir&#39;
  },
  errno: [Getter/Setter],
  syscall: [Getter/Setter]
}
Total in 1653 ms
</code></pre><h2 id="solution">Solution</h2>
<p>The problem, it turned out, was that the default version of Node used by the GitHub Actions runner had changed from Node 14 to Node 16.</p>
<p>I needed to explicitly set the version of Node used in the workflow file back to Node 14.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">build_and_deploy_job</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">if</span><span class="p">:</span><span class="w"> </span><span class="l">github.event_name == &#39;push&#39; || (github.event_name == &#39;pull_request&#39; &amp;&amp; github.event.action != &#39;closed&#39;)</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Build and Deploy Job</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">env</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">HUGO_VERSION</span><span class="p">:</span><span class="w"> </span><span class="m">0.108.0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">NODE_VERSION</span><span class="p">:</span><span class="w"> </span><span class="m">14</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">     </span><span class="l">....</span><span class="w">
</span></span></span></code></pre></div><p>Once this was set the build worked as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updating my Azure DevOps Pipeline Task to Use the Node16 Runner</title>
      <link>https://blog.richardfennell.net/posts/updating-my-azure-devops-tasks-to-node16/</link>
      <pubDate>Tue, 13 Dec 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updating-my-azure-devops-tasks-to-node16/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;It is easy to create an Open Source Project and leave it to gather technical debt as the libraries it depends upon are updated.&lt;/p&gt;
&lt;p&gt;I have tried to keep on top of updating all &lt;a href=&#34;https://marketplace.visualstudio.com/search?term=fennell&amp;amp;target=AzureDevOps&amp;amp;category=All%20categories&amp;amp;sortBy=Relevance&#34;&gt;my Azure DevOps Pipeline Extensions&lt;/a&gt;, and I have to say &lt;a href=&#34;https://docs.github.com/en/code-security/dependabot/working-with-dependabot&#34;&gt;Dependabot&lt;/a&gt; has certainly helped, but I have not been as diligent as I might have been.&lt;/p&gt;
&lt;p&gt;So, as a Christmas project, I took the chance to start to do a major update of all my extensions. To make sure they used the newer Node16 execution runner (as per &lt;a href=&#34;https://github.com/microsoft/azure-pipelines-tasks/blob/master/docs/migrateNode16.md&#34;&gt;the document update process&lt;/a&gt;) and to update all the NPM packages used.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>It is easy to create an Open Source Project and leave it to gather technical debt as the libraries it depends upon are updated.</p>
<p>I have tried to keep on top of updating all <a href="https://marketplace.visualstudio.com/search?term=fennell&amp;target=AzureDevOps&amp;category=All%20categories&amp;sortBy=Relevance">my Azure DevOps Pipeline Extensions</a>, and I have to say <a href="https://docs.github.com/en/code-security/dependabot/working-with-dependabot">Dependabot</a> has certainly helped, but I have not been as diligent as I might have been.</p>
<p>So, as a Christmas project, I took the chance to start to do a major update of all my extensions. To make sure they used the newer Node16 execution runner (as per <a href="https://github.com/microsoft/azure-pipelines-tasks/blob/master/docs/migrateNode16.md">the document update process</a>) and to update all the NPM packages used.</p>
<h2 id="steps-part1">Steps (Part1)</h2>
<ul>
<li>In each <code>task.json</code> file
<ul>
<li>Update the minimum runner version</li>
<li>Add the Node16 based runner (assuming the task is Node-based, as most of mine are; I do have one or two PowerShell-based ones)</li>
</ul>
</li>
<li>Update all the NPM packages. I used <a href="https://www.npmjs.com/package/npm-check-updates">npm-check-updates</a> to do this.</li>
<li>I then checked that the task still built locally (the TypeScript linting and transpilation)</li>
<li>I then ran my mocha/chai based tests locally, and this is where I hit problems.</li>
</ul>
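<p>For reference, the <code>task.json</code> change itself is small. A minimal sketch of the relevant fragment (version numbers and the handler target file name here are indicative; yours may differ):</p>
<pre tabindex="0"><code>{
  &#34;minimumAgentVersion&#34;: &#34;2.206.1&#34;,
  &#34;execution&#34;: {
    &#34;Node10&#34;: {
      &#34;target&#34;: &#34;index.js&#34;
    },
    &#34;Node16&#34;: {
      &#34;target&#34;: &#34;index.js&#34;
    }
  }
}
</code></pre>
<p>Keeping the older runner entry alongside Node16 means the task still works on agents that have not yet been updated.</p>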
<h2 id="problems-with-mocha-and-typescript">Problems with Mocha and TypeScript</h2>
<p>The error I got was of the form</p>
<blockquote>
<p>TypeError [ERR_UNKNOWN_FILE_EXTENSION]: Unknown file extension &ldquo;.ts&rdquo; for D:\a\1\s\Extensions\WikiPDFExport\WikiPDFExportTask\test\foldercreation.test.ts</p></blockquote>
<p>It seems this is a <a href="https://github.com/mochajs/mocha/issues/4726">known issue</a>. As suggested, I tried swapping from <code>&quot;require&quot;: &quot;ts-node/register&quot;</code> to <code>&quot;loader&quot;: &quot;ts-node/esm&quot;</code> in the <code>.mocharc.json</code> file; this caused the error to change to</p>
<blockquote>
<p>mocha ./test/*.test.ts
(node:31412) ExperimentalWarning: Custom ESM Loaders is an experimental feature. This feature could change at any time (Use <code>node --trace-warnings ...</code> to show where the warning was created)
Warning: Cannot find any files matching pattern &ldquo;extensions/**/test/*.ts&rdquo;</p>
<p>TSError: ⨯ Unable to compile TypeScript:
test/foldercreation.test.ts:12:23 - error TS2695: Left side of comma operator is unused and has no side effects.
12         var actual = (0, GitWikiFunctions_1.GetWorkingFolder)(&quot;.\&quot;, &ldquo;testdata\subfolder\1\file.md&rdquo;, agentSpecific_1.logInfo);</p></blockquote>
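<p>For context, the <code>.mocharc.json</code> change I was experimenting with was a swap from something like</p>
<pre tabindex="0"><code>{
  &#34;require&#34;: &#34;ts-node/register&#34;,
  &#34;spec&#34;: &#34;./test/*.test.ts&#34;
}
</code></pre>
<p>to</p>
<pre tabindex="0"><code>{
  &#34;loader&#34;: &#34;ts-node/esm&#34;,
  &#34;spec&#34;: &#34;./test/*.test.ts&#34;
}
</code></pre>
<p>(a simplified sketch; my real file has more settings than this).</p>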
<p>I battled with this for a while, but in the end gave up and just set my test script to do the transpile first and then run mocha against the resulting .js files, as opposed to letting mocha do all the steps itself.</p>
<p>So, swapping from</p>
<pre tabindex="0"><code>&#34;test-no-logger&#34;: &#34;mocha -r ts-node/register ./test/*.test.ts &#34;
</code></pre><p>to</p>
<pre tabindex="0"><code>&#34;transpile&#34;: &#34;tsc -p ./&#34;,
&#34;test-no-logger&#34;: &#34;npm run transpile &amp;&amp; mocha ./dist/test/*.test.js &#34;
</code></pre><p>I could now run my tests locally, but some were failing. This turned out to be due to using the older <code>del</code> NPM module. I swapped to <code>fs-extra</code> and all was good.</p>
<h2 id="steps-part2">Steps (Part2)</h2>
<p>When it all builds and tests locally you can continue</p>
<ul>
<li>As this is a major change, and to give people the choice as to whether to update, I updated the major version and zeroed the minor version numbers in the CI/CD pipeline that publishes the task (for my extensions I chose to manually manage the major version, increment the minor version for each public release, and use the patch as the build number)</li>
<li>Create PR and let the CI/CD pipeline deploy a private version of the extension to the Azure DevOps Marketplace</li>
<li>I can then increment the task version number in the pipeline YAML files so the new version is tested (you can&rsquo;t set this until you have deployed the incremented extension to the marketplace as the YAML validation will fail)</li>
</ul>
<p>And that should be it&hellip;</p>
<h2 id="summary">Summary</h2>
<p>This all seems like a lot of work, but it is essential if my OSS extensions are to remain trustworthy and usable. I have a few more to do, but I am hoping to get them all done before the end of the year.</p>
]]></content:encoded>
    </item>
    <item>
      <title>You need a license to touch that (revised 18 years on)</title>
      <link>https://blog.richardfennell.net/posts/you-need-a-license-to-touch-that-revised/</link>
      <pubDate>Mon, 05 Dec 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-need-a-license-to-touch-that-revised/</guid>
      <description>&lt;h1 id=&#34;you-need-a-license-to-touch-that-the-case-for-licensing-of-individuals-within-the-software-engineering-profession&#34;&gt;“You need a license to touch that!”. The case for licensing of individuals within the software engineering profession.&lt;/h1&gt;
&lt;blockquote&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Back in 2004 I completed my BSc Degree in Computer Science at Bradford University; only 20 years after I started. My year in industry got out of hand!&lt;/p&gt;
&lt;p&gt;One module I did involved me writing a paper on a legal area concerning computing. I chose to discuss licensing of engineers.&lt;/p&gt;
&lt;p&gt;Whilst at the &lt;a href=&#34;https://www.dddnorth.co.uk/&#34;&gt;DDDNorth&lt;/a&gt; community conference last weekend (Dec 2022) I got chatting on aspects of staff development, problem solving and training with a fellow MVP &lt;a href=&#34;https://twitter.com/MattVSTS&#34;&gt;Matteo Emili&lt;/a&gt; and I mentioned this paper I had written. We both thought it might be interesting to revisit it and see if it still held true, best part of 20 years on.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="you-need-a-license-to-touch-that-the-case-for-licensing-of-individuals-within-the-software-engineering-profession">“You need a license to touch that!”. The case for licensing of individuals within the software engineering profession.</h1>
<blockquote>
<h2 id="background">Background</h2>
<p>Back in 2004 I completed my BSc Degree in Computer Science at Bradford University; only 20 years after I started. My year in industry got out of hand!</p>
<p>One module I did involved me writing a paper on a legal area concerning computing. I chose to discuss licensing of engineers.</p>
<p>Whilst at the <a href="https://www.dddnorth.co.uk/">DDDNorth</a> community conference last weekend (Dec 2022) I got chatting on aspects of staff development, problem solving and training with a fellow MVP <a href="https://twitter.com/MattVSTS">Matteo Emili</a> and I mentioned this paper I had written. We both thought it might be interesting to revisit it and see if it still held true, best part of 20 years on.</p>
<p>So enjoy, remembering this was originally written 18 years ago&hellip;</p></blockquote>
<h2 id="introduction">Introduction</h2>
<p>Throughout the history of science and technology there has been a trend in the development spectrum away from the gentleman amateur and towards the professional engineer. This report explores at which point on this spectrum Software Engineering currently lies and, if it is found to lie at the fully professional end, whether or not there is a need for licenses to practice.</p>
<h2 id="software-engineering-professionalism">Software Engineering Professionalism</h2>
<p>Computing in general, and software engineering in particular, is a young discipline. Its professional societies are only around 50 years old, the <a href="https://www.bcs.org/">BCS</a> was formed in 1957; it is now only reaching a point where it is not uncommon that when retiring, practitioners have had their whole career in the field.</p>
<p>Due to this relative youth the formal call for professionalisation is only recently being brought to the fore. This is not to say that people working in computing have not had the need to be professional - this has always been important - but since the mid 1990s, there has been serious talk of some formality, maybe even of a license to practice, as in other professions.
This debate can be seen at least as a sign of puberty, if not one of maturity, for the industry.</p>
<h2 id="what-are-the-real-benefits-of-licensing">What are the Real Benefits of Licensing?</h2>
<p>The key factor behind any call to licensing is one of public confidence. The license to practice provides a standard by which the purchaser of any service can see that a practitioner has met a minimum level of competence. This does not mean that they are a good Doctor or Accountant, bad practitioners still exist, but, if they do not maintain this standard, there are processes through which their license may be revoked.</p>
<h2 id="perception-of-the-computing-industry">Perception of the Computing Industry</h2>
<p>Currently such a position of public confidence is not enjoyed by the computing industry. The media coverage usually concentrates on perceived failing systems, such as the Swanwick Air Traffic Control Centre and the new NPfIT NHS records project (even before that project has really started). The media does not talk of computing success; it is not newsworthy.</p>
<p><img alt="Figure 1 [RL04]" loading="lazy" src="/images/rfennell/licensing-1.png"></p>
<p>This media slant is not surprising, given the project success rates shown in Figure 1. If these were the figures in the Civil Engineering industry, would anyone be crossing bridges?</p>
<p>In fact, the route to licensing of Civil Engineering came from failing constructions such as bridges. Canadian engineering folklore [<strong>SM99a</strong>] has it that the iron used to make the rings given to graduating engineers comes from the collapsed Quebec City Bridge of 1907, as a reminder of their public duty. Licensing of engineers in Texas came about after a boiler explosion that killed 300 school children [<strong>SM99a</strong>].</p>
<p>Another factor in the perception of computing in many people’s eyes is that they say ‘I have a computer and use it OK, so how hard can this software thing be?’</p>
<p>This viewpoint is not limited to the ‘man in the street’, but also to many corporate managers.</p>
<p>A figure regularly reported at Microsoft conferences is that 70% of corporate development is done using the Visual Basic family of languages, commonly not by specialist programmers but by end users. These users have little knowledge of software engineering principles, and hence often produce initially functional but poorly maintainable or un-scalable systems. It is often the unreliability of these systems that shapes public perception of computing.</p>
<h2 id="is-there-need-for-universal-licensing">Is there need for universal licensing?</h2>
<p>Given these facts, would requiring all IT professionals to have a license be the right move? The answer has to be no.</p>
<table>
  <thead>
      <tr>
          <th>Discipline</th>
          <th>Licensed</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Civil</td>
          <td>44%</td>
      </tr>
      <tr>
          <td>Mechanical</td>
          <td>23%</td>
      </tr>
      <tr>
          <td>Electrical</td>
          <td>9%</td>
      </tr>
      <tr>
          <td>Chemical</td>
          <td>8%</td>
      </tr>
      <tr>
          <td>All Engineering</td>
          <td>18%</td>
      </tr>
  </tbody>
</table>
<p><strong>Table 1: Percentage of licensed Engineering graduates in the US as of 1996</strong> [<strong>SM99a</strong>]</p>
<p>In other engineering professions only a small percentage of people are licensed [Table 1]. The remainder of the workforce are working towards licensing, or are responsible to a licensed engineer.</p>
<p>Given this data, Steve McConnell suggests that only 5-10% of software engineers need to be licensed, specifically those working on safety-critical systems [<strong>SM99b</strong>].</p>
<h2 id="a-move-to-licensing">A Move to Licensing</h2>
<p>In 1999 the Texas Society for Professional Engineers passed a resolution to license Software Engineers [<strong>LHW99</strong>]. The scheme was based on applicants providing nine references and meeting one of the following criteria:</p>
<ul>
<li>16 years’ engineering experience.</li>
<li>12 years’ engineering experience, plus a bachelor’s degree from an accredited program.</li>
<li>6 years’ engineering experience, plus a PhD in engineering or related subject from a university with an accredited undergraduate program.</li>
</ul>
<p>In June 2001 the Canadian Engineering Accreditation Board (CEAB) accredited three degree programs [<strong>DLP02a</strong>] and instigated a licensing program with similar requirements to that in Texas.</p>
<h2 id="the-current-bcs-position">The Current BCS Position</h2>
<p>Currently there is no plan to license Software Engineers in the UK or the EU, only voluntary professional schemes are in place.</p>
<p>Historically the BCS had a membership system which provided a means to gain the Engineering Council Chartered Engineer (CEng) qualification. In 2002 the BCS underwent a radical restructure to provide a more outward-looking system, with the aim of promoting professionalism in the industry. The major change in the membership system was a new grade of Chartered Information Technology Professional (MBCS CITP) that sits alongside the CEng qualification.</p>
<p>Initial results seem to show that the changes have been welcomed by the industry: &lsquo;The British Computer Society has reported a 200% leap in membership applications since the launch of its new grading structure&rsquo;, reports chief executive David Clarke [<strong>BCS04a</strong>].</p>
<p>There has also been a change in the culture at the BCS. At the West Yorkshire BCS branch AGM (25th May 2004) David Clarke, the Chief Executive of the BCS, gave an example of how, when he joined the BCS, there was no standard way for the public to ask it questions; they had been asked by a national newspaper for comments on the Home Secretary&rsquo;s proposed ID card scheme, and it had taken three weeks for his team to find a person willing to speak on the subject. Changes to the BCS now see it taking a more active role in advising bodies such as government committees on IT-related issues [<strong>BCS04b</strong>]. This is a key area that a professional organisation must address in the current media-centric world.</p>
<p>For the foreseeable future the BCS is not making moves toward licensing, preferring certification based on internationally recognised products that it provides such as European Computer Driving License (ECDL) and the European Certification for IT Professionals (EUCIP).</p>
<h2 id="certification-as-an-alternative">Certification as an alternative</h2>
<p>Usually certification is not compulsory, unlike licensing, and is often vendor-led. Other than the BCS, the major certification schemes are from vendors such as Microsoft, Cisco and Novell.
Certification is well established, and aims to give all parties benefits; the customer knows the engineer has been trained, the vendor reduces their support calls due to engineer training, and the engineer gets a way to differentiate themselves from their competitors.</p>
<p>The problem is that such qualifications age rapidly - usually re-certification is required every 3 to 5 years as vendor products are dropped and new ones replace them. Also, some of the qualifications have become devalued due to &lsquo;boot camps&rsquo; and &lsquo;brain-dumps&rsquo;: passing the certification exams is often dependent on the student&rsquo;s budget and ability to cram, not their fundamental knowledge of the subject.</p>
<p>Certification is a valuable tool to the industry, providing good specific product training, but not a replacement for good basic grounding provided by a degree or similar.</p>
<h2 id="has-licensing-worked-thus-far">Has licensing worked thus far?</h2>
<p>A view on the success or failure of licensing can be found in the ‘Communications of ACM’ [<strong>KK02</strong>]</p>
<blockquote>
<p>“…after four years, what has been the overall impact of software engineering licensing in Texas? In our view, the news is good: licensing had practically no effect…”</p></blockquote>
<p>The key factor cited in reviews of the state of licensing is the lack of a stable core of knowledge that a practitioner is required to study [<strong>KK02</strong>] [<strong>DLP02b</strong>]. In other engineering disciplines there is a well-defined boundary between research science and engineering. For example, if you wished to make TVs for the consumer market you could hire a physicist; they could make you a TV, but it would no doubt be expensive and unreliable, if revolutionary. The normal practice would be to hire experienced electronics and production engineers, who would draw on the core knowledge of their fields, originated by research scientists, to make a suitably specified product on time and on budget.</p>
<h2 id="are-we-tackling-the-right-problem">Are we tackling the right problem?</h2>
<p>An interesting point of view is put forward by Pete McBreen in ‘Software Craftsmanship’. This again notes the problems caused by the lack of a stable core of knowledge, but highlights that ‘The concept of a single, responsible engineer signing off a complete work is not feasible for software’ [<strong>PM02a</strong>]. Software projects are just too large and complex and, unlike Civil Engineering building codes, there is no agreed legal standard to sign off against anyway.</p>
<p>McBreen goes on to propose that Software Engineering, not just licensing, fails not because of the lack of experience or skill, but due to the lack of personal responsibility. He argues that it is better to have a master-apprentice approach to software development, with a senior programmer taking juniors under their wing for a number of years, with their systems being developed and handed on like a prize heirloom - the EMACS product is held up as an example [<strong>PM02b</strong>].</p>
<p>This structure has its appeal. It seems to fit well with other professions; while they may not be called master and apprentice, similar structures occur in the legal, medical and accountancy professions. Though, interestingly, they appear less frequently in traditional engineering, where in many ways the personal craftsman’s responsibility has been replaced by a corporate regulation framework.</p>
<h2 id="is-licensing-round-the-corner">Is Licensing Round the Corner?</h2>
<p>Software engineering is not ready for universal licensing; the core body of knowledge is still growing too fast to allow any true foundation engineering degrees to be stabilised. It could reasonably be argued that this stabilisation may never occur: the rate of technology change and the huge variety of projects for which software is developed may defeat all efforts to establish universal standards for licensing. However, this does not mean that licensing may not appear in specialised areas, if they can be formalised to a sufficient degree. The most likely area is safety-critical systems.</p>
<p>In general, for the foreseeable future the industry will have to rely on voluntary membership of organisations such as the BCS, and on vendor certifications. This in effect means that the onus of good practice and professionalism falls, as always, on the shoulders of the individual. Maybe what is required is the personal responsibility of the gentleman amateur and the craftsman to build the confidence the general public rightly expects.</p>
<blockquote>
<h2 id="-and-18-years-on">.. And 18 years on</h2>
<p>So what has changed 18 years on? Well, not a lot that I can see. There is no clamour for licensing, our media only cover stories of failure, and projects of all sizes are still failing.</p>
<p>There is still no agreement on best practice, at least to the level required for licensing. IT is still just too fast moving, another week another framework, language or pattern.</p>
<p>We as individuals and teams are still, and always will be, responsible for our own actions, and we still have to strive to deliver high-quality solutions that meet our clients’ business needs.</p>
<p>What are the chances of a change in the next 20 years? Or will it become an irrelevance as we move to the use of more AI tools?</p>
<p>Maybe I will come back to this post around 2043 and see what has changed.</p></blockquote>
<h2 id="references">References</h2>
<table>
  <thead>
      <tr>
          <th>Code</th>
          <th>Details</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>BCS04a</td>
          <td>BCS Press Releases, “New BCS Membership Campaign Results in 200% Application Rise”, 20 September 2004</td>
      </tr>
      <tr>
          <td>BCS04b</td>
          <td>“Identity Cards”, BCS News Jan 2004, “Response from the British Computer Society regarding the call for statements to House of Commons Home Affairs Committee enquiry into Identity Cards”, ResponseJan2004.htm</td>
      </tr>
      <tr>
          <td>DLP02a</td>
          <td>P96 “Licensing software engineers in Canada”: David Lorge Parnas, “Communications of the ACM” Volume 45 Issue 11 November 2002, ACM Digital Library</td>
      </tr>
      <tr>
          <td>DLP02b</td>
          <td>P98 “Licensing software engineers in Canada”: David Lorge Parnas, “Communications of the ACM” Volume 45 Issue 11 November 2002, ACM Digital Library</td>
      </tr>
      <tr>
          <td>KK02</td>
          <td>P95 “A Rice University perspective on software engineering licensing”: Ken Kennedy and Moshe Y. Vardi, “Communications of the ACM” Volume 45 Issue 11 November 2002, ACM Digital Library</td>
      </tr>
      <tr>
          <td>LHW99</td>
          <td>P27 “Licensing software professionals: where are we?”: Laurie Honour Werth “The proceedings of the thirtieth SIGCSE technical symposium on Computer science education”, 1999 ACM Digital Library</td>
      </tr>
      <tr>
          <td>PM02a</td>
          <td>P40 “Licensing is an attempt to solve the wrong problem”: Pete McBreen “Software Craftsmanship: The new imperative”, Addison Wesley 2002</td>
      </tr>
      <tr>
          <td>PM02b</td>
          <td>P66 “Being the Maintainer of an Application is a High-Status Position”: Pete McBreen “Software Craftsmanship: The new imperative”, Addison Wesley 2002</td>
      </tr>
      <tr>
          <td>RL04</td>
          <td>ARC220 - Slide 6, Rafal Lukawiecki, “New Developments in Microsoft Solutions Framework”, Microsoft TechEd Europe 2004 Post Conference DVD</td>
      </tr>
      <tr>
          <td>SM99a</td>
          <td>P56 “Need for Engineering”: Steve McConnell, “After the gold rush, creating a true profession of software engineering”, Microsoft Press 1999</td>
      </tr>
      <tr>
          <td>SM99b</td>
          <td>P103-104 “Stinking Badges, Licensing” : Steve McConnell, “After the gold rush, creating a true profession of software engineering”, Microsoft Press 1999</td>
      </tr>
  </tbody>
</table>
]]></content:encoded>
    </item>
    <item>
      <title>Fixing my Logitech Spotlight Presentation Remote that would not switch on</title>
      <link>https://blog.richardfennell.net/posts/fixing-my-logitech-spotlight/</link>
      <pubDate>Mon, 14 Nov 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-my-logitech-spotlight/</guid>
      <description>&lt;h2 id=&#34;the-problem&#34;&gt;The Problem&lt;/h2&gt;
&lt;p&gt;I am getting back out in front of audiences again as opposed to doing Teams/Zoom events. So I dug out my trusty Logitech Spotlight presentation remote from the bag where it had been sitting for well over a year. However, there was a problem, it would not pair with my re-built Windows 11 PC. It could not even switch on.&lt;/p&gt;
&lt;p&gt;A quick search showed I was not alone in having this problem, &lt;a href=&#34;https://support.logi.com/hc/en-us/community/posts/360049560234-Spotlight-not-charging&#34;&gt;the Logitech forums showed a lot of angry people&lt;/a&gt;. The summary was, if left alone for a few months a Spotlight goes so flat it cannot be charged, and Logitech don&amp;rsquo;t have a fix, or seem to care. There were a lot of comments about very expensive paper weights.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-problem">The Problem</h2>
<p>I am getting back out in front of audiences again as opposed to doing Teams/Zoom events. So I dug out my trusty Logitech Spotlight presentation remote from the bag where it had been sitting for well over a year. However, there was a problem, it would not pair with my re-built Windows 11 PC. It could not even switch on.</p>
<p>A quick search showed I was not alone in having this problem, <a href="https://support.logi.com/hc/en-us/community/posts/360049560234-Spotlight-not-charging">the Logitech forums showed a lot of angry people</a>. The summary was, if left alone for a few months a Spotlight goes so flat it cannot be charged, and Logitech don&rsquo;t have a fix, or seem to care. There were a lot of comments about very expensive paper weights.</p>
<h2 id="the-fix">The Fix</h2>
<p>The solution, found in a thread comment on the Logitech Forum, was this very useful video from 2ai.</p>
<div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden;">
      <iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share; fullscreen" loading="eager" referrerpolicy="strict-origin-when-cross-origin" src="https://www.youtube.com/embed/KuDt2un6zvI?autoplay=0&amp;controls=1&amp;end=0&amp;loop=0&amp;mute=0&amp;start=0" style="position: absolute; top: 0; left: 0; width: 100%; height: 100%; border:0;" title="YouTube video"></iframe>
    </div>

<p>The basic summary is to charge the battery via an external source to kick start the Spotlight. You can see a photo of the setup I used here.</p>
<p><img alt="My Spotlight fix setup" loading="lazy" src="/images/rfennell/spotlight-fix.jpg"></p>
<p>A few comments on the process:</p>
<ul>
<li>Getting the back off the Spotlight remote is a pain, even using my <a href="https://www.ifixit.com/Store/Tools/Pro-Tech-Toolkit/IF145-307">iFixit tools</a> I added a few scratches to the case.</li>
<li>As the video said, the soldering is delicate; you need to strip away the cellophane to expose the solder points.</li>
<li>The video was not clear on the polarity or voltage of the battery/PSU.
<ul>
<li>The wire from the centre terminal on the Spotlight battery was connected to the +ve terminal on my PSU</li>
<li>The wire from the outside terminal on the Spotlight battery was connected to the -ve terminal on my PSU</li>
<li>I used a variable voltage power supply. I switched it on at its minimum of 3V and slowly increased the voltage until the Spotlight leapt into life at about 4V. This took only a few seconds.</li>
</ul>
</li>
<li>As soon as the Spotlight came to life, I plugged in a USB-C cable and removed the external PSU connections, leaving the Spotlight to fully charge via USB.</li>
</ul>
<p>So a slightly fiddly fix, but it worked. Much better than the alternative of buying a new remote and sending the old one to e-waste/landfill.</p>
<h2 id="updated--16-nov-22---i-spoke-to-soon">Updated 16 Nov 22 - I spoke too soon</h2>
<p>It seems the fix was only temporary. The battery in the Spotlight is not holding charge, so I have a remote that works only as long as it is plugged in via a USB-C charging cable (or, I suppose, an external battery pack). So, not the most remote of remote presenter devices!</p>
<p>So I did more digging and found this <a href="https://www.ifixit.com/Guide/Logitech&#43;Spotlight&#43;Battery&#43;Replacement/164429">battery replacement guide on iFixit</a>. So, I have ordered a suitable battery and will update this post when it arrives and I have done the replacement.</p>
<h2 id="updated--19-nov-22---success-at-last">Updated 19 Nov 22 - Success at last</h2>
<p><img alt="Replacement battery for my Spotlight fix setup" loading="lazy" src="/images/rfennell/spotlight-fix2.jpg"></p>
<p>I have fitted the replacement <a href="https://www.ebay.co.uk/itm/234519858817">90mAh 3.7V Lithium Polymer Li-Po li ion Rechargeable Battery</a>. I did have to charge it first, via the USB-C cable, but once done the Spotlight works as it should.</p>
<p>It will be interesting to see how long the battery lasts, but even if it has to be charged more often, it is still a lot better than having a dead remote.</p>
<p>So if you have this issue, I would recommend a battery swap. It is not that hard to do, and the battery is not expensive. Much better than a $150 paper weight.</p>
<h2 id="updated-27-sept-2023---i-let-it-go-flat-again">Updated 27 Sept 2023 - I let it go flat again!</h2>
<p>I left my Logitech Spotlight in my bag and it went flat again. So I was back to square one: it would not charge.</p>
<p>I first tried to &lsquo;jump start it&rsquo; again, putting 4V from an external PSU across the battery terminals while it was also connected to USB power to charge the battery.</p>
<p>Initially it appeared to work, but as soon as I removed the external USB power it died again.</p>
<p>Next, I unsoldered the battery and tested it; it seemed OK, giving around 4V.</p>
<p>So I reconnected it, plugged in USB power, this time overnight, then tested it again. This time it worked.</p>
<p>So either it was a poor battery connection, or the battery just needed a much longer charge as it was so flat.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Azure DevOps Migration Tools Again</title>
      <link>https://blog.richardfennell.net/posts/time-for-azure-devops-migration-tools-again/</link>
      <pubDate>Fri, 11 Nov 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/time-for-azure-devops-migration-tools-again/</guid>
      <description>&lt;p&gt;I have recently been working with a client who needed to move Azure DevOps Work Items between Team Projects on different Azure DevOps instances. The &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/options-migrating-tfs-to-vsts/&#34;&gt;only realistic choice&lt;/a&gt; was to use the free open source &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=nkdagility.vsts-sync-migration&#34;&gt;Azure DevOps Migration Tools&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I have used these tools before, but it was a while ago, and as it is under active development, I had to relearn a few things. It is fair to say that the learning curve for this tool is steep, but as the documentation does not hide this&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been working with a client who needed to move Azure DevOps Work Items between Team Projects on different Azure DevOps instances. The <a href="https://blogs.blackmarble.co.uk/rfennell/options-migrating-tfs-to-vsts/">only realistic choice</a> was to use the free open source <a href="https://marketplace.visualstudio.com/items?itemName=nkdagility.vsts-sync-migration">Azure DevOps Migration Tools</a>.</p>
<p>I have used these tools before, but it was a while ago and, as the tool is under active development, I had to relearn a few things. It is fair to say that the learning curve for this tool is steep, but the documentation does not hide this:</p>
<blockquote>
<p>WARNING: This tool is not designed for a novice. This tool was developed to support the scenarios below, and the edge cases that have been encountered by the 30+ contributors from around the Azure DevOps community. You should be comfortable with the TFS/Azure DevOps object model, as well as debugging code in Visual Studio</p></blockquote>
<p>So where did I trip up this time?</p>
<h2 id="but-the-documentation">But the documentation!</h2>
<p>The documentation is OK, but can be confusing at the start, as page links loop back on themselves. Adding to the confusion is that there are details for older legacy V1 processors, current V1 processors and new V2 ones, but in reality you are probably using only a small subset of those listed - probably just the current V1 ones for most use cases.</p>
<p>In my case I could ignore all bar the <code>WorkItemMigrationConfig</code> processor and the three &lsquo;beta&rsquo; <code>Test*MigrationConfig</code> processors, and I suspect this is true for many people. This might not appear to be the case on a first read of the documentation.</p>
<p>Today, the <code>WorkItemMigrationConfig</code> processor is the one that does the bulk of the work, taking on jobs previously done by a set of legacy processors, e.g. creating Iteration and Area nodes.</p>
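<p>As an illustration only - this is a hand-written sketch, not copied from a real run; the field names are from memory of a V12-era configuration and the schema changes between releases - a minimal processor entry looks something like this:</p>

```json
{
  "Processors": [
    {
      "$type": "WorkItemMigrationConfig",
      "Enabled": true,
      "ReplayRevisions": true,
      "WIQLQueryBit": "AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan')",
      "NodeBasePaths": []
    }
  ]
}
```

<p>Always generate a fresh configuration file with the exact version of the tools you are running, rather than copying an example like this one.</p>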
<h2 id="recreating-configuration-files">Recreating Configuration Files</h2>
<p>Always recreate your configuration file when you update the version of the migration tools you are using. The whole schema could have changed and some option settings certainly will have.</p>
<p>The documentation does tell you to do this. Don&rsquo;t be tempted to just update the version number in the config file; it usually does not work because you will be missing a required new setting.
It is invariably quicker to re-add your source and target details than to work out which other flag has changed.</p>
<h2 id="watch-our-for-case-sensitivity">Watch out for case sensitivity</h2>
<p>I found that the current version, V12, is much more case sensitive than previous versions. I initially had my target Team Project name in lower case, and the tools failed in the strangest way.</p>
<p>A connection was successfully made to the team projects, but when the tool tried to create the missing iteration or area paths it failed with a message about being unable to create nodes, i.e.</p>
<p><code>NewNode is not anchored in the target project, it cannot be created</code></p>
<p>A search of <a href="https://github.com/nkdAgility/azure-devops-migration-tools/issues">GitHub Issues</a>, <a href="https://github.com/nkdAgility/azure-devops-migration-tools/discussions">GitHub Discussions</a> and <a href="https://stackoverflow.com/questions/tagged/azure-devops-migration-tools">Stack Overflow threads</a> for this error suggested a problem with the configuration values for the <code>NodeBasePaths</code> setting. However, after a bit of interactive debugging, I found this was not the root cause. In fact, as I was doing a simple copy migration, all that was needed was an empty array for <code>NodeBasePaths</code>, as I had first thought.</p>
<p>The debugging showed that the problem was the case of the target project name. The mismatch meant a regex expression was failing to match the built-in root Area and Iteration nodes. Hence the error.</p>
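<p>To illustrate the failure mode, here is a small Python sketch of my own (the tool itself is C# and its actual regex is more involved): a case-sensitive match of a classification node path against the configured project name finds nothing when only the case differs.</p>

```python
import re

def is_anchored(node_path: str, target_project: str) -> bool:
    # Case-sensitive check that a classification node path sits under the
    # configured target project's root (illustrative only, not the tool's code).
    pattern = re.compile(r"^\\" + re.escape(target_project) + r"\\(Area|Iteration)")
    return bool(pattern.match(node_path))

# Project really called 'MyProject', but config typed as 'myproject':
print(is_anchored("\\MyProject\\Iteration\\Sprint 1", "MyProject"))  # True
print(is_anchored("\\MyProject\\Iteration\\Sprint 1", "myproject"))  # False - the 'not anchored' error
```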
<p>So, top tip: copy and paste URLs and Team Project names to avoid stupid typos.</p>
<h2 id="how-to-increase-the-chances-of-a-successful-migration">How to increase the chances of a successful migration</h2>
<p>I always start by migrating over a single work item of a given type into a test target project. When this works, I try a single work item of another type. I repeat this process until I have migrated all the work item types in use.</p>
<p>I will try to selectively pick sample work items so there are relationships between them, thus ensuring the linking is working, and ones with attachments, to ensure those are migrated correctly too.</p>
<p>Once I know all the work item types can be migrated, then and only then do I try to run a migration for all the work items in the source project.</p>
<p>It is far better to spend a few minutes making sure all the work item type fields and states are mapped correctly than to find a problem a few hours into a long migration run.</p>
<p>The way I do this filtering is to edit the <code>WIQLQueryBit</code>:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="s2">&#34;WIQLQueryBit&#34;</span><span class="err">:</span> <span class="s2">&#34;AND  [Microsoft.VSTS.Common.ClosedDate] = &#39;&#39; AND [System.WorkItemType] NOT IN (&#39;Test Suite&#39;, &#39;Test Plan&#39;,&#39;Shared Steps&#39;,&#39;Shared Parameter&#39;,&#39;Feedback Request&#39;) AND [System.Id] = &#39;1234&#39;&#34;</span><span class="err">,</span>
</span></span></code></pre></div><blockquote>
<p><strong>Note:</strong> The configuration file does contain a <code>&quot;WorkItemIDs&quot;:[]</code> setting that the documentation says is &ldquo;A list of work items to import&rdquo;. I had assumed this would filter the work items returned by the WIQL query, it does not. After looking at the code it does not seem to be used at present in the <code>WorkItemMigrationConfig</code> processor.</p></blockquote>
<p>A key advantage of the WIQL approach is that by filtering down to a single work item, the migration tools start much more quickly. There is a &lsquo;tax&rsquo; on every run of the migration tools: an analysis of the source and target projects to work out what needs to be migrated. If it has to check hundreds or thousands of work items this will take many minutes. However, if it only has to check one, it will be done in, from my experience, around 10 seconds.</p>
<h2 id="summary">Summary</h2>
<p>Using this tool is always an adventure, but it does what it says it will when you have it configured correctly.</p>
<p>And if you do have problems, the debugging logs are detailed, and there is active support on <a href="https://github.com/nkdAgility/azure-devops-migration-tools/issues">GitHub Issues</a> and <a href="https://github.com/nkdAgility/azure-devops-migration-tools/discussions">GitHub Discussions</a>.</p>
<p>Also, don&rsquo;t be afraid of running the tool from within Visual Studio. This is a great way to debug the flow and see what is going on. If you do make changes, remember this is an open source project and your bug fix and enhancement PRs will always be considered.</p>
<p>The bottom line is that you just have to accept that this form of migration will be a slow process, but once you have the basics of the configuration file correct the tool is reliable.</p>
]]></content:encoded>
    </item>
    <item>
      <title>GitVersion task fails on a cloned Azure DevOps YAML Pipeline</title>
      <link>https://blog.richardfennell.net/posts/gitversion-fails-on-a-cloned-yaml-build/</link>
      <pubDate>Tue, 08 Nov 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/gitversion-fails-on-a-cloned-yaml-build/</guid>
      <description>&lt;h2 id=&#34;problem&#34;&gt;Problem&lt;/h2&gt;
&lt;p&gt;I recently had a strange problem. I had an existing Azure DevOps YAML Pipeline that used the checkout task to do a deep Git fetch of a repo and it&amp;rsquo;s submodules. The reason for the deep fetch was that later in the pipeline we ran GitVersion and this needs the whole repo to be able to calculate the version.&lt;/p&gt;
&lt;div class=&#34;highlight&#34;&gt;&lt;pre tabindex=&#34;0&#34; class=&#34;chroma&#34;&gt;&lt;code class=&#34;language-yaml&#34; data-lang=&#34;yaml&#34;&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;- &lt;span class=&#34;nt&#34;&gt;checkout&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;self&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;persistCredentials&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;        &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;submodules&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;gitversion/setup@0&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Get current version of GitVersion&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;versionSpec&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;5.x&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;&lt;/span&gt;- &lt;span class=&#34;nt&#34;&gt;task&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;l&#34;&gt;gitversion/execute@0&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;displayName&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;Run GitVersion to generate SEMVER&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;inputs&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;useConfigFile&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;kc&#34;&gt;true&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class=&#34;line&#34;&gt;&lt;span class=&#34;cl&#34;&gt;&lt;span class=&#34;w&#34;&gt;    &lt;/span&gt;&lt;span class=&#34;nt&#34;&gt;configFilePath&lt;/span&gt;&lt;span class=&#34;p&#34;&gt;:&lt;/span&gt;&lt;span class=&#34;w&#34;&gt; &lt;/span&gt;&lt;span class=&#34;s1&#34;&gt;&amp;#39;$(System.DefaultWorkingDirectory)/GitVersion.yml&amp;#39;&lt;/span&gt;&lt;span class=&#34;w&#34;&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;On this original pipeline this was all working as expected.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="problem">Problem</h2>
<p>I recently had a strange problem. I had an existing Azure DevOps YAML Pipeline that used the checkout task to do a deep Git fetch of a repo and its submodules. The reason for the deep fetch was that later in the pipeline we ran GitVersion, and this needs the whole repo history to be able to calculate the version.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">checkout</span><span class="p">:</span><span class="w"> </span><span class="l">self</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">persistCredentials</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">submodules</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">gitversion/setup@0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Get current version of GitVersion&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">versionSpec</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;5.x&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">task</span><span class="p">:</span><span class="w"> </span><span class="l">gitversion/execute@0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">displayName</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Run GitVersion to generate SEMVER&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">useConfigFile</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">configFilePath</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;$(System.DefaultWorkingDirectory)/GitVersion.yml&#39;</span><span class="w">
</span></span></span></code></pre></div><p>On this original pipeline this was all working as expected.</p>
<p>However, when I created a second Azure DevOps pipeline that pointed to the same YAML file, expecting it to work without issues, it failed at the GitVersion task. It could not calculate the version.</p>
<p>On checking the pipeline logs, I could see that the git checkout had used a depth of 1, i.e. a shallow fetch - more efficient, but not providing all the branch and commit information GitVersion needs.</p>
<h2 id="the-fix">The fix</h2>
<p>It seems that on this 2nd pipeline the UX-based pipeline settings were set differently, overriding the YAML ones and causing a shallow fetch to be used.</p>
<p><img alt="ScreenShot" loading="lazy" src="/images/rfennell/YMLSettingsScrrenShot.png"></p>
<p>When I switched off the UX &lsquo;set shallow fetch&rsquo; option, all worked as expected [accessed via Pipeline &gt; Edit &gt; Ellipsis menu &gt; Triggers &gt; YAML tab].</p>
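<p>Alternatively, you can make the YAML itself explicit: the checkout step accepts a <code>fetchDepth</code> setting which, as I understand the Azure Pipelines documentation, takes priority over the shallow fetch option in the pipeline settings UI. A sketch based on the checkout step above:</p>

```yaml
- checkout: self
  persistCredentials: true
  submodules: true
  fetchDepth: 0  # 0 requests the full history that GitVersion needs
```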
<p>It is confusing that a cloned pipeline should end up with different settings, especially when the setting you need to change is far from easily discoverable.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Migrating our &#34;Living the Dream&#34; DevOps demo to GitHub Enterprise</title>
      <link>https://blog.richardfennell.net/posts/migrating-living-the-dream-to-github/</link>
      <pubDate>Tue, 01 Nov 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/migrating-living-the-dream-to-github/</guid>
      <description>&lt;p&gt;At Black Marble, we have had a long-standing Azure DevOps Team Project that we used for end-to-end demos of the principles of DevOps, called &lt;a href=&#34;https://www.youtube.com/watch?v=TCmwR-HdvSk&amp;amp;index=1&amp;amp;list=PLiP6RW7A4433fa1t77ZU4aq0DM08brY5t&#34;&gt;Living the Dream&lt;/a&gt;. This used a legacy codebase, the old Microsoft Fabrikam demo, and showed that it can be deployed using modern tools.&lt;/p&gt;
&lt;p&gt;As I had no similar demo for GitHub Enterprise, I thought it would be interesting to see how migrating my Azure DevOps implementation over to GitHub would go. This is a good learning exercise, as it is the type of task that many of our enterprise clients will face if changing DevOps platform. My key aim was to do the minimum needed to get the CI/CD process moved from Azure Pipelines to GitHub Actions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At Black Marble, we have had a long-standing Azure DevOps Team Project that we used for end-to-end demos of the principles of DevOps, called <a href="https://www.youtube.com/watch?v=TCmwR-HdvSk&amp;index=1&amp;list=PLiP6RW7A4433fa1t77ZU4aq0DM08brY5t">Living the Dream</a>. This used a legacy codebase, the old Microsoft Fabrikam demo, and showed that it can be deployed using modern tools.</p>
<p>As I had no similar demo for GitHub Enterprise, I thought it would be interesting to see how migrating my Azure DevOps implementation over to GitHub would go. This is a good learning exercise, as it is the type of task that many of our enterprise clients will face if changing DevOps platform. My key aim was to do the minimum needed to get the CI/CD process moved from Azure Pipelines to GitHub Actions.</p>
<h2 id="moving-the-source">Moving the source</h2>
<p>This was easy: I just <a href="https://docs.github.com/en/get-started/importing-your-projects-to-github/importing-source-code-to-github/importing-a-repository-with-github-importer">imported the Git repo from Azure DevOps</a>.</p>
<h2 id="build">Build</h2>
<p>The build was fairly easy; the project is a pair of Visual Studio solutions, one for the ARM code and the other for the website code.</p>
<p>The key point was to pass in the correct MSBuild parameters so that the website was packaged up using WebDeploy:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">Build-Solution</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">windows-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">actions/checkout@v3</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Add MSBuild to PATH</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">microsoft/setup-msbuild@v1.0.2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Restore NuGet packages</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">working-directory</span><span class="p">:</span><span class="w"> </span><span class="l">${{env.GITHUB_WORKSPACE}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l">nuget restore ${{env.SOLUTION_FILE_PATH}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Build Solution</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">working-directory</span><span class="p">:</span><span class="w"> </span><span class="l">${{env.GITHUB_WORKSPACE}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># Add additional options to the MSBuild command line here (like platform or verbosity level).</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="c"># See https://docs.microsoft.com/visualstudio/msbuild/msbuild-command-line-reference</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="l">msbuild /m /p:Configuration=${{env.BUILD_CONFIGURATION}} /p:DeployOnBuild=true /p:PublishProfile=${{env.BUILD_CONFIGURATION}} ${{env.SOLUTION_FILE_PATH}}</span><span class="w">
</span></span></span></code></pre></div><h2 id="deployment">Deployment</h2>
<p>The majority of the work was in getting the solution deployed, i.e.:</p>
<ul>
<li>Creating all the required Azure resources using an ARM template</li>
<li>Deploying the website via MSDeploy.</li>
</ul>
<p>In Azure DevOps I had tasks to manage these steps; for GitHub Actions, though some equivalent actions exist, I also had to write some scripts.</p>
<p>I ended up putting the actions required for the ARM and solution deployment in <a href="https://docs.github.com/en/actions/using-workflows/reusing-workflows">reusable workflows</a>, so I could call the steps at multiple points in my workflows (for the test deployment and the production deployment) without repetition.</p>
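<p>Calling a reusable workflow from the main workflow is then just a job with a uses: reference; a minimal sketch (the workflow file name and environment name here are illustrative):</p>
<pre><code class="language-yaml">jobs:
  Deploy-Test-ARM:
    # call the reusable ARM deployment workflow for the Test environment
    uses: ./.github/workflows/arm-deploy.yml
    with:
      environment: Test
    secrets: inherit  # pass all the caller's secrets to the reusable workflow
</code></pre>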
<h3 id="arm">ARM</h3>
<p>The workflow for the ARM deployment was as follows. Nothing that special; the key points to note are:</p>
<ul>
<li>You have to use the <a href="https://github.com/marketplace/actions/azure-login">Azure Login action</a>; this in effect replaces the Azure Pipelines service connection.</li>
<li>I had to use a script to check that the Azure resource group existed prior to the deployment. In Azure Pipelines the ARM task will create the resource group if it is not present, but this is not so with the GitHub Action.</li>
<li>I inject all my ARM parameters inline (as opposed to via a file); this was just to keep the same pattern as had been used in Azure DevOps.</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Reuseable workflow to publish ARM to Azure resource group</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">on</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">workflow_call</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">string</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># all secrets inherited    </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">Integration-ARM</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">windows-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">environment</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">${{ inputs.environment }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Download ARM Build Artifact</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">actions/download-artifact@v3.0.1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">id</span><span class="p">:</span><span class="w"> </span><span class="l">arm-download</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="c"># Artifact name</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">ARM</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l">./ARM</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Azure Login</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">Azure/login@v1.4.6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">creds</span><span class="p">:</span><span class="w"> </span><span class="l">${{ secrets.AZURE_CREDENTIALS }}     </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Ensure Azure Resource Group is created</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        $rgexists = az group exists -n ${{ secrets.AzureResourceGroup}}
</span></span></span><span class="line"><span class="cl"><span class="sd">        if ($rgexists -eq &#39;false&#39;) {
</span></span></span><span class="line"><span class="cl"><span class="sd">          az group create --name ${{ secrets.AzureResourceGroup}} --location ${{ secrets.AZURELOCATION }}
</span></span></span><span class="line"><span class="cl"><span class="sd">          write-host &#34;Creating Azure Resource Group&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        }</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shell</span><span class="p">:</span><span class="w"> </span><span class="l">pwsh </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Deploy Azure Resource Manager (ARM) Template</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">Azure/arm-deploy@v1.0.8</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">scope</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;resourcegroup&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">subscriptionId</span><span class="p">:</span><span class="w"> </span><span class="l">${{ secrets.AZURESUBSCRIPTION }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">region</span><span class="p">:</span><span class="w"> </span><span class="l">${{ secrets.AZURELOCATION }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">resourceGroupName</span><span class="p">:</span><span class="w"> </span><span class="l">${{ secrets.AzureResourceGroup}}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">template</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;.\\ARM\\Templates\\WebSiteSQLDatabase.json&#39;</span><span class="w"> 
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">deploymentMode</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Incremental&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">parameters</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;hostingPlanName=&#34;${{ secrets.HostingPlanName}}&#34; hostingPlanSku=&#34;${{ secrets.hostingPlanSku}}&#34; hostingPlanCapacity=&#34;${{ secrets.hostingPlanCapacity}}&#34; webSiteName=&#34;${{ secrets.Sitename}}&#34; sqlserverName=&#34;${{ secrets.sqlservername}}&#34; sqlServerAdminLogin=&#34;${{ secrets.SQLUser}}&#34; sqlServerAdminPassword=&#34;${{ secrets.SQLPassword}}&#34; databaseName=&#34;${{ secrets.databasename}}&#34; collation=&#34;SQL_Latin1_General_CP1_CI_AS&#34; edition=&#34;Standard&#34; maxSizeBytes=&#34;1073741824&#34; requestedServiceObjectiveName=&#34;S0&#34; appInsightsLocation=&#34;${{ secrets.AzureLocation}}&#34; VersionTag=&#34;1.2.3&#34;&#34; DeploymentDate=&#34;2022-10-28&#34;&#34; EnvironmentTag=&#34;tag&#34;&#39;</span><span class="w">
</span></span></span></code></pre></div><h3 id="solution">Solution</h3>
<p>The MSDeploy step was more problematic. It is fair to say this is not a currently fashionable technology; web deployment has very much moved to the &lsquo;copy a zip file&rsquo; approach. There was no GitHub Action available to run MSDeploy against Azure, so I had to work out the parameters and script it myself.</p>
<p>Again, the key points to note:</p>
<ul>
<li>I could not find a GitHub Action that would automatically update a configuration file, replacing tokens with values from environment variables and secrets. There are a few that do part of the job, but not all of it. This is something I might well write myself, but for now a script that replaces the tokens did the job. And yes, I know it is probably better practice to set these values in the Azure WebApp directly, but as I said, I was aiming for a like-for-like replacement.</li>
<li>The MSDeploy step relies on pulling down the publish profile and then calling the MSDeploy EXE that is present on the GitHub-hosted runner. This took a while to get right, but once done it has proved reliable.</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Reuseable workflow to publish web solution to Azure WebApp</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">on</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">workflow_call</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">inputs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">environment</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">required</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">string</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># all secrets inherited    </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">Integration-ARM</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">windows-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">environment</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">${{ inputs.environment }}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Download Web Deploy Solution Build Artifact</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">actions/download-artifact@v3.0.1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="c"># Artifact name</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Webdeploy-Package&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l">./WebDeploy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Replace tokens in configuration file</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        $file = &#34;.\WebDeploy\FabrikamFiber.Web.SetParameters.xml&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = Get-Content -Path $file
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = $filecontent -replace &#34;__Sitename__&#34;, &#34;${{secrets.Sitename}}&#34; 
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = $filecontent -replace &#34;__LOCATION__&#34;, &#34;${{secrets.LOCATION}}&#34; 
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = $filecontent -replace &#34;__GENERATETESTDATA__&#34;, &#34;${{secrets.GENERATETESTDATA}}&#34; 
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = $filecontent -replace &#34;__sqlservername__&#34;, &#34;${{secrets.sqlservername}}&#34; 
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = $filecontent -replace &#34;__databasename__&#34;, &#34;${{secrets.databasename}}&#34; 
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = $filecontent -replace &#34;__SQLUser__&#34;, &#34;${{secrets.SQLUser}}&#34; 
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent = $filecontent -replace &#34;__SQLPassword__&#34;, &#34;${{secrets.SQLPassword}}&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">        $filecontent | Out-File $file
</span></span></span><span class="line"><span class="cl"><span class="sd">        cat $file          </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shell</span><span class="p">:</span><span class="w"> </span><span class="l">pwsh</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Azure Login</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">Azure/login@v1.4.6</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">with</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">creds</span><span class="p">:</span><span class="w"> </span><span class="l">${{ secrets.AZURE_CREDENTIALS }}     </span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;Deploy web site with MSDeploy&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">        $publishProfile = az webapp deployment list-publishing-profiles --resource-group ${{ secrets.AzureResourceGroup}} --name ${{ secrets.Sitename }} --query &#34;[?publishMethod==&#39;MSDeploy&#39;]&#34; --subscription &#34;${{ secrets.AZURESUBSCRIPTION}}&#34; | convertfrom-json
</span></span></span><span class="line"><span class="cl"><span class="sd">        $shortPath = (New-Object -ComObject Scripting.FileSystemObject).GetFolder(&#34;./WebDeploy&#34;).ShortPath  
</span></span></span><span class="line"><span class="cl"><span class="sd">        &amp; &#34;C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe&#34; -verb:sync -source:package=&#34;$shortpath\FabrikamFiber.Web.zip&#34; -setParamFile:&#34;$shortpath\FabrikamFiber.Web.SetParameters.xml&#34; -dest:auto,ComputerName=&#34;https://$($publishProfile.msdeploySite).scm.azurewebsites.net/msdeploy.axd?site=$($publishProfile.msdeploySite)&#34;,UserName=$($publishProfile.userName),Password=$($publishProfile.userPWD),AuthType=&#39;Basic&#39; -verbose -debug -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">shell</span><span class="p">:</span><span class="w"> </span><span class="l">pwsh</span><span class="w">
</span></span></span></code></pre></div><h2 id="summary">Summary</h2>
<p>So I now have the core of my &lsquo;Living the Dream&rsquo; demo on GitHub. There is, of course, more I can do, but it is a good start and has been a good learning experience.</p>
<p>This form of activity is something I would recommend to anyone trying to get their head around the intricacies of GitHub, or any technology new to them. I think you always learn more when trying to do your own project, as opposed to just following a lab.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Ignite 2022 Azure DevOps &amp; GitHub Announcements - GitHub Advanced Security comes to Azure DevOps</title>
      <link>https://blog.richardfennell.net/posts/ignite-2022-azure-devops-and-github-announcements/</link>
      <pubDate>Wed, 12 Oct 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ignite-2022-azure-devops-and-github-announcements/</guid>
      <description>&lt;p&gt;Today at &lt;a href=&#34;https://ignite.microsoft.com/&#34;&gt;Microsoft&amp;rsquo;s Ignite Conference&lt;/a&gt; there have been some very interesting announcements related to Azure DevOps and GitHub.&lt;/p&gt;
&lt;p&gt;In the recent past, I have seen confusion from our clients as to what Microsoft&amp;rsquo;s recommended DevOps solution is, given they have both Azure DevOps and GitHub.&lt;/p&gt;
&lt;p&gt;It is true that Microsoft have said, and continue to say, that GitHub is the &amp;lsquo;north star&amp;rsquo;, the long-term destination for all users. However, that does not help clients today. Many of mine ask &amp;lsquo;I am using Azure DevOps, but all Microsoft seem to talk about is GitHub; is Azure DevOps dead?&amp;rsquo;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today at <a href="https://ignite.microsoft.com/">Microsoft&rsquo;s Ignite Conference</a> there have been some very interesting announcements related to Azure DevOps and GitHub.</p>
<p>In the recent past, I have seen confusion from our clients as to what Microsoft&rsquo;s recommended DevOps solution is, given they have both Azure DevOps and GitHub.</p>
<p>It is true that Microsoft have said, and continue to say, that GitHub is the &lsquo;north star&rsquo;, the long-term destination for all users. However, that does not help clients today. Many of mine ask &lsquo;I am using Azure DevOps, but all Microsoft seem to talk about is GitHub; is Azure DevOps dead?&rsquo;.</p>
<p>Well, today&rsquo;s announcements in the conference session <a href="https://ignite.microsoft.com/en-US/sessions/8847d725-4863-4cea-961f-52e1df342709">&lsquo;Accelerate innovation with the world&rsquo;s most complete cloud developer platform&rsquo;</a> and the blog post <a href="https://devblogs.microsoft.com/devops/integrate-security-into-your-developer-workflow-with-github-advanced-security-for-azure-devops/">&lsquo;Integrate security into your developer workflow with GitHub Advanced Security for Azure DevOps&rsquo;</a> go a long way towards answering that question, and positively.</p>
<p>&lsquo;Cool features&rsquo; from GitHub Enterprise, specifically GitHub Advanced Security, are coming to Azure DevOps, along with other Azure DevOps-specific investments. This greatly reinforces the &lsquo;better together&rsquo; story for Azure DevOps and GitHub, and shows that Azure DevOps has a future.</p>
<p>These changes could be taken two ways, depending on your position</p>
<ul>
<li>You could argue it makes it much clearer for both existing Azure DevOps and GitHub users to see their road-map, with both products sharing a common set of shared services</li>
<li>Conversely, you could argue it muddies the waters and slows the inevitable move to GitHub</li>
</ul>
<p>I suspect the position you take depends on whether you are an Azure DevOps or a GitHub Enterprise user.</p>
<p>If this is an area of interest, I will be talking more about these changes at Black Marble&rsquo;s next free webinar titled <a href="https://www.blackmarble.com/events/43">&lsquo;Which is best for me&hellip; GitHub, Azure DevOps, or Better Together?&rsquo;</a> on 26th October 2022.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SonarCloud PR branch analysis when the main/trunk branch has not been analysed</title>
      <link>https://blog.richardfennell.net/posts/sonarcloud-pr-analysis-when-the-main-branch-has-not-been-analysed/</link>
      <pubDate>Sat, 01 Oct 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sonarcloud-pr-analysis-when-the-main-branch-has-not-been-analysed/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://sonarcloud.io&#34;&gt;SonarCloud&lt;/a&gt; (and it&amp;rsquo;s on premise equivalent &lt;a href=&#34;https://www.sonarqube.org/&#34;&gt;SonarQube&lt;/a&gt;) understand the concept of Git branching and PRs (in various platforms, in my case Azure DevOps was the important one). This means you can &lt;a href=&#34;https://docs.sonarcloud.io/improving/pull-request-analysis/&#34;&gt;block the completion of a PR&lt;/a&gt; if the new code in the branch/PR does not meet the SonarCloud Quality Gate. A great way to stop the addition of technical debt.&lt;/p&gt;
&lt;p&gt;However, I recently found a problem when starting to use SonarCloud in an older codebase. You cannot do SonarCloud analysis of a child branch before the main/trunk has been analysed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://sonarcloud.io">SonarCloud</a> (and it&rsquo;s on premise equivalent <a href="https://www.sonarqube.org/">SonarQube</a>) understand the concept of Git branching and PRs (in various platforms, in my case Azure DevOps was the important one). This means you can <a href="https://docs.sonarcloud.io/improving/pull-request-analysis/">block the completion of a PR</a> if the new code in the branch/PR does not meet the SonarCloud Quality Gate. A great way to stop the addition of technical debt.</p>
<p>However, I recently found a problem when starting to use SonarCloud in an older codebase. You cannot do SonarCloud analysis of a child branch before the main/trunk has been analysed.</p>
<p>This is most likely to be an issue because you are using Azure DevOps YAML Pipelines and must commit your revised pipeline, the one that triggers the SonarCloud analysis, via a PR, i.e. you cannot commit directly to main/trunk.</p>
<p>However, in my case I was using Azure DevOps Classic Builds, and the problem arose because the work I was doing was to move a codebase from an unsupported version of .NET to .NET 4.8. On the build agents available to me I had no means to build the old codebase, and I was not minded to provision agents just to do that job. So I looked for a way to analyse the main/trunk branch to unblock my PR analysis.</p>
<p>Normally SonarCloud analysis is done as part of the automated build process, but there are <a href="https://docs.sonarcloud.io/advanced-setup/ci-based-analysis/sonarscanner-cli/">CLI tools</a> to manage it too. I decided to use these in the quickest, dirtiest way I could think of:</p>
<ol>
<li>I cloned my legacy repo (you never use the code in the repo, but it is needed so SonarCloud can detect the branch. You could possibly use an empty git repo as an alternative)</li>
<li>I switched to the main branch and created a folder off the root</li>
<li>In this folder I created a new dotnet console app</li>
<li>From the root of the repo I ran the SonarCloud <a href="https://docs.sonarcloud.io/advanced-setup/ci-based-analysis/sonarscanner-for-net/">dotnet CLI Scanner</a>, passing in the details required to create a new SonarCloud project</li>
</ol>
<p><code>dotnet sonarscanner begin /k:&quot;AKEY&quot; /o:&quot;&lt;my sonarcloud org name&gt;&quot; /d:sonar.login=&quot;&lt;your  access token&gt;&quot; /d:sonar.host.url=&quot;https://sonarcloud.io&quot; /n:&quot;Project Name&quot; </code></p>
<ol start="5">
<li>I then built the dotnet project</li>
<li>Finally I completed the SonarCloud analysis</li>
</ol>
<p><code>dotnet sonarscanner end /d:sonar.login=&quot;&lt;your  access token&gt;&quot;</code></p>
<p>This created an analysis of the main branch with the single dotnet <code>program.cs</code> file</p>
<p>It is then possible to do the analysis of the child branches, with all the real code. What code is detected as &lsquo;new code&rsquo; in this branch by SonarCloud will be dependent on your <a href="https://docs.sonarcloud.io/improving/new-code-definition/">settings</a>. So even though the code on this first PR branch will never have been seen by SonarCloud, it might not be detected as new due to the files&rsquo; last-modified dates.</p>
<p>I realise I could have locally built the legacy code on my PC (removing steps 2 &amp; 3, and replacing step 5 with an MSBuild), but that would have required installing legacy frameworks, which I did not wish to do.</p>
<p>So, a dirty hack, but it got me out of a hole, and once the PR was completed SonarCloud showed all the code metrics I would expect.</p>
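<p>Pulling the numbered steps together, the whole hack looks roughly like this. This is only a sketch: the repo URL, <code>AKEY</code>, org name and token are placeholders, and it assumes the <code>dotnet-sonarscanner</code> global tool is installed.</p>

```shell
# Sketch only; all names, URLs and the token are placeholders.
# Assumes: dotnet tool install --global dotnet-sonarscanner
git clone https://dev.azure.com/MyOrg/MyProject/_git/legacy-repo
cd legacy-repo
git checkout main

# A dummy project, just so there is something to build and analyse
mkdir placeholder
(cd placeholder && dotnet new console)

dotnet sonarscanner begin /k:"AKEY" /o:"MyOrg" \
  /d:sonar.login="<access token>" \
  /d:sonar.host.url="https://sonarcloud.io" /n:"Project Name"
dotnet build ./placeholder
dotnet sonarscanner end /d:sonar.login="<access token>"
```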
<h2 id="updated-3rd-oct-2022">Updated: 3rd Oct 2022</h2>
<p>Just for completeness, these are the command lines to do a local MSBuild of an existing codebase using the CLI Scanner. This is a better option if you have a dev PC available to build the legacy codebase but no suitable build agent (and don&rsquo;t want to make your PC a temporary build agent):</p>
<ol>
<li>Clone the legacy repo</li>
<li>From the root of the repo run the SonarCloud <a href="https://docs.sonarcloud.io/advanced-setup/ci-based-analysis/sonarscanner-for-net/">MSBuild CLI Scanner</a>, passing in the details required to create a new SonarCloud project</li>
</ol>
<p><code>sonarscanner.msbuild.exe begin /k:&quot;AKEY&quot; /o:&quot;&lt;my sonarcloud org name&gt;&quot; /d:sonar.login=&quot;&lt;your  access token&gt;&quot; /d:sonar.host.url=&quot;https://sonarcloud.io&quot; /n:&quot;Project Name&quot; </code></p>
<ol start="3">
<li>Build the legacy project(s) with MSBuild</li>
</ol>
<p><code>'C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\msbuild' .\MyProjectFolder\MyProject.csproj</code></p>
<ol start="4">
<li>Complete the SonarCloud analysis</li>
</ol>
<p><code>sonarscanner.msbuild.exe end /d:sonar.login=&quot;&lt;your  access token&gt;&quot;</code></p>
]]></content:encoded>
    </item>
    <item>
      <title>Showing OWASP Dependency Check results in SonarCloud</title>
      <link>https://blog.richardfennell.net/posts/linking-dependencycheck-and-sonarcloud/</link>
      <pubDate>Thu, 29 Sep 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/linking-dependencycheck-and-sonarcloud/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=dependency-check.dependencycheck&#34;&gt;OWASP Dependency Checker&lt;/a&gt; can be used to check for known vulnerabilities in a variety of eco-systems. This tool produces a HTML based report, but I wanted to expose the issues in &lt;a href=&#34;https://sonarcloud.io&#34;&gt;SonarCloud&lt;/a&gt;. The problem is that SonarCloud does not allow ingestion of OWASP Dependency Checker vulnerabilities out the box.&lt;/p&gt;
&lt;p&gt;However, there is the option to ingest &lt;a href=&#34;https://docs.sonarcloud.io/enriching/generic-issue-data/&#34;&gt;Generic Issue Data&lt;/a&gt;. To make use of this I just needed to change my XML results file to a JSON format&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="https://marketplace.visualstudio.com/items?itemName=dependency-check.dependencycheck">OWASP Dependency Checker</a> can be used to check for known vulnerabilities in a variety of eco-systems. This tool produces a HTML based report, but I wanted to expose the issues in <a href="https://sonarcloud.io">SonarCloud</a>. The problem is that SonarCloud does not allow ingestion of OWASP Dependency Checker vulnerabilities out the box.</p>
<p>However, there is the option to ingest <a href="https://docs.sonarcloud.io/enriching/generic-issue-data/">Generic Issue Data</a>. To make use of this I just needed to convert my XML results file to the JSON format</p>
<script src="https://gist.github.com/rfennell/7a80189659f7fe128f29c71962b11c8e.js"></script>
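<p>For reference, the Generic Issue Data format the conversion targets is a JSON document of roughly this shape (the field values here are illustrative only; see the SonarCloud documentation for the full schema):</p>

```json
{
  "issues": [
    {
      "engineId": "OWASPDependencyCheck",
      "ruleId": "CVE-2021-00000",
      "severity": "CRITICAL",
      "type": "VULNERABILITY",
      "primaryLocation": {
        "message": "Known vulnerability in a referenced package",
        "filePath": "MyProject/packages.config",
        "textRange": { "startLine": 1 }
      }
    }
  ]
}
```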
<p>Once this was done the only remaining step was to tell SonarCloud where the converted JSON file was</p>
<pre tabindex="0"><code># Additional properties that will be passed to the scanner, 
# Put one key=value per line, example:
# sonar.exclusions=**/*.bin
sonar.cpd.exclusions=**/AssemblyInfo.cs,**/*.g.cs, **/Migrations/**/*.cs
sonar.cs.vscoveragexml.reportsPaths=$(Agent.TempDirectory)/**/*.coveragexml
sonar.cs.vstest.reportsPaths=$(Agent.TempDirectory)/**/*.trx
# the full path location of the converted file
sonar.externalIssuesReportPaths=dependancy-results.json
</code></pre><p>Now the OWASP Dependency Checker vulnerabilities appear in SonarCloud, but with a few limitations</p>
<ul>
<li>Issues cannot be managed within SonarCloud; for instance, there is no ability to mark them as False Positive.</li>
<li>The activation of the rules that raise these issues cannot be managed within SonarCloud.</li>
<li>External rules are not visible on the Rules page or reflected in any Quality Profile.</li>
<li>My script only does a simple mapping of the different issue formats, but it could be modified to meet any other specific needs.</li>
<li>Issues have to be mapped to a file already under analysis; you can&rsquo;t have general project issues.</li>
</ul>
<p>That all said, I think this is a nice solution to having a single dashboard for monitoring all my software supply chain issues.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Clearing the AssignedTo field on an Azure DevOps Work items with the AZ CLI</title>
      <link>https://blog.richardfennell.net/posts/clearing-the-assigned-to-field-with-az-cli/</link>
      <pubDate>Tue, 06 Sep 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/clearing-the-assigned-to-field-with-az-cli/</guid>
      <description>&lt;p&gt;In the past I &lt;a href=&#34;https://github.com/rfennell/AzureDevOpsPowershell/tree/a11eda67cfeb2ff34712b1423bdcb8f2cac28b0c/REST&#34;&gt;have written most of my Azure DevOps scripts&lt;/a&gt; calling the Azure DevOps REST API from PowerShell. This has worked, but did involve a lot of JSON payload handling.&lt;/p&gt;
&lt;p&gt;A better option these days is to look at the &lt;a href=&#34;https://docs.microsoft.com/en-us/cli/azure/&#34;&gt;AZ CLI&lt;/a&gt; and specifically the &lt;code&gt;azure-devops&lt;/code&gt; extension, as this does much of the heavy lifting for you.&lt;/p&gt;
&lt;p&gt;This does not mean that everything is plain sailing though. Today I hit a problem that took me a while to solve.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In the past I <a href="https://github.com/rfennell/AzureDevOpsPowershell/tree/a11eda67cfeb2ff34712b1423bdcb8f2cac28b0c/REST">have written most of my Azure DevOps scripts</a> calling the Azure DevOps REST API from PowerShell. This has worked, but did involve a lot of JSON payload handling.</p>
<p>A better option these days is to look at the <a href="https://docs.microsoft.com/en-us/cli/azure/">AZ CLI</a> and specifically the <code>azure-devops</code> extension, as this does much of the heavy lifting for you.</p>
<p>This does not mean that everything is plain sailing though. Today I hit a problem that took me a while to solve.</p>
<p>Specifically, I wanted to use the AZ CLI to remove the assigned identity from a work item.</p>
<p>There is a parameter to assign the WI owner on the <code>az boards work-item update</code> command</p>
<p><code>az boards work-item update --id 123 --assigned-to myname</code></p>
<p>But I could not find a way to pass a null/empty/unassigned to clear this value.</p>
<p>In the end I found the answer was to use the following form, editing the field by name</p>
<p><code>az boards work-item update --id 123 --fields &quot;System.AssignedTo=&quot;</code></p>
<p>Note that there is nothing after the <code>=</code>: no space, null, or empty quotes (and yes, I had tried all of those first)</p>
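<p>As a hypothetical extension, the same trick can be combined with <code>az boards query</code> to bulk-clear the field, e.g. for every work item assigned to a leaver. A sketch only: the WIQL and the account name are placeholders.</p>

```shell
# Hypothetical sketch: clear the AssignedTo field on every work item
# currently assigned to a (placeholder) user account.
ids=$(az boards query \
  --wiql "SELECT [System.Id] FROM WorkItems WHERE [System.AssignedTo] = 'leaver@example.com'" \
  --query "[].id" -o tsv)

for id in $ids; do
  az boards work-item update --id "$id" --fields "System.AssignedTo="
done
```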
<p>Hope this post means someone gets to this solution quicker than I did.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Book Review &#34;Accelerate DevOps with GitHub&#34; by Michael Kaufmann</title>
      <link>https://blog.richardfennell.net/posts/book-review-accelerate-devops-with-github/</link>
      <pubDate>Sun, 04 Sep 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/book-review-accelerate-devops-with-github/</guid>
      <description>&lt;p&gt;The contents  of this book is not at all what I was expecting from the title ‘&lt;a href=&#34;https://www.amazon.com/Accelerate-DevOps-GitHub-software-performance-ebook-dp-B0B4DW7NSL/dp/B0B4DW7NSL/ref=mt_other?_encoding=UTF8&amp;amp;me=&amp;amp;qid=1660231179&#34;&gt;Accelerate DevOps with GitHub&lt;/a&gt;’. Usually books that aim to provide up to date walkthroughs for a specific current tools tend to not place them within the large tapestry of the ecosystem. This is not the case with this book from &lt;a href=&#34;https://github.com/wulfland&#34;&gt;Michael Kaufmann&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Each section is delivered in broadly three parts, which I found really effective&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The contents  of this book is not at all what I was expecting from the title ‘<a href="https://www.amazon.com/Accelerate-DevOps-GitHub-software-performance-ebook-dp-B0B4DW7NSL/dp/B0B4DW7NSL/ref=mt_other?_encoding=UTF8&amp;me=&amp;qid=1660231179">Accelerate DevOps with GitHub</a>’. Usually books that aim to provide up to date walkthroughs for a specific current tools tend to not place them within the large tapestry of the ecosystem. This is not the case with this book from <a href="https://github.com/wulfland">Michael Kaufmann</a>.</p>
<p>Each section is delivered in broadly three parts, which I found really effective</p>
<ul>
<li>the history as to why we got to where we are</li>
<li>theory as to how we should use a process/technology</li>
<li>and what practical steps we can take to best adopt this practice with GitHub within our organisation.</li>
</ul>
<p>Taken together, it gives coherent examples of what does, and does not, work, based on different team sizes and common external/company constraints.</p>
<p>As with all books based on a technology stack, in this case GitHub, it is going to age quickly. Cloud-based tools move on and walkthroughs become outdated. However, because of the emphasis this book places on background history and the core theories as to why we use a technology, it will stand the test of time better than most.</p>
<p>So I think it is well worth a read, irrespective of your experience with DevOps and GitHub. There is something in it for any individual or company, no matter where you are on your DevOps journey.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why has my HP printer become a DVD? A fix for HP USB Printers not being detected on Windows 10/11</title>
      <link>https://blog.richardfennell.net/posts/why-has-my-printer-become-a-drive/</link>
      <pubDate>Sat, 30 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-has-my-printer-become-a-drive/</guid>
      <description>&lt;p&gt;Today I made the fateful mistake of offering to try to fix a family members home printer. Family IT, and especially printers, the bane of all IT Professionals.&lt;/p&gt;
&lt;h1 id=&#34;the-problem&#34;&gt;The Problem&lt;/h1&gt;
&lt;p&gt;The system in question was a 10-year-old setup made up of a Dell Optiplex desktop currently running Windows 10 and an HP M1132 LaserJet multifunction printer. This had all been working until a couple of weeks ago, when the PC failed to detect the printer. The printer was uninstalled, assuming that would fix the problem, but the Add Printer tools could not even find the printer.&lt;/p&gt;
      <content:encoded><![CDATA[<p>Today I made the fateful mistake of offering to try to fix a family members home printer. Family IT, and especially printers, the bane of all IT Professionals.</p>
<h1 id="the-problem">The Problem</h1>
<p>The system in question was a 10-year-old setup made up of a Dell Optiplex desktop currently running Windows 10 and an HP M1132 LaserJet multifunction printer. This had all been working until a couple of weeks ago, when the PC failed to detect the printer. The printer was uninstalled, assuming that would fix the problem, but the Add Printer tools could not even find the printer.</p>
<p>I tried all the normal things: power cycling everything, swapping the USB cable, reinstalling drivers, testing the printer on my Windows 11 laptop, all to no effect.</p>
<p>I was close to giving up, assuming the USB circuitry on the printer must have failed. However, just before I gave up, I noticed that when you plugged the USB printer cable in, a DVD was detected by the PC. HP used this means, or at least did 10 years ago, to ship the drivers and other HP bloatware they deemed necessary. Until recently, after this DVD was detected, Windows went on to detect the printer. It was this second step that had failed, and I suspect the cause was a recent Windows (10 &amp; 11) Security Update.</p>
<h1 id="the-solution">The Solution</h1>
<p>Turns out <a href="https://h30434.www3.hp.com/t5/LaserJet-Printing/Re-Windows-Recognize-my-printer-as-cd-drive/td-p/6845785">the solution</a> was in the HP Support Forum, but it took some finding as the forums are full of wrong answers, half-solutions and confusion.</p>
<p>Basically, you need to run a <a href="https://ftp.hp.com/pub/softlib/software12/COL53553/Im-129228-1/LJM1130_M1210_SI_Utility.exe">utility</a> that disables the HP Smart Install DVD feature, so the printer does not appear as a DVD. Once this was done, the Windows Add Printer Wizard worked as expected.</p>
<p>So that was two hours of my life I won&rsquo;t get back, and I do wonder how many older HP printers will end up in landfill due to what I assume is a Windows Security Update to block some autorun vulnerability?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why has my MVC site stopped working?</title>
      <link>https://blog.richardfennell.net/posts/why-has-my-mvc-site-stopped-working/</link>
      <pubDate>Thu, 28 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-has-my-mvc-site-stopped-working/</guid>
      <description>&lt;p&gt;I am currently upgrading a .NET Core 3.1(LTS) MVC website to run on .NET 6, stepping it through intermediate .NET versions to make sure some EF Migrations were done correctly.&lt;/p&gt;
&lt;p&gt;Everything upgraded without any major issue until the final step to .NET 6. As soon as I did this my MVC pages failed to render, but no error was reported&lt;/p&gt;
&lt;p&gt;After much fiddling the solution to the problem was pointed out to me by one of my colleagues. The way MVC is shipped with .NET (Core) has changed from being a NuGet package to being part of the framework. This happened a good while ago, but became a blocking issue with .NET 6.&lt;/p&gt;
      <content:encoded><![CDATA[<p>I am currently upgrading a .NET Core 3.1(LTS) MVC website to run on .NET 6, stepping it through intermediate .NET versions to make sure some EF Migrations were done correctly.</p>
<p>Everything upgraded without any major issue until the final step to .NET 6. As soon as I did this, my MVC pages failed to render, but no error was reported.</p>
<p>After much fiddling the solution to the problem was pointed out to me by one of my colleagues. The way MVC is shipped with .NET (Core) has changed from being a NuGet package to being part of the framework. This happened a good while ago, but became a blocking issue with .NET 6.</p>
<p>The fix was, in the end, simple: remove the NuGet package references to the Microsoft.AspNetCore.* 2.2.0 packages. Once this was done the unedited MVC pages loaded using the previous working methods. I did do some further tidying to use the current MVC controller methods, but this was not essential as the old methods are now just aliases for the new methods.</p>
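<p>In project-file terms the change is of this shape; a sketch only, as the exact package list will vary by project. The <code>Microsoft.NET.Sdk.Web</code> SDK references the ASP.NET Core shared framework implicitly, so the old MVC package references can simply go:</p>

```xml
<!-- After the fix: a .NET 6 web project file with the
     Microsoft.AspNetCore.* 2.2.0 PackageReference lines deleted.
     The Microsoft.NET.Sdk.Web SDK references the ASP.NET Core
     framework implicitly, so no separate MVC packages are needed. -->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
  <!-- Previously this file also contained references such as
       <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.2.0" />
       which had to be removed. -->
</Project>
```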
<p>So the lesson learnt?</p>
<p>Pay more attention to the various &lsquo;migrate from one version of .NET Core to a later version&rsquo; documents, and don&rsquo;t put off making changes for soon-to-be-deprecated features, as it will trip you up in a confusing manner later.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot retrieve Umbraco node</title>
      <link>https://blog.richardfennell.net/posts/cannot-retrieve-umbraco-node/</link>
      <pubDate>Mon, 25 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-retrieve-umbraco-node/</guid>
      <description>&lt;p&gt;We recently hit a problem when we tried to edit a page on anold Umbraco 7 instance. When we tried to edit a page in the Umbraco web UI we got the error &amp;lsquo;failed to retrieve data for content id 1119&amp;rsquo;&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Umbraco Error&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/images/rfennell/umbracoerror.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;Now this page had been created a long time ago by a user who had since left the company, and this was the root cause. It seems there is an issue with Umbraco and deleted users. To avoid this problem, it is actually recommended you disable old Umbraco user accounts as opposed to deleting them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We recently hit a problem when we tried to edit a page on anold Umbraco 7 instance. When we tried to edit a page in the Umbraco web UI we got the error &lsquo;failed to retrieve data for content id 1119&rsquo;</p>
<p><img alt="Umbraco Error" loading="lazy" src="/images/rfennell/umbracoerror.png"></p>
<p>Now this page had been created a long time ago by a user who had since left the company, and this was the root cause. It seems there is an issue with Umbraco and deleted users. To avoid this problem, it is actually recommended you disable old Umbraco user accounts as opposed to deleting them.</p>
<p>The fix was a bit of SQL to reassign the problem node to a valid user ID</p>
<pre tabindex="0"><code>update [dbo].[umbracoNode] set nodeUser = 8 where id = 1119
</code></pre><p>Once this script was run, and the Azure-hosted Web App restarted to reload the cache, I was able to edit the problematic node</p>
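<p>For completeness, it is worth checking whether any other nodes are affected before patching them one by one. A sketch, assuming the Umbraco 7 schema where <code>umbracoUser</code> holds the user accounts:</p>

```sql
-- Hypothetical sketch: list nodes whose nodeUser no longer
-- matches an existing user account
select n.id, n.text, n.nodeUser
from [dbo].[umbracoNode] n
left join [dbo].[umbracoUser] u on u.id = n.nodeUser
where u.id is null
```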
]]></content:encoded>
    </item>
    <item>
      <title>Social Media Posts after Migrating from WordPress to Hugo Static Pages</title>
      <link>https://blog.richardfennell.net/posts/social-media-posts-after-migrating-from-wordpress-to-hugo/</link>
      <pubDate>Thu, 07 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/social-media-posts-after-migrating-from-wordpress-to-hugo/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/migrating-from-wordpress-to-hugo/&#34;&gt;posted recently on my experience moving to Hugo from WordPress&lt;/a&gt;. One feature lost in the move were the Wordpress plugins used to automatically post to Twitter and LinkedIn when a new blog post was created. I always found this very useful, so looked or a way to replicate this functionality for static pages.&lt;/p&gt;
&lt;p&gt;The solution I ended up with was &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/logic-apps/&#34;&gt;Azure Logic Apps&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I created a Logic App with a scheduled trigger that checked my blog&amp;rsquo;s RSS feed every 30 minutes. If it found the RSS feed had been updated, it created a Bitly link for the new post&amp;rsquo;s URL, then posted to Email (as a test), Twitter and LinkedIn.&lt;/p&gt;
      <content:encoded><![CDATA[<p>I <a href="https://blogs.blackmarble.co.uk/rfennell/migrating-from-wordpress-to-hugo/">posted recently on my experience moving to Hugo from WordPress</a>. One feature lost in the move were the Wordpress plugins used to automatically post to Twitter and LinkedIn when a new blog post was created. I always found this very useful, so looked or a way to replicate this functionality for static pages.</p>
<p>The solution I ended up with was <a href="https://azure.microsoft.com/en-gb/services/logic-apps/">Azure Logic Apps</a>.</p>
<p>I created a Logic App with a scheduled trigger that checked my blog&rsquo;s RSS feed every 30 minutes. If it found the RSS feed had been updated, it created a Bitly link for the new post&rsquo;s URL, then posted to Email (as a test), Twitter and LinkedIn.</p>
<p>The really nice thing is this is done with built-in Logic App connectors, so it was quick and easy to create.</p>
<p><img alt="Logic App" loading="lazy" src="/images/rfennell/LogicAppScreenShot.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Migrating from WordPress to Hugo Static Pages</title>
      <link>https://blog.richardfennell.net/posts/migrating-from-wordpress-to-hugo/</link>
      <pubDate>Fri, 01 Jul 2022 00:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/migrating-from-wordpress-to-hugo/</guid>
      <description>&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;
&lt;p&gt;Over the years, the Black Marble blog server has been hosted on many platforms. &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/10/18/moving-bm-bloggers-from-blogengine-net-to-wordpress/&#34;&gt;Its previous incarnation was WordPress&lt;/a&gt;, running as a network of sites with an aggregated feed. Of late we had found this slow to serve the first page (due to website start-up time) and there was the constant need to keep the instance patched.&lt;/p&gt;
&lt;p&gt;The time had come for a new solution, and we picked &lt;a href=&#34;https://gohugo.io/&#34;&gt;Hugo Static Pages&lt;/a&gt;.&lt;/p&gt;
      <content:encoded><![CDATA[<h1 id="background">Background</h1>
<p>Over the years, the Black Marble blog server has been hosted on many platforms. <a href="https://blogs.blackmarble.co.uk/rfennell/2017/10/18/moving-bm-bloggers-from-blogengine-net-to-wordpress/">Its previous incarnation was WordPress</a>, running as a network of sites with an aggregated feed. Of late we had found this slow to serve the first page (due to website start-up time) and there was the constant need to keep the instance patched.</p>
<p>The time had come for a new solution, and we picked <a href="https://gohugo.io/">Hugo Static Pages</a>.</p>
<h1 id="migration-process">Migration Process</h1>
<p>The process to migrate our content was not as hard as I had feared.</p>
<ol>
<li>The main (aggregate of all active sites) blog was exported using <a href="https://wordpress.com/support/export/">Wordpress&rsquo;s export feature</a> as an XML file.</li>
<li>The read-only archives of blogs of ex-staff were individually exported, as they did not appear in the aggregate export file.</li>
<li>The exported XML content was converted to markdown using <a href="https://github.com/palaniraja/blog2md">blog2md</a>. <strong>Note:</strong> a small edit was made to the <a href="https://gist.github.com/rfennell/0f2768e5e6da0c1eb384e62e2f632116">blog2md tool to place the contents of each sub-blog in a separate folder</a></li>
<li>I wrote a <a href="https://gist.github.com/rfennell/40d43afff81809447d30753056a5e64f">PowerShell script to add an <code>alias</code> entry to each file so the old Wordpress permalinks were still valid</a>.</li>
<li>A new Hugo blog was created using the <a href="https://github.com/chipzoller/hugo-clarity">hugo-clarity theme</a> as this was a nice clean blog style theme.</li>
<li>I copied the exported folders (created by blog2md) into the <code>content</code> folder of the new site</li>
<li>I copied the <code>wp_content/uploads</code> folder structure containing all the Wordpress uploaded images to the <code>static</code> folder of the new site. This allowed any images to be found without the need to edit the <code>&lt;img src=...</code> settings in the post files.</li>
<li>Edited the Hugo site to
<ul>
<li>Update styles/images to our branding</li>
<li>Remove sample site pages we did not need</li>
<li>Add a data structure to define the blogs on our site e.g. name, titles, GitHub IDs etc.</li>
<li>Create a page to list the blogs based on the new data file.</li>
<li>Edit the partial HTML blocks to render the RHS pane as we needed it.</li>
</ul>
</li>
</ol>
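<p>Step 4 (keeping the old WordPress permalinks valid) can be illustrated with a minimal sketch. The real script was PowerShell; this is a shell version of the same idea, and the post content and permalink path are invented for illustration. It inserts an <code>aliases</code> entry just after the opening front-matter delimiter, which Hugo serves as a redirect to the new page.</p>

```shell
# Create a sample converted post (content and path are invented).
cat > post.md <<'EOF'
---
title: "My Post"
date: 2017-10-18
---
Body text.
EOF

# Insert an aliases entry after the opening '---' so Hugo redirects
# the old WordPress permalink to the new page.
awk 'NR==2{print "aliases:"; print "  - /rfennell/2017/10/18/my-post/"}1' post.md > tmp.md \
  && mv tmp.md post.md

head -n 3 post.md
```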
<p>The site could now be tested locally. The next step was to publish it to Azure.</p>
<h1 id="hosting">Hosting</h1>
<p>We had planned to use <a href="https://azure.microsoft.com/en-us/services/app-service/static/">Azure Static Web Apps</a> to do the hosting as these are cheap, fast and <a href="https://docs.microsoft.com/en-us/azure/static-web-apps/publish-hugo">the process is well documented</a>. However, we had a problem: our site was too big.</p>
<p>The largest site allowed in Azure Static Web Apps is 500MB; our blog site was over 1GB with the numerous pages and associated image content. Hugo-generated sites are not small.</p>
<p>This meant we had to host the site in an Azure Web Site. We picked a Linux one, set to run PHP 8. This setting is required because a Linux Web Site must have some execution engine selected, even though you are not using it, in effect enabling nginx. If this is not done the static pages are not served. There is a further issue with this type of hosting, related to nginx&rsquo;s handling of trailing slashes in URLs. The fix suggested <a href="https://www.blimped.nl/fixing-nginx-trailing-slash-issue-on-azure-linux-web-apps/">here</a> addressed the problem.</p>
<p>The move to an Azure Web Site necessitated editing the Azure Static Web App-generated Azure DevOps Pipeline. We ended up with a Pipeline that:</p>
<ol>
<li>Clones the repo</li>
<li>Downloads the Hugo tools</li>
<li>Runs the Hugo command to generate the static pages</li>
<li>Zips up the generated pages</li>
<li>Uses <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep">Bicep</a> to build the Azure Web Site</li>
<li>Uses Zip deployment to push the pages to the site.</li>
</ol>
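<p>A minimal sketch of such a pipeline is below. The task names are real Azure DevOps tasks, but the Hugo version, resource group, service connection, Bicep file and app name are placeholders for illustration:</p>

```yaml
# Hypothetical sketch of the pipeline; names and paths are placeholders.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  # 1. The repo is cloned by the checkout step.
  - checkout: self

  # 2 & 3. Download the Hugo tools and generate the static pages.
  - script: |
      wget -q https://github.com/gohugoio/hugo/releases/download/v0.101.0/hugo_0.101.0_Linux-64bit.deb
      sudo dpkg -i hugo_0.101.0_Linux-64bit.deb
      hugo --minify
    displayName: Build static site with Hugo

  # 4. Zip up the generated 'public' folder.
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: public
      includeRootFolder: false
      archiveFile: $(Build.ArtifactStagingDirectory)/site.zip

  # 5. Use Bicep to build the Azure Web Site.
  - task: AzureCLI@2
    inputs:
      azureSubscription: MyServiceConnection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: >
        az deployment group create
        --resource-group MyRG
        --template-file infra/site.bicep

  # 6. Zip-deploy the pages to the site.
  - task: AzureWebApp@1
    inputs:
      azureSubscription: MyServiceConnection
      appName: my-blog-app
      package: $(Build.ArtifactStagingDirectory)/site.zip
      deploymentMethod: zipDeploy
```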
<h1 id="summary">Summary</h1>
<p>I am really pleased with this migration. We have ended up with a faster site with a far lower running and maintenance cost.</p>
<p>It was nowhere near as hard as I had feared, given we had an aggregated network of sites; and we managed to keep all the old permalinks, with the exception of the RSS feed URL, which changed from <code>/feed/</code> to <code>/feed.xml</code>, but I can live with that. We don&rsquo;t see much RSS usage these days anyway.</p>
<p>So all in all well worth the effort.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Don&#39;t skimp on resources for GHES for demo instances</title>
      <link>https://blog.richardfennell.net/posts/dont-skimp-on-resources-for-ghes-for-demo-instances/</link>
      <pubDate>Thu, 16 Jun 2022 09:53:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dont-skimp-on-resources-for-ghes-for-demo-instances/</guid>
      <description>&lt;p&gt;I wanted to have a look at some GitHub Enterprise Server (GHES) upgrade scenarios so decided to create a quick GHES install on my local test Hyper-V instance. Due to me skimping on resources, and making a typo, creating this instance was much harder than it should have been.&lt;/p&gt;
&lt;p&gt;The first issue was I gave it a tiny data disk, this was down to me making a typo in my GB to Bytes conversion when specifying the size. Interestingly, the GHES setup does not initially complain but sits on the &amp;lsquo;reloading system services&amp;rsquo; stage until it times out. If you check the &lt;em&gt;/setup/config.log&lt;/em&gt; you see many Nomad related 500 errors. A reboot of the VM showed the real problem, the log then showed plenty of out-of-disk space messages.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I wanted to have a look at some GitHub Enterprise Server (GHES) upgrade scenarios so decided to create a quick GHES install on my local test Hyper-V instance. Due to me skimping on resources, and making a typo, creating this instance was much harder than it should have been.</p>
<p>The first issue was I gave it a tiny data disk; this was down to me making a typo in my GB to Bytes conversion when specifying the size. Interestingly, the GHES setup does not initially complain but sits on the &lsquo;reloading system services&rsquo; stage until it times out. If you check the <em>/setup/config.log</em> you see many Nomad-related 500 errors. A reboot of the VM showed the real problem: the log then showed plenty of out-of-disk-space messages.</p>
<p>The &lsquo;reloading system services&rsquo; stage does take a while</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2022/06/image-1024x785.png"></p>
<p>The easiest fix was to just start again with a data disk of a reasonable size.</p>
<p>I next hit problems due to my skimping on resources. I am not sure why I chose to limit them; old habits from using systems with scarce resources, I guess.</p>
<p>I had only given the VM 10GB of memory and 1 vCPU. The Hyper-V host was not production-grade, but could certainly supply more than that.</p>
<ul>
<li>The lack of at least 14GB of memory caused GHES to fail to boot, with a nice clear error message</li>
<li>The single vCPU meant the &lsquo;reloading application services&rsquo; step failed; the <em>/setup/config.log</em> showed the message</li>
</ul>
<pre tabindex="0"><code>Task Group &#34;treelights&#34; (failed to place 1 allocation):  
* Resources exhausted on 1 nodes  
* Dimension &#34;cpu&#34; exhausted on 1 nodes
</code></pre><p>As soon as I stopped the VM, gave it 14GB of memory and multiple vCPUs, and rebooted, the setup completed as expected.</p>
<p>So the top tip is to <a href="https://docs.github.com/en/enterprise-server@3.4/admin/installation/setting-up-a-github-enterprise-server-instance/installing-github-enterprise-server-on-hyper-v#minimum-requirements">read the GHES systems requirements</a> and actually follow them, even if it is just a test/demo instance.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for can&#39;t add second outlook.com account to Outlook Desktop - requires non-existent PIN</title>
      <link>https://blog.richardfennell.net/posts/fix-for-cant-add-second-outlook-com-account-to-outlook-desktop-requires-non-existent-pin/</link>
      <pubDate>Fri, 10 Jun 2022 14:14:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-cant-add-second-outlook-com-account-to-outlook-desktop-requires-non-existent-pin/</guid>
      <description>&lt;p&gt;I was recently trying to set up a copy of the Outlook desktop app with two outlook.com email accounts and hit a strange issue.&lt;/p&gt;
&lt;p&gt;I was able to add the first one without any problems, but when I tried to add the second one I got an error message asking for a PIN; significantly, the same dialog showed the second outlook.com email address.&lt;/p&gt;
&lt;p&gt;This is confusing as I know of no way to set a PIN on an outlook.com account.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was recently trying to set up a copy of the Outlook desktop app with two outlook.com email accounts and hit a strange issue.</p>
<p>I was able to add the first one without any problems, but when I tried to add the second one I got an error message asking for a PIN; significantly, the same dialog showed the second outlook.com email address.</p>
<p>This is confusing as I know of no way to set a PIN on an outlook.com account.</p>
<p>It turns out it is just a badly designed dialog. It wants the PIN of the currently logged-in user on the Windows PC (made worse in my case as the PIN was not what I thought it was and had to be reset).</p>
<p>Once I had a working PIN for the local user login and entered it when prompted, all was OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Adding Job Summary support to my GitHub Release Notes Action</title>
      <link>https://blog.richardfennell.net/posts/adding-job-summary-support-to-my-github-release-notes-action/</link>
      <pubDate>Thu, 19 May 2022 11:03:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/adding-job-summary-support-to-my-github-release-notes-action/</guid>
      <description>&lt;p&gt;A recent addition to GitHub Actions is the ability to &lt;a href=&#34;https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries/&#34;&gt;create a custom job summary&lt;/a&gt;. Using a simple echo command in a script you can write to the job summary&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;steps:
  - name: Adding markdown
    run: echo &amp;#39;### Hello world! :rocket:&amp;#39; &amp;gt;&amp;gt; $GITHUB_STEP_SUMMARY
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now, this got me thinking. I have a well-used custom &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;release notes extension for Azure DevOps&lt;/a&gt;. It has a &lt;a href=&#34;https://github.com/marketplace/actions/generate-release-notes-using-handlebars-template&#34;&gt;GitHub Action equivalent&lt;/a&gt;, but it does not seem to get as much usage. Would adding support for custom Job Summaries make it more useful?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A recent addition to GitHub Actions is the ability to <a href="https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries/">create a custom job summary</a>. Using a simple echo command in a script you can write to the job summary</p>
<pre tabindex="0"><code>steps:
  - name: Adding markdown
    run: echo &#39;### Hello world! :rocket:&#39; &gt;&gt; $GITHUB_STEP_SUMMARY
</code></pre><p>Now, this got me thinking. I have a well-used custom <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">release notes extension for Azure DevOps</a>. It has a <a href="https://github.com/marketplace/actions/generate-release-notes-using-handlebars-template">GitHub Action equivalent</a>, but it does not seem to get as much usage. Would adding support for custom Job Summaries make it more useful?</p>
<p>Well, there was only one way to find out, I added it.</p>
<p>There is now an extra boolean parameter for the action <code>writeToJobSummary</code>. If this is set to <code>true</code> (default is <code>false</code>) the output of the release notes action is written as a custom job summary (as well as still being available as a markdown file as before).</p>
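<p>To illustrate, a workflow step using the action might look something like the following sketch. Note that the <code>uses</code> reference and the template/output parameter names shown here are illustrative placeholders; check the marketplace listing for the action&rsquo;s exact inputs.</p>
<pre tabindex="0"><code># Illustrative sketch only - the action reference and the inputs
# other than writeToJobSummary are placeholders
steps:
  - name: Generate release notes
    uses: rfennell/generate-release-notes-action@v1   # placeholder reference
    with:
      templatefile: release-notes-template.md         # placeholder input
      outputfile: release-notes.md                    # placeholder input
      writeToJobSummary: true   # the new parameter, defaults to false
</code></pre>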
<p>So now, using this feature, you see something similar to the following. The summary is built using a Handlebars-based template.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2022/05/image-1-1009x1024.png"></p>
<p>I hope people find this a useful means to format their release notes using handlebars as opposed to having to build them using a series of <code>echo</code> statements in a script or writing their own custom action.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for cannot &#39;TypeError: Cannot read property&#39; when Dependabot submits a PR to upgrade a Jest Module</title>
      <link>https://blog.richardfennell.net/posts/fix-for-cannot-typeerror-cannot-read-property-when-dependabot-submits-a-pr-to-upgrade-a-jest-module/</link>
      <pubDate>Thu, 19 May 2022 07:43:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-cannot-typeerror-cannot-read-property-when-dependabot-submits-a-pr-to-upgrade-a-jest-module/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://github.blog/2020-06-01-keep-all-your-packages-up-to-date-with-dependabot/&#34;&gt;GitHub&amp;rsquo;s Dependabot&lt;/a&gt; is a great tool to help keep your dependencies up to date, and most of the time the PR it generates just merges without a problem. However, sometimes there are issues with other related dependencies.&lt;/p&gt;
&lt;p&gt;This was the case with a recent PR to update jest-circus to 28.x. The PR failed with the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;TypeError: Cannot read property &amp;rsquo;enableGlobally&amp;rsquo; of undefined at jestAdapter (node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:39:25) at TestScheduler.scheduleTests (node_modules/@jest/core/build/TestScheduler.js:333:13) at runJest (node_modules/@jest/core/build/runJest.js:404:19) at _run10000 (node_modules/@jest/core/build/cli/index.js:320:7) at runCLI (node_modules/@jest/core/build/cli/index.js:173:3)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://github.blog/2020-06-01-keep-all-your-packages-up-to-date-with-dependabot/">GitHub&rsquo;s Dependabot</a> is a great tool to help keep your dependencies up to date, and most of the time the PR it generates just merges without a problem. However, sometimes there are issues with other related dependencies.</p>
<p>This was the case with a recent PR to update jest-circus to 28.x. The PR failed with the error</p>
<blockquote>
<p>TypeError: Cannot read property &rsquo;enableGlobally&rsquo; of undefined at jestAdapter (node_modules/jest-circus/build/legacy-code-todo-rewrite/jestAdapter.js:39:25) at TestScheduler.scheduleTests (node_modules/@jest/core/build/TestScheduler.js:333:13) at runJest (node_modules/@jest/core/build/runJest.js:404:19) at _run10000 (node_modules/@jest/core/build/cli/index.js:320:7) at runCLI (node_modules/@jest/core/build/cli/index.js:173:3)</p></blockquote>
<p>In the end, the fix was simple, make sure all the other Jest related packages were updated to 28.x versions. Once I did this, using a <a href="https://github.com/features/codespaces">GitHub Codespace</a>, the PR merged without a problem.</p>
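<p>If you hit the same error, the fix amounts to aligning the versions; assuming npm, something along these lines (the exact package list depends on your project&rsquo;s devDependencies):</p>
<pre tabindex="0"><code># Bump all Jest-related dev dependencies to the same major version
npm install --save-dev jest@28 jest-circus@28 ts-jest@28 @types/jest@28
</code></pre>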
]]></content:encoded>
    </item>
    <item>
      <title>On-Demand video available for May&#39;s &#39;Microsoft and GitHub DevOps Forum: DevSecOps&#39;</title>
      <link>https://blog.richardfennell.net/posts/on-demand-video-available-for-mays-microsoft-and-github-devops-forum-devsecops/</link>
      <pubDate>Tue, 17 May 2022 08:52:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/on-demand-video-available-for-mays-microsoft-and-github-devops-forum-devsecops/</guid>
      <description>&lt;p&gt;Did you miss the &amp;lsquo;Microsoft and GitHub DevOps Forum: DevSecOps&amp;rsquo; event I presented at earlier in May?&lt;br&gt;
Well worry not, the on-demand stream is now available - &lt;a href=&#34;https://info.microsoft.com/UK-AzureAppInno-VDEO-FY22-05May-16-Microsoft-and-GitHub-DevOps-Forum-DevSecOps-SRGCM6911-AID-3046920_LP01-Registration---Form-in-Body.html&#34;&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2022/05/image.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Did you miss the &lsquo;Microsoft and GitHub DevOps Forum: DevSecOps&rsquo; event I presented at earlier in May?<br>
Well, worry not: the on-demand stream is now available <a href="https://info.microsoft.com/UK-AzureAppInno-VDEO-FY22-05May-16-Microsoft-and-GitHub-DevOps-Forum-DevSecOps-SRGCM6911-AID-3046920_LP01-Registration---Form-in-Body.html">here</a>.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2022/05/image.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>More examples of using custom variables in Azure DevOps multi-stage YML</title>
      <link>https://blog.richardfennell.net/posts/more-examples-of-using-custom-variables-in-azure-devops-multi-stage-yml/</link>
      <pubDate>Thu, 05 May 2022 08:54:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-examples-of-using-custom-variables-in-azure-devops-multi-stage-yml/</guid>
      <description>&lt;p&gt;I have blogged in the past ( &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2020/11/27/getting-confused-over-azure-devops-pipeline-variable-evaluation/&#34;&gt;here&lt;/a&gt; , &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2022/01/10/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/&#34;&gt;here&lt;/a&gt; and &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2022/02/19/a-workaround-for-not-being-able-to-access-custom-variables-via-stagedependencies-if-they-are-set-in-deployment-jobs-in-azure-devops-pipelines/&#34;&gt;here&lt;/a&gt;) about the complexities and possible areas of confusion with different types of Azure DevOps pipeline variables.&lt;/p&gt;
&lt;p&gt;Well here is another example of how to use variables and what can trip you up.&lt;/p&gt;
&lt;p&gt;The key in this example is the scope of a variable, whether it is available outside a job and the syntax to access it&lt;/p&gt;
&lt;h2 id=&#34;variables-local-to-the-job&#34;&gt;Variables local to the Job&lt;/h2&gt;
&lt;p&gt;So, if you create your variable as shown below&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have blogged in the past ( <a href="https://blogs.blackmarble.co.uk/rfennell/2020/11/27/getting-confused-over-azure-devops-pipeline-variable-evaluation/">here</a> , <a href="https://blogs.blackmarble.co.uk/rfennell/2022/01/10/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/">here</a> and <a href="https://blogs.blackmarble.co.uk/rfennell/2022/02/19/a-workaround-for-not-being-able-to-access-custom-variables-via-stagedependencies-if-they-are-set-in-deployment-jobs-in-azure-devops-pipelines/">here</a>) about the complexities and possible areas of confusion with different types of Azure DevOps pipeline variables.</p>
<p>Well, here is another example of how to use variables and what can trip you up.</p>
<p>The key in this example is the scope of a variable: whether it is available outside a job, and the syntax needed to access it.</p>
<h2 id="variables-local-to-the-job">Variables local to the Job</h2>
<p>So, if you create your variable as shown below</p>
<pre tabindex="0"><code>write-host &#34;##vso[task.setvariable variable=standardvar]$MyPowerShellVar&#34;
</code></pre><p>It is only available in the current job, in the form <strong>$(standardvar)</strong>.</p>
<h2 id="variable-with-a-wider-scope">Variable with a wider scope</h2>
<p>If you want it to be available in another job, or stage you have to declare it thus, adding <strong>;isOutput=true</strong></p>
<pre tabindex="0"><code>write-host &#34;##vso[task.setvariable variable=stagevar;isOutput=true]$MyPowerShellVar&#34;
</code></pre><p>But there is also a change in how you access it.</p>
<ul>
<li>You need to give the script that declares the variable a <strong>name</strong> so it can be referenced</li>
<li>You need to add <strong>dependsOn</strong> associations between stages/jobs</li>
<li>And the syntax used to access the variable changes depending on whether you are in the same job, same stage but a different job or a completely different stage.</li>
</ul>
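<p>As a rough guide, the three access patterns look like this (stage <code>S1</code>, job <code>J1</code> and step <code>BashStep</code> are placeholder names):</p>
<pre tabindex="0"><code># Same job, later step: macro syntax
- script: echo $(BashStep.stagevar)

# Different job in the same stage: &#39;dependencies&#39;
variables:
  myalias: $[dependencies.J1.outputs[&#39;BashStep.stagevar&#39;]]

# Job in a later stage: &#39;stageDependencies&#39;
variables:
  myalias: $[stageDependencies.S1.J1.outputs[&#39;BashStep.stagevar&#39;]]
</code></pre>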
<p>Below is a fully worked example</p>
<script src="https://gist.github.com/rfennell/abb41fa17f2c4403102c72c9360f59d4.js"></script>
]]></content:encoded>
    </item>
    <item>
      <title>Updating the Azure Application client_secret used by Packer</title>
      <link>https://blog.richardfennell.net/posts/updating-the-azure-application-client_secret-used-by-packer/</link>
      <pubDate>Tue, 03 May 2022 10:18:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updating-the-azure-application-client_secret-used-by-packer/</guid>
      <description>&lt;p&gt;As I have posted about previously, we create our Azure DevOps build agent images using the same Packer definitions as used by Microsoft. This time when I ran my Packer command to build an updated VHD I got the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Build &amp;lsquo;vhd&amp;rsquo; errored after 135 milliseconds 708 microseconds: adal: Refresh request failed. Status Code = &amp;lsquo;401&amp;rsquo;. Response body: {&amp;ldquo;error&amp;rdquo;:&amp;ldquo;invalid_client&amp;rdquo;,&amp;ldquo;error_description&amp;rdquo;:&amp;ldquo;AADSTS7000222: The provided client secret keys for app &amp;lsquo;6425416f-aa94-4c20-8395-XXXXXXX&amp;rsquo; are expired. Visit the Azure portal to create new keys for your app: &lt;a href=&#34;https://aka.ms/NewClientSecret&#34;&gt;https://aka.ms/NewClientSecret&lt;/a&gt;, or consider using certificate credentials for added security: &lt;a href=&#34;https://aka.ms/certCreds.rnTrace&#34;&gt;https://aka.ms/certCreds.rnTrace&lt;/a&gt; ID: 65a200cf-8423-4d52-af07-67bf26225200rnCorrelation ID: 0f86de87-33fa-443b-8186-4de3894972e1rnTimestamp: 2022-05-03 08:36:50Z&amp;rdquo;,&amp;ldquo;error_codes&amp;rdquo;:[7000222],&amp;ldquo;timestamp&amp;rdquo;:&amp;ldquo;2022-05-03 08:36:50Z&amp;rdquo;,&amp;ldquo;trace_id&amp;rdquo;:&amp;ldquo;65a200cf-8423-4d52-af07-67bf26225200&amp;rdquo;,&amp;ldquo;correlation_id&amp;rdquo;:&amp;ldquo;0f86de87-33fa-443b-8186-4de3894972e1&amp;rdquo;,&amp;ldquo;error_uri&amp;rdquo;:&amp;ldquo;&lt;a href=&#34;https://login.microsoftonline.com/error?code=7000222%22%7d&#34;&gt;https://login.microsoftonline.com/error?code=7000222&#34;}&lt;/a&gt; Endpoint &lt;a href=&#34;https://login.microsoftonline.com/545a7a95-3c4d-4e88-9890-baa86d5fdacb/oauth2/token==%3E&#34;&gt;https://login.microsoftonline.com/545a7a95-3c4d-4e88-9890-baa86d5fdacb/oauth2/token==&gt;&lt;/a&gt; Builds finished but no artifacts were created.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As I have posted about previously, we create our Azure DevOps build agent images using the same Packer definitions as used by Microsoft. This time when I ran my Packer command to build an updated VHD I got the error</p>
<blockquote>
<p>Build &lsquo;vhd&rsquo; errored after 135 milliseconds 708 microseconds: adal: Refresh request failed. Status Code = &lsquo;401&rsquo;. Response body: {&ldquo;error&rdquo;:&ldquo;invalid_client&rdquo;,&ldquo;error_description&rdquo;:&ldquo;AADSTS7000222: The provided client secret keys for app &lsquo;6425416f-aa94-4c20-8395-XXXXXXX&rsquo; are expired. Visit the Azure portal to create new keys for your app: <a href="https://aka.ms/NewClientSecret">https://aka.ms/NewClientSecret</a>, or consider using certificate credentials for added security: <a href="https://aka.ms/certCreds.rnTrace">https://aka.ms/certCreds.rnTrace</a> ID: 65a200cf-8423-4d52-af07-67bf26225200rnCorrelation ID: 0f86de87-33fa-443b-8186-4de3894972e1rnTimestamp: 2022-05-03 08:36:50Z&rdquo;,&ldquo;error_codes&rdquo;:[7000222],&ldquo;timestamp&rdquo;:&ldquo;2022-05-03 08:36:50Z&rdquo;,&ldquo;trace_id&rdquo;:&ldquo;65a200cf-8423-4d52-af07-67bf26225200&rdquo;,&ldquo;correlation_id&rdquo;:&ldquo;0f86de87-33fa-443b-8186-4de3894972e1&rdquo;,&ldquo;error_uri&rdquo;:&ldquo;<a href="https://login.microsoftonline.com/error?code=7000222%22%7d">https://login.microsoftonline.com/error?code=7000222"}</a> Endpoint <a href="https://login.microsoftonline.com/545a7a95-3c4d-4e88-9890-baa86d5fdacb/oauth2/token==%3E">https://login.microsoftonline.com/545a7a95-3c4d-4e88-9890-baa86d5fdacb/oauth2/token==></a> Builds finished but no artifacts were created.</p></blockquote>
<p>As the error message made clear, the <strong>client_secret</strong> had expired.</p>
<p>This value was originally set/generated when the <a href="https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest">Azure Service Principal</a> was created. However, as I didn&rsquo;t want a new SP, this time I just wanted to update the secret via the Azure Portal (Home &gt; AAD &gt; App registrations &gt; [My Packer App]).</p>
<p>The overview showed the old secret had expired, and I was able to create a new one on the Certificates and Secrets tab. However, when I updated my Packer configuration file and re-ran the command it still failed.</p>
<p>It only worked after I deleted the expired secret. I am not sure if this is a requirement (it is not something I have seen before) or just some propagation/cache delay.</p>
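<p>As an aside, the same rotation can be done from the command line with the Azure CLI; a sketch, where the GUID is a placeholder for your own application (client) ID:</p>
<pre tabindex="0"><code># Add a new client secret to the app registration (placeholder ID)
az ad app credential reset --id 00000000-0000-0000-0000-000000000000 --append

# List the credentials so the expired key can be found and removed
az ad app credential list --id 00000000-0000-0000-0000-000000000000
</code></pre>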
<p>But worth a blog post as a reminder to my future self and any other with a similar issue.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the rescheduled Microsoft and GitHub DevOps Forum on the 4th of May</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-rescheduled-microsoft-and-github-devops-forum-on-the-4th-of-may/</link>
      <pubDate>Thu, 07 Apr 2022 12:14:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-rescheduled-microsoft-and-github-devops-forum-on-the-4th-of-may/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m speaking at the resheduled Microsoft and GitHub DevOps Forum on the 4th of March.&lt;/p&gt;
&lt;p&gt;To Register: &lt;a href=&#34;https://mktoevents.com/Microsoft&amp;#43;Event/333304/157-GQE-382?wt.mc_id=AID3045587_QSG_EML_586670&#34;&gt;https://mktoevents.com/Microsoft+Event/333304/157-GQE-382?wt.mc_id=AID3045587_QSG_EML_586670&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2022/04/Social-Speaker-Cards_Richard-1024x538.jpg&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I&rsquo;m speaking at the rescheduled Microsoft and GitHub DevOps Forum on the 4th of May.</p>
<p>To Register: <a href="https://mktoevents.com/Microsoft&#43;Event/333304/157-GQE-382?wt.mc_id=AID3045587_QSG_EML_586670">https://mktoevents.com/Microsoft+Event/333304/157-GQE-382?wt.mc_id=AID3045587_QSG_EML_586670</a></p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2022/04/Social-Speaker-Cards_Richard-1024x538.jpg"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for Azure DevOps deployment to an environment stuck in &#34;Job is pending&#34; state</title>
      <link>https://blog.richardfennell.net/posts/fixe-for-azure-devops-deployment-to-an-environment-stuck-in-job-is-pending-state/</link>
      <pubDate>Thu, 07 Apr 2022 12:06:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixe-for-azure-devops-deployment-to-an-environment-stuck-in-job-is-pending-state/</guid>
      <description>&lt;h2 id=&#34;issue&#34;&gt;Issue&lt;/h2&gt;
&lt;p&gt;I had an Azure DevOps YAML based pipeline that had been working but was now getting stuck with the message &amp;ldquo;Job is pending&amp;hellip;&amp;rdquo; when trying to start a stage in which there is a deployment to an &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments&#34;&gt;environment&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Looking at the logs and Azure DevOps UI it was not obvious what the issue was.&lt;/p&gt;
&lt;h2 id=&#34;solution&#34;&gt;Solution&lt;/h2&gt;
&lt;p&gt;Turns out it was due to &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&amp;amp;tabs=check-pass&#34;&gt;environment checks and approvals&lt;/a&gt;. There was a branch policy on the environment. This was set to only allow use of Azure DevOps Templates on a given branch. The edit that had been done to the YAML meant it was trying to extend a template in a branch that was not in the approved list.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="issue">Issue</h2>
<p>I had an Azure DevOps YAML based pipeline that had been working but was now getting stuck with the message &ldquo;Job is pending&hellip;&rdquo; when trying to start a stage in which there is a deployment to an <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments">environment</a>.</p>
<p>Looking at the logs and Azure DevOps UI it was not obvious what the issue was.</p>
<h2 id="solution">Solution</h2>
<p>Turns out it was due to <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&amp;tabs=check-pass">environment checks and approvals</a>. There was a branch policy on the environment. This was set to only allow use of Azure DevOps Templates on a given branch. The edit that had been done to the YAML meant it was trying to extend a template in a branch that was not in the approved list.</p>
<p>As soon as the working branch was added to the approved list, it all worked as expected.</p>
<p>So if you see &ldquo;Job is pending&rdquo; errors with no obvious reason, check the environment approvals and checks. Remember, any issues with these don&rsquo;t show up in the build log.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the Microsoft and GitHub DevOps Forum on the 10th of March</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-microsoft-and-github-devops-forum-on-the-10th-of-march/</link>
      <pubDate>Thu, 03 Mar 2022 11:40:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-microsoft-and-github-devops-forum-on-the-10th-of-march/</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Update&lt;/strong&gt;: This event has been rescheduled for the 4th May&lt;/em&gt;. The new registration link is &lt;a href=&#34;https://mktoevents.com/Microsoft&amp;#43;Event/333304/157-GQE-382?wt.mc_id=AID3045587_QSG_EML_586670&#34;&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m speaking at the Microsoft and GitHub DevOps Forum on the 10th of March.&lt;/p&gt;
&lt;p&gt;To register - &lt;a href=&#34;https://mktoevents.com/Microsoft&amp;#43;Event/325749/157-GQE-382&#34;&gt;https://mktoevents.com/Microsoft+Event/325749/157-GQE-382&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://mktoevents.com/Microsoft&amp;#43;Event/325749/157-GQE-382&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2022/03/Social-Speaker-Cards_LI-Richard-1024x538.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em><strong>Update</strong>: This event has been rescheduled for the 4th May</em>. The new registration link is <a href="https://mktoevents.com/Microsoft&#43;Event/333304/157-GQE-382?wt.mc_id=AID3045587_QSG_EML_586670">here</a></p>
<p>I&rsquo;m speaking at the Microsoft and GitHub DevOps Forum on the 10th of March.</p>
<p>To register - <a href="https://mktoevents.com/Microsoft&#43;Event/325749/157-GQE-382">https://mktoevents.com/Microsoft+Event/325749/157-GQE-382</a></p>
<p><a href="https://mktoevents.com/Microsoft&#43;Event/325749/157-GQE-382"><img loading="lazy" src="/wp-content/uploads/sites/2/2022/03/Social-Speaker-Cards_LI-Richard-1024x538.jpg"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>A workaround for not being able to access custom variables via stagedependencies if they are set in deployment jobs in Azure DevOps Pipelines</title>
      <link>https://blog.richardfennell.net/posts/a-workaround-for-not-being-able-to-access-custom-variables-via-stagedependencies-if-they-are-set-in-deployment-jobs-in-azure-devops-pipelines/</link>
      <pubDate>Sat, 19 Feb 2022 16:37:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-workaround-for-not-being-able-to-access-custom-variables-via-stagedependencies-if-they-are-set-in-deployment-jobs-in-azure-devops-pipelines/</guid>
      <description>&lt;p&gt;I have blogged in the past ( &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2020/11/27/getting-confused-over-azure-devops-pipeline-variable-evaluation/&#34;&gt;here&lt;/a&gt; and &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2022/01/10/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/&#34;&gt;here&lt;/a&gt;) about the complexities and possible areas of confusion with different types of Azure DevOps pipeline variables. I have also &lt;a href=&#34;https://developercommunity.visualstudio.com/t/unable-to-retrieve-stage-result-from-stagedependen/1064759#T-N1130023&#34;&gt;seen issues raised&lt;/a&gt; over how to access custom variables across jobs and stages. Safe to say, this is an area where it is really easy to get it wrong and end up with a null value.&lt;/p&gt;
&lt;p&gt;I have recently come across another edge case to add to the list of gotchas.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have blogged in the past ( <a href="https://blogs.blackmarble.co.uk/rfennell/2020/11/27/getting-confused-over-azure-devops-pipeline-variable-evaluation/">here</a> and <a href="https://blogs.blackmarble.co.uk/rfennell/2022/01/10/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/">here</a>) about the complexities and possible areas of confusion with different types of Azure DevOps pipeline variables. I have also <a href="https://developercommunity.visualstudio.com/t/unable-to-retrieve-stage-result-from-stagedependen/1064759#T-N1130023">seen issues raised</a> over how to access custom variables across jobs and stages. Safe to say, this is an area where it is really easy to get it wrong and end up with a null value.</p>
<p>I have recently come across another edge case to add to the list of gotchas.</p>
<p>It seems you cannot use <strong>stagedependencies</strong> to access a variable declared in a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops">deployment job</a> i.e. when you are using an <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops">environment</a> to get approval for a release.</p>
<p>The workaround is to add a job that is dependent on the deployment and set the custom variable within it. This variable can be accessed by a later stage as shown below</p>
<pre tabindex="0"><code>- stage: S1
  jobs:
  - deployment: D1
    strategy:
      runOnce:
        deploy:
          steps:
              - checkout: none
              - bash: echo &#34;Can&#39;t access the variable if set in here&#34;
  - job: J1
    dependsOn:
      D1
    steps:
      - checkout: none
      - bash: echo &#34;##vso[task.setvariable variable=myvar;isOutput=true]True&#34; 
        name: BashStep

- stage: S2
  condition: always()

  dependsOn: 
   - S1
  jobs:
   - job: Use_Variable
     variables: # add an alias for the var
       myvar: $[stagedependencies.S1.J1.outputs[&#39;BashStep.myvar&#39;]]
     steps:
       - checkout: none
       - bash: echo &#34;Script gets run when myvar is true&#34;
         condition: eq(variables[&#39;myvar&#39;], &#39;True&#39;)
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>The importance of blogging - or how to do your future self a favour</title>
      <link>https://blog.richardfennell.net/posts/the-importance-of-blogging-or-how-to-do-your-future-self-a-favour/</link>
      <pubDate>Fri, 14 Jan 2022 10:34:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-importance-of-blogging-or-how-to-do-your-future-self-a-favour/</guid>
      <description>&lt;p&gt;Yesterday, yet again, I was thankful for my past self taking time to blog about a technical solution I had found.&lt;/p&gt;
&lt;p&gt;I had an error when trying to digitally sign a package. On searching on the error code I came across my &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2019/04/30/a-fix-for-error-signersign-failed-2146958839-0x80080209-with-signtool-exe/&#34;&gt;own blog post&lt;/a&gt; with the solution. This was, as usual, one I had no recollection of writing.&lt;/p&gt;
&lt;p&gt;I find this happens all the time. It is a little disturbing when you search for an issue and the only reference is to a post you made and have forgotten, so you are the defacto expert, nobody knows anymore on the subject, but better than having no solution.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Yesterday, yet again, I was thankful for my past self taking time to blog about a technical solution I had found.</p>
<p>I had an error when trying to digitally sign a package. On searching on the error code I came across my <a href="https://blogs.blackmarble.co.uk/rfennell/2019/04/30/a-fix-for-error-signersign-failed-2146958839-0x80080209-with-signtool-exe/">own blog post</a> with the solution. This was, as usual, one I had no recollection of writing.</p>
<p>I find this happens all the time. It is a little disturbing when you search for an issue and the only reference is a post you made and have forgotten; you are the de facto expert and nobody knows any more on the subject, but it is better than having no solution.</p>
<p>Too often I ask people if they have documented the hints, tips and solutions they find and the response I get is &lsquo;I will remember&rsquo;. Trust me you won&rsquo;t. Write something down where it is discoverable for your team and your future self. This can be any format that works for you: an Email, OneNote, a Wiki or the one I find most useful a blog. Just make sure it is easily searchable.</p>
<p>Your future self will thank you.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Azure DevOps Stage Dependency Variables with Conditional Stage and Job Execution</title>
      <link>https://blog.richardfennell.net/posts/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/</link>
      <pubDate>Mon, 10 Jan 2022 13:16:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-azure-devops-stage-dependency-variables-with-conditional-stage-and-job-execution/</guid>
      <description>&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2022/01/image-1-1024x446.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;I have been doing some work with Azure DevOps multi-stage YAML pipelines using stage dependency variables and conditions. They can get confusing quickly: you need one syntax in one place and another elsewhere.&lt;/p&gt;
&lt;p&gt;So, here are a few things I have learnt&amp;hellip;&lt;/p&gt;
&lt;h2 id=&#34;what-are-stage-dependency-variables&#34;&gt;What are stage dependency variables?&lt;/h2&gt;
&lt;p&gt;&lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;amp;tabs=yaml#specify-dependencies&#34;&gt;Stage Dependencies&lt;/a&gt; are the way you define which stage follows another in a multi-stage YAML pipeline. This is as opposed to just relying on the order they appear in the YAML file, the default order. Hence, they are critical to creating complex pipelines.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><img loading="lazy" src="/wp-content/uploads/sites/2/2022/01/image-1-1024x446.png"></p>
<p>I have been doing some work with Azure DevOps multi-stage YAML pipelines using stage dependency variables and conditions. They can get confusing quickly: you need one syntax in one place and another elsewhere.</p>
<p>So, here are a few things I have learnt&hellip;</p>
<h2 id="what-are-stage-dependency-variables">What are stage dependency variables?</h2>
<p><a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;tabs=yaml#specify-dependencies">Stage Dependencies</a> are the way you define which stage follows another in a multi-stage YAML pipeline. This is as opposed to just relying on the order they appear in the YAML file, the default order. Hence, they are critical to creating complex pipelines.</p>
<p><a href="https://developercommunity.visualstudio.com/t/unable-to-retrieve-stage-result-from-stagedependen/1064759">Stage Dependency variables</a> are the way you can pass variables from one stage to another. Special handling is required, as you can’t just use the ordinary <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&amp;tabs=yaml%2Cbatch#use-output-variables-from-tasks">output variables</a> (which are in effect environment variables on the agent) as you might within a job, because there is no guarantee that the stages and jobs are running on the same agent.</p>
<p>For stage dependency variables, it is not the creation of the output variables that differs from the standard manner; the difference is in how you retrieve them.</p>
<p><a href="https://gist.github.com/rfennell/b57db0c2e4e3bae1968a4908b0df3595">In my sample</a>, I used a Bash script to set the output variable based on a parameter passed into the pipeline, but you can create output variables using scripts or tasks.</p>
<pre tabindex="0"><code> - stage: SetupStage
    displayName: &#39;Setup Stage&#39;
    jobs:
      - job: SetupJob
        displayName: &#39;Setup Job&#39;
        steps:
          - checkout: none
          - bash:  |
              set -e # need to avoid trailing &#34; being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo &#34;##vso[task.setvariable variable=MyVar;isOutput=true]${{parameters.value}}&#34;
            name: SetupStep
            displayName: &#39;Setup Step&#39; 
</code></pre><h2 id="possible-ways-to-access-a-stage-dependency-variable">Possible ways to access a stage dependency variable</h2>
<p>There are two basic ways to access stage dependency variables, both using array objects</p>
<pre tabindex="0"><code>stageDependencies.STAGENAME.JOBNAME.outputs[&#39;STEPNAME.VARNAME&#39;]
dependencies.STAGENAME.outputs[&#39;JOBNAME.STEPNAME.VARNAME&#39;]
</code></pre><p>Which one you use, in which place, and whether it is accessed via a local alias, is where the complexity lies.</p>
<h2 id="how-to-access-a-stage-dependency-in-a-script">How to access a stage dependency in a script?</h2>
<p>To access a stage dependency variable in a script, or a task, there are two key requirements</p>
<ul>
<li>The stage containing the consuming job, and hence the script/task, must be set as dependent on the stage that created the output variable</li>
<li>You have to declare a local alias for the value in the <strong>stageDependencies</strong> array within the consuming stage. This local alias will be used as the local name by scripts and tasks</li>
</ul>
<p>Once this is configured, you can access the variable like any other local YAML variable.</p>
<pre tabindex="0"><code> - stage: Show_With_Dependancy
    displayName: &#39;Show Stage With dependancy&#39;
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies : $[stageDependencies.SetupStage.SetupJob.outputs[&#39;SetupStep.MyVar&#39;]]
    jobs:
      - job: Job
        displayName: &#39;Show Job With dependancy&#39;
        steps:
        - bash: |
              echo &#34;localMyVarViaStageDependancies - $(localMyVarViaStageDependancies)&#34; 
</code></pre><p><strong>Tip:</strong> If you are having a problem with the value not being set for a stage dependency variable, look in the pipeline execution log, at the job level, and check the ‘Job preparation parameters’ section to see what is being evaluated. This will show if you are using the wrong array object, or have a typo, as any incorrect declarations evaluate as null.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2022/01/image.png"></p>
<h2 id="how-to-use-a-stage-dependency-as-a-stage-condition">How to use a stage dependency as a stage condition</h2>
<p>You can use stage dependency variables as controlling conditions for running a stage. In this use-case you use the <strong>dependencies</strong> array and not the <strong>stageDependencies</strong> array used when aliasing variables.</p>
<pre tabindex="0"><code> - stage: Show_With_Dependancy_Condition
    condition: and (succeeded(), eq (dependencies.SetupStage.outputs[&#39;SetupJob.SetupStep.MyVar&#39;], &#39;True&#39;))
    displayName: &#39;Show Stage With dependancy Condition&#39; 
</code></pre><p>From my experiments for this use-case, you don’t seem to need the <strong>dependsOn</strong> entry to declare the stage that exposed the output variable for this to work. So, this is very useful for complex pipelines where you want to skip a later stage based on a much earlier stage for which there is no direct dependency.</p>
<p>A side effect of using a stage condition is that many subsequent stages have to have their execution conditions edited, as you cannot rely on the default completion stage state <strong>succeeded</strong>. This is because the prior stages could now be <strong>succeeded</strong> or <strong>skipped</strong>. Hence, all following stages need to use the condition</p>
<pre tabindex="0"><code>condition: and( not(failed()), not(canceled()))
</code></pre><h2 id="how-to-use-a-stage-dependency-as-a-job-condition">How to use a stage dependency as a job condition</h2>
<p>To avoid the need to alter all the subsequent stages&rsquo; execution conditions, you can set a condition at the job or task level. Unlike setting the condition at the stage level, you have to create a local alias (see above) and check the condition on that.</p>
<pre tabindex="0"><code> - stage: Show_With_Dependancy_Condition_Job
    displayName: &#39;Show Stage With dependancy Condition&#39;
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies : $[stageDependencies.SetupStage.SetupJob.outputs[&#39;SetupStep.MyVar&#39;]]
    jobs:
      - job: Job
        condition: and (succeeded(),
          eq (variables.localMyVarViaStageDependancies, &#39;True&#39;))
        displayName: &#39;Show Job With dependancy&#39; 
</code></pre><p>This technique will work for both <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&amp;tabs=yaml#types-of-jobs">Agent-based and Agent-Less (Server) jobs</a></p>
<p>A warning though, if your job makes use of an <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops">environment</a> with a manual approval, the environment approval check is evaluated before the job condition. This is probably not what you are after, so if using conditions with environments that use manual approvals then the condition is probably best set at the stage level, with the knock-on issues of states of subsequent stages as mentioned above.</p>
<p>An alternative, if you are just using the environment for manual approval, is to look at using an AgentLess job with <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/manual-validation?view=azure-devops&amp;tabs=yaml">a manual approval</a>. AgentLess job manual approvals are evaluated after the job condition, so do not suffer the same problem.</p>
<p>If you need to use a stage dependency variable in a later stage, as a job condition or script variable, but do not wish to add a direct dependency between the stages, you could consider ‘republishing’ the variable as an output of the intermediate stage(s).</p>
<pre tabindex="0"><code> - stage: Intermediate_Stage
    dependsOn:
      - SetupStage
    variables:
      localMyVarViaStageDependancies : $[stageDependencies.SetupStage.SetupJob.outputs[&#39;SetupStep.MyVar&#39;]]
    jobs:
      - job: RepublishMyVar
        steps:
          - checkout: none
          - bash:  |
              set -e # need to avoid trailing &#34; being added to the variable https://github.com/microsoft/azure-pipelines-tasks/issues/10331
              echo &#34;##vso[task.setvariable variable=MyVar;isOutput=true]$(localMyVarViaStageDependancies)&#34;
            name: RepublishStep 
</code></pre><h2 id="summing-up">Summing Up</h2>
<p>So I hope this post will help you, and the future me, navigate the complexities of stage dependency variables.</p>
<p><a href="https://gist.github.com/rfennell/b57db0c2e4e3bae1968a4908b0df3595">You can find the YAML for the test harness I have been using in this GitHub GIST</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Setting Azure DevOps &#39;All Repositories&#39; Policies via the CLI</title>
      <link>https://blog.richardfennell.net/posts/setting-azure-devops-all-repositories-policies-via-the-cli/</link>
      <pubDate>Fri, 12 Nov 2021 12:49:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-azure-devops-all-repositories-policies-via-the-cli/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;https://docs.microsoft.com/en-us/cli/azure/devops?view=azure-cli-latest&#34;&gt;Azure DevOps CLI&lt;/a&gt; provides plenty of commands to update Team Projects, but it does not cover all things you might want to set. A good example is setting branch policies. For a given repo you can set the policies using the &lt;a href=&#34;https://docs.microsoft.com/en-us/cli/azure/service-page/azure%20repos?view=azure-cli-latest&#34;&gt;Azure Repo&lt;/a&gt; command eg:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;az repos policy approver-count update --project &amp;lt;projectname&amp;gt; --blocking true --enabled true --branch main --repository-id &amp;lt;guid&amp;gt; --minimum-approver-count 2 --reset-on-source-push true  --creator-vote-counts false --allow-downvotes false 
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;However, you hit a problem if you wish to set the &amp;lsquo;All Repositories&amp;rsquo; policies for a Team Project. The issue is that the above command requires a specific &lt;strong&gt;--project&lt;/strong&gt; parameter.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="https://docs.microsoft.com/en-us/cli/azure/devops?view=azure-cli-latest">Azure DevOps CLI</a> provides plenty of commands to update Team Projects, but it does not cover all things you might want to set. A good example is setting branch policies. For a given repo you can set the policies using the <a href="https://docs.microsoft.com/en-us/cli/azure/service-page/azure%20repos?view=azure-cli-latest">Azure Repo</a> command eg:</p>
<pre tabindex="0"><code>az repos policy approver-count update --project &lt;projectname&gt; --blocking true --enabled true --branch main --repository-id &lt;guid&gt; --minimum-approver-count 2 --reset-on-source-push true  --creator-vote-counts false --allow-downvotes false 
</code></pre><p>However, you hit a problem if you wish to set the &lsquo;All Repositories&rsquo; policies for a Team Project. The issue is that the above command requires a specific <strong>--project</strong> parameter.</p>
<p>I can find no way around this using any published CLI tools, but using the <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-6.1">REST API</a> there is an option.</p>
<p>You could of course check the API documentation to work out the exact call and payload. However, I usually find it quicker to perform the action I require in the Azure DevOps UI and monitor the network traffic in the browser developer tools to see what calls are made to the API.</p>
<p>Using this technique, I have created the following script that sets the All Repositories branch policies.</p>
<script src="https://gist.github.com/rfennell/def3ae3e7303db66f3eda3d2eb4a2475.js"></script>
<p>Note that you can use this same script to set a specific repo&rsquo;s branch policies by setting the <strong>repositoryId</strong> in the JSON payloads.</p>
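<p>For a feel of the shape of the call such a script makes, here is a minimal Python sketch (an illustration only, not the gist itself). The organisation, project, PAT and policy type GUID are all placeholder values; the real policy type GUIDs can be listed with a GET to the <code>_apis/policy/types</code> endpoint, and the exact payload is best captured from the browser network traffic as described above.</p>

```python
# A sketch of creating an 'All Repositories' branch policy via the
# Azure DevOps Policy Configurations REST API. All concrete values
# (organisation, project, PAT, policy type GUID) are placeholders.
import base64
import json
import urllib.request


def build_policy_request(org, project, pat, policy_type_id, min_approvers=1):
    """Build (but do not send) the POST request that creates a
    minimum-approver-count policy scoped to all repositories."""
    url = (f"https://dev.azure.com/{org}/{project}"
           "/_apis/policy/configurations?api-version=6.0")
    payload = {
        "isEnabled": True,
        "isBlocking": True,
        "type": {"id": policy_type_id},  # GUID from GET _apis/policy/types
        "settings": {
            "minimumApproverCount": min_approvers,
            "creatorVoteCounts": False,
            # a null repositoryId scopes the policy to all repositories
            "scope": [{"repositoryId": None,
                       "refName": "refs/heads/main",
                       "matchKind": "Exact"}],
        },
    }
    token = base64.b64encode(f":{pat}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )

# urllib.request.urlopen(build_policy_request(...)) would then send it
```

<p>Swapping the <code>null</code> <code>repositoryId</code> for a repo&rsquo;s GUID is the change that targets a single repository instead, as noted above.</p>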
]]></content:encoded>
    </item>
    <item>
      <title>How to fix Azure Pipeline YAML parsing errors seen after renaming the default Git branch</title>
      <link>https://blog.richardfennell.net/posts/how-to-fix-azure-pipeline-yaml-parsing-errors-seen-after-renaming-the-default-git-branch/</link>
      <pubDate>Wed, 03 Nov 2021 09:25:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-fix-azure-pipeline-yaml-parsing-errors-seen-after-renaming-the-default-git-branch/</guid>
      <description>&lt;p&gt;If in Azure DevOps you rename your Git Repo&amp;rsquo;s default branch, say from &amp;lsquo;master&amp;rsquo; to &amp;lsquo;main&amp;rsquo;, you will probably see an error in the form &lt;em&gt;&amp;lsquo;Encountered error(s) while parsing pipeline YAML: Could not get the latest source version for repository BlackMarble.NET.App hosted on Azure Repos using ref refs/heads/master.&lt;/em&gt;&amp;rsquo; when you try to manually queue a pipeline run.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2021/11/image-1.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;You could well think, as I did, &amp;lsquo;all I need to do is update the YAML build files with a find and replace for master to main&amp;rsquo;, but this does not fix the problem.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If in Azure DevOps you rename your Git Repo&rsquo;s default branch, say from &lsquo;master&rsquo; to &lsquo;main&rsquo;, you will probably see an error in the form <em>&lsquo;Encountered error(s) while parsing pipeline YAML: Could not get the latest source version for repository BlackMarble.NET.App hosted on Azure Repos using ref refs/heads/master.</em>&rsquo; when you try to manually queue a pipeline run.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/11/image-1.png"></p>
<p>You could well think, as I did, &lsquo;all I need to do is update the YAML build files with a find and replace for master to main&rsquo;, but this does not fix the problem.</p>
<p>The issue is in the part of the Azure DevOps pipeline settings that is still managed by the UI and not the YAML file: the association of the Git repo and branch. To edit this setting use the following process (and yes, it is well hidden)</p>
<ul>
<li>In the Azure DevOps browser UI open the pipeline for editing (it shows the YAML page)</li>
<li>On the ellipsis menu (&hellip; top right) pick Triggers</li>
<li>Select the YAML tab (on left)</li>
<li>Then select the &lsquo;Get Sources&rsquo; section where you can change the default branch</li>
<li>Save the changes</li>
</ul>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/11/image-2-1024x462.png"></p>
<p>Hope this post saves someone some time</p>
]]></content:encoded>
    </item>
    <item>
      <title>New features for my Azure DevOps Release Notes Extension</title>
      <link>https://blog.richardfennell.net/posts/new-features-for-my-azure-devops-release-notes-extension/</link>
      <pubDate>Sat, 30 Oct 2021 15:42:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-features-for-my-azure-devops-release-notes-extension/</guid>
      <description>&lt;p&gt;Over the past couple of weeks, I have shipped three user-requested features for my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Azure DevOps Release Notes Extension&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;generate-multiple-documents-in-a-single-run&#34;&gt;Generate multiple documents in a single run&lt;/h2&gt;
&lt;p&gt;You can now specify multiple templates and output files. This allows a single instance of the task to generate multiple release note documents with different formats/content.&lt;/p&gt;
&lt;p&gt;This is useful when the generation of the dataset is slow, but you need a variety of document formats for different consumers.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the past couple of weeks, I have shipped three user-requested features for my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Azure DevOps Release Notes Extension</a></p>
<h2 id="generate-multiple-documents-in-a-single-run">Generate multiple documents in a single run</h2>
<p>You can now specify multiple templates and output files. This allows a single instance of the task to generate multiple release note documents with different formats/content.</p>
<p>This is useful when the generation of the dataset is slow, but you need a variety of document formats for different consumers.</p>
<p>To use this feature you just need to specify comma-separated template and output file names.</p>
<pre tabindex="0"><code> - task: XplatGenerateReleaseNotes@3
    displayName: &#39;Release notes with multiple templates&#39;
    inputs:
      templatefile: &#39;$(System.DefaultWorkingDirectory)/template1.md,$(System.DefaultWorkingDirectory)/template2.md&#39;
      outputfile: &#39;$(System.DefaultWorkingDirectory)/out1.md, $(System.DefaultWorkingDirectory)/out2.md&#39;
      outputVariableName: &#39;outputvar&#39;
      templateLocation: &#39;File&#39;
</code></pre><p><strong>Notes</strong></p>
<ul>
<li>The number of template and output files listed must match.</li>
<li>The pipeline output variable is set to the contents generated from the first template listed</li>
</ul>
<h2 id="select-work-items-using-a-query">Select Work Items using a query</h2>
<p>There is now a new parameter where you can provide the WHERE part of a WIQL query. This allows work items to be returned based on the query, completely separately from the current build/release.</p>
<pre tabindex="0"><code> - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes-DEV1.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
    inputs:
      wiqlWhereClause: &#39;[System.TeamProject] = &#34;MyProject&#34; and [System.WorkItemType] = &#34;Product Backlog Item&#34;&#39; 
</code></pre><p><strong>Notes</strong></p>
<ul>
<li>You cannot use <code>@project</code>, <code>@currentiteration</code> or <code>@me</code> variables in the WHERE clause, but <code>@today</code> is ok.</li>
<li>To work out the WHERE clause I recommend using the <a href="https://marketplace.visualstudio.com/items?itemName=ottostreifel.wiql-editor">WIQL Editor</a> extension</li>
</ul>
<p>The results of this WIQL are available in a new independent Handlebars template array <code>queryWorkItems</code>. By independent I mean it is completely separate from all the other WI arrays generated from build associations.</p>
<p>This array can be used in the same way as the other work items arrays</p>
<pre tabindex="0"><code># WIQL list of WI ({{queryWorkItems.length}})
{{#forEach queryWorkItems}}
   *  **{{this.id}}** {{lookup this.fields &#39;System.Title&#39;}}
{{/forEach}}
</code></pre><h2 id="manually-associated-work-items">Manually associated Work Items</h2>
<p>It has recently been found that if you manually associate work items with a build, these work items are not listed by the API calls my task previously used. Hence, they don&rsquo;t appear in release notes.</p>
<p>If you have this form of association, there is now a new option to detect these work items. To enable it, use the new parameter <em>checkForManuallyLinkedWI</em></p>
<pre tabindex="0"><code> - task: XplatGenerateReleaseNotes@3
    displayName: &#39;Release notes with multiple templates&#39;
    inputs:
      checkForManuallyLinkedWI: true 
</code></pre><p>If this parameter is set to true, extra calls will be made to add these WI into the main work item array.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The case of the self-cancelling Azure DevOps pipeline</title>
      <link>https://blog.richardfennell.net/posts/the-case-of-the-self-cancelling-azure-devops-pipeline/</link>
      <pubDate>Fri, 29 Oct 2021 20:12:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-case-of-the-self-cancelling-azure-devops-pipeline/</guid>
      <description>&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;Today I came across a strange issue with a reasonably old multi-stage YAML pipeline: it appeared to be cancelling itself.&lt;/p&gt;
&lt;p&gt;The Build stage ran OK, but the Release stage kept being shown as cancelled with a strange error. The strangest thing was it did not happen all the time. I guess this is the reason the problem had not been picked up sooner.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2021/10/image-1-1024x877.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;If I looked at the logs for the Release stage, I saw that the main job, which was meant to be the only job, had completed successfully. But I had gained an extra unexpected job that was being cancelled in 90+% of my runs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="the-issue">The Issue</h2>
<p>Today I came across a strange issue with a reasonably old multi-stage YAML pipeline: it appeared to be cancelling itself.</p>
<p>The Build stage ran OK, but the Release stage kept being shown as cancelled with a strange error. The strangest thing was it did not happen all the time. I guess this is the reason the problem had not been picked up sooner.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/10/image-1-1024x877.png"></p>
<p>If I looked at the logs for the Release stage, I saw that the main job, which was meant to be the only job, had completed successfully. But I had gained an extra unexpected job that was being cancelled in 90+% of my runs.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/10/image-2-1024x787.png"></p>
<p>This extra job was trying to run on an Ubuntu hosted agent and failing to make a connection. All very strange as all the jobs were meant to be using private Windows-based agents.</p>
<h2 id="the-solution">The Solution</h2>
<p>Turns out, as you might expect, the issue was a typo in the YAML.</p>
<pre tabindex="0"><code>- stage: Release
  dependsOn: Build
  condition: succeeded()
  jobs:
  - job:
  - template: releasenugetpackage.yml@YAMLTemplates
    parameters:
</code></pre><p>The problem was the stray <strong>job:</strong> line. This was causing an attempt to connect to a hosted agent and then check out the code. Interestingly, a hosted Ubuntu agent was requested given that there was no <strong>pool</strong> defined.</p>
<p>As soon as the extra line was removed the problems went away.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Automating adding issues to Beta GitHub Projects using GitHub Actions</title>
      <link>https://blog.richardfennell.net/posts/automating-adding-issues-to-beta-github-projects-using-github-actions/</link>
      <pubDate>Fri, 15 Oct 2021 20:42:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/automating-adding-issues-to-beta-github-projects-using-github-actions/</guid>
      <description>&lt;p&gt;The new &lt;a href=&#34;https://github.com/features/issues&#34;&gt;GitHub Issues Beta&lt;/a&gt; is a big step forward in project management over what was previously possible with the old &amp;lsquo;simple&amp;rsquo; form of Issues. The Beta adds many great features such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Project Boards/Lists&lt;/li&gt;
&lt;li&gt;Actionable Tasks&lt;/li&gt;
&lt;li&gt;Custom Fields including Iterations&lt;/li&gt;
&lt;li&gt;Automation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, one thing that is not available out of the box is a means to automatically add newly created issues to a project.&lt;/p&gt;
&lt;p&gt;Looking at the automations available within a project you might initially think that there is a workflow to do this job, but no.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The new <a href="https://github.com/features/issues">GitHub Issues Beta</a> is a big step forward in project management over what was previously possible with the old &lsquo;simple&rsquo; form of Issues. The Beta adds many great features such as:</p>
<ul>
<li>Project Boards/Lists</li>
<li>Actionable Tasks</li>
<li>Custom Fields including Iterations</li>
<li>Automation</li>
</ul>
<p>However, one thing that is not available out of the box is a means to automatically add newly created issues to a project.</p>
<p>Looking at the automations available within a project you might initially think that there is a workflow to do this job, but no.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/10/image-1024x352.png"></p>
<p>The &lsquo;item added to project&rsquo; workflow triggers when the issue, or PR, is added to the project, not when it is created. Now, this might change when custom workflows are available in the future, but not at present.</p>
<p>However, all is not lost. We can use GitHub Actions to do the job. In fact, the beta documentation even gives a <a href="https://docs.github.com/en/issues/trying-out-the-new-projects-experience/automating-projects">sample</a> to do just this job. But, I hit a problem.</p>
<p>The <a href="https://docs.github.com/en/issues/trying-out-the-new-projects-experience/automating-projects">sample</a> shows adding PRs to a project on their creation, but it assumes you are using GitHub Enterprise, as they make use of the &lsquo;organization&rsquo; object to find the target project.</p>
<p>The problem is the &lsquo;organization&rsquo; object was not available to me as I was using a GitHub Pro account (but it would be the same for anyone using a free account).</p>
<p>So below is a reworked sample that adds issues to a project when no organization is available. Instead, it makes use of the &lsquo;user&rsquo; object, which also exposes the <a href="https://docs.github.com/en/issues/trying-out-the-new-projects-experience/using-the-api-to-manage-projects#objects">ProjectNext</a> method.</p>
<script src="https://gist.github.com/rfennell/71d9ddcac4dcc01c21b29b73c9386898.js"></script>
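<p>For reference, the two GraphQL operations at the heart of that workflow can be sketched in plain Python (a minimal sketch, not the gist itself: the token, user login and project number are placeholders, and the real workflow issues the same queries via the <code>github-script</code> action). The <code>projectNext</code> and <code>addProjectNextItem</code> names come from the beta API documentation linked above and may change as the feature matures.</p>

```python
# A sketch of the two GraphQL calls: find the project id via the 'user'
# object (not 'organization'), then add the new issue to that project.
import json
import urllib.request

API = "https://api.github.com/graphql"

# 1) look the project up through the 'user' object
FIND_PROJECT = """
query($login: String!, $number: Int!) {
  user(login: $login) { projectNext(number: $number) { id } }
}"""

# 2) add the newly created issue (by node id) to the project
ADD_ITEM = """
mutation($projectId: ID!, $contentId: ID!) {
  addProjectNextItem(input: {projectId: $projectId, contentId: $contentId}) {
    projectNextItem { id }
  }
}"""


def graphql_request(token, query, variables):
    """Build (but do not send) a GraphQL POST request."""
    body = json.dumps({"query": query, "variables": variables}).encode()
    return urllib.request.Request(
        API,
        data=body,
        headers={"Authorization": f"bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# calling urllib.request.urlopen() on each request in turn would execute them
```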
]]></content:encoded>
    </item>
    <item>
      <title>Making SonarQube Quality Checks a required PR check on Azure DevOps</title>
      <link>https://blog.richardfennell.net/posts/making-sonarqube-quality-checks-a-required-pr-check-on-azure-devops/</link>
      <pubDate>Tue, 21 Sep 2021 12:28:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/making-sonarqube-quality-checks-a-required-pr-check-on-azure-devops/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is another of those posts to remind me in the future. I searched the documentation for this answer for ages and found nothing, eventually getting the solution by asking on the &lt;a href=&#34;https://community.sonarsource.com/t/make-quality-gate-a-required-check-in-an-azure-devops-pr/49964&#34;&gt;SonarQube Forum&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When you link SonarQube into an Azure DevOps &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops&#34;&gt;pipeline that is used from branch protection&lt;/a&gt; the success, or failure, of the PR branch analysis is shown as an optional PR Check&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2021/09/image.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;The question was &amp;lsquo;how do I make it a required check?&amp;rsquo;. It turns out the answer is to add an extra Azure DevOps branch policy status check for the &amp;lsquo;SonarQube/quality gate&amp;rsquo;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This is another of those posts to remind me in the future. I searched the documentation for this answer for ages and found nothing, eventually getting the solution by asking on the <a href="https://community.sonarsource.com/t/make-quality-gate-a-required-check-in-an-azure-devops-pr/49964">SonarQube Forum</a></em></p>
<p>When you link SonarQube into an Azure DevOps <a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops">pipeline that is used from branch protection</a> the success, or failure, of the PR branch analysis is shown as an optional PR Check</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/09/image.png"></p>
<p>The question was &lsquo;how do I make it a required check?&rsquo;. It turns out the answer is to add an extra Azure DevOps branch policy status check for the &lsquo;SonarQube/quality gate&rsquo;.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/09/image-1-1024x345.png"></p>
<p>When you press the + (add) button, the &lsquo;SonarQube/quality gate&rsquo; check is available in the drop-down.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/09/image-3-502x1024.png"></p>
<p>Once this change was made, the SonarQube Quality Check becomes a required PR Check.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My cancer story – thus far</title>
      <link>https://blog.richardfennell.net/posts/my-cancer-story-thus-far/</link>
      <pubDate>Mon, 23 Aug 2021 21:00:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-cancer-story-thus-far/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a somewhat different post to my usual technical ones…&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In December 2017 I had major surgery. This was to remove an &lt;a href=&#34;https://www.cancerresearchuk.org/about-cancer/adrenal-gland-cancer/adrenal-cortical-cancer&#34;&gt;adrenal cortical carcinoma (ACC)&lt;/a&gt; that had grown on one of my adrenal glands and then up my inferior vena cava (IVC) into my heart.&lt;/p&gt;
&lt;p&gt;Early on I decided, though not hiding the fact I was ill, to not live every detail on social media. So, it is only now that I am back to a reasonable level of health and with some distance that I feel I can write about my experiences. I hope they might give people some hope that there can be a good outcome when there is a cancer diagnosis.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This is a somewhat different post to my usual technical ones…</em></p>
<p>In December 2017 I had major surgery. This was to remove an <a href="https://www.cancerresearchuk.org/about-cancer/adrenal-gland-cancer/adrenal-cortical-cancer">adrenal cortical carcinoma (ACC)</a> that had grown on one of my adrenal glands and then up my inferior vena cava (IVC) into my heart.</p>
<p>Early on I decided, though not hiding the fact I was ill, to not live every detail on social media. So, it is only now that I am back to a reasonable level of health and with some distance that I feel I can write about my experiences. I hope they might give people some hope that there can be a good outcome when there is a cancer diagnosis.</p>
<p>I had known I was ill for a good while before I was diagnosed in May 2017. I had seen my Parkrun times slowing week on week to the point where I could not run at all, and I had also had a couple of failed blood donations due to low haemoglobin levels.</p>
<p>It was clear I was unwell, and getting worse, but there was no obvious root cause. All sorts of things had been considered, from heart to thyroid. Cancer was suspected, but a tumour could not be found. Try as they might, my GP had failed to find a test that showed anything other than that my blood numbers were not right. I was just continuing to get weaker; by that spring I was unable to walk more than a few hundred metres without getting out of breath with my heart beating at well over 170 BPM.</p>
<p>The problem was that ACC is a rare form of cancer and mine had presented in a hard to find way. There are two basic forms of ACC. One shuts down your adrenal system, and you notice this very quickly. The other form shows no symptoms until the tumour starts to physically impact something. This was the form I had. In my case, the tumour was increasingly blocking blood flow in my IVC and heart.</p>
<p>In the end, the tumour was found because of a lower abdominal ultrasound. By the time I had the ultrasound scan it was about the only diagnostic that had not been tried. It was a strange mixture of shock and relief to be immediately told after the scan by the sonographer that ‘the doctor would like a word before you go home’. So, at least I knew the cause of why I felt so ill. I left the hospital that day with a diagnosis of an adrenal tumour that was most likely benign but may be malignant, on blood thinning injections and with a whole set of appointments to find out just how bad it was.</p>
<p>At this point the NHS did what it does best, react to a crisis. Over the next couple of weeks, I seemed to live at the regional cancer centre at St James Hospital in Leeds having all sorts of tests.</p>
<p>My health, and the time I was spending at the hospital, meant there was no way I could continue to work. I was lucky I was able to transition quickly onto long term sick in such a way that meant I did not have the financial worries many cancer patients have to contend with on top of their illness. I would not be seeing work again for over 9 months.</p>
<p>The next phase of diagnostic tests were wide ranging. Plenty of blood was taken, I had to collect my urine for 48 hours, there were CT scans and PET Scans, all to get a clearer idea of how bad it was. The real clincher test as to whether the tumour was benign or malignant was a biopsy. One of those strangely pain free tests, due to the local anaesthetics, but accompanied by much poking, pushing and strange crunching noises. Then a 6 hour wait flat on my back on a recovery ward before I could sit up, let alone go home.</p>
<p>It was whilst lying down post-test that I had probably my best meal on the NHS. Having just missed the lunch service on the recovery ward, a good move from past experience, a nurse produced a huge pile of toast and jam. A perfect meal for the reclined patient.</p>
<p>It was also during this post test recovery time that I first met other cancer patients and had a chance to have a proper chat with them. No matter how bad your case seems to be you always seem to be meeting people with a worse prognosis. Whilst on the biopsy recovery ward I met a man who told me his story. A check-up because he did not feel well led to the discovery of a large brain tumour which then spread throughout his body. He knew he only had a short time left. The conversation opened my eyes to the reality of my and other patients’ situations.</p>
<p>A couple of weeks later we got the bad news that the cancer was malignant and very advanced. We had clung onto the hope it was benign. The news was delivered in a very matter of fact way, that I probably would not see Christmas unless a treatment plan could be found, and the options were not good. There were tears.</p>
<p>However, there was at least some good news, the tumour was a single mass, it had not spread around my body. The problem was that there was no obvious surgical option due to its size and position. All that could be done was to start chemotherapy to see if the tumour could be shrunk. So, a very ‘old school’, and hence harsh, three cycle course of chemotherapy was started in July 2017.</p>
<p>I dealt with all of this in a very step by step way. People seemed surprised by this, that I was not more emotionally in pieces. I assume that is just my nature. I think this whole phase of my illness was much harder on my partner and family. They had to watch me getting more ill with no obvious route to recovery. For me it was just a case of getting up and doing whatever the tasks were for the day. Whether they be tests, treatments or putting things in place like a Lasting Power of Attorney.</p>
<p>Life became a cycle of three-day eight-hour blocks of chemotherapy, then a month to try to recover. On each cycle I recovered less than the previous one.</p>
<p>The chemotherapy ward is strangely like flying business class. The seats look comfortable, but after eight hours they are not. You can’t go to the toilet without issues, on an airplane it is getting out of the row, on the chemotherapy ward it is taking the drip with you. In both cases, the toilet is too small. You feel tired all the time, just like jet lag, and of course, the food is questionable at best.</p>
<p>As I had seen on other wards, there was a strong camaraderie on the chemotherapy ward. Everyone is going through life changing treatment. Some people looked very ill, others as if there is nothing obviously wrong with them, but irrespective of their condition I found the patients, as well as the staff, very supportive. It was far from an unhappy place. Not something I had expected.</p>
<p>In many ways the worst side effect of chemotherapy, beyond the expected weight loss, hair loss, nausea and lack of energy was that my attention span disappeared. For the first time in my adult life I stopped reading. I struggled to make it through a single paragraph without forgetting where I was. I remember one afternoon in a hospital waiting room, whilst waiting for yet more test results, trying to read a page in a novel. I never got to the end of the page, just starting it over and over. It was also at this time I realised I had to stop driving, I felt my attention was too poor and my reactions too slow.</p>
<p>As I said, by this point I was very weak. This made most day-to-day activities very hard, but the strange thing was I found I could still swim. I had had the theory that though my IVC was blocked, hence not bringing blood from the lower half of my body, if I swam with a pull-buoy just using my arms, I would be OK. This turned out to be correct, much to the surprise of the medical professionals. So, I started to do some easy swimming in the recovery phases between chemotherapy cycles when I was able. It turned out the biggest issue was I got cold quickly due to my weight loss. So, swim sessions were limited to 15 to 20 minutes and just a few hundred metres.</p>
<p>After the planned three chemotherapy cycles all the tests were rerun and it was found that the tumour seemed unaffected. It was always a very low chance of success. I had already decided I was unlikely to start a 4th cycle as I felt so ill, it was just no life. I did not want any more chemotherapy when the chance of success was so low. Better to have some quality of life before the end.</p>
<p>This is where I got lucky because I was being treated at a major cancer research centre. I had been told there was no adrenal cancer surgical option for the way my ACC had presented. However, the hospital’s renal cancer surgical team had seen something similar and were willing to operate with the support of the cardiac and vascular teams. A veritable who’s who of senior surgeons at St James as I was informed by the nurse when I was being admitted for the operation in December 2017.</p>
<p>My operation meant stopping the heart, removing the tumour along with an adrenal gland, and a kidney (collateral damage as there was nothing wrong with it other than its proximity to the tumour) and then patching me all back together. Over 10 hours on the operating table and a transfusion of a couple of pints of blood.</p>
<p>When you see a very similar version of your operation on the <a href="https://www.bbc.co.uk/programmes/b09m60sk">BBC series on cutting edge surgery ‘Edge of Life’</a> you realise how lucky you are. Just a few years ago or living in another city and the operation would not have been possible.</p>
<p>Given my heart had to be stopped, I was treated as a cardiac patient, and the cardiac department moves you through recovery fast. Most of the people on the ward were having heart bypasses, so I was &rsquo;the interestingly different&rsquo; case to many of the staff. I did take longer than the usual 5 days on the ward taken by bypass patients, but I still managed to get out of hospital in 10 days, in time for Christmas. It is surprising how fast you can get over being opened up from the top of your chest to your groin, and how little pain there was.</p>
<p>At this point I was in theory cured, the tumour was removed, blood was flowing again but I was very weak and recovery was going to be a long road. I started with walks of only a few minutes and then the rest of the day resting. The great news was that I could walk again without getting out of breath and my heart rate going through the roof.</p>
<p>So, over the next few months, I gradually regained my health, some weight, some hair and my attention span. I was able to ease back into work part time in the early summer of 2018.</p>
<p>However, the surgery was not the end of my treatment. The surgeons were confident they had got all the tumour they could see. They said it was well defined, so cancerous and normal tissue could be differentiated, but there was always the chance of microscopic cancerous cells remaining. So, I was put on two years of Mitotane tablet-based chemotherapy. This was the treatment with the best evidence base, but that is not saying much. There are not that many research studies into ACC treatment options as it is so rare. My treatment plan was based on <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6182924/">a small Italian and German study of 177 people</a>, most of whom did not complete the plan, but it did show a statistically significant reduction in the chance of recurrence after 5 years.</p>
<p>Mitotane stops cell division and I had not realised how hard this would make my recovery and specifically regaining some fitness. I was OK for day to day living, but an activity like running was not possible. I twice started <a href="https://www.nhs.uk/live-well/exercise/couch-to-5k-week-by-week/">Couch to 5K</a> but had to give up as I could not progress beyond the walking stages.</p>
<p>The mental weight of everything did not catch up with me until a good year or so after surgery, by which time I was back at work and living a ‘normal’ life. Previously people had kept asking ‘how are you doing?’. As I said, I felt they expected me to be in pieces, and I was just going step by step. It is only when the main treatment stopped and life returned to normal that everything that had occurred hit me. A seemingly unrelated family incident, fairly small in the scheme of things, caused it all to come flooding back and completely stopped me in my tracks.</p>
<p>It was at this time I reached out to the support services of the <a href="https://www.macmillan.org.uk/">Macmillan</a> charity and specifically the <a href="https://www.leedsth.nhs.uk/a-z-of-services/leeds-cancer-centre/leeds-cancer-support/the-sir-robert-ogden-macmillan-centre/">Robert Ogden Centre at St James</a> for help. This was something I had not done prior to this time, though my partner had used their family support services earlier in my treatment. With their counselling help, I worked my way through my problems and got back to some form of normal.</p>
<p>In the autumn of 2019 I came off Mitotane and once it was out of my system I could at last try to get fit again. So, it was back to Couch to 5K and with a few repeated weeks I was able to run 5K again. I was back running Parkrun in November 2019. It was great to get back to my local Roundhay Parkrun community, though I had been volunteering whenever my health allowed throughout my illness. I was running much slower than before I was ill, but running.</p>
<p>Since then, I have to say Covid lockdown has helped me, giving me a structure to my training. I have certainly got a reasonable level of endurance back, but any speed seems to elude me.</p>
<p>I have always had a fairly high maximum heart rate, over 200 well into my 40s, and before getting cancer it was still in the 190s. Now, post illness, I struggle to reach 160 and my bike and run maximum heart rates are very similar. I have tried to do a maximum heart rate test; I can get to a heart rate of around 150-160 for a tempo run, but it barely goes any higher when I sprint. So, I have a question for anyone with experience of training after cancer and heart surgery. Is it expected after stopping the heart that my maximum heart rate should be way lower? Or is the problem that my hormone levels are different due to the lack of one of my adrenal glands? Or is it just that I am getting older and have lost muscle mass? I am not sure I will ever know the answer to that one; it is not exactly a question the NHS is set up to answer. All their post-operative guidance is aimed at day-to-day levels of exertion, not the elevated levels caused by sports.</p>
<p>But that is a minor gripe, I am reasonably fit again. I have recently completed my first triathlon in 5 years and between lockdowns walked the 268 miles of the Pennine Way with my partner. I am not as fast as I was, but I am 5 years older and have had major heart surgery. Hell, I am alive.</p>
<p>Like all cancer patients, this is not the end of the road for my treatment. I am still on steroids and have annual CT scans, but all the signs seem good that the surgery got the tumour and there is no reason I should not live to a ripe old age.</p>
<p>I would not have got here without the support of my partner and family, and the unbelievable work of the NHS and the support services I have used. I can’t thank you all enough.</p>
<p><a href="https://www.leedshospitalscharity.org.uk/Listing/Category/make-a-donation">Leeds Hospital Charity</a> - the charity of Leeds Teaching Hospitals<br>
<a href="https://donation.macmillan.org.uk/">Macmillan Cancer Support</a> - support for cancer patients and their families<br>
<a href="https://www.blood.co.uk/">NHS Blood Transfusion Service</a> – please consider giving blood, without regular donations surgery like mine is not possible.</p>
<h3 id="update-18-aug-2022">Update 18 Aug 2022</h3>
<p>I have just had my five year checkup at the Leeds Cancer Centre and I am pleased to say all is good, there are no signs of any reoccurrence of my cancer.</p>
<p>For my type of adrenal cancer, the statistics say if you survive to five years after treatment without problems, your reoccurrence risk levels fall back to the baseline level for the general population.</p>
<p>So, I am currently cancer free and my chances of a reoccurrence are no higher than those for the general population. This is a real success story, and I hope it gives hope to others starting their cancer journey.</p>
]]></content:encoded>
    </item>
    <item>
      <title>But what if I can&#39;t use GitHub Codespaces? Welcome to github.dev</title>
      <link>https://blog.richardfennell.net/posts/but-what-if-i-cant-use-github-codespaces-welcome-to-github-dev/</link>
      <pubDate>Thu, 12 Aug 2021 09:21:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/but-what-if-i-cant-use-github-codespaces-welcome-to-github-dev/</guid>
      <description>&lt;p&gt;Yesterday &lt;a href=&#34;https://github.com/features/codespaces&#34;&gt;GitHub released Codespaces&lt;/a&gt; as a commercial offering. A new feature I have been using during its beta phase.&lt;/p&gt;
&lt;p&gt;Codespaces provides a means for developers to easily edit GitHub hosted repos in Visual Studio Code on a high-performance VM.&lt;/p&gt;
&lt;p&gt;No longer does the new developer on the team have to spend ages getting their local device setup &amp;lsquo;just right&amp;rsquo;. They can, in a couple of clicks, provision a Codespace that is preconfigured for the exact needs of the project i.e. the correct VM performance, the right VS Code extensions and the debug environment configured. All billed on a pay as you go basis and accessible from any client.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Yesterday <a href="https://github.com/features/codespaces">GitHub released Codespaces</a> as a commercial offering. A new feature I have been using during its beta phase.</p>
<p>Codespaces provides a means for developers to easily edit GitHub hosted repos in Visual Studio Code on a high-performance VM.</p>
<p>No longer does the new developer on the team have to spend ages getting their local device setup &lsquo;just right&rsquo;. They can, in a couple of clicks, provision a Codespace that is preconfigured for the exact needs of the project i.e. the correct VM performance, the right VS Code extensions and the debug environment configured. All billed on a pay as you go basis and accessible from any client.</p>
<p>It could be a game-changer for many development scenarios.</p>
<p>However, there is one major issue. Codespaces, at launch, are only available on GitHub Teams or Enterprise subscriptions. They are not available on Individual accounts, as yet.</p>
<p>But all is not lost: hidden within the documentation, but <a href="https://twitter.com/github/status/1425505817827151872">widely tweeted about</a>, is the github.dev editor. You can think of this as Codespace Lite i.e. it is completely browser-based, so there is no backing VM resource.</p>
<p>To use this feature, alter your URL <a href="https://github.com/myname/myrepo">https://github.com/myname/myrepo</a> to <a href="https://github.dev/myname/myrepo">https://github.dev/myname/myrepo</a> . Or when browsing the repo just press the . (period) and you swap into a browser-hosted version of VS Code.</p>
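<p>The rewrite is purely mechanical, so it is trivial to script. For example, a tiny (purely illustrative) shell helper:</p>

```shell
# Convert a github.com repo URL into its github.dev editor equivalent.
to_github_dev() {
  printf '%s\n' "$1" | sed 's|^https://github\.com/|https://github.dev/|'
}

to_github_dev "https://github.com/myname/myrepo"
# prints: https://github.dev/myname/myrepo
```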
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/08/image-1024x538.png"></p>
<p>You can install a good number of extensions, just as long as they don&rsquo;t require external compute resources.</p>
<p>So, this is a great tool for any quick edit that requires multiple files to be touched in the same commit.</p>
<p>I think it is going to be interesting to see how github.dev and Codespaces are used. Maybe we will see the end of massive developer PCs?</p>
<p>Or will that have to wait until the available Codespace VMs offer GPUs?</p>
]]></content:encoded>
    </item>
    <item>
      <title>How I dealt with a strange problem with PSRepositories and dotnet NuGet sources</title>
      <link>https://blog.richardfennell.net/posts/how-i-dealt-with-a-strange-problem-with-psrepositories-and-dotnet-nuget-sources/</link>
      <pubDate>Fri, 16 Jul 2021 15:03:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-i-dealt-with-a-strange-problem-with-psrepositories-and-dotnet-nuget-sources/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;We regularly re-build our Azure DevOps private agents using Packer and Lability, as I have &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2020/03/02/you-need-to-pass-a-github-pat-to-create-azure-devops-agent-images-using-packer/&#34;&gt;posted about before.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Since the latest re-build, we have seen all sorts of problems. All related to pulling packages and tools from NuGet based repositories. Problems we have never seen with any previous generation of our agents.&lt;/p&gt;
&lt;h2 id=&#34;the-issue&#34;&gt;The Issue&lt;/h2&gt;
&lt;p&gt;The issue turned out to be related to registering a private PowerShell repository.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$RegisterSplat = @{
Name = &amp;#39;PrivateRepo&amp;#39;
SourceLocation = &amp;#39;https://psgallery.mydomain.co.uk/nuget/PowerShell&amp;#39;
PublishLocation = &amp;#39;https://psgallery.mydomain.co.uk/nuget/PowerShell&amp;#39;
InstallationPolicy = &amp;#39;Trusted&amp;#39;
}

Register-PSRepository @RegisterSplat
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Running this command caused the default dotnet NuGet repository to be unregistered i.e. the command &lt;strong&gt;dotnet nuget list source&lt;/strong&gt; was expected to return&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>We regularly re-build our Azure DevOps private agents using Packer and Lability, as I have <a href="https://blogs.blackmarble.co.uk/rfennell/2020/03/02/you-need-to-pass-a-github-pat-to-create-azure-devops-agent-images-using-packer/">posted about before.</a></p>
<p>Since the latest re-build, we have seen all sorts of problems. All related to pulling packages and tools from NuGet based repositories. Problems we have never seen with any previous generation of our agents.</p>
<h2 id="the-issue">The Issue</h2>
<p>The issue turned out to be related to registering a private PowerShell repository.</p>
<pre tabindex="0"><code>$RegisterSplat = @{
Name = &#39;PrivateRepo&#39;
SourceLocation = &#39;https://psgallery.mydomain.co.uk/nuget/PowerShell&#39;
PublishLocation = &#39;https://psgallery.mydomain.co.uk/nuget/PowerShell&#39;
InstallationPolicy = &#39;Trusted&#39;
}

Register-PSRepository @RegisterSplat
</code></pre><p>Running this command caused the default dotnet NuGet repository to be unregistered i.e. the command <strong>dotnet nuget list source</strong> was expected to return</p>
<pre tabindex="0"><code>Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  nuget.org [Enabled]
      https://www.nuget.org/api/v2/
  3.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages
</code></pre><p>But it returned</p>
<pre tabindex="0"><code>Registered Sources:
  1.  PrivateRepo
      https://psgallery.mydomain.co.uk/nuget/Nuget
  2.  Microsoft Visual Studio Offline Packages [Enabled]
      C:\Program Files (x86)\Microsoft SDKs\NuGetPackages
</code></pre><h2 id="the-workaround">The Workaround</h2>
<p>You can&rsquo;t call this a solution, as I cannot see why it is really needed, but the following command does fix the problem</p>
<pre tabindex="0"><code> dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Porting my Visual Studio Parameters.xml Generator tool to Visual Studio 2022 Preview</title>
      <link>https://blog.richardfennell.net/posts/porting-my-visual-studio-parameters-xml-generator-tool-to-visual-studio-2022-preview/</link>
      <pubDate>Tue, 22 Jun 2021 11:08:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/porting-my-visual-studio-parameters-xml-generator-tool-to-visual-studio-2022-preview/</guid>
      <description>&lt;p&gt;As I am sure you are all aware the preview of &lt;a href=&#34;https://devblogs.microsoft.com/visualstudio/visual-studio-2022-preview-1-now-available/&#34;&gt;Visual Studio 2022&lt;/a&gt; has just dropped, so it is time for me to update my Parameter.xml Generator Tool to support this new version of Visual Studio.&lt;/p&gt;
&lt;h2 id=&#34;but-what-does-my-extension-do&#34;&gt;But what does my extension do?&lt;/h2&gt;
&lt;p&gt;As the Marketplace description says&amp;hellip;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;A tool to generate parameters.xml files for MSdeploy from the existing web.config file or from an app.config file for use with your own bespoke configuration transformation system.&lt;/em&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>As I am sure you are all aware, the preview of <a href="https://devblogs.microsoft.com/visualstudio/visual-studio-2022-preview-1-now-available/">Visual Studio 2022</a> has just dropped, so it is time for me to update my Parameter.xml Generator Tool to support this new version of Visual Studio.</p>
<h2 id="but-what-does-my-extension-do">But what does my extension do?</h2>
<p>As the Marketplace description says&hellip;</p>
<p><em>A tool to generate parameters.xml files for MSdeploy from the existing web.config file or from an app.config file for use with your own bespoke configuration transformation system.</em></p>
<p><em>Once the VSIX package is installed, to use it right-click on a web.config, or app.config, file in Solution Explorer and the parameters.xml file will be generated using the current web.config entries for both <strong>configuration/applicationSettings</strong> and <strong>configuration/AppSettings</strong>. The values attributes will contain TAG style entries suitable for replacement at deployment time.</em></p>
<p><em>If the parameters.xml already exists in the folder (even if it is not a file in the project) you will be prompted before it is overwritten.</em></p>
<p>Currently the version in the Marketplace of <a href="https://marketplace.visualstudio.com/items?itemName=RichardFennellMVP.ParametersXmlGenerator">Parameter.xml Generator Tool supports Visual Studio 2015, 2017 &amp; 2019</a></p>
<h2 id="adding-visual-studio-2022-support">Adding Visual Studio 2022 Support</h2>
<p>The process to add 2022 support is more complicated than it was for previous new versions, where all that was usually required was an update to the manifest. This is due to the move to 64-bit.</p>
<p>Luckily the process <a href="https://docs.microsoft.com/en-gb/visualstudio/extensibility/migration/update-visual-studio-extension?view=vs-2022">is fairly well documented</a>, but of course I still had a few problems.</p>
<h3 id="msb4062-the-comparebuildtaskversion-task-could-not-be-loaded-from-the-assembly">MSB4062: The &ldquo;CompareBuildTaskVersion&rdquo; task could not be loaded from the assembly</h3>
<p>When I tried to build the existing solution, without any changes, in Visual Studio 2022 I got the error</p>
<p><em>MSB4062: The &ldquo;CompareBuildTaskVersion&rdquo; task could not be loaded from the assembly D:\myproject\packages\Microsoft.VSSDK.BuildTools.15.8.3253\tools\VSSDK\Microsoft.VisualStudio.Sdk.BuildTasks.15.0.dll. Could not load file or assembly.</em></p>
<p>This was fixed by updating the package Microsoft.VSSDK.BuildTools from 15.1.192 to 16.9.1050.</p>
<h3 id="modernizing-the-existing-vsix-project">Modernizing the Existing VSIX project</h3>
<p>I did not modernize the existing VSIX project before I started the migration. When I clicked the <strong>Migrate packages.config to PackageReference….</strong> it said my project was not a suitable version. So I just moved to the next step.</p>
<h3 id="adding-link-files">Adding Link Files</h3>
<p>After creating the shared code project, that contains the bulk of the files, I needed to add links to some of the resources i.e. the license file, the package icon and .VSCT file.</p>
<p>When I tried to add the link, I got an error in the form</p>
<p><em>Cannot add another link for the same file in another project</em></p>
<p>I tried exiting Visual Studio, cleaning the solution, nothing helped. The solution was to edit the .CSPROJ file manually in a text editor e.g.</p>
<pre tabindex="0"><code> &lt;ItemGroup&gt;
    &lt;Content Include=&#34;ResourcesLicense.txt&#34;&gt;
      &lt;CopyToOutputDirectory&gt;Always&lt;/CopyToOutputDirectory&gt;
    &lt;Content Include=&#34;..ParametersXmlAddinSharedResourcesPackage.ico&#34;&gt;
      &lt;Link&gt;Package.ico&lt;/Link&gt;
      &lt;IncludeInVSIX&gt;true&lt;/IncludeInVSIX&gt;
    &lt;/Content&gt;
    &lt;Content Include=&#34;ResourcesPackage.ico&#34;&gt;
      &lt;CopyToOutputDirectory&gt;Always&lt;/CopyToOutputDirectory&gt;
    &lt;Content Include=&#34;..ParametersXmlAddinSharedResourcesLicense.txt&#34;&gt;
      &lt;Link&gt;License.txt&lt;/Link&gt;
      &lt;IncludeInVSIX&gt;true&lt;/IncludeInVSIX&gt;
    &lt;/Content&gt;
    &lt;EmbeddedResource Include=&#34;ResourcesParametersUppercaseTransform.xslt&#34; /&gt;
    &lt;VSCTCompile Include=&#34;..ParametersXmlAddinSharedParametersXmlAddin.vsct&#34;&gt;
      &lt;Link&gt;ParametersXmlAddin.vsct&lt;/Link&gt;
      &lt;ResourceName&gt;Menus.ctmenu&lt;/ResourceName&gt;
    &lt;/VSCTCompile&gt;
  &lt;/ItemGroup&gt;
</code></pre><h2 id="publishing-the-new-extension">Publishing the new Extension</h2>
<p>Once I had completed the migration steps, I had a pair of VSIX files. The previously existing one that supported Visual Studio 2015, 2017 &amp; 2019 and the new Visual Studio 2022 version.</p>
<p>The <a href="https://docs.microsoft.com/en-gb/visualstudio/extensibility/migration/update-visual-studio-extension?view=vs-2022#publish-your-extension">migration notes</a> say that in the future we will be able to upload both VSIX files to a single Marketplace entry and the Marketplace will sort out delivering the correct version.</p>
<p>Unfortunately, that feature is not available at present. So for now the new Visual Studio 2022 VSIX is published separately from the old one with a <a href="https://marketplace.visualstudio.com/items?itemName=RichardFennellMVP.ParametersXmlGeneratorDev17">preview flag</a>.</p>
<p>As soon as I can, I will merge the new VSIX into the old Marketplace entry and remove the preview 2022 version of the VSIX.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Automating the creation of Team Projects in Azure DevOps</title>
      <link>https://blog.richardfennell.net/posts/automating-the-creation-of-team-projects-in-azure-devops/</link>
      <pubDate>Thu, 10 Jun 2021 16:28:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/automating-the-creation-of-team-projects-in-azure-devops/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/organizations/projects/create-project?view=azure-devops&amp;amp;tabs=preview-page&#34;&gt;Creating a new project&lt;/a&gt; in Azure DevOps with your desired &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/organizations/settings/work/customize-process?view=azure-devops&#34;&gt;process template&lt;/a&gt; is straightforward. However, it is only the start of the job for most administrators. They will commonly want to set up other configuration settings such as &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops&#34;&gt;branch protection rules&lt;/a&gt;, default pipelines etc. before giving the team access to the project. All this administration can be very time consuming and of course prone to human error.&lt;/p&gt;
&lt;p&gt;To make this process easier, quicker and more consistent, I have developed a process to automate all of this work. It uses a mixture of the following:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://docs.microsoft.com/en-us/azure/devops/organizations/projects/create-project?view=azure-devops&amp;tabs=preview-page">Creating a new project</a> in Azure DevOps with your desired <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/settings/work/customize-process?view=azure-devops">process template</a> is straightforward. However, it is only the start of the job for most administrators. They will commonly want to set up other configuration settings such as <a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops">branch protection rules</a>, default pipelines etc. before giving the team access to the project. All this administration can be very time consuming and of course prone to human error.</p>
<p>To make this process easier, quicker and more consistent I have developed a process to automate all of this work. It uses a mixture of the following:</p>
<p><strong>A sample team project</strong> that contains a Git repo containing the base code I want in my new Team Project&rsquo;s default Git repo. In my case this includes</p>
<ul>
<li>An empty Azure Resource Management (ARM) template</li>
<li>A .NET Core Hello World console app with an associated .NET Core Unit Test project</li>
<li>A YAML pipeline to build and test the above items, as well as <a href="https://blogs.blackmarble.co.uk/rfennell/2020/04/22/i-decided-to-create-a-video-of-my-blog-post-on-multistage-yaml-pipelines/">generating release notes</a> into the Team Project WIKI</li>
</ul>
<p>A <strong>PowerShell script</strong> that uses both <a href="https://docs.microsoft.com/en-us/cli/azure/devops?view=azure-cli-latest">az devops</a> and the <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-6.1">Azure DevOps REST API</a> to</p>
<ul>
<li>Create a new Team Project</li>
<li>Import the sample project Git repo into the new Team Project</li>
<li>Create a WIKI in the new Team Project</li>
<li>Add a <a href="https://www.sonarqube.org/">SonarQube</a>/<a href="https://sonarcloud.io/">SonarCloud</a> Service Endpoint</li>
<li>Update the YAML file for the pipeline to point to the newly created project resources</li>
<li>Update the branch protection rules</li>
<li>Grant access privileges as needed for service accounts</li>
</ul>
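<p>As an illustration of the first few steps, here is a minimal sketch using the <code>az devops</code> CLI extension. The organisation URL, project name and sample repo URL are placeholder values, and the <code>DRY_RUN</code> switch (defaulting to on) just prints each command instead of executing it; the full PowerShell script linked at the end of this post is the real implementation.</p>

```shell
#!/bin/sh
# Sketch of the first project-bootstrap steps using the az devops CLI extension.
# ORG, PROJECT and SAMPLE_REPO_URL are placeholder values, not real ones.
ORG="https://dev.azure.com/MyOrg"
PROJECT="NewTeamProject"
SAMPLE_REPO_URL="https://dev.azure.com/MyOrg/SampleProject/_git/SampleRepo"

# With DRY_RUN=1 (the default here) commands are printed rather than executed.
DRY_RUN="${DRY_RUN:-1}"
run() { [ "$DRY_RUN" = "1" ] && echo "$*" || "$@"; }

# 1. Create the new Team Project (this also creates its default Git repo)
run az devops project create --name "$PROJECT" --organization "$ORG"

# 2. Import the sample Git repo into the new project's default repo
run az repos import create --git-url "$SAMPLE_REPO_URL" \
  --project "$PROJECT" --repository "$PROJECT" --organization "$ORG"

# 3. Create the project WIKI
run az devops wiki create --name "$PROJECT.wiki" --type projectwiki \
  --project "$PROJECT" --organization "$ORG"
```

<p>To run the commands for real, set <code>DRY_RUN=0</code>, install the <code>azure-devops</code> CLI extension and sign in with <code>az login</code> first.</p>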
<p>The script is far from perfect and could do much more, but it covers the core requirements I need.</p>
<p>You could of course enhance it as required, removing features you don&rsquo;t need and adding code to do jobs such as <a href="https://github.com/rfennell/AzureDevOpsPowershell/blob/main/REST/Add-StandardBacklogTasks.ps1">adding any standard Work Items</a> you require at the start of a project. Or altering the contents of the sample repo to be cloned to better match your most common project needs.</p>
<p>You can find the PowerShell script in <a href="https://github.com/rfennell/AzureDevOpsPowershell/blob/main/REST/Create-TeamProject.ps1">AzureDevOpsPowershell GitHub repo</a>, hope you find it useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting the approver for release to an environment within an Azure DevOps Multi-Stage YAML pipeline</title>
      <link>https://blog.richardfennell.net/posts/getting-the-approver-for-release-to-an-environment-within-an-azure-devops-multi-stage-yaml-pipeline/</link>
      <pubDate>Sat, 15 May 2021 14:53:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-the-approver-for-release-to-an-environment-within-an-azure-devops-multi-stage-yaml-pipeline/</guid>
      <description>&lt;p&gt;I recently had the need to get the email address of the approver of a deployment to an environment from within a multi-stage YAML pipeline. Turns out it was not as easy as I might have hoped given the available documented APIs.&lt;/p&gt;
&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;My YAML pipeline included a &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&amp;amp;tabs=check-pass&#34;&gt;manual approval&lt;/a&gt; to allow deployment to a given environment. Within the stage protected by the approval, I needed the approver&amp;rsquo;s details, specifically their email address.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently had the need to get the email address of the approver of a deployment to an environment from within a multi-stage YAML pipeline. Turns out it was not as easy as I might have hoped given the available documented APIs.</p>
<h3 id="background">Background</h3>
<p>My YAML pipeline included a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&amp;tabs=check-pass">manual approval</a> to allow deployment to a given environment. Within the stage protected by the approval, I needed the approver&rsquo;s details, specifically their email address.</p>
<p>I managed to achieve this but had to use undocumented API calls. These were discovered by watching the Azure DevOps UI operations using the developer tools within my browser.</p>
<h3 id="the-solution">The Solution</h3>
<p>The process was as follows:</p>
<ul>
<li>Make a call to the build&rsquo;s timeline to get the current stage&rsquo;s GUID - this is documented <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/build/timeline/get?view=azure-devops-rest-6.0">API call</a></li>
<li>Make a call to the <strong>Contribution/HierarchyQuery</strong> API to get the approver details. This is the undocumented API call.</li>
</ul>
<p>The code to do this is as shown below. It makes use of <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&amp;tabs=yaml">predefined variables</a> to pass in the details of the current run and stage.</p>
<p>Note that I had to re-create the web client object between each API call. If I did not do this I got a 400 Bad Request on the second API call - it took me ages to figure this out!</p>
<script src="https://gist.github.com/rfennell/1bc5cedf41dc169737e6bbf355f7d151.js"></script>
]]></content:encoded>
    </item>
    <item>
      <title>Loading drivers for cross-browser testing with Selenium</title>
      <link>https://blog.richardfennell.net/posts/loading-drivers-for-cross-browser-testing-with-selenium/</link>
      <pubDate>Wed, 21 Apr 2021 13:32:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/loading-drivers-for-cross-browser-testing-with-selenium/</guid>
      <description>&lt;p&gt;&lt;em&gt;Another post so I don&amp;rsquo;t forget how I fixed a problem&amp;hellip;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I have been making sure some Selenium UX tests that were originally written against Chrome also work with other browsers. I have had a few problems, the browser under test failing to load or Selenium not being able to find elements.&lt;/p&gt;
&lt;p&gt;Turns out the solution is to just use the custom driver start-up options; the default constructors don&amp;rsquo;t seem to work for browsers other than Chrome and Firefox.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>Another post so I don&rsquo;t forget how I fixed a problem&hellip;.</em></p>
<p>I have been making sure some Selenium UX tests that were originally written against Chrome also work with other browsers. I have had a few problems, the browser under test failing to load or Selenium not being able to find elements.</p>
<p>Turns out the solution is to just use the custom driver start-up options; the default constructors don&rsquo;t seem to work for browsers other than Chrome and Firefox.</p>
<p>Hence, I now have a helper method that creates a driver for me based on a configuration parameter:</p>
<pre tabindex="0"><code> internal static IWebDriver GetWebDriver()
    {
        var driverName = GetWebConfigSetting(&#34;webdriver&#34;);
        switch (driverName)
        {
            case &#34;Chrome&#34;:
                return new ChromeDriver();
            case &#34;Firefox&#34;:
                return new FirefoxDriver();
            case &#34;IE&#34;:
                InternetExplorerOptions caps = new InternetExplorerOptions();
                caps.IgnoreZoomLevel = true;
                caps.EnableNativeEvents = false;
                caps.IntroduceInstabilityByIgnoringProtectedModeSettings = true;
                caps.EnablePersistentHover = true;
                return new InternetExplorerDriver(caps);
            case &#34;Edge-Chromium&#34;:
                var service = EdgeDriverService.CreateDefaultService(Directory.GetCurrentDirectory(), &#34;msedgedriver.exe&#34;);
                return new EdgeDriver(service);
            default:
                throw new ConfigurationErrorsException($&#34;{driverName} is not a known Selenium WebDriver&#34;);
        }
    }
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>A first look at the beta of GitHub Issue Forms</title>
      <link>https://blog.richardfennell.net/posts/a-first-look-at-the-beta-of-github-issue-forms/</link>
      <pubDate>Tue, 06 Apr 2021 11:16:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-first-look-at-the-beta-of-github-issue-forms/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Update 10 May 2021&lt;/strong&gt; - &lt;em&gt;Remember that GitHub Issue Forms are in early beta, you need to keep an eye on the regular new releases as they come out. For example, my GitHub Forms stopped showing last week. This was due to me using now deprecate lines in the YAML definition files. Once I edited the files to update to support YAML they all leap back into life&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;GitHub Issues are core to tracking work in GitHub. Their flexibility is their biggest advantage and disadvantage. As a maintainer of projects, I always need specific information when an issue is raised, whether it be a bug or a feature request.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Update 10 May 2021</strong> - <em>Remember that GitHub Issue Forms are in early beta, you need to keep an eye on the regular new releases as they come out. For example, my GitHub Forms stopped showing last week. This was due to me using now deprecate lines in the YAML definition files. Once I edited the files to update to support YAML they all leap back into life</em></p>
<hr>
<p>GitHub Issues are core to tracking work in GitHub. Their flexibility is their biggest advantage and disadvantage. As a maintainer of projects, I always need specific information when an issue is raised, whether it be a bug or a feature request.</p>
<p>Historically, I have used <a href="https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository">Issue Templates</a>, but these templates are not enforced. They add a suggestion for the issue text, but this can be ignored by the person raising the issue, and I can assure you they often do.</p>
<p>I have been lucky enough to have a look at GitHub Issue Forms, which is currently in early private beta. This new feature aims to address the problem by making the creation of issues form-based using YAML templates.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/04/image.png"></p>
<p>I have swapped to using them on my most active repos <a href="https://github.com/rfennell/AzurePipelines/issues">Azure DevOps Pipeline extensions</a> and <a href="https://github.com/rfennell/ReleaseNotesAction/issues">GitHub Release Notes Action</a>. My initial experience has been very good, the usual YML issue of incorrect indenting, but nothing more serious. They allow the easy creation of rich forms that are specific to the project.</p>
<p>The next step is to see if the quality of the logged issues improves.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Tidying up local branches with a Git Alias and a PowerShell Script</title>
      <link>https://blog.richardfennell.net/posts/tidying-up-local-branches-with-a-git-alias-and-a-powershell-script/</link>
      <pubDate>Tue, 16 Mar 2021 12:20:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tidying-up-local-branches-with-a-git-alias-and-a-powershell-script/</guid>
      <description>&lt;p&gt;It is easy to get your local branches in Git out of sync with the upstream repository, leaving old dead branches locally that you can&amp;rsquo;t remember creating. You can use the &lt;em&gt;prune&lt;/em&gt; option on your Git Fetch command to remove the remote branch references, but that command does nothing to remove local branches.&lt;/p&gt;
&lt;p&gt;A good while ago, I wrote a small PowerShell script to wrap the running of Git fetch, then, based on the deletions, remove any matching local branches, finally returning me to my trunk branch.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is easy to get your local branches in Git out of sync with the upstream repository, leaving old dead branches locally that you can&rsquo;t remember creating. You can use the <em>prune</em> option on your Git Fetch command to remove the remote branch references, but that command does nothing to remove local branches.</p>
<p>A good while ago, I wrote a small PowerShell script to wrap the running of Git fetch, then, based on the deletions, remove any matching local branches, finally returning me to my trunk branch.</p>
<p><strong>Note:</strong> This script was based on a sample I found, but I can&rsquo;t remember where, so can&rsquo;t give credit, sorry.</p>
<script src="https://gist.github.com/rfennell/281e7e6c34d5ab511b2a4b38b8ceae72.js"></script>
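<p>The gist is PowerShell, but the core logic can also be sketched in plain shell. A rough equivalent, assuming the trunk branch is called <code>main</code>, would be:</p>

```shell
# Rough shell equivalent of the tidy script: prune the remote-tracking refs,
# delete any local branch whose upstream has gone, then return to trunk.
# Assumes the trunk branch is called 'main'.
tidy() {
  # remove remote-tracking refs for branches deleted upstream
  git fetch --prune
  # delete any local branch whose upstream no longer exists
  git for-each-ref --format '%(refname:short) %(upstream:track)' refs/heads |
  while read -r branch track; do
    if [ "$track" = "[gone]" ]; then
      git branch -D "$branch"
    fi
  done
  # finally return to the trunk branch
  git checkout main
}
```

<p>The <code>%(upstream:track)</code> format placeholder reports <code>[gone]</code> once a branch&rsquo;s upstream has been deleted, which is exactly the state the prune leaves dead local branches in.</p>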
<p>I used to just run this command from the command line, but I recently thought it would be easier if it became a Git Alias. As Git Aliases run a bash shell, this meant I needed to shell out to PowerShell 7. Hence, my Git Config ended up being as shown below</p>
<pre tabindex="0"><code>[user]
        name = Richard Fennell
        email = richard@blackmarble.co.uk
[filter &#34;lfs&#34;]
        required = true
        clean = git-lfs clean -- %f
        smudge = git-lfs smudge -- %f
        process = git-lfs filter-process
[init]
        defaultBranch = main
[alias]
        tidy = !pwsh.exe C:/Users/fez/OneDrive/Tools/Remove-DeletedGitBranches.ps1 -force
</code></pre><p>I can just run <code>git tidy</code> and all my branches get sorted out.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Added Manual Test Plan support to my Azure DevOps Cross-Platform Release notes Task</title>
      <link>https://blog.richardfennell.net/posts/added-manual-test-plan-support-to-my-azure-devops-cross-platform-release-notes-task/</link>
      <pubDate>Wed, 03 Mar 2021 09:58:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/added-manual-test-plan-support-to-my-azure-devops-cross-platform-release-notes-task/</guid>
      <description>&lt;p&gt;I have just release 3.46.4 of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Azure DevOps Cross-Platform release Notes task&lt;/a&gt; which adds support for generating release notes based on the results of &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devops/test-plans/&#34;&gt;Azure DevOps Test Plans&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There has been support in the task for automated tests, run as part of the build or release process, for a while. However, until this release, there was no way to generate release notes based on manual tests.&lt;/p&gt;
&lt;p&gt;Manual Test results are now made available to the templating engine using two new objects:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just release 3.46.4 of my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Azure DevOps Cross-Platform release Notes task</a> which adds support for generating release notes based on the results of <a href="https://azure.microsoft.com/en-gb/services/devops/test-plans/">Azure DevOps Test Plans</a>.</p>
<p>There has been support in the task for automated tests, run as part of the build or release process, for a while. However, until this release, there was no way to generate release notes based on manual tests.</p>
<p>Manual Test results are now made available to the templating engine using two new objects:</p>
<ul>
<li>manualtests - the array of manual Test Plan runs associated with any of the builds linked to the release. This includes sub-objects detailing each test.<br>
<strong>Note:</strong> Test runs are also available under the builds array when the task is used in a release; for each build object there is a list of its manual tests, as well as commits and work items etc.</li>
<li>manualTestConfigurations - the array of manual test configurations the tests have been run against.</li>
</ul>
<p>The second object, storing the test configurations, is required because the test results contain only the ID of the configuration used, not any useful detail such as a name or description. The extra object allows a lookup to be done if this information is required in the release notes, e.g. if you have chosen to list out each test, and each test is run multiple times against different configurations such as UAT and Live.</p>
<p>So you can now generate release notes with summaries of manual test runs</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/03/image-1024x196.png"></p>
<p>Using a template in this form</p>
<pre tabindex="0"><code>## Manual Test Plans
| Run ID | Name | State | Total Tests | Passed Tests |
| --- | --- | --- | --- | --- |
{{#forEach manualTests}}
| [{{this.id}}]({{this.webAccessUrl}}) | {{this.name}} | {{this.state}} | {{this.totalTests}} | {{this.passedTests}} |
{{/forEach}}
</code></pre><p>Or detailing out all the individual tests</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2021/03/image-2-1024x388.png"></p>
<p>with a template like this</p>
<pre tabindex="0"><code>## Manual Test Plans with test details
{{#forEach manualTests}}
### [{{this.id}}]({{this.webAccessUrl}}) {{this.name}} - {{this.state}}
| Test | Outcome | Configuration |
| - | - | - |
{{#forEach this.TestResults}}
| {{this.testCaseTitle}} | {{this.outcome}} | {{#with (lookup_a_test_configuration ../../manualTestConfigurations this.configuration.id)}} {{this.name}} {{/with}} |
{{/forEach}}
{{/forEach}} 
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Fixing my SQLite Error 5: &#39;database is locked&#39; error in Entity Framework</title>
      <link>https://blog.richardfennell.net/posts/fixing-my-sqlite-error-5-database-is-locked-error-in-entity-framework/</link>
      <pubDate>Fri, 12 Feb 2021 17:21:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-my-sqlite-error-5-database-is-locked-error-in-entity-framework/</guid>
      <description>&lt;p&gt;I have spent too long today trying to track down an intermittent “SQLite Error 5: &amp;lsquo;database is locked&amp;rsquo;” error in .Net Core Entity Framework.&lt;/p&gt;
&lt;p&gt;I have read plenty of documentation and even tried swapping to use SQL Server, as opposed to SQLite, but this just resulted in the error ‘There is already an open DataReader associated with this Connection which must be closed first.’.&lt;/p&gt;
&lt;p&gt;So everything pointed to it being a mistake I had made.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have spent too long today trying to track down an intermittent “SQLite Error 5: &lsquo;database is locked&rsquo;” error in .Net Core Entity Framework.</p>
<p>I have read plenty of documentation and even tried swapping to use SQL Server, as opposed to SQLite, but this just resulted in the error ‘There is already an open DataReader associated with this Connection which must be closed first.’.</p>
<p>So everything pointed to it being a mistake I had made.</p>
<p>And it was: it turns out the issue was that I had the dbContext.SaveChanges() call inside a foreach loop.</p>
<p>It was</p>
<pre tabindex="0"><code>using (var dbContext = scope.ServiceProvider.GetRequiredService()) {
    var itemsToQueue = dbContext.CopyOperations.Where(o =&gt; o.RequestedStartTime &lt; DateTime.UtcNow &amp;&amp; o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
        dbContext.SaveChanges();
    }
}
</code></pre><p>And it should have been</p>
<pre tabindex="0"><code> using (var dbContext = scope.ServiceProvider.GetRequiredService()) {
    var itemsToQueue = dbContext.CopyOperations.Where(o =&gt; o.RequestedStartTime &lt; DateTime.UtcNow &amp;&amp; o.Status == OperationStatus.Queued);
    foreach (var item in itemsToQueue) {
        item.Status = OperationStatus.StartRequested;
        item.StartTime = DateTime.UtcNow;
    }
    dbContext.SaveChanges();
}
</code></pre><p>Once this change was made my error disappeared.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What to do when moving your Azure DevOps organisation from one region to another is delayed.</title>
      <link>https://blog.richardfennell.net/posts/what-to-do-when-moving-your-azure-devops-organisation-from-one-region-to-another-is-delayed/</link>
      <pubDate>Mon, 25 Jan 2021 10:09:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-to-do-when-moving-your-azure-devops-organisation-from-one-region-to-another-is-delayed/</guid>
      <description>&lt;p&gt;There are good reasons why you might wish to move an existing Azure DevOps organisation from one region to another. The most common ones are probably:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A new Azure DevOps region has become available since you created your organisation that is a &amp;lsquo;better home&amp;rsquo; for your projects.&lt;/li&gt;
&lt;li&gt;New or changing national regulations require your source to be stored in a specific location.&lt;/li&gt;
&lt;li&gt;You want your repositories as close to your workers as possible, to reduce network latency.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One of these reasons meant I recently had to move an Azure DevOps organisation, so I followed the &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location?view=azure-devops#change-organization-region&#34;&gt;documented process&lt;/a&gt;. This requires you to:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There are good reasons why you might wish to move an existing Azure DevOps organisation from one region to another. The most common ones are probably:</p>
<ul>
<li>A new Azure DevOps region has become available since you created your organisation that is a &lsquo;better home&rsquo; for your projects.</li>
<li>New or changing national regulations require your source to be stored in a specific location.</li>
<li>You want your repositories as close to your workers as possible, to reduce network latency.</li>
</ul>
<p>One of these reasons meant I recently had to move an Azure DevOps organisation, so I followed the <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/change-organization-location?view=azure-devops#change-organization-region">documented process</a>. This requires you to:</p>
<ol>
<li>Whilst logged in as the Azure DevOps organisation owner, open the <a href="https://azuredevopsvirtualagent.azurewebsites.net/">Azure DevOps Virtual Support Agent</a></li>
<li>Select the quick action &lsquo;Change Organization Region&rsquo;</li>
<li>Follow the wizard to pick the new region and the date for the move.</li>
</ol>
<p>You are warned that there could be a short loss of service during the move. Much of the move is done as a background process. It is only the final switch over that can interrupt service, hence this interruption being short.</p>
<p>I followed this process, but after the planned move date I found my organisation had not moved. In the Virtual Support Agent, I found the message:</p>
<blockquote>
<p>Please note that region move requests are currently delayed due to ongoing deployments. We may not be able to perform the change at your requested time and may ask you to reschedule. We apologize for the potential delay and appreciate your patience!</p></blockquote>
<p>I received no other emails (I suspect overly aggressive spam filters were the cause), which meant I was unclear what to do next. Should I:</p>
<ol>
<li>Just wait i.e. do not reschedule anything, even though the target date is now in the past</li>
<li>Reschedule the existing move request to a date in the future using the virtual assistant wizard</li>
<li>Cancel the old request and start the process again from scratch</li>
</ol>
<p>After asking the question in the <a href="https://developercommunity2.visualstudio.com/report?entry=problem&amp;space=21">Visual Studio Developer Community Forums</a> I was told the correct action is to cancel the old request and request a new move date. It seems that once your requested date has passed the move will not take place, no matter how long you wait.</p>
<p>Hence, I created a new request, which all went through exactly as planned.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Porting my Release Notes Azure DevOps Pipelines Extension to GitHub Actions</title>
      <link>https://blog.richardfennell.net/posts/porting-my-release-notes-azure-devops-pipelines-extension-to-github-actions/</link>
      <pubDate>Thu, 31 Dec 2020 14:29:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/porting-my-release-notes-azure-devops-pipelines-extension-to-github-actions/</guid>
      <description>&lt;p&gt;One of my most popular Azure DevOps Extensions is my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Release Notes Pipeline task&lt;/a&gt;. This allows the creation of release notes using information obtained from the Azure DevOps API and formatted using a &lt;a href=&#34;https://handlebarsjs.com/&#34;&gt;Handlebars Template&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Given the popularity of GitHub Actions, I got to wondering whether porting this extension was viable.&lt;/p&gt;
&lt;p&gt;Well the release of my new &lt;a href=&#34;https://github.com/marketplace/actions/generate-release-notes-using-handlebars-template&#34;&gt;Generate Release Notes with a Handlebars Template&lt;/a&gt; action shows that it was.&lt;/p&gt;
&lt;p&gt;The basic concept of this new action is the same as for the older task: get information on the pipeline/workflow run using the API and then format it using a Handlebars template. However, the information that can be returned is different. But this stands to reason as GitHub is not Azure DevOps. This is especially true when you consider the differences between the simplicity of GitHub Issues and the complexity, and variability of format, of Azure DevOps Work Items.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One of my most popular Azure DevOps Extensions is my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Release Notes Pipeline task</a>. This allows the creation of release notes using information obtained from the Azure DevOps API and formatted using a <a href="https://handlebarsjs.com/">Handlebars Template</a>.</p>
<p>Given the popularity of GitHub Actions, I got to wondering whether porting this extension was viable.</p>
<p>Well the release of my new <a href="https://github.com/marketplace/actions/generate-release-notes-using-handlebars-template">Generate Release Notes with a Handlebars Template</a> action shows that it was.</p>
<p>The basic concept of this new action is the same as for the older task: get information on the pipeline/workflow run using the API and then format it using a Handlebars template. However, the information that can be returned is different. But this stands to reason as GitHub is not Azure DevOps. This is especially true when you consider the differences between the simplicity of GitHub Issues and the complexity, and variability of format, of Azure DevOps Work Items.</p>
<p>The new action is focused on the workflow run it is called from. It makes an API call to get the details of the run. This contains a lot of information about the run and URL links to associated items. Using these links, the associated information is retrieved using the API and the results added to the objects available in the Handlebars template. In this initial version of the action the object tree available in a template includes:</p>
<ul>
<li>runDetails – the details of the current workflow run
<ul>
<li>pull_requests - the array of pull requests associated with the run
<ul>
<li>commits - the array of commits associated with the PR</li>
<li>comments - the array of comments associated with the PR</li>
<li>linkedIssues - the array of issues linked to the PR</li>
</ul>
</li>
</ul>
</li>
</ul>
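<p>As a purely illustrative sketch, a template walking this object tree might look like the following. Note that the field names inside each object (<code>title</code>, <code>commit.message</code>) are assumptions based on the usual GitHub REST API shapes, not taken from the action&rsquo;s documentation:</p>

```handlebars
# Release Notes

## Pull Requests
{{#forEach runDetails.pull_requests}}
### {{this.title}}
{{#forEach this.linkedIssues}}
- Linked issue: {{this.title}}
{{/forEach}}
{{#forEach this.commits}}
- Commit: {{this.commit.message}}
{{/forEach}}
{{/forEach}}
```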
<p>As with my Azure DevOps Extension I have made use of the extensibility of Handlebars. My new action includes all the <a href="https://github.com/helpers/handlebars-helpers">Handlebars Helpers</a>, plus some action-specific helpers I have written, and you have the ability to add your own custom Handlebars helpers if needed.</p>
<p>So I hope people find this new Action useful; I guess only time will tell.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My DDD2020 Session - How can I automatically create Azure DevOps Release Notes and how can I publish them</title>
      <link>https://blog.richardfennell.net/posts/my-ddd2020-session-how-can-i-automatically-create-azure-devops-release-notes-how-can-i-publish-them/</link>
      <pubDate>Mon, 21 Dec 2020 09:30:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-ddd2020-session-how-can-i-automatically-create-azure-devops-release-notes-how-can-i-publish-them/</guid>
      <description>&lt;p&gt;Really please to say that my DDD2020 session is now available to stream.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://youtube.com/watch?v=xaV3dFoQdV8&#34;&gt;https://youtube.com/watch?v=xaV3dFoQdV8&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Thanks to the organisers and sponsors that allowed this event to go ahead in this difficult year.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Really please to say that my DDD2020 session is now available to stream.</p>
<p><a href="https://youtube.com/watch?v=xaV3dFoQdV8">https://youtube.com/watch?v=xaV3dFoQdV8</a></p>
<p>Thanks to the organisers and sponsors that allowed this event to go ahead in this difficult year.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running UWP Unit Tests as part of an Azure DevOps Pipeline</title>
      <link>https://blog.richardfennell.net/posts/running-uwp-unit-tests-as-part-of-an-azure-devops-pipeline/</link>
      <pubDate>Tue, 08 Dec 2020 09:36:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-uwp-unit-tests-as-part-of-an-azure-devops-pipeline/</guid>
      <description>&lt;p&gt;I was reminded recently of the hoops you have to jump through to run &lt;a href=&#34;https://docs.microsoft.com/en-us/visualstudio/test/walkthrough-creating-and-running-unit-tests-for-windows-store-apps?view=vs-2019&#34;&gt;UWP unit tests&lt;/a&gt; within an Azure DevOps automated build.&lt;/p&gt;
&lt;p&gt;The key steps you need to remember are as follows&lt;/p&gt;
&lt;h3 id=&#34;desktop-interaction&#34;&gt;Desktop Interaction&lt;/h3&gt;
&lt;p&gt;The build agent should not be running as a service; it must be able to interact with the desktop.&lt;/p&gt;
&lt;p&gt;If you did not set this mode during configuration &lt;a href=&#34;https://www.donovanbrown.com/post/auto-start-build-agent-in-interactive-mode&#34;&gt;this post from Donovan Brown&lt;/a&gt; shows how to swap the agent over without a complete reconfiguration.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was reminded recently of the hoops you have to jump through to run <a href="https://docs.microsoft.com/en-us/visualstudio/test/walkthrough-creating-and-running-unit-tests-for-windows-store-apps?view=vs-2019">UWP unit tests</a> within an Azure DevOps automated build.</p>
<p>The key steps you need to remember are as follows</p>
<h3 id="desktop-interaction">Desktop Interaction</h3>
<p>The build agent should not be running as a service; it must be able to interact with the desktop.</p>
<p>If you did not set this mode during configuration <a href="https://www.donovanbrown.com/post/auto-start-build-agent-in-interactive-mode">this post from Donovan Brown</a> shows how to swap the agent over without a complete reconfiguration.</p>
<h3 id="test-assemblies">Test Assemblies</h3>
<p>The UWP unit test projects are not built as a DLL, but as an EXE.</p>
<p>I stupidly just made my VSTest task look for the generated EXE and run the tests it contained. This does not work, generating the somewhat confusing error</p>
<blockquote>
<p>Test run will use DLL(s) built for framework .NETFramework,Version=v4.0 and platform X86. Following DLL(s) do not match framework/platform settings.<br>
BlackMarble.Spectrum.FridgeManagement.Client.OneWire.UnitTests.exe is built for Framework .NETCore,Version=v5.0 and Platform X86.</p></blockquote>
<p>What you should search for as the entry point for the tests is the <strong>.appxrecipe</strong> file. Once I used this my tests ran.</p>
<p>So my pipeline YML to run all the tests in a built solution was</p>
<pre tabindex="0"><code>- task: VisualStudioTestPlatformInstaller@1
  inputs:
    packageFeedSelector: &#39;nugetOrg&#39;
    versionSelector: &#39;latestPreRelease&#39;
- task: VSTest@2
  displayName: &#39;VSTest - testAssemblies&#39;
  inputs:
    platform: &#39;x86&#39;
    configuration: &#39;$(BuildConfiguration)&#39;
    testSelector: &#39;testAssemblies&#39;
    testAssemblyVer2: | # Required when testSelector == TestAssemblies
      **\*unittests.dll
      **\*unittests.build.appxrecipe
      !**\*TestAdapter.dll
      !**\obj\**
    searchFolder: &#39;$(Build.SourcesDirectory)/src&#39;
    resultsFolder: &#39;$(System.DefaultWorkingDirectory)TestResults&#39;
    runInParallel: false
    codeCoverageEnabled: true
    rerunFailedTests: false
    runTestsInIsolation: true
    runOnlyImpactedTests: false
- task: PublishTestResults@2
  displayName: &#39;Publish Test Results **/TEST-*.xml&#39;
  condition: always()
</code></pre>
]]></content:encoded>
    </item>
    <item>
      <title>Out of Memory running SonarQube Analysis on a large projects</title>
      <link>https://blog.richardfennell.net/posts/out-of-memory-running-sonarqube-analysis-on-a-large-projects/</link>
      <pubDate>Tue, 01 Dec 2020 16:14:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/out-of-memory-running-sonarqube-analysis-on-a-large-projects/</guid>
<description>&lt;p&gt;Whilst adding SonarQube analysis to a large project I started getting memory errors during the analysis phase. The solution was to up the memory available to the SonarQube Scanner on my build agent, not the memory on the SonarQube server as I had first thought. This is done with an environment variable &lt;a href=&#34;https://docs.sonarqube.org/latest/analysis/scan/sonarscanner/&#34;&gt;as per the documentation&lt;/a&gt;, but how best to do this within our Azure DevOps build systems?&lt;/p&gt;
&lt;p&gt;The easiest way to set the environment variable &lt;code&gt;SONAR_SCANNER_OPTS&lt;/code&gt; on every build agent is to just set it via an Azure Pipeline variable. This works because the build agent makes all pipeline variables available as environment variables at runtime.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Whilst adding SonarQube analysis to a large project I started getting memory errors during the analysis phase. The solution was to up the memory available to the SonarQube Scanner on my build agent, not the memory on the SonarQube server as I had first thought. This is done with an environment variable <a href="https://docs.sonarqube.org/latest/analysis/scan/sonarscanner/">as per the documentation</a>, but how best to do this within our Azure DevOps build systems?</p>
<p>The easiest way to set the environment variable <code>SONAR_SCANNER_OPTS</code> on every build agent is to just set it via an Azure Pipeline variable. This works because the build agent makes all pipeline variables available as environment variables at runtime.</p>
<p>So as I was using a YML Pipeline, I set a variable within the build job</p>
<pre tabindex="0"><code>- job: build
  timeoutInMinutes: 240
  variables:
  - name: BuildConfiguration
    value: &#39;Release&#39;
  - name: SONAR_SCANNER_OPTS
    value: -Xmx4096m
  steps:
</code></pre>
<p>I found I had to quadruple the memory allocated to the scanner. Once this was done my analysis completed.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting confused over Azure DevOps Pipeline variable evaluation</title>
      <link>https://blog.richardfennell.net/posts/getting-confused-over-azure-devops-pipeline-variable-evaluation/</link>
      <pubDate>Fri, 27 Nov 2020 15:50:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-confused-over-azure-devops-pipeline-variable-evaluation/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The use of variables is important in Azure DevOps pipelines, especially when using YML templates. They allow a single pipeline to be used for multiple branches/configurations etc.&lt;/p&gt;
&lt;p&gt;The most common forms of variables you see are the &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&amp;amp;tabs=yaml&#34;&gt;predefined built in variables&lt;/a&gt; e.g. &lt;strong&gt;$(Build.BuildNumber)&lt;/strong&gt; and your own custom ones e.g. &lt;strong&gt;$(var)&lt;/strong&gt;. Usually the values of these variables are set before/as the build is run, as an input condition.&lt;/p&gt;
&lt;p&gt;But this is not the only way variables can be used. As noted in the &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&amp;amp;tabs=yaml%2Cbatch#understand-variable-syntax&#34;&gt;documentation&lt;/a&gt; there are different ways to access a variable&amp;hellip;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>The use of variables is important in Azure DevOps pipelines, especially when using YML templates. They allow a single pipeline to be used for multiple branches/configurations etc.</p>
<p>The most common forms of variables you see are the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&amp;tabs=yaml">predefined built in variables</a> e.g. <strong>$(Build.BuildNumber)</strong> and your own custom ones e.g. <strong>$(var)</strong>. Usually the values of these variables are set before/as the build is run, as an input condition.</p>
<p>But this is not the only way variables can be used. As noted in the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&amp;tabs=yaml%2Cbatch#understand-variable-syntax">documentation</a> there are different ways to access a variable&hellip;</p>
<blockquote>
<p>In a pipeline, template expression variables <strong>${{ variables.var }}</strong> get processed at compile time, before runtime starts. Macro syntax variables <strong>$(var)</strong> get processed during runtime before a task runs. Runtime expressions <strong>$[variables.var]</strong> also get processed during runtime but were designed for use with conditions and expressions.</p>
<p>Azure DevOps Documentation</p></blockquote>
<p>99% of the time I have been fine using just the <strong>$(var)</strong> syntax, but I recently was working on a case where this would not work for me.</p>
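<p>To make the difference concrete, the three syntaxes can be compared in a minimal sketch (the variable name <code>var</code> and the script steps are purely illustrative)</p>
<pre tabindex="0"><code>variables:
  var: &#39;myValue&#39;

steps:
# Template expression - replaced at compile time, before the run starts
- script: echo &#39;${{ variables.var }}&#39;
# Macro syntax - expanded at runtime, just before the task runs
- script: echo &#39;$(var)&#39;
# Runtime expression - evaluated at runtime, designed for conditions
- script: echo &#39;var matched&#39;
  condition: eq(variables[&#39;var&#39;], &#39;myValue&#39;)
</code></pre>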
<h2 id="the-issue">The Issue</h2>
<p>I had a pipeline that made heavy use of <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops">YML templates</a> and <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#conditional-insertion">conditional task insertion</a> to include sets of task based upon the manually entered and pre-defined variables.</p>
<p>The problem was that one of the tasks, used in a template, set a boolean output variable <strong>$(outVar)</strong> by calling</p>
<pre tabindex="0"><code>echo &#39;##vso[task.setvariable variable=outvar;isOutput=true]true&#39;
</code></pre><p>The output variable created by this task could be accessed by other tasks as the variable <strong>$(mytask.outvar)</strong>, but as it was set at runtime it was not available at the time of the YML compilation.</p>
<p>This caused me a problem as it meant that it could not be used in the template&rsquo;s <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#conditional-insertion">conditional task inclusion</a> blocks, as it was not present at compile time when this code is evaluated e.g.</p>
<pre tabindex="0"><code>- ${{ if eq(mytask.outvar, &#39;true&#39;) }}:
  # the task to run if the condition is met
  - task: Some.Task@1 
    ....
</code></pre><p>I tried referencing the variable using all the forms of $-followed-by-brackets syntax I could think of, but it did not help.</p>
<p>The lesson here is that you cannot make a runtime value a compile time value by wishing it to change.</p>
<p>The only solution I could find was to make use of the runtime variable in a place where it can be resolved. If you wish to enable or disable a task based on the variable value then the only option is to use the <strong>condition</strong> parameter</p>
<pre tabindex="0"><code>  # the task to run if the condition is met
  - task: Some.Task@1 
    condition: and(succeeded(), eq(variables[&#39;mytask.outvar&#39;], &#39;true&#39;))
    ....
</code></pre><p>The only downsides of this way of working, as opposed to conditional insertion, are that</p>
<ul>
<li>If using conditional insertion, non-required tasks are never shown in the pipeline as they are not compiled into it</li>
<li>If using the condition property to exclude a task, it will still appear in the log, but it can be seen that it has not been run.</li>
</ul>
<p>So I got there in the end, it was just not as neat as I had hoped, but I do have a clearer understanding of compile and runtime variables in Azure DevOps YML.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to export Azure DevOps Classic Builds and Release to YAML</title>
      <link>https://blog.richardfennell.net/posts/how-to-export-azure-devops-classic-builds-and-release-to-yaml/</link>
      <pubDate>Fri, 13 Nov 2020 12:03:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-export-azure-devops-classic-builds-and-release-to-yaml/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is another one of those posts so I can remember where some useful information is&amp;hellip;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you are migrating your &lt;a href=&#34;https://www.youtube.com/watch?v=WMQ0G9eXczE&amp;amp;t=21s&#34;&gt;Azure DevOps Classic Builds and Release to Multi-Stage YAML&lt;/a&gt; then an important step is to export all the existing builds, task groups and releases as YAML files.&lt;/p&gt;
&lt;p&gt;You can do this by hand within the Pipeline UI, with a lot of cut and pasting, but much easier is to use the excellent &lt;a href=&#34;https://github.com/f2calv/yamlizr&#34;&gt;Yamlizr - Azure DevOps Classic-to-YAML Pipelines CLI&lt;/a&gt; from &lt;a href=&#34;https://github.com/f2calv&#34;&gt;Alex Vincent&lt;/a&gt;. A single CLI command exports everything within a Team Project into a neat folder structure of template-based YAML.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This is another one of those posts so I can remember where some useful information is&hellip;.</em></p>
<p>If you are migrating your <a href="https://www.youtube.com/watch?v=WMQ0G9eXczE&amp;t=21s">Azure DevOps Classic Builds and Release to Multi-Stage YAML</a> then an important step is to export all the existing builds, task groups and releases as YAML files.</p>
<p>You can do this by hand within the Pipeline UI, with a lot of cut and pasting, but much easier is to use the excellent <a href="https://github.com/f2calv/yamlizr">Yamlizr - Azure DevOps Classic-to-YAML Pipelines CLI</a> from <a href="https://github.com/f2calv">Alex Vincent</a>. A single CLI command exports everything within a Team Project into a neat folder structure of template-based YAML.</p>
<p>I cannot recommend the tool enough.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting my ThinkPad Active Pen working with my Lenovo X1 Carbon Extreme</title>
      <link>https://blog.richardfennell.net/posts/getting-my-thinkpad-active-pen-working-with-my-lenovo-x1-carbon-extreme/</link>
      <pubDate>Fri, 06 Nov 2020 15:38:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-my-thinkpad-active-pen-working-with-my-lenovo-x1-carbon-extreme/</guid>
      <description>&lt;p&gt;I have had a ThinkPad Active Pen (model SD60G957200) ever since I got my Lenovo X1 Carbon Extreme.&lt;/p&gt;
&lt;p&gt;The pen, when it works, has worked well. However, the problem has been that whether the pen and PC detected each other seemed very hit and miss.&lt;/p&gt;
&lt;p&gt;Today I found the root cause. It was not drivers or dodgy Bluetooth as I had thought, but a weak spring inside the pen. It was not so weak that the battery rattled, but weak enough that the electrical circuit was not being closed reliably on the battery.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have had a ThinkPad Active Pen (model SD60G957200) ever since I got my Lenovo X1 Carbon Extreme.</p>
<p>The pen, when it works, has worked well. However, the problem has been that whether the pen and PC detected each other seemed very hit and miss.</p>
<p>Today I found the root cause. It was not drivers or dodgy Bluetooth as I had thought, but a weak spring inside the pen. It was not so weak that the battery rattled, but weak enough that the electrical circuit was not being closed reliably on the battery.</p>
<p>The fix was to replace the weak spring with new one out of an old ball point pen. Once this was done the pen became instantly reliable.</p>
<p>Wish I had spotted that sooner.</p>
<p><strong>Updated 11 Nov 2020</strong>: I may have spoken too soon, it is back to its old behaviour today :(</p>
<p>However, I think it could just be the AAAA battery. It seems it is not a good idea to leave a battery in when the pen is not in use, given the pen has no power switch.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble BiteSize MVP Interview</title>
      <link>https://blog.richardfennell.net/posts/black-marble-bitesize-mvp-interview/</link>
      <pubDate>Thu, 15 Oct 2020 09:40:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marble-bitesize-mvp-interview/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://www.youtube.com/embed/hU8gSpuPS-4&#34;&gt;https://www.youtube.com/embed/hU8gSpuPS-4&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://www.youtube.com/embed/hU8gSpuPS-4">https://www.youtube.com/embed/hU8gSpuPS-4</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Using GitVersion when your default branch is not called &#39;master&#39;</title>
      <link>https://blog.richardfennell.net/posts/using-gitversion-when-your-default-branch-is-not-called-master/</link>
      <pubDate>Wed, 14 Oct 2020 12:53:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-gitversion-when-your-default-branch-is-not-called-master/</guid>
<description>&lt;p&gt;The Black Lives Matter movement has engendered many conversations, hopefully starting changes for the good. Often these changes involve the use of language. One such change has been the move to stop using the name &lt;code&gt;master&lt;/code&gt; and switching to the name &lt;code&gt;main&lt;/code&gt; for the trunk/default branch in Git repos. This change is moving apace, driven by tools such as &lt;a href=&#34;https://github.com/github/renaming&#34;&gt;GitHub&lt;/a&gt; and &lt;a href=&#34;https://devblogs.microsoft.com/devops/azure-repos-default-branch-name/&#34;&gt;Azure DevOps&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I have recently had need, for the first time since swapping my default branch name in new repos to &lt;code&gt;main&lt;/code&gt;, to use Semantic Versioning and the GitVersion tool.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>The Black Lives Matter movement has engendered many conversations, hopefully starting changes for the good. Often these changes involve the use of language. One such change has been the move to stop using the name <code>master</code> and switching to the name <code>main</code> for the trunk/default branch in Git repos. This change is moving apace, driven by tools such as <a href="https://github.com/github/renaming">GitHub</a> and <a href="https://devblogs.microsoft.com/devops/azure-repos-default-branch-name/">Azure DevOps</a>.</p>
<p>I have recently had need, for the first time since swapping my default branch name in new repos to <code>main</code>, to use Semantic Versioning and the GitVersion tool.</p>
<p>&lsquo;Out of the box&rsquo; I hit a problem. The current shipping version of <a href="https://gitversion.net/docs/">GitVersion</a> (5.3.2) by default makes the assumption that the trunk branch is called <code>master</code>. Hence, it throws an exception if this branch cannot be found.</p>
<p>Looking at the <a href="https://github.com/GitTools/GitVersion">project&rsquo;s repo</a> you can find PRs, tagged for a future release, that address this constraint. However, you don&rsquo;t have to wait for a new version to ship to use this excellent tool in repos with other branch naming conventions.</p>
<p>The solution is to create an override file <code>GitVersion.yml</code> in the root of your repo with the following content to alter the Regex used to find branches. Note that the content below is the minimum required; you can override any other default <a href="https://gitversion.net/docs/configuration">configuration</a> values in this file as needed.</p>
<pre tabindex="0"><code>branches:  
   master:  
      regex: ^master$|^main$
</code></pre><p>With this override file the default branch can be either <code>master</code> or <code>main</code>.</p>
<p>You can of course use a different name or limit the Regex to a single name as you need.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How do handle PRs for Azure DevOps YAML Pipelines if the YAML templates are in a different repo?</title>
      <link>https://blog.richardfennell.net/posts/how-do-handle-prs-for-azure-devops-yaml-pipelines-if-the-yaml-templates-are-in-a-different-repo/</link>
      <pubDate>Fri, 18 Sep 2020 12:11:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-do-handle-prs-for-azure-devops-yaml-pipelines-if-the-yaml-templates-are-in-a-different-repo/</guid>
<description>&lt;p&gt;Azure DevOps YAML-based pipelines allow the pipeline definitions to be treated like any other code. So you make changes in a branch and PR them into the main/trunk when they are approved.&lt;/p&gt;
&lt;p&gt;This works well if all the YAML files are in the same repo, but not so well if you are using &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops&#34;&gt;YAML templates&lt;/a&gt; and the templated YAML is stored in a different repo. This is because an Azure DevOps PR is limited to a single repo. So testing a change to a YAML template in a different repo needs a bit of thought.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Azure DevOps YAML-based pipelines allow the pipeline definitions to be treated like any other code. So you make changes in a branch and PR them into the main/trunk when they are approved.</p>
<p>This works well if all the YAML files are in the same repo, but not so well if you are using <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops">YAML templates</a> and the templated YAML is stored in a different repo. This is because an Azure DevOps PR is limited to a single repo. So testing a change to a YAML template in a different repo needs a bit of thought.</p>
<p>Say for example you have a template called <strong>core.yml</strong> in a repo called <strong>YAMLTemplates</strong> and you make a change to it and start a PR. Unless you have a test YAML pipeline in that repo, which is not a stupid idea, but not always possible depending on the complexity of your build process, there is no way to test the change inside that repo.</p>
<p>The answer is to create a temporary branch in a repo that consumes the shared YAML template. In this temporary branch, make an edit to the repository setting that references the shared YAML repo, pointing it at the updated branch containing the PR</p>
<pre tabindex="0"><code>resources:
  repositories:
  - repository: YAMLTemplates
    type: git
    name: &#39;Git Project/YAMLTemplates&#39;
    # defaults to ref: &#39;refs/heads/master&#39;
    ref: &#39;refs/heads/newbranch&#39;
</code></pre>
<p>You don&rsquo;t need to make any change to the line where the template is used</p>
<pre tabindex="0"><code>extends:
  template: core.yml@YAMLTemplates
  parameters:
    customer: ${{parameters.Customer}}
    useSonarQube: ${{parameters.useSonarQube}}
</code></pre>
<p>You can then use this updated pipeline to validate your PR. Once you are happy it works you can</p>
<ol>
<li>Complete the PR in the YAML Templates repo</li>
<li>Delete the temporary branch in your consuming repo.</li>
</ol>
]]></content:encoded>
    </item>
    <item>
      <title>How can I automatically create Azure DevOps Release Notes and how can I publish them?</title>
      <link>https://blog.richardfennell.net/posts/how-can-i-automatically-create-azure-devops-release-notes-and-how-can-i-publish-them/</link>
      <pubDate>Tue, 15 Sep 2020 14:28:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-can-i-automatically-create-azure-devops-release-notes-and-how-can-i-publish-them/</guid>
      <description>&lt;p&gt;A question I am often asked when consulting on Azure DevOps is ‘how can I automatically create release notes and how can I publish them?’.&lt;/p&gt;
&lt;p&gt;Well it is for just this requirement that I have written a set of Azure DevOps Pipeline Tasks&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Release Note Generator&lt;/a&gt; - to generate release notes. I strongly recommend this Cross-platform Node-based version. I plan to deprecate my older PowerShell version in the not too distant future as it uses ‘homegrown logic’, as opposed to standard Azure DevOps API calls, to get associated items.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks&#34;&gt;Wiki Updater&lt;/a&gt; - to upload a page to a WIKI.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WikiPDFExport-Tasks&#34;&gt;WIKI PDF Generator&lt;/a&gt; - to convert a generated page, or whole WIKI, to PDF format.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So let’s deal with these tools in turn&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A question I am often asked when consulting on Azure DevOps is ‘how can I automatically create release notes and how can I publish them?’.</p>
<p>Well it is for just this requirement that I have written a set of Azure DevOps Pipeline Tasks</p>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Release Note Generator</a> - to generate release notes. I strongly recommend this Cross-platform Node-based version. I plan to deprecate my older PowerShell version in the not too distant future as it uses ‘homegrown logic’, as opposed to standard Azure DevOps API calls, to get associated items.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks">Wiki Updater</a> - to upload a page to a WIKI.</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WikiPDFExport-Tasks">WIKI PDF Generator</a> - to convert a generated page, or whole WIKI, to PDF format.</li>
</ul>
<p>So let’s deal with these tools in turn</p>
<h3 id="generating-release-notes">Generating Release Notes</h3>
<p>The Release Note task generates release notes by getting the items associated with a build (or release) from the Azure DevOps API and generating a document based on a <a href="https://handlebarsjs.com/">Handlebars</a> based template.</p>
<ul>
<li>The artefacts that can be included in the release notes are details of the build/release and associated Work Items, Commits/Changesets, Tests and Pull Requests.</li>
<li>Most of the <a href="https://github.com/rfennell/AzurePipelines/tree/master/SampleTemplates/XplatGenerateReleaseNotes%20%28Node%20based%29/Version%203">sample templates provided</a> are for markdown format files. However, they could easily be converted for other text-based formats such as HTML if needed.</li>
<li>The use of Handlebars as the templating language makes for a very flexible and easily extensible means of document generation. There are <a href="https://github.com/rfennell/AzurePipelines/tree/master/SampleTemplates/XplatGenerateReleaseNotes%20%28Node%20based%29/Version%203">samples of custom extensions provided with the templates</a></li>
</ul>
<p>Sample YAML for this task is as follows; note it is using an inline template, but it is also possible to load the template from a file path</p>
<pre tabindex="0"><code> - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
          displayName: &#39;Generate Release Notes&#39;
          inputs:
            outputfile: &#39;$(System.DefaultWorkingDirectory)inline.md&#39;
            outputVariableName: OutputText
            templateLocation: InLine
            inlinetemplate: |
              # Notes for build 
              **Build Number**: {{buildDetails.id}}
              **Build Trigger PR Number**: {{lookup buildDetails.triggerInfo &#39;pr.number&#39;}} 

              # Associated Pull Requests ({{pullRequests.length}})
              {{#forEach pullRequests}}
              {{#if isFirst}}### Associated Pull Requests (only shown if  PR) {{/if}}
              *  **PR {{this.id}}**  {{this.title}}
              {{/forEach}}

              # Builds with associated WI/CS ({{builds.length}})
              {{#forEach builds}}
              {{#if isFirst}}## Builds {{/if}}
              ##  Build {{this.build.buildNumber}}
              {{#forEach this.commits}}
              {{#if isFirst}}### Commits {{/if}}
              - CS {{this.id}}
              {{/forEach}}
              {{#forEach this.workitems}}
              {{#if isFirst}}### Workitems {{/if}}
              - WI {{this.id}}
              {{/forEach}} 
              {{/forEach}}

              # Global list of WI ({{workItems.length}})
              {{#forEach workItems}}
              {{#if isFirst}}## Associated Work Items (only shown if  WI) {{/if}}
              *  **{{this.id}}**  {{lookup this.fields &#39;System.Title&#39;}}
                - **WIT** {{lookup this.fields &#39;System.WorkItemType&#39;}} 
                - **Tags** {{lookup this.fields &#39;System.Tags&#39;}}
              {{/forEach}}

              {{#forEach commits}}
              {{#if isFirst}}### Associated commits{{/if}}
              * ** ID{{this.id}}** 
                -  **Message:** {{this.message}}
                -  **Commited by:** {{this.author.displayName}} 
                -  **FileCount:** {{this.changes.length}} 
              {{#forEach this.changes}}
                    -  **File path (TFVC or TfsGit):** {{this.item.path}}  
                    -  **File filename (GitHub):** {{this.filename}}  
              {{/forEach}}
              {{/forEach}}
</code></pre><h3 id="how-to-publish-the-notes">How to Publish The Notes</h3>
<p>Once the document has been generated there is a need for a decision as to how to publish it. There are a few options</p>
<ul>
<li>Attach the markdown file as an artefact to the Build or Pipeline. Note you can’t do this with UI based Releases as they have no concept of artefacts, but this is becoming less of a concern as people move to multistage YAML.</li>
<li>Save in some other location e.g. Azure Storage or, if on-premises, a UNC file share</li>
<li>Send the document as an email – I have used <a href="https://marketplace.visualstudio.com/items?itemName=rvo.SendEmailTask">Rene van Osnabrugge Send Email Task</a> for this job.</li>
<li>Upload it to a WIKI using my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks">WIKI Updater Task</a></li>
<li>Convert the markdown release note document, or the whole WIKI, to a PDF and use any of the above options using first my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WikiPDFExport-Tasks">WIKI PDF Exporter Task</a> then another task.</li>
</ul>
<p>I personally favour the 1st and 4th options used together: attach the document to the pipeline and then upload it to a WIKI</p>
<p>A sample of suitable YAML is shown below, uploading the document to an Azure DevOps WIKI. Please note that the repo URL and authentication can trip you up here so <a href="https://github.com/rfennell/AzurePipelines/wiki/WIKI-Updater-Tasks">have a good read of the provided documentation</a> before you use this task.</p>
<pre tabindex="0"><code> - task: richardfennellBM.BM-VSTS-WIKIUpdater-Tasks.WikiUpdaterTask.WikiUpdaterTask@1
          displayName: &#39;Git based WIKI Updater&#39;
          inputs:
            repo: &#39;dev.azure.com/richardfennell/Git%20project/_git/Git-project.wiki&#39;
            filename: &#39;xPlatReleaseNotes/build-Windows-handlebars.md&#39;
            dataIsFile: true
            sourceFile: &#39;$(System.DefaultWorkingDirectory)inline.md&#39;
            message: &#39;Update from Build&#39;
            gitname: builduser
            gitemail: &#39;build@demo&#39;
            useAgentToken: true
</code></pre><h3 id="but-when-do-i-generate-the-release-notes">But when do I generate the release notes?</h3>
<p>I would suggest you always generate release notes for every build/pipeline i.e. a document of the changes since the last successful build/pipeline of that build definition. This should be attached as an artefact.</p>
<p>However, this per-build document will usually be too granular for use as ‘true’ release notes i.e. something to hand to a QA team, auditor or client.</p>
<p>To address this second use case I suggest, within a multistage YAML pipeline (or a UI based release), having a stage specifically for generating release notes.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2020/09/image-1-1024x298.png"></p>
<p>My task has a feature whereby it will check for the last successful release of a pipeline/release to the stage it is defined in, and so will base the release notes on the last successful release to that given stage. If this &lsquo;documentation&rsquo; stage is only run when you are doing a ‘formal’ release, the release notes generated will cover everything since the last formal release. Exactly what a QA team, auditor or client might want.</p>
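<p>As a minimal sketch, such a documentation stage might look like the following (the stage names, the dependency and the inline template are purely illustrative; any gating condition for a &lsquo;formal&rsquo; release will depend on your own process)</p>
<pre tabindex="0"><code>- stage: Documentation
  dependsOn: Production
  jobs:
  - job: GenerateReleaseNotes
    steps:
    - task: richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes.XplatGenerate-Release-Notes.XplatGenerateReleaseNotes@3
      displayName: &#39;Generate Release Notes&#39;
      inputs:
        outputfile: &#39;$(System.DefaultWorkingDirectory)releasenotes.md&#39;
        templateLocation: InLine
        inlinetemplate: |
          # Notes for build {{buildDetails.id}}
</code></pre>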
<h3 id="in-conclusion">In conclusion</h3>
<p>So I hope that this post provides some ideas as to how you can use my tasks to generate some useful release notes.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix For: &amp;lsquo;The pipeline is not valid error: Unable to resolve latest version&amp;rsquo; on an Azure DevOps YAML pipeline</title>
      <link>https://blog.richardfennell.net/posts/fix-for-the-pipeline-is-not-valid-error-unable-to-resolve-latest-version-on-an-azure-devops-yaml-pipeline/</link>
      <pubDate>Thu, 27 Aug 2020 15:54:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-the-pipeline-is-not-valid-error-unable-to-resolve-latest-version-on-an-azure-devops-yaml-pipeline/</guid>
      <description>&lt;h3 id=&#34;the-issue&#34;&gt;The Issue&lt;/h3&gt;
&lt;p&gt;I have an Azure DevOps multi-stage YAML pipeline that started giving the error `The pipeline is not valid error: Unable to resolve latest version for pipeline templates: this could be due to inaccessible pipeline or no version is available` and failing instantly.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/08/image-1.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/08/image_thumb-1.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3 id=&#34;the-solution&#34;&gt;The Solution&lt;/h3&gt;
&lt;p&gt;This is not the most helpful message, but after some digging I found the problem.&lt;/p&gt;
&lt;p&gt;The pipeline used another pipeline as a resource&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="the-issue">The Issue</h3>
<p>I have an Azure DevOps multi-stage YAML pipeline that started giving the error <code>The pipeline is not valid error: Unable to resolve latest version for pipeline templates: this could be due to inaccessible pipeline or no version is available</code> and failing instantly.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/08/image-1.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/08/image_thumb-1.png" title="image"></a></p>
<h3 id="the-solution">The Solution</h3>
<p>This is not the most helpful message, but after some digging I found the problem.</p>
<p>The pipeline used another pipeline as a resource</p>
<pre tabindex="0"><code>resources:   
  pipelines:  
  - pipeline: templates  
    source: QueuesAndFunctionsDemo-CI    
    branch: master
</code></pre><p>This referenced build had failed, so there was no successful build resource to load, hence the error.</p>
<p>Once the problem with this referenced build was fixed the error message went away and I could trigger my build.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Exporting Release Notes and WIKIs as PDFs using a new Azure DevOps Extension that wrappers AzureDevOps.WikiPDFExport</title>
      <link>https://blog.richardfennell.net/posts/exporting-release-notes-and-wikis-as-pdfs-using-a-new-azure-devops-extension-that-wrappers-azuredevops-wikipdfexport/</link>
      <pubDate>Thu, 27 Aug 2020 10:17:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/exporting-release-notes-and-wikis-as-pdfs-using-a-new-azure-devops-extension-that-wrappers-azuredevops-wikipdfexport/</guid>
      <description>&lt;p&gt;A common question I get when people are using my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Release Notes task for Azure DevOps&lt;/a&gt; is whether it is possible to get the release notes as a PDF. In the past, the answer was that I did not know of any easy way. However, I have recently come across a command line tool by Max Melcher called &lt;a href=&#34;https://github.com/MaxMelcher/AzureDevOps.WikiPDFExport&#34;&gt;AzureDevOps.WikiPDFExport&lt;/a&gt; that allows you to export a whole WIKI (or a single file) as a PDF. Its basic usage is&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A common question I get when people are using my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Release Notes task for Azure DevOps</a> is whether it is possible to get the release notes as a PDF. In the past, the answer was that I did not know of any easy way. However, I have recently come across a command line tool by Max Melcher called <a href="https://github.com/MaxMelcher/AzureDevOps.WikiPDFExport">AzureDevOps.WikiPDFExport</a> that allows you to export a whole WIKI (or a single file) as a PDF. Its basic usage is</p>
<ul>
<li>Clone a WIKI Repo</li>
<li>Run the command line tool passing in a path to the root of the cloned repo</li>
<li>The .order file is read</li>
<li>A PDF is generated</li>
</ul>
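<p>As a sketch of the manual process above (the exe name matches recent releases of the tool, the wiki URL and paths are illustrative, and the <em>-p</em> path and <em>-o</em> output switches are assumed from the tool&rsquo;s documentation):</p>
<pre tabindex="0"><code># clone the WIKI repo - an Azure DevOps WIKI is just a Git repo
git clone https://dev.azure.com/yourorg/yourproject/_git/yourproject.wiki
# run the export against the root of the clone; the .order file controls page ordering
azuredevops-export-wiki.exe -p .\yourproject.wiki -o wiki.pdf
</code></pre>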
<p>This is a nice and simple process, but it would be nicer to be able to automate it as part of a build pipeline. After a bit of thought, I realised I had much of the code I needed to automate the process in my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks">WIKIUpdater extension</a>, as these tasks are based around cloning repos. So I am pleased to say I have just released a <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WikiPDFExport-Tasks">new Azure DevOps extension WikiPDFExport</a> that wrappers Max&rsquo;s command line tool. It does the following</p>
<ul>
<li>Downloads the latest release of the WikiPDFExport tool from GitHub to the build agent (the exe is too big to include in the VSIX package)</li>
<li>Optionally clone a Git based WIKI repo. As with my WIKIUpdater tasks, you can pass credentials for Azure DevOps or GitHub</li>
<li>Generate a PDF of a single file or the whole of a Wiki folder structure (based on the .order file) that was either cloned or was already present on the agent</li>
</ul>
<p>A sample of the YAML usage of the task is shown below. For full documentation see the extension&rsquo;s wiki pages for <a href="https://github.com/rfennell/AzurePipelines/wiki/WIKI-PdfExport-Task">general usage and troubleshooting</a> and the full <a href="https://github.com/rfennell/AzurePipelines/wiki/WIKI-PdfExport-Task-YAML">YAML specification</a></p>
<pre tabindex="0"><code>- task: richardfennellBM.BM-VSTS-WikiPDFExport-Tasks.WikiPDFExportTask.WikiPdfExportTask@1
  displayName: &#39;Export Single File generated by the release notes task&#39;
  inputs:
    cloneRepo: false
    localpath: &#39;$(System.DefaultWorkingDirectory)&#39;
    singleFile: &#39;inline.md&#39;
    outputFile: &#39;$(Build.ArtifactStagingDirectory)\PDFsingleFile.pdf&#39;

- task: richardfennellBM.BM-VSTS-WikiPDFExport-Tasks.WikiPDFExportTask.WikiPdfExportTask@1
  displayName: &#39;Export a public GitHub WIKI&#39;
  inputs:
    cloneRepo: true
    repo: &#39;https://github.com/rfennell/AzurePipelines.wiki.git&#39;
    useAgentToken: false
    localpath: &#39;$(System.DefaultWorkingDirectory)\GitHubRepo&#39;
    outputFile: &#39;$(Build.ArtifactStagingDirectory)\PDFpublicGitHub.pdf&#39;

- task: richardfennellBM.BM-VSTS-WikiPDFExport-Tasks.WikiPDFExportTask.WikiPdfExportTask@1
  displayName: &#39;Export a private Azure DevOps WIKI&#39;
  inputs:
    cloneRepo: true
    repo: &#39;https://dev.azure.com/richardfennell/GitHub/_git/GitHub.wiki&#39;
    useAgentToken: true
    localpath: &#39;$(System.DefaultWorkingDirectory)\AzRepo&#39;
    outputFile: &#39;$(Build.ArtifactStagingDirectory)\PDFAzrepo.pdf&#39;
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Using the Post Build Cleanup Task from the Marketplace in YAML based Azure DevOps Pipelines</title>
      <link>https://blog.richardfennell.net/posts/using-the-post-build-cleanup-task-from-the-marketplace-in-yaml-based-azure-devops-pipelines/</link>
      <pubDate>Wed, 19 Aug 2020 14:58:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-the-post-build-cleanup-task-from-the-marketplace-in-yaml-based-azure-devops-pipelines/</guid>
      <description>&lt;p&gt;Disks filling up on our private Azure DevOps agents is a constant battle. We have maintenance jobs setup on the agent pools, to clean out old build working folders nightly, but these don’t run often enough. We need a clean out more than once a day due to the number and size of our builds. To address this, with UI based builds, we successfully used the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup&#34;&gt;Post Build Cleanup Extension&lt;/a&gt;. However since we have moved many of our builds to YAML we found it not working so well. Turned out the problem was due to the way got source code. The Post Build Cleanup task is intelligent, it does not just delete folders on demand. It check to see what the Get Source ‘Clean’ setting was when the repo was cloned and bases what it deletes on this value e.g. nothing, source, or everything. This behaviour is not that obvious. In a UI based builds it is easy to check this setting. You are always in the UI when editing the build. However, in YAML it is easy to forget the setting, as it is one of those few values that cannot be set in YAML. To make the post build cleanup task actually delete folders in a YAML pipeline you need to&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Disks filling up on our private Azure DevOps agents is a constant battle. We have maintenance jobs setup on the agent pools, to clean out old build working folders nightly, but these don’t run often enough. We need a clean out more than once a day due to the number and size of our builds. To address this, with UI based builds, we successfully used the <a href="https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup">Post Build Cleanup Extension</a>. However since we have moved many of our builds to YAML we found it not working so well. Turned out the problem was due to the way got source code. The Post Build Cleanup task is intelligent, it does not just delete folders on demand. It check to see what the Get Source ‘Clean’ setting was when the repo was cloned and bases what it deletes on this value e.g. nothing, source, or everything. This behaviour is not that obvious. In a UI based builds it is easy to check this setting. You are always in the UI when editing the build. However, in YAML it is easy to forget the setting, as it is one of those few values that cannot be set in YAML. To make the post build cleanup task actually delete folders in a YAML pipeline you need to</p>
<ol>
<li>Edit the pipeline</li>
<li>Click the ellipse menu top right</li>
<li>Pick Triggers</li>
<li>Pick YAML and select the ‘Get Source’ block</li>
<li>Make sure the ‘Clean’ setting is set to ‘true’ and the right set of items to delete are selected <em>– if this is not done the post clean up task does nothing</em><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/08/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/08/image_thumb.png" title="image"></a></li>
<li>You can then add the post build cleanup task the end of the steps</li>
</ol>
<pre tabindex="0"><code>steps:
  - script: echo This is where you do stuff
  - task: mspremier.PostBuildCleanup.PostBuildCleanup-task.PostBuildCleanup@3
    displayName: &#39;Clean Agent Directories&#39;
    condition: always()
</code></pre><p>Once this is done it behaves as expected.</p>]]></content:encoded>
    </item>
    <item>
      <title>Zwift and the joys of home networking</title>
      <link>https://blog.richardfennell.net/posts/zwift-and-the-joys-of-home-networking/</link>
      <pubDate>Mon, 13 Jul 2020 09:10:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/zwift-and-the-joys-of-home-networking/</guid>
      <description>&lt;p&gt;During the Covid 19 lock down I have been doing plenty of &lt;a href=&#34;https://zwift.com/feed?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=zwift_eur_uk_cycling_search_brandcore_performance_eng-imprshare-20&amp;amp;gclid=CjwKCAjwjLD4BRAiEiwAg5NBFkOs6pvK2l4x_VOwElyHdtZaocpSugxuU4wq5nOLyeWLDw8sPesOHRoCvjgQAvD_BwE&#34;&gt;Zwift&lt;/a&gt;&amp;lsquo;ing. However, I have started having problems getting the &lt;a href=&#34;https://zwift.com/companion?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=zwift_eur_uk_cycling_search_brand_performance_eng-imprshare-20&amp;amp;gclid=CjwKCAjwjLD4BRAiEiwAg5NBFs0iBpEj-wITNCD0VowmooCQMHQExt5JjRs3Ff0uV4ZZ8DQzQTWaNxoC76cQAvD_BwE&#34;&gt;Zwift Companion App&lt;/a&gt; working reliably, when it used to work.&lt;/p&gt;
&lt;p&gt;Basically, Zwift itself was fine, though very slow to save when exiting, but the companion app could not seem to detect that I was actively Zwift&amp;rsquo;ing, though its other functions were OK.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2020/07/uVSoNkxbVgjS78uvjUY30M8OMG5JdiHogtQjZzmsAWE-2048x1121-1-1024x561.jpg&#34;&gt;&lt;/p&gt;
&lt;p&gt;After much fiddling I found the issue was the network connection from my PC up to Zwift and nothing to do with the phone app. But in case it is of any use to others, here are the steps I took to diagnose and fix it&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>During the Covid 19 lock down I have been doing plenty of <a href="https://zwift.com/feed?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=zwift_eur_uk_cycling_search_brandcore_performance_eng-imprshare-20&amp;gclid=CjwKCAjwjLD4BRAiEiwAg5NBFkOs6pvK2l4x_VOwElyHdtZaocpSugxuU4wq5nOLyeWLDw8sPesOHRoCvjgQAvD_BwE">Zwift</a>&lsquo;ing. However, I have started having problems getting the <a href="https://zwift.com/companion?utm_source=google&amp;utm_medium=cpc&amp;utm_campaign=zwift_eur_uk_cycling_search_brand_performance_eng-imprshare-20&amp;gclid=CjwKCAjwjLD4BRAiEiwAg5NBFs0iBpEj-wITNCD0VowmooCQMHQExt5JjRs3Ff0uV4ZZ8DQzQTWaNxoC76cQAvD_BwE">Zwift Companion App</a> working reliably, when it used to work.</p>
<p>Basically, Zwift itself was fine, though very slow to save when exiting, but the companion app could not seem to detect that I was actively Zwift&rsquo;ing, though its other functions were OK.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/2020/07/uVSoNkxbVgjS78uvjUY30M8OMG5JdiHogtQjZzmsAWE-2048x1121-1-1024x561.jpg"></p>
<p>After much fiddling I found the issue was the network connection from my PC up to Zwift and nothing to do with the phone app. But in case it is of any use to others, here are the steps I took to diagnose and fix it.</p>
<ul>
<li>Ran a <a href="https://play.google.com/store/apps/details?id=com.farproc.wifi.analyzer&amp;hl=en_GB">WiFi network analysis app</a> and realised that
<ul>
<li>My local wireless environment is now very congested, I assume as more people are working from home.</li>
<li>Both the 2.4GHz and 5GHz networks were on the same channels as other strong signals.</li>
<li>Also they were using the same SSID, which is meant to provide seamless swap-over between 2.4GHz and 5GHz. But in reality this meant there were connection problems as the connection flipped between frequencies.</li>
</ul>
</li>
</ul>
<p>This explained other problems I had seen</p>
<ul>
<li>The <a href="https://docs.microsoft.com/en-us/windows-server/remote/remote-access/directaccess/directaccess">Microsoft Direct Access</a> VPN I use to connect to the office failing intermittently. <em>Obviously, any problems I have connecting to the office to do work is far less important than Zwift connection issues.</em></li>
<li>My Samsung phone would drop calls for no reason. I now think this was when it had decided to use Wifi calling and got confused over networks.<br>
<strong>Note:</strong> I had fixed this by switching off Wifi calling.</li>
</ul>
<p>To address the problems I changed the SSIDs so that my 2.4GHz and 5GHz networks had different names, so that I knew which one I was using. Also I moved the channels to ones not used by my neighbours.</p>
<table>
<thead>
<tr><th>Test</th><th>Result</th></tr>
</thead>
<tbody>
<tr><td>Put the phone and the PC on the 2.4GHz network</td><td>No improvement, app did not work and PC slow to save</td></tr>
<tr><td>Put the phone and the PC on 5GHz</td><td>Small improvement, app still did not work but at least tried to show the in game view before it dropped out. The PC was still slow to save</td></tr>
<tr><td>Put the phone on either Wifi network but the PC on <a href="https://www.cclonline.com/product/210080/TL-PA4010P-KIT-V2-20/Mains-Networking/TP-LINK-AV600-TL-PA4010P-600Mbps-Passthrough-Powerline-Starter-Kit-Twin-Pack-V2-2-/NET2499/?gclid=CjwKCAjwjLD4BRAiEiwAg5NBFpqSdEjxVtvq47Esz7JQ-skCQDZsMDx91bcKW0Gh-ktJZz_nznds8BoCMn8QAvD_BwE">Ethernet over Power using TPLink adaptors</a></td><td>This fixed it</td></tr>
</tbody>
</table>
<p>So it seems the problem was upload speed from my PC all along. Strange, as I would have expected the 5GHz network to be fine even if the 2.4GHz one was not; the 5GHz Wifi seems to perform OK on a speed test.</p>
<p>Anyway it is working now, but maybe it is time to consider a proper mesh network?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Bringing Stage based release notes in Multi-Stage YAML to my Cross Platform Release Notes Exension</title>
      <link>https://blog.richardfennell.net/posts/bringing-stage-based-release-notes-in-multi-stage-yaml-to-my-cross-platform-release-notes-exension/</link>
      <pubDate>Mon, 06 Jul 2020 13:21:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bringing-stage-based-release-notes-in-multi-stage-yaml-to-my-cross-platform-release-notes-exension/</guid>
      <description>&lt;p&gt;I have just released Version 3.1.7 of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Azure DevOps Pipeline XplatGenerateReleaseNotes Extension&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This new version allows you to build release notes within a Multi-Stage YAML build since the last successful release to the current (or named) stage in the pipeline as opposed to just last fully successful build.&lt;/p&gt;
&lt;p&gt;This gives more feature parity with the older UI based Releases functionality.&lt;/p&gt;
&lt;p&gt;To enable this new feature you need to set the &lt;code&gt;checkStage: true&lt;/code&gt; flag and potentially the &lt;code&gt;overrideStageName: AnotherStage&lt;/code&gt; if you wish the comparison to compare against a stage other than the current one.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just released Version 3.1.7 of my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Azure DevOps Pipeline XplatGenerateReleaseNotes Extension</a>.</p>
<p>This new version allows you to build release notes within a Multi-Stage YAML build since the last successful release to the current (or named) stage in the pipeline as opposed to just last fully successful build.</p>
<p>This gives more feature parity with the older UI based Releases functionality.</p>
<p>To enable this new feature you need to set the <code>checkStage: true</code> flag and potentially the <code>overrideStageName: AnotherStage</code> if you wish the comparison to compare against a stage other than the current one.</p>
<pre tabindex="0"><code>- task: XplatGenerateReleaseNotes@3
  inputs:
    outputfile: &#39;$(Build.ArtifactStagingDirectory)\releasenotes.md&#39;
    outputVariableName: &#39;outputvar&#39;
    templateLocation: &#39;InLine&#39;
    checkStage: true
    inlinetemplate: |
      # Notes for build 
      **Build Number**: {{buildDetails.id}}
      ...
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Timeout Errors &#39;Extracting Schema&#39; when running SQLPackage for a Migration to Azure DevOps Services</title>
      <link>https://blog.richardfennell.net/posts/timeout-errors-extracting-schema-when-running-sqlpackage-for-a-migration-to-azure-devops-services/</link>
      <pubDate>Thu, 25 Jun 2020 09:39:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/timeout-errors-extracting-schema-when-running-sqlpackage-for-a-migration-to-azure-devops-services/</guid>
      <description>&lt;h3 id=&#34;the-problem&#34;&gt;The Problem&lt;/h3&gt;
&lt;p&gt;Whilst doing a migration from an on-premises TFS to Azure DevOps Services for a client I had a strange issue with SQLPackage.exe.&lt;/p&gt;
&lt;p&gt;I had previously completed the dry run of the migration without any issues and started the live migration with a fully defined process and timings for each stage.&lt;/p&gt;
&lt;p&gt;When I came to export the detached Team Project Collection DB I ran the same command as I had for the dry run&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="the-problem">The Problem</h3>
<p>Whilst doing a migration from an on-premises TFS to Azure DevOps Services for a client I had a strange issue with SQLPackage.exe.</p>
<p>I had previously completed the dry run of the migration without any issues and started the live migration with a fully defined process and timings for each stage.</p>
<p>When I came to export the detached Team Project Collection DB I ran the same command as I had for the dry run</p>
<pre tabindex="0"><code>&amp; &#34;C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe&#34; /sourceconnectionstring:&#34;Data Source=localhost\SQLExpress;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True&#34; /targetFile:C:\temp\Tfs_DefaultCollection.dacpac /action:extract /p:ExtractAllTableData=true /p:IgnoreUserLoginMappings=true /p:IgnorePermissions=true /p:Storage=Memory 
</code></pre><p>I had expected this to take around 30 minutes. However, it failed after 10 minutes with a &lsquo;Timeout, cannot reconnect to the database&rsquo; error when trying to export the schema.</p>
<p>This was strange as nothing had changed on the system since the dry-run. I tried all of the following with no effect</p>
<ul>
<li>Just running the command again, you can hope!</li>
<li>Restarted SQL and ran the command again</li>
<li>Tried the export from SQL Management Studio as opposed to the command line, this just seemed to hang at the same point.</li>
</ul>
<h3 id="the-solution">The Solution</h3>
<p>What resolved the problem was a complete reboot of the virtual machine. I assume the issue was some locked resource, but I have no idea why.</p>
<hr>
<h3 id="updated-29th-july-2020">Updated 29th July 2020</h3>
<p>I had the same problem with another client upgrade. This time a reboot did not fix it.</p>
<p>The solution at this site was to upgrade SQLPackage from the 32bit 140 version to the 64bit 150 version. Once this was done the command ran without a problem.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting started with Aggregator CLI for Azure DevOps Work Item Roll-up</title>
      <link>https://blog.richardfennell.net/posts/getting-started-with-aggregator-cli-for-azure-devops-work-item-roll-up/</link>
      <pubDate>Fri, 12 Jun 2020 17:23:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-started-with-aggregator-cli-for-azure-devops-work-item-roll-up/</guid>
      <description>&lt;p&gt;&lt;em&gt;Updated 30/Sep/21 to reflect changes in the Aggregator CLI setup process&lt;/em&gt;
&lt;em&gt;Updated 27/Mar/22 to fix broken links&lt;/em&gt;&lt;/p&gt;
&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Back in the day I wrote a tool, &lt;a href=&#34;https://archive.codeplex.com/?p=tfsalertsdsl&#34;&gt;TFS Alerts DSL&lt;/a&gt;, to do Work Item roll-up for TFS. Overtime I updated this to support VSTS (as Azure DevOps was then called), it’s final version is still available in the Azure DevOps Marketplace as the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-ServiceHooks-DSL&#34;&gt;Azure DevOps Service Hooks DSL&lt;/a&gt;. So when I recently had a need for Work Item roll-up I did consider using my own tool, just for a short while. However, I quickly realised a much better option was to use the &lt;a href=&#34;https://github.com/tfsaggregator/aggregator-cli&#34;&gt;Aggregator CLI&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>Updated 30/Sep/21 to reflect changes in the Aggregator CLI setup process</em>
<em>Updated 27/Mar/22 to fix broken links</em></p>
<h2 id="background">Background</h2>
<p>Back in the day I wrote a tool, <a href="https://archive.codeplex.com/?p=tfsalertsdsl">TFS Alerts DSL</a>, to do Work Item roll-up for TFS. Overtime I updated this to support VSTS (as Azure DevOps was then called), it’s final version is still available in the Azure DevOps Marketplace as the <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-ServiceHooks-DSL">Azure DevOps Service Hooks DSL</a>. So when I recently had a need for Work Item roll-up I did consider using my own tool, just for a short while. However, I quickly realised a much better option was to use the <a href="https://github.com/tfsaggregator/aggregator-cli">Aggregator CLI</a>.</p>
<p>Aggregator CLI is a successor to the <a href="https://tfsaggregator.github.io/">TFS Aggregator Plug-in</a> and is a far more mature project than my tool and actively under development, allowing hosting as an Azure Function or a Docker container.</p>
<p>As I have found the Aggregator CLI a little hard to get started with, I thought this blog post was a good idea, so I don’t forget the details in the future.</p>
<h3 id="architecture">Architecture</h3>
<p>In this latest version of the Aggregator the functionality is delivered using Azure Functions, one per rule. <strong>Note:</strong> A Docker container is another option, but one I have not explored. These Azure Functions are linked to Azure DevOps Service Hook events. The command line tool&rsquo;s setup process configures all of the parts required: setting up Azure resources and Azure DevOps events, and managing rules.</p>
<h3 id="preparation">Preparation</h3>
<ul>
<li>Download the latest release from <a href="https://github.com/tfsaggregator/aggregator-cli/releases">https://github.com/tfsaggregator/aggregator-cli/releases</a>, pick the version for the operating system you are planning to use to setup the tool.</li>
<li>Next you need to set up an Azure Service Principal App registration for the Aggregator and connect it to a Subscription</li>
</ul>
<ol>
<li>Login to Azure <em>az login</em></li>
<li>Pick the correct subscription <em>az account set --subscription &lt;ID&gt;</em></li>
<li>Create the Service Principal <em>az ad sp create-for-rbac --name AggregatorServicePrincipal</em></li>
<li>From the root of the Azure Portal pick the Subscription you wish to create the Azure Functions in.</li>
<li>In the Access (IAM) section grant the &lsquo;contributor&rsquo; role for the subscription to the newly created Service Principal</li>
</ol>
<h3 id="using-the-aggregator-cli">Using the Aggregator CLI</h3>
<p>At a command prompt we need to now start to use the tool to link up Azure Services and Azure DevOps</p>
<ul>
<li>
<p>First we log the CLI tool into Azure. You can find the values required from the Azure Portal, in the Subscription overview and App Registration overview. You create a password from the &lsquo;Certificates &amp; secrets&rsquo; section for the App Registration. <em>.\aggregator-cli.exe logon.azure -s &lt;sub-id&gt; -c &lt;client-id&gt; -t &lt;tenant-id&gt; -p &lt;pwd&gt;</em></p>
</li>
<li>
<p>Next login to Azure DevOps, <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&amp;tabs=preview-page">create the PAT as detailed in the documentation</a> <em>.\aggregator-cli.exe logon.ado -u https://dev.azure.com/&lt;org&gt; -mode PAT -t &lt;pat&gt;</em></p>
</li>
<li>
<p>Now we can create the instance of the Aggregator in Azure. <strong>Note:</strong> I had long delays and timeout problems here due to what turned out to be a poor WiFi link. The strange thing was it was not an obviously failing WiFi connection, just unstable enough to cause issues; as soon as I swapped to Ethernet the problems went away. The basic form of the install command is as follows. This will create a new resource group in Azure and then the required Web App, Storage, Application Insights etc. As this is done using an ARM template it is idempotent, i.e. it can be re-run as many times as you wish and it will just update the Azure services if they already exist. <em>.\aggregator-cli.exe install.instance -verbose -n yourinstancename -l westeurope</em> If you do get problems, go to the Azure Portal, find the resource group and look at the deployment logs.</p>
</li>
<li>
<p>When this completes, you can see the new resources in the Azure Portal, or check them with the command line <em>.\aggregator-cli.exe list.instances</em></p>
</li>
<li>
<p>You next need to register your rules. You can register as many as you wish. A few samples are provided in the <strong>test</strong> folder in the downloaded ZIP; these are good for quick tests, though you will usually create your own for production use. When you add a rule, behind the scenes this creates an Azure Function with the same name as the rule. <em>.\aggregator-cli.exe add.rule -v -i yourinstancename -n test1 -file test\test1.rule</em></p>
</li>
<li>
<p>Finally you map a rule to some event in your Azure DevOps instance <em>.\aggregator-cli.exe map.rule -v -p yourproject -e workitem.updated -i yourinstancename -r test1</em></p>
</li>
</ul>
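<p>For reference, a rule file is a small C# script. The hello-world style sample below is along the lines of those shipped in the <strong>test</strong> folder (the exact sample contents may differ); the string the rule returns is written to the rule&rsquo;s log:</p>
<pre tabindex="0"><code>// test1.rule - runs each time the mapped work item event fires
$&#34;Hello {self.WorkItemType} #{self.Id} - {self.Title}!&#34;
</code></pre>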
<p>Once all this is done you should have a working system. If you are using the test rules, the quickest option to see it working is to</p>
<ol>
<li>Go into the Azure Portal</li>
<li>Find the created Resource Group</li>
<li>Pick the App Service for the Azure Functions</li>
<li>Pick the Function for the rule under test</li>
<li>Pick the Monitor</li>
<li>Pick Logs</li>
<li>Open Live Metric</li>
<li>You should see log entries when you perform the event on a work item you mapped to the function.</li>
</ol>
<p>An alternative is to look in the AppInsights logs or live telemetry. So I hope this helps my future self remember how to get this tool set up quickly.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to do local template development for my Cross platform Release notes task</title>
      <link>https://blog.richardfennell.net/posts/how-to-do-local-template-development-for-my-cross-platform-release-notes-task/</link>
      <pubDate>Wed, 10 Jun 2020 14:01:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-do-local-template-development-for-my-cross-platform-release-notes-task/</guid>
      <description>&lt;p&gt;The testing cycle for &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Release Notes Templates&lt;/a&gt; can be slow, requiring a build and release cycle. To try to speed this process for users I have created a local test harness that allows the same calls to be made from a development machine as would be made within a build or release.&lt;/p&gt;
&lt;p&gt;However, running this is not as simple as you might expect, so please read the instructions before proceeding.&lt;/p&gt;
&lt;h3 id=&#34;setup-and-build&#34;&gt;Setup and Build&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/rfennell/AzurePipelines&#34;&gt;Clone the repo&lt;/a&gt; containing the Azure DevOps Extension.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The testing cycle for <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Release Notes Templates</a> can be slow, requiring a build and release cycle. To try to speed this process for users I have created a local test harness that allows the same calls to be made from a development machine as would be made within a build or release.</p>
<p>However, running this is not as simple as you might expect, so please read the instructions before proceeding.</p>
<h3 id="setup-and-build">Setup and Build</h3>
<ol>
<li>
<p><a href="https://github.com/rfennell/AzurePipelines">Clone the repo</a> containing the Azure DevOps Extension.</p>
</li>
<li>
<p>Change to the folder</p>
<p><code>&lt;repo root&gt;\Extensions\XplatGenerateReleaseNotesV2\testconsole</code></p>
</li>
<li>
<p>Build the tool using NPM (this does assume <a href="https://nodejs.org/en/download/">Node</a> is already installed)</p>
<pre tabindex="0"><code>npm install
npm run build
</code></pre>
</li>
</ol>
<h3 id="running-the-tool">Running the Tool</h3>
<p>The task the testconsole runs takes many parameters and reads runtime Azure DevOps environment variables. These have to be passed into the local tester. Given the number, and the fact that most probably won&rsquo;t need to be altered, they are provided in a settings JSON file. Samples are provided for a build and a release. For details on these parameters see the <a href="https://github.com/rfennell/AzurePipelines/wiki/GenerateReleaseNotes---Node-based-Cross-Platform-Task">task documentation</a>.</p>
<p>The only values not stored in the JSON files are the PATs required to access the REST API. This reduces the chance of them being committed to source control by mistake.</p>
<p>Two PATs are potentially used.</p>
<ul>
<li>Azure DevOps PAT (Required) - within a build or release this is automatically picked up. For this tool it must be provided</li>
<li>GitHub PAT - this is an optional parameter for the task; you only need to provide it if working with private GitHub repos as your code store. So usually this can be ignored.</li>
</ul>
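<p>The split described above (settings in a JSON file, secrets on the command line) can be sketched as follows. This is a hypothetical illustration, not the tester&rsquo;s actual code; the <code>buildConfig</code> function and its field names are invented:</p>

```javascript
// Illustrative sketch: combine a settings JSON file with PATs passed as
// command-line arguments, so secrets never live in a committable file.
// Function and field names are invented for this example.
function buildConfig(settingsJson, argv) {
  const settings = JSON.parse(settingsJson);
  return {
    ...settings,
    azureDevOpsPat: argv[0],   // required for the REST API calls
    gitHubPat: argv[1] || ''   // optional, private GitHub repos only
  };
}

// Example: node tester.js build-settings.json <azdo-pat> [github-pat]
const demo = buildConfig(
  '{"TeamProject": "MyProject", "BuildID": "123"}',
  ['azdo-secret-pat']
);
```
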
<h3 id="test-template-generation-for-a-build">Test Template Generation for a Build</h3>
<p>To run the tool against a build</p>
<ol>
<li>
<p>In the settings file make sure the TeamFoundationCollectionUri, TeamProject and BuildID are set to the build you wish to run against, and that the ReleaseID is empty.</p>
</li>
<li>
<p>Run the command</p>
<pre tabindex="0"><code>node .\GenerateReleaseNotesConsoleTester.js build-settings.json &lt;your-Azure-DevOps-PAT&gt; &lt;optional: your GitHub PAT&gt;
</code></pre>
</li>
<li>
<p>Assuming you are using the sample settings you should get an output.md file with your release notes.</p>
</li>
</ol>
<h3 id="test-template-generation-for-a-release">Test Template Generation for a Release</h3>
<p>To run the tool against a release is a bit more complex. This is because the logic looks back to find the most recent successful run. So if your release ran to completion you will get no notes, as there have been no changes since the last successful release.</p>
<p>You have two options</p>
<ul>
<li>Allow a release to trigger, but cancel it. You can then use its ReleaseID to compare with the last release</li>
<li>Add a stage to your release that is skipped, only run on a manual request, and use this as the comparison stage to look for differences</li>
</ul>
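<p>The &lsquo;look back for the most recent successful run&rsquo; logic can be pictured like this. This is a simplified sketch of the idea only, not the task&rsquo;s real implementation (which queries the Azure DevOps REST API); all names here are invented:</p>

```javascript
// Given releases ordered oldest-to-newest, find the most recent release
// before the current one that succeeded; the notes are the changes since
// that baseline. If the current release itself is the last success, the
// diff is empty - which is why the cancelled/skipped-stage tricks exist.
function findComparisonRelease(releases, currentId) {
  const current = releases.findIndex(r => r.id === currentId);
  for (let i = current - 1; i >= 0; i--) {
    if (releases[i].status === 'succeeded') return releases[i];
  }
  return null; // no baseline: first ever successful run
}

const history = [
  { id: 1, status: 'succeeded' },
  { id: 2, status: 'failed' },
  { id: 3, status: 'succeeded' },
  { id: 4, status: 'inProgress' } // the release being documented
];
const baseline = findComparisonRelease(history, 4);
```
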
<h3 id="to-run-the-tool">To run the tool</h3>
<ol>
<li>
<p>In the settings file make sure the TeamFoundationCollectionUri, TeamProject, BuildID, EnvironmentName (a stage in your process), ReleaseID and releaseDefinitionId are set for the release you wish to run against.</p>
</li>
<li>
<p>Run the command</p>
<pre tabindex="0"><code>node .\GenerateReleaseNotesConsoleTester.js release-settings.json &lt;your-Azure-DevOps-PAT&gt; &lt;optional: your GitHub PAT&gt;
</code></pre>
</li>
<li>
<p>Assuming you are using the sample settings you should get an output.md file with your release notes.</p>
</li>
</ol>
<p>Hope you find it useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New feature for Cross Platform Release notes - get parent and child work items</title>
      <link>https://blog.richardfennell.net/posts/new-feature-for-cross-platform-release-notes-get-parent-and-child-work-items/</link>
      <pubDate>Sat, 06 Jun 2020 17:44:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-feature-for-cross-platform-release-notes-get-parent-and-child-work-items/</guid>
<description>&lt;p&gt;I have added another new feature to my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Cross Platform release note generator&lt;/a&gt;. Now, when using Handlebars based templates you can optionally get the parent or child work items for any work item associated with the build/release.&lt;/p&gt;
&lt;p&gt;To enable the feature, as it is off by default, you need to set the  &lt;strong&gt;getParentsAndChildren: true&lt;/strong&gt; parameter for the task, either in YAML or in the handlebars section of the configuration.&lt;/p&gt;
&lt;p&gt;This will add an extra array that the template can access &lt;strong&gt;relatedWorkItems&lt;/strong&gt;. This contains all the work items associated with the build/release plus their direct parents and children. This can then be accessed in the template&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have added another new feature to my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross Platform release note generator</a>. Now, when using Handlebars based templates you can optionally get the parent or child work items for any work item associated with the build/release.</p>
<p>To enable the feature, as it is off by default, you need to set the  <strong>getParentsAndChildren: true</strong> parameter for the task, either in YAML or in the handlebars section of the configuration.</p>
<p>This will add an extra array that the template can access <strong>relatedWorkItems</strong>. This contains all the work items associated with the build/release plus their direct parents and children. This can then be accessed in the template</p>
<pre tabindex="0"><code>{{#forEach this.workItems}}
{{#if isFirst}}### WorkItems {{/if}}
* **{{this.id}}** {{lookup this.fields &#39;System.Title&#39;}}
  - **WIT** {{lookup this.fields &#39;System.WorkItemType&#39;}}
  - **Tags** {{lookup this.fields &#39;System.Tags&#39;}}
  - **Assigned** {{#with (lookup this.fields &#39;System.AssignedTo&#39;)}} {{displayName}} {{/with}}
  - **Description** {{{lookup this.fields &#39;System.Description&#39;}}}
  - **Parents**
{{#forEach this.relations}}
{{#if (contains this.attributes.name &#39;Parent&#39;)}}
{{#with (lookup_a_work_item ../../relatedWorkItems this.url)}}
      - {{this.id}} - {{lookup this.fields &#39;System.Title&#39;}}
{{/with}}
{{/if}}
{{/forEach}}
  - **Children**
{{#forEach this.relations}}
{{#if (contains this.attributes.name &#39;Child&#39;)}}
{{#with (lookup_a_work_item ../../relatedWorkItems this.url)}}
      - {{this.id}} - {{lookup this.fields &#39;System.Title&#39;}}
{{/with}}
{{/if}}
{{/forEach}}
{{/forEach}}
</code></pre><p>This is a complex way to present the extra work items, but very flexible.</p>
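<p>The <strong>lookup_a_work_item</strong> helper used above is essentially a find-by-URL over the <strong>relatedWorkItems</strong> array. Conceptually it behaves something like this (an illustrative sketch in plain JavaScript, not the extension&rsquo;s actual helper code; the sample data is invented):</p>

```javascript
// Conceptual model of the lookup_a_work_item Handlebars helper: given the
// relatedWorkItems array and a relation's URL, return the matching expanded
// work item so its fields can be rendered in the template.
function lookupAWorkItem(relatedWorkItems, url) {
  return relatedWorkItems.find(wi => wi.url === url);
}

// Invented sample data shaped like Azure DevOps work item API results.
const related = [
  { id: 10, url: 'https://dev.azure.com/org/_apis/wit/workItems/10',
    fields: { 'System.Title': 'Parent epic' } },
  { id: 11, url: 'https://dev.azure.com/org/_apis/wit/workItems/11',
    fields: { 'System.Title': 'Child task' } }
];
const parent = lookupAWorkItem(related, 'https://dev.azure.com/org/_apis/wit/workItems/10');
```
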
<p>Hope people find the new feature useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>And another new feature for my Cross Platform Release Notes Azure DevOps Task - commit/changeset file details</title>
      <link>https://blog.richardfennell.net/posts/and-another-new-feature-for-my-cross-platform-release-notes-azure-devops-task-commit-changeset-file-details/</link>
      <pubDate>Wed, 20 May 2020 20:15:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-another-new-feature-for-my-cross-platform-release-notes-azure-devops-task-commit-changeset-file-details/</guid>
<description>&lt;p&gt;The &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2020/03/11/a-major-new-feature-for-my-cross-platform-release-notes-azure-devops-pipelines-extension-handlebars-templating-support/&#34;&gt;addition of Handlebars based templating&lt;/a&gt; for my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Cross Platform Release Notes Task&lt;/a&gt; has certainly made it much easier to release new features. The legacy templating model, it seems, is what had been holding development back.&lt;/p&gt;
&lt;p&gt;In the past month or so I have added support for generating release notes based on PRs and Tests. I am now happy to say I have just added support for the actual files associated with a commit or changeset.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>The <a href="https://blogs.blackmarble.co.uk/rfennell/2020/03/11/a-major-new-feature-for-my-cross-platform-release-notes-azure-devops-pipelines-extension-handlebars-templating-support/">addition of Handlebars based templating</a> for my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross Platform Release Notes Task</a> has certainly made it much easier to release new features. The legacy templating model, it seems, is what had been holding development back.</p>
<p>In the past month or so I have added support for generating release notes based on PRs and Tests. I am now happy to say I have just added support for the actual files associated with a commit or changeset.</p>
<p>Enriching the commit/changeset data with the details of the files edited has been a repeated request over the years. The basic commit/changeset object only detailed the commit message and the author. With this new release of my task there is now a <strong>changes</strong> property on the <strong>commit</strong> objects that exposes the details of the actual files in the commit/changeset.</p>
<p>This is used in a Handlebars based template as follows</p>
<pre tabindex="0"><code># Global list of CS ({{commits.length}})
{{#forEach commits}}
{{#if isFirst}}### Associated commits{{/if}}
* **ID {{this.id}}**
   -  **Message:** {{this.message}}
   -  **Committed by:** {{this.author.displayName}}
   -  **FileCount:** {{this.changes.length}}
{{#forEach this.changes}}
      -  **File path (use this for TFVC or TfsGit):** {{this.item.path}}
      -  **File filename (use this for GitHub):** {{this.filename}}
      -  **All the properties available for the file:** {{json this}}
{{/forEach}}
{{/forEach}}
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Another feature for my Cross Platform Release Notes Azure DevOps Extension&amp;ndash;access to test results</title>
      <link>https://blog.richardfennell.net/posts/another-feature-for-my-cross-platform-release-notes-azure-devops-extension-access-to-test-results/</link>
      <pubDate>Mon, 18 May 2020 13:20:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/another-feature-for-my-cross-platform-release-notes-azure-devops-extension-access-to-test-results/</guid>
      <description>&lt;p&gt;Over the weekend I got another new feature for my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Cross Platform Release Notes Azure DevOps Extension&lt;/a&gt; working. The test results associated with build artefacts or releases are now exposed to Handlebars based templates.&lt;/p&gt;
&lt;p&gt;The new objects you can access are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In builds&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;tests – all the tests run as part of the current build&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In releases&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;tests – all the tests run as part of any current build artefacts, or prior to the running of the release notes task within a release environment&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the weekend I got another new feature for my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross Platform Release Notes Azure DevOps Extension</a> working. The test results associated with build artefacts or releases are now exposed to Handlebars based templates.</p>
<p>The new objects you can access are:</p>
<ul>
<li>
<p>In builds</p>
<ul>
<li>
<p>tests – all the tests run as part of the current build</p>
</li>
</ul>
</li>
<li>
<p>In releases</p>
<ul>
<li>
<p>tests – all the tests run as part of any current build artefacts, or prior to the running of the release notes task within a release environment</p>
</li>
<li>
<p>releaseTests – all the tests run within a release environment</p>
</li>
<li>
<p>builds.test – all the tests run as part of any build artefacts, grouped by build artefact</p>
</li>
</ul>
</li>
</ul>
<p>These can be used as follows in a release template</p>
<pre tabindex="0"><code># Builds with associated WI/CS/Tests ({{builds.length}})
{{#forEach builds}}
{{#if isFirst}}## Builds {{/if}}
## Build {{this.build.buildNumber}}
{{#forEach this.commits}}
{{#if isFirst}}### Commits {{/if}}
- CS {{this.id}}
{{/forEach}}
{{#forEach this.workitems}}
{{#if isFirst}}### Workitems {{/if}}
- WI {{this.id}}
{{/forEach}}
{{#forEach this.tests}}
{{#if isFirst}}### Tests {{/if}}
- Test {{this.id}}
  - Name: {{this.testCase.name}}
  - Outcome: {{this.outcome}}
{{/forEach}}
{{/forEach}}

# Global list of tests ({{tests.length}})
{{#forEach tests}}
{{#if isFirst}}### Tests {{/if}}
* **ID {{this.id}}**
  - Name: {{this.testCase.name}}
  - Outcome: {{this.outcome}}
{{/forEach}}
</code></pre><p>For more details see the <a href="https://github.com/rfennell/AzurePipelines/wiki/GenerateReleaseNotes---Node-based-Cross-Platform-Task">documentation in the WIKI</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Running SonarQube for a .NET Core project in Azure DevOps YAML multi-stage pipelines</title>
      <link>https://blog.richardfennell.net/posts/running-sonarqube-for-a-net-core-project-in-azure-devops-yaml-multi-stage-pipelines/</link>
      <pubDate>Mon, 11 May 2020 10:31:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-sonarqube-for-a-net-core-project-in-azure-devops-yaml-multi-stage-pipelines/</guid>
<description>&lt;p&gt;We have been looking at migrating some of our common .NET Core libraries into new NuGet packages and have taken the chance to change our build process to use &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/multi-stage-pipelines-experience?view=azure-devops&#34;&gt;Azure DevOps Multi-stage Pipelines&lt;/a&gt;. Whilst doing this I hit a problem getting &lt;a href=&#34;https://www.sonarqube.org/&#34;&gt;SonarQube&lt;/a&gt; analysis working, as the documentation I found was a little confusing.&lt;/p&gt;
&lt;h3 id=&#34;the-problem&#34;&gt;The Problem&lt;/h3&gt;
&lt;p&gt;As part of the YAML pipeline re-design we were moving away from building Visual Studio SLN solution files, swapping to the .NET Core command line for the build and testing of .CSPROJ files. Historically we had used the &lt;a href=&#34;https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-azure-devops/&#34;&gt;SonarQube Build Tasks&lt;/a&gt; that can be found in the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarqube&#34;&gt;Azure DevOps Marketplace&lt;/a&gt; to control SonarQube analysis. However, if we used these tasks in the new YAML pipeline we quickly found that the SonarQube analysis failed, saying it could find no projects: ##[error]No analysable projects were found. SonarQube analysis will not be performed. Check the build summary report for details. So I next swapped to using the &lt;a href=&#34;https://www.nuget.org/packages/dotnet-sonarscanner&#34;&gt;SonarScanner for .NET Core&lt;/a&gt;, assuming the issue was down to not using .NET Core commands.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>We have been looking at migrating some of our common .NET Core libraries into new NuGet packages and have taken the chance to change our build process to use <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/multi-stage-pipelines-experience?view=azure-devops">Azure DevOps Multi-stage Pipelines</a>. Whilst doing this I hit a problem getting <a href="https://www.sonarqube.org/">SonarQube</a> analysis working, as the documentation I found was a little confusing.</p>
<h3 id="the-problem">The Problem</h3>
<p>As part of the YAML pipeline re-design we were moving away from building Visual Studio SLN solution files, swapping to the .NET Core command line for the build and testing of .CSPROJ files. Historically we had used the <a href="https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-azure-devops/">SonarQube Build Tasks</a> that can be found in the <a href="https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarqube">Azure DevOps Marketplace</a> to control SonarQube analysis. However, if we used these tasks in the new YAML pipeline we quickly found that the SonarQube analysis failed, saying it could find no projects: <strong>##[error]No analysable projects were found. SonarQube analysis will not be performed. Check the build summary report for details.</strong> So I next swapped to using the <a href="https://www.nuget.org/packages/dotnet-sonarscanner">SonarScanner for .NET Core</a>, assuming the issue was down to not using .NET Core commands. This gave YAML as follows</p>
<pre tabindex="0"><code>- task: DotNetCoreCLI@2
  displayName: 'Install Sonarscanner'
  inputs:
    command: 'custom'
    custom: 'tool'
    arguments: 'install --global dotnet-sonarscanner --version 4.9.0'

- task: DotNetCoreCLI@2
  displayName: 'Begin Sonarscanner'
  inputs:
    command: 'custom'
    custom: 'sonarscanner'
    arguments: 'begin /key:"$(SonarQubeProjectKey)" /name:"$(SonarQubeName)" /d:sonar.host.url="$(SonarQubeUrl)" /d:sonar.login="$(SonarQubeProjectAPIKey)" /version:$(Major).$(Minor)'

# .... Build and test the project

- task: DotNetCoreCLI@2
  displayName: 'End Sonarscanner'
  inputs:
    command: 'custom'
    custom: 'sonarscanner'
    arguments: 'end /key:"$(SonarQubeProjectKey)"'
</code></pre>
<h3 id="the-solution">The Solution</h3>
<p>The solution, it turns out, was nothing to do with which of the two ways was used to trigger SonarQube analysis. It was down to the fact that the .NET Core .CSPROJ files did not have unique GUIDs. Historically this had not been an issue, as GUIDs are automatically injected if you trigger SonarQube analysis via a Visual Studio solution. The move to building using the .NET Core command line was the problem, and the fix was simple: just add a unique GUID to each CS project file.</p>
<pre tabindex="0"><code>&lt;Project Sdk=&#34;MSBuild.Sdk.Extras&#34;&gt;
  &lt;PropertyGroup&gt;
    &lt;TargetFramework&gt;netstandard2.0&lt;/TargetFramework&gt;
    &lt;PublishRepositoryUrl&gt;true&lt;/PublishRepositoryUrl&gt;
    &lt;EmbedUntrackedSources&gt;true&lt;/EmbedUntrackedSources&gt;
    &lt;ProjectGuid&gt;e2bb4d3a-879c-4472-8ddc-94b2705abcde&lt;/ProjectGuid&gt;
…
</code></pre>
<p>Once this was done, either way of running SonarQube worked. After a bit of thought, I decided to stay with the same tasks I have used historically to trigger analysis. This was for a few reasons:</p>
<ul>
<li>I can use a central Service Connection to manage credentials to access SonarQube</li>
<li>The tasks manage the installation and update of the SonarQube tools on the agent</li>
<li>I need to pass fewer parameters about due to the use of the service connection</li>
<li>I can more easily include the SonarQube analysis report in the build</li>
</ul>
<p>So my YAML now looks like this</p>
<pre tabindex="0"><code>- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B7A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: Sonarqube
    projectKey: '$(sonarqubeKey)'
    projectName: '$(sonarqubeName)'
    projectVersion: '$(Major).$(Minor)'
    extraProperties: |
      # Additional properties that will be passed to the scanner,
      # Put one key=value per line, example:
      # sonar.exclusions=**/*.bin
      sonar.dependencyCheck.reportPath=$(Build.SourcesDirectory)/dependency-check-report.xml
      sonar.dependencyCheck.htmlReportPath=$(Build.SourcesDirectory)/dependency-check-report.html
      sonar.cpd.exclusions=**/AssemblyInfo.cs,**/*.g.cs
      sonar.cs.vscoveragexml.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.coveragexml
      sonar.cs.vstest.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.trx

# … Build &amp; Test with DotNetCoreCLI@2 tasks

- task: SonarSource.sonarqube.6D01813A-9589-4B15-8491-8164AEB38055.SonarQubeAnalyze@4
  displayName: 'Run Code Analysis'

- task: SonarSource.sonarqube.291ed61f-1ee4-45d3-b1b0-bf822d9095ef.SonarQubePublish@4
  displayName: 'Publish Quality Gate Result'
</code></pre><h3 id="addendum">Addendum</h3>
<p>Even though I don’t use it in the YAML, I still found a use for the .NET Core SonarScanner commands. We use the <a href="https://www.sonarqube.org/developer-edition/?gclid=CjwKCAjw7-P1BRA2EiwAXoPWAyHYW04MGJUXJtfXaA2cvE2Q-wR9WeSEYyvjbm7yuzcHwADOrtvefxoCIoEQAvD_BwE">Developer edition of SonarQube</a>, which understands Git branches and PRs. This edition requires that you perform an analysis on the master branch before any analysis on other branches can be done, because branch analysis is measured relative to the quality of the master branch. I have found the easiest way to establish this baseline, even if it is of an empty project, is to run SonarScanner from the command line on my PC, just to set up the base for any PR to be measured against.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Announcing the deprecation of my Azure DevOps Pester Extension as it has been migrated to the Pester Project and republished under a new ID</title>
      <link>https://blog.richardfennell.net/posts/announcing-the-deprecation-of-my-azure-devops-pester-extension-as-it-has-been-migrated-to-the-pester-project-and-republished-under-a-new-id/</link>
      <pubDate>Sun, 03 May 2020 13:14:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcing-the-deprecation-of-my-azure-devops-pester-extension-as-it-has-been-migrated-to-the-pester-project-and-republished-under-a-new-id/</guid>
<description>&lt;p&gt;Back in early 2016 I wrote an &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-PesterRunner-Task&#34;&gt;Azure DevOps Extension to wrapper Pester&lt;/a&gt;, the Powershell unit testing tool. Over the years I updated it, and then passed its support over to &lt;a href=&#34;https://github.com/ChrisLGardner&#34;&gt;Chris Gardner&lt;/a&gt;, someone who knows much more about Powershell and Pester than I do, and he continued to develop it.&lt;/p&gt;
&lt;p&gt;With the advent of cross-platform Powershell Core we realized that the current extension implementation had a fundamental limitation. Azure DevOps Tasks can only be executed by the agent using the Windows version of Powershell or Node. There is no option for execution by Powershell Core, and probably never will be. As Pester is now supported by Powershell Core this was a serious limitation.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Back in early 2016 I wrote an <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-PesterRunner-Task">Azure DevOps Extension to wrapper Pester</a>, the Powershell unit testing tool. Over the years I updated it, and then passed its support over to <a href="https://github.com/ChrisLGardner">Chris Gardner</a>, someone who knows much more about Powershell and Pester than I do, and he continued to develop it.</p>
<p>With the advent of cross-platform Powershell Core we realized that the current extension implementation had a fundamental limitation. Azure DevOps Tasks can only be executed by the agent using the Windows version of Powershell or Node. There is no option for execution by Powershell Core, and probably never will be. As Pester is now supported by Powershell Core this was a serious limitation.</p>
<p>To get around this problem <a href="https://blogs.blackmarble.co.uk/rfennell/2019/12/28/a-technique-for-porting-powershell-based-azure-devops-extensions-to-node-so-they-can-be-run-cross-platform-without-a-complete-re-write/">I wrote a Node wrapper</a> to allow the existing Powershell task to be executed using Node, by running a Node script then shelling out to Powershell or Powershell Core. A technique I have since used to make other extensions of mine cross-platform</p>
<p>Around this time we started to discuss whether my <a href="https://github.com/rfennell/AzurePipelines">GitHub repo</a> was really the best home for this Pester extension, and we decided that this major update to provide cross-platform support was a good point to move it to a new home under the ownership of the <a href="https://github.com/pester">Pester Project</a>.</p>
<p>So, given all that history, I am really pleased to say that I am deprecating my Pester Extension. It is not going away and will continue to work as it currently does, but it will not be updated again, and all users should consider swapping over to the <a href="https://marketplace.visualstudio.com/items?itemName=Pester.PesterRunner">new cross-platform version of the extension</a>: the next generation of the same code base, now owned and maintained by the Pester project (well, still Chris in reality).</p>
<p>Unfortunately, Azure DevOps provides no way to migrate ownership of an extension, so swapping to the new version will require some work. If you are using YAML, the conversion is only a case of changing the task name/ID. If you are using UI based builds or releases, you need to add the new task and do some copy typing of parameters. The good news is that all the parameter options remain the same, so it should be a quick job.</p>
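<p>In YAML the swap looks something like this. The task identifiers and parameters shown here are illustrative; take the exact values from the two Marketplace listings:</p>

```yaml
steps:
# Before: my original extension's task (publisher richardfennellBM)
# - task: richardfennellBM.BM-VSTS-PesterRunner-Task.Pester-Task.Pester@8

# After: the Pester project's republished task (publisher Pester)
- task: Pester.PesterRunner.Pester-Task.Pester@9
  inputs:
    scriptFolder: '$(System.DefaultWorkingDirectory)/tests/*'
    resultsFile: '$(System.DefaultWorkingDirectory)/Test-Pester.XML'
```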
<p>Also please note that any outstanding issues not fixed in the new release have been migrated over to the extension&rsquo;s new home; they have not been forgotten.</p>
<p>So I hope you all like the new enhanced version of the Pester Extension, and thanks to Chris for sorting the migration and for all his work supporting it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for &amp;lsquo;System.BadImageFormatException&amp;rsquo; when running x64 based tests inside a Azure DevOps Release</title>
      <link>https://blog.richardfennell.net/posts/fix-for-system-badimageformatexception-when-running-x64-based-tests-inside-a-azure-devops-release/</link>
      <pubDate>Thu, 23 Apr 2020 10:05:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-system-badimageformatexception-when-running-x64-based-tests-inside-a-azure-devops-release/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is one of those blog posts I write to remind my future self how I fixed a problem.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;the-problem&#34;&gt;The Problem&lt;/h3&gt;
&lt;p&gt;I have a release that installs VSTest and runs some integration tests that target .NET 4.6 x64. All these tests worked fine in Visual Studio. However, I got the following errors for all tests when they were run in a release&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;2020-04-23T09:30:38.7544708Z vstest.console.exe &amp;#34;C:agent\_workr1aPaymentServicesdroptestartifactsPaymentService.IntegrationTests.dll&amp;#34;

2020-04-23T09:30:38.7545688Z /Settings:&amp;#34;C:agent\_work\_tempuxykzf03ik2.tmp.runsettings&amp;#34;

2020-04-23T09:30:38.7545808Z /Logger:&amp;#34;trx&amp;#34;

2020-04-23T09:30:38.7545937Z /TestAdapterPath:&amp;#34;C:agent\_workr1aPaymentServicesdroptestartifacts&amp;#34;

2020-04-23T09:30:39.2634578Z Starting test execution, please wait...

2020-04-23T09:30:39.4783658Z A total of 1 test files matched the specified pattern.

2020-04-23T09:30:40.8660112Z   X Can\_Get\_MIDs \[521ms\]

2020-04-23T09:30:40.8684249Z   Error Message:

2020-04-23T09:30:40.8684441Z    Test method PaymentServices.IntegrationTests.ControllerMIDTests.Can\_Get\_MIDs threw exception: 

2020-04-23T09:30:40.8684574Z System.BadImageFormatException: Could not load file or assembly &amp;#39;PaymentServices, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null&amp;#39; or one of its dependencies. An attempt was made to load a program with an incorrect format.

2020-04-23T09:30:40.8684766Z   Stack Trace:

2020-04-23T09:30:40.8684881Z       at PaymentServices.IntegrationTests.ControllerMIDTests.Can\_Get\_MIDs()

…

2020-04-23T09:30:40.9038788Z Results File: C:agent\_work\_tempTestResultssvc-devops\_SVRHQAPP027\_2020-04-23\_10\_30\_40.trx

2020-04-23T09:30:40.9080344Z Total tests: 22

2020-04-23T09:30:40.9082348Z      Failed: 22

2020-04-23T09:30:40.9134858Z ##\[error\]Test Run Failed.
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;solution&#34;&gt;Solution&lt;/h3&gt;
&lt;p&gt;I needed to tell &lt;strong&gt;vstest.console.exe&lt;/strong&gt; to run x64 as opposed to its default of x86 (32-bit). This can be done with a command line override &lt;strong&gt;--platform:x64&lt;/strong&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This is one of those blog posts I write to remind my future self how I fixed a problem.</em></p>
<h3 id="the-problem">The Problem</h3>
<p>I have a release that installs VSTest and runs some integration tests that target .NET 4.6 x64. All these tests worked fine in Visual Studio. However, I got the following errors for all tests when they were run in a release</p>
<pre tabindex="0"><code>2020-04-23T09:30:38.7544708Z vstest.console.exe &#34;C:\agent\_work\r1\a\PaymentServices\drop\testartifacts\PaymentService.IntegrationTests.dll&#34;
2020-04-23T09:30:38.7545688Z /Settings:&#34;C:\agent\_work\_temp\uxykzf03ik2.tmp.runsettings&#34;
2020-04-23T09:30:38.7545808Z /Logger:&#34;trx&#34;
2020-04-23T09:30:38.7545937Z /TestAdapterPath:&#34;C:\agent\_work\r1\a\PaymentServices\drop\testartifacts&#34;
2020-04-23T09:30:39.2634578Z Starting test execution, please wait...
2020-04-23T09:30:39.4783658Z A total of 1 test files matched the specified pattern.
2020-04-23T09:30:40.8660112Z   X Can_Get_MIDs [521ms]
2020-04-23T09:30:40.8684249Z   Error Message:
2020-04-23T09:30:40.8684441Z    Test method PaymentServices.IntegrationTests.ControllerMIDTests.Can_Get_MIDs threw exception: 
2020-04-23T09:30:40.8684574Z System.BadImageFormatException: Could not load file or assembly &#39;PaymentServices, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null&#39; or one of its dependencies. An attempt was made to load a program with an incorrect format.
2020-04-23T09:30:40.8684766Z   Stack Trace:
2020-04-23T09:30:40.8684881Z       at PaymentServices.IntegrationTests.ControllerMIDTests.Can_Get_MIDs()
…
2020-04-23T09:30:40.9038788Z Results File: C:\agent\_work\_temp\TestResults\svc-devops_SVRHQAPP027_2020-04-23_10_30_40.trx
2020-04-23T09:30:40.9080344Z Total tests: 22
2020-04-23T09:30:40.9082348Z      Failed: 22
2020-04-23T09:30:40.9134858Z ##[error]Test Run Failed.
</code></pre><h3 id="solution">Solution</h3>
<p>I needed to tell <strong>vstest.console.exe</strong> to run x64 as opposed to its default of x86 (32-bit). This can be done with a command line override <strong>--platform:x64</strong></p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/04/image-1.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/04/image_thumb-1.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>I decided to create a video of my blog post on Multistage YAML pipelines</title>
      <link>https://blog.richardfennell.net/posts/i-decided-to-create-a-video-of-my-blog-post-on-multistage-yaml-pipelines/</link>
      <pubDate>Wed, 22 Apr 2020 11:42:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-decided-to-create-a-video-of-my-blog-post-on-multistage-yaml-pipelines/</guid>
      <description>&lt;p&gt;I decided to create a video of my blog post ‘&lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2020/04/07/swapping-my-azure-devops-pipeline-extensions-release-process-to-use-multistage-yaml-pipelines/&#34;&gt;Swapping my Azure DevOps Pipeline Extensions release process to use Multistage YAML pipelines’&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;iframe width=&#34;560&#34; height=&#34;315&#34; src=&#34;https://www.youtube.com/embed/WMQ0G9eXczE&#34; frameborder=&#34;0&#34; allow=&#34;accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture&#34; allowfullscreen&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;p&gt;The video is up on &lt;a href=&#34;https://bit.ly/MigratetoYAML&#34;&gt;YouTube&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I decided to create a video of my blog post ‘<a href="https://blogs.blackmarble.co.uk/rfennell/2020/04/07/swapping-my-azure-devops-pipeline-extensions-release-process-to-use-multistage-yaml-pipelines/">Swapping my Azure DevOps Pipeline Extensions release process to use Multistage YAML pipelines’</a>.</p>
<p><iframe width="560" height="315" src="https://www.youtube.com/embed/WMQ0G9eXczE" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p>
<p>The video is up on <a href="https://bit.ly/MigratetoYAML">YouTube</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>And more enriching the data available in my Azure DevOps Pipelines Cross Platform Release Notes Task</title>
      <link>https://blog.richardfennell.net/posts/and-more-enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/</link>
      <pubDate>Tue, 21 Apr 2020 16:42:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-more-enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/</guid>
      <description>&lt;p&gt;I have today released another enrichment to the dataset available in my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Cross Platform Release Notes Azure Pipeline Task&lt;/a&gt;. It now returns an extra array of data that links work items and commits to build artifacts.&lt;/p&gt;
&lt;p&gt;So your reporting objects are:&lt;/p&gt;
&lt;h3 id=&#34;array-objects&#34;&gt;Array Objects&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;workItems – the array of all work items associated with the release&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;commits – the array of all commits associated with the release&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;pullRequests - the array of all PRs referenced by the commits in the release&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have today released another enrichment to the dataset available in my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross Platform Release Notes Azure Pipeline Task</a>. It now returns an extra array of data that links work items and commits to build artifacts.</p>
<p>So your reporting objects are:</p>
<h3 id="array-objects">Array Objects</h3>
<ul>
<li>
<p>workItems – the array of all work items associated with the release</p>
</li>
<li>
<p>commits – the array of all commits associated with the release</p>
</li>
<li>
<p>pullRequests - the array of all PRs referenced by the commits in the release</p>
</li>
<li>
<p><strong>The new one</strong> - builds - the array of build artifacts that commits and work items are associated with. Note that each entry is an object with three properties:</p>
<ul>
<li>
<p>build - the build details</p>
</li>
<li>
<p>commits - the commits associated with this build</p>
</li>
<li>
<p>workitems - the work items associated with the build</p>
</li>
</ul>
</li>
</ul>
<h3 id="release-objects-only-available-in-a-release">Release objects (only available in a release)</h3>
<ul>
<li>releaseDetails – the release details of the release that the task was triggered for.</li>
<li>compareReleaseDetails - the previous successful release that comparisons are being made against</li>
</ul>
<h3 id="build-objects">Build objects</h3>
<ul>
<li>buildDetails – if running in a build, the build details of the build that the task is running in. If running in a release it is the build that triggered the release.</li>
</ul>
<p><strong>Note:</strong> To dump all possible values use the form {{json propertyToDump}}; this runs a custom Handlebars extension to do the expansion.</p>
<p>It is important to realise that these arrays are only available using the Handlebars form of templating. You can find samples <a href="https://github.com/rfennell/AzurePipelines/tree/master/SampleTemplates/XplatGenerateReleaseNotes%20%28Node%20based%29/Version%202/Handlbars">here</a>.</p>
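<p>As an illustration, a Handlebars template block iterating the new builds array might look like the following; the nested property names (e.g. buildNumber, message) are assumptions based on the standard Azure DevOps build and commit objects, so check a {{json builds}} dump for the exact fields:</p>
<pre tabindex="0"><code>{{#forEach builds}}
### Build {{this.build.buildNumber}}
{{#forEach this.commits}}
* CS {{this.id}} - {{this.message}}
{{/forEach}}
{{#forEach this.workitems}}
* WI {{this.id}}
{{/forEach}}
{{/forEach}}
</code></pre>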
]]></content:encoded>
    </item>
    <item>
      <title>Further enriching the data available in my Azure DevOps Pipelines Cross Platform Release Notes Task</title>
      <link>https://blog.richardfennell.net/posts/further-enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/</link>
      <pubDate>Wed, 15 Apr 2020 16:01:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/further-enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/</guid>
      <description>&lt;p&gt;I recently post about &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2020/04/04/enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/&#34;&gt;Enriching the data available in my Azure DevOps Pipelines Cross Platform Release Notes Task&lt;/a&gt; by adding Pull Request information. Well, that first release was fairly limited only working for PR validation builds, so I have made more improvements and shipped a newer version.&lt;/p&gt;
&lt;p&gt;As well as checking for a PR build trigger, the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;task&lt;/a&gt; will now try to associate the commits in a build/release pipeline with any completed PRs in the repo. This is done using the Last Merge Commit ID and, from my tests, seems to work for the various types of PR, e.g. squash, merge, rebase and semi-linear.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently post about <a href="https://blogs.blackmarble.co.uk/rfennell/2020/04/04/enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/">Enriching the data available in my Azure DevOps Pipelines Cross Platform Release Notes Task</a> by adding Pull Request information. Well, that first release was fairly limited only working for PR validation builds, so I have made more improvements and shipped a newer version.</p>
<p>As well as checking for a PR build trigger, the <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">task</a> will now try to associate the commits in a build/release pipeline with any completed PRs in the repo. This is done using the Last Merge Commit ID and, from my tests, seems to work for the various types of PR, e.g. squash, merge, rebase and semi-linear.</p>
<p>The resultant set of PRs are made available to the release notes template processor as an array. However, there is a difference between the existing arrays for Work Items and Commits and the new one for Pull requests. The new one is only available if you are using the <a href="https://blogs.blackmarble.co.uk/rfennell/2020/03/11/a-major-new-feature-for-my-cross-platform-release-notes-azure-devops-pipelines-extension-handlebars-templating-support/">new Handlebars based templating mode</a>.</p>
<p>You would add a block in the general form….</p>
<pre tabindex="0"><code>{{#forEach pullRequests}}  
{{#if isFirst}}### Associated Pull Requests (only shown if PR) {{/if}}  
*  **PR {{this.id}}**  {{this.title}}  
{{/forEach}}
</code></pre><p>The reason I have chosen to only support Handlebars is that it makes the development so much easier and provides a more flexible solution, given all the Handlebars helpers available. I think this might be the first tentative step towards deprecating my legacy templating solution in favour of only shipping Handlebars support.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Swapping my Azure DevOps Pipeline Extensions release process to use Multistage YAML pipelines</title>
      <link>https://blog.richardfennell.net/posts/swapping-my-azure-devops-pipeline-extensions-release-process-to-use-multistage-yaml-pipelines/</link>
      <pubDate>Tue, 07 Apr 2020 10:55:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/swapping-my-azure-devops-pipeline-extensions-release-process-to-use-multistage-yaml-pipelines/</guid>
      <description>&lt;p&gt;In the past I have &lt;a href=&#34;https://github.com/rfennell/AzurePipelines/wiki/Outlining-my-Azure-DevOps-CI-CD-Process-using-UI-based-tools&#34;&gt;documented the build and release process I use for my Azure DevOps Pipeline Extensions&lt;/a&gt; and also detailed how I have started &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2019/04/26/migrating-a-gui-based-build-to-yaml-in-azure-devops-pipelines/&#34;&gt;to move the build phases to YAML&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Well now I consider that &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;amp;tabs=yaml&#34;&gt;multistage YAML pipelines&lt;/a&gt; are mature enough to allow me to do my whole release pipeline in YAML, hence this post.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/04/image.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/04/image_thumb.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;My pipeline performs a number of stages, &lt;a href=&#34;https://github.com/rfennell/AzurePipelines/blob/master/Extensions/ArtifactDescription/azure-pipelines.yml&#34;&gt;you can find a sample pipeline here&lt;/a&gt;. Note that I have made every effort to extract variables into variable groups to aid reuse of the pipeline definition. I have added documentation as to where variables are stored and what they are used for.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In the past I have <a href="https://github.com/rfennell/AzurePipelines/wiki/Outlining-my-Azure-DevOps-CI-CD-Process-using-UI-based-tools">documented the build and release process I use for my Azure DevOps Pipeline Extensions</a> and also detailed how I have started <a href="https://blogs.blackmarble.co.uk/rfennell/2019/04/26/migrating-a-gui-based-build-to-yaml-in-azure-devops-pipelines/">to move the build phases to YAML</a>.</p>
<p>Well now I consider that <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;tabs=yaml">multistage YAML pipelines</a> are mature enough to allow me to do my whole release pipeline in YAML, hence this post.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/04/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/04/image_thumb.png" title="image"></a></p>
<p>My pipeline performs a number of stages, <a href="https://github.com/rfennell/AzurePipelines/blob/master/Extensions/ArtifactDescription/azure-pipelines.yml">you can find a sample pipeline here</a>. Note that I have made every effort to extract variables into variable groups to aid reuse of the pipeline definition. I have added documentation as to where variables are stored and what they are used for.</p>
<p>The stages are as follows</p>
<h3 id="build">Build</h3>
<p>The build phase does the following</p>
<ul>
<li>
<p>Updates all the TASK.JSON files so that the help text has the correct version number</p>
</li>
<li>
<p>Calls a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops">YAML template</a> (<a href="https://github.com/rfennell/AzurePipelines/blob/master/YAMLTemplates/build-node-task.yml">build-Node-task</a>) that performs all the tasks to transpile a TypeScript based task – if my extension contained multiple tasks this template would be called a number of times</p>
</li>
<li>
<p>Get NPM packages</p>
</li>
<li>
<p>Run <a href="https://marketplace.visualstudio.com/items?itemName=Snyk.snyk-security-scan">Snyk</a> to check for vulnerabilities – if any vulnerabilities are found the build fails</p>
</li>
<li>
<p>Lint and Transpile the TypeScript – if any issues are found the build fails</p>
</li>
<li>
<p>Run any unit tests and publish results – if any tests fail the build fails</p>
</li>
<li>
<p>Package up the task (remove dev dependencies)</p>
</li>
<li>
<p>Download the TFX client</p>
</li>
<li>
<p>Package up the Extension VSIX package and publish as a pipeline artifact.</p>
</li>
</ul>
<h3 id="private">Private</h3>
<p>The private phase does the following</p>
<ul>
<li>
<p>Using another YAML template (<a href="https://github.com/rfennell/AzurePipelines/blob/master/YAMLTemplates/publish-extension.yml">publish-extension</a>) publish the extension to the <a href="https://marketplace.visualstudio.com/search?term=fennell&amp;target=AzureDevOps&amp;category=Azure%20Pipelines&amp;sortBy=Relevance">Azure DevOps Marketplace</a>, but with flags so it is private and only accessible to my account for testing</p>
</li>
<li>
<p>Download the TFX client</p>
</li>
<li>
<p>Publishes the Extension to the Marketplace</p>
</li>
</ul>
<p>This phase is done as a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops">deployment job</a> and is linked to an environment. However, no special approval requirements are set on this environment. This is because I am happy for the release to be done to the private instance assuming the build phase completes without error.</p>
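<p>As a sketch, a deployment job stage linked to an environment takes this general shape in YAML; the stage, environment and job names here are illustrative, not the exact values from my pipeline:</p>
<pre tabindex="0"><code># Sketch: a stage using a deployment job tied to an environment
- stage: Private
  dependsOn: Build
  jobs:
  - deployment: PublishPrivate        # illustrative name
    environment: &#39;Marketplace-Private&#39;  # no approvals set on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - template: YAMLTemplates/publish-extension.yml
</code></pre>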
<h3 id="test">Test</h3>
<p>This is where the pipeline gets interesting. The test phase does the following</p>
<ul>
<li>Runs any integration tests. These could be anything, dependent on the extension being deployed. Unfortunately, there is at present no option in multistage pipelines for a manual task to say ‘do the manual tests’, but you could simulate something similar by sending an email or the like.</li>
</ul>
<p>The clever bit here is that I don’t want this stage to run until the new private version of the extension has been published and is available; there can be a delay between TFX saying the extension is published and it being downloadable by an agent. This can cause a problem in that you can end up running tests against a different version of the extension from the one you think you have. To get around this problem I have implemented a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&amp;tabs=check-pass">check on the environment</a> this stage’s deployment job is linked to. This check runs an Azure Function to check the version of the extension in the Marketplace. This is <a href="https://blogs.blackmarble.co.uk/rfennell/2018/03/20/using-vsts-gates-to-help-improve-my-deployment-pipeline-of-vsts-extensions-to-the-visual-studio-marketplace/">exactly the same Azure Function I already used in my UI based pipelines to perform the same job</a>.</p>
<p>The only issue here is that this Azure Function is used as an exit gate in my UI based pipelines, to not allow the pipeline to exit the private stage until the extension is published. I cannot do this in a multistage YAML pipeline as environment checks are only done on entry to the environment. This means I have had to use an extra Test stage to associate the entry check with. This was set up as follows</p>
<ul>
<li>
<p>Create a new environment</p>
</li>
<li>
<p>Click the ellipse (…) and pick ‘approvals and checks’</p>
</li>
<li>
<p>Add a new Azure Function check</p>
</li>
<li>
<p>Provide the details, documented in my <a href="https://blogs.blackmarble.co.uk/rfennell/2018/03/20/using-vsts-gates-to-help-improve-my-deployment-pipeline-of-vsts-extensions-to-the-visual-studio-marketplace/">previous post</a>, to link to your Azure Function. Note that you can, in the ‘control options’ section of the configuration, link to a variable group. This is a good place to store all the values you need to provide:</p>
</li>
<li>
<p>URL of the Azure Function</p>
</li>
<li>
<p>Key to use the function</p>
</li>
<li>
<p>The function header</p>
</li>
<li>
<p>The body – this one is interesting. You need to provide the build number and the GUID of a task in the extension for my Azure Function. It would be really good if both of these could be picked up from the pipeline trying to use the environment. This would allow a single ‘test’ environment to be created for use by all my extensions, in the same way there are only a single ‘private’ and ‘public’ environment. However, there is a problem: the build number is picked up OK, but as far as I can see I cannot access custom pipeline variables, so cannot get the task GUID I need dynamically. I assume this is because this environment entry check is run outside of the pipeline. The only solution I can find is to place the task GUID as a hard-coded value in the check declaration (or, I suppose, in the variable group). The downside of this is it means I have to have an environment dedicated to each extension, each with a different task GUID. Not perfect, but not too much of a problem.</p>
</li>
<li>
<p>In the ‘Advanced’ section, check the check logic</p>
</li>
<li>
<p>In ‘control options’, link to the variable group containing any variables used.</p>
</li>
</ul>
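<p>To illustrate the body, it takes roughly this shape; the property names are whatever your Azure Function expects (these two are assumptions), and the placeholder stands in for the hard-coded task GUID discussed above:</p>
<pre tabindex="0"><code>{
   &#34;buildNumber&#34;: &#34;$(Build.BuildNumber)&#34;,
   &#34;taskGuid&#34;: &#34;&lt;hard-coded task GUID&gt;&#34;
}
</code></pre>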
<h3 id="documentation">Documentation</h3>
<p>The documentation stage again uses a template (<a href="https://github.com/rfennell/AzurePipelines/blob/master/YAMLTemplates/generate-wiki-docs.yml">generate-wiki-docs</a>) and does the following</p>
<ul>
<li>Use the extension and task manifest files to generate YAML usage documentation <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-YAMLGenerator">using one of my tasks</a></li>
<li>Uploads the extension readme file to a WIKI <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks">using another of my tasks</a></li>
<li>Uploads the extension YAML usage file to a WIKI</li>
</ul>
<h3 id="public">Public</h3>
<p>The public stage is also a deployment job and linked to an environment. This environment has an approval set so I have to approve any release of the public version of the extension.</p>
<p>As well as doing the same as the private stage, this stage does the following</p>
<ul>
<li>Same as the private stage</li>
<li>Send a Tweet saying I have released a new version of the extension, using my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-ArtifactDescription-Tasks">Artifact Description</a> extension to get the text of the PR</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-BuildUpdating-Tasks">Update the pipeline's minor build number variable</a> to make sure that the next release has a higher build number</li>
</ul>
<h3 id="summary">Summary</h3>
<p>It took a bit of trial and error to get this going, but I think I have a good solution now. The fact that the bulk of the work is done using shared templates means I should get good reuse of the work I have done. I am sure I will be able to improve the templates as time goes on, but it is a good start.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My Azure DevOps Pipeline is not triggering on a GitHub Pull request - fixed</title>
      <link>https://blog.richardfennell.net/posts/my-azure-devops-pipeline-is-not-triggering-on-a-github-pull-request-fixed/</link>
      <pubDate>Tue, 07 Apr 2020 09:41:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-azure-devops-pipeline-is-not-triggering-on-a-github-pull-request-fixed/</guid>
      <description>&lt;p&gt;I have recently hit a problem that some of my Azure DevOps YAML pipelines, that I use to build my Azure DevOps Pipeline Extensions, are not triggering on a new PR being created on GitHub.&lt;/p&gt;
&lt;p&gt;I did not get to the bottom of why this is happening, but I found a fix.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Check and make a note of any UI-declared variables in the Azure DevOps YAML Pipeline that is not triggering&lt;/li&gt;
&lt;li&gt;Delete the pipeline&lt;/li&gt;
&lt;li&gt;Re-add the pipeline, linking to the YAML file hosted on GitHub. You might be asked to re-authorise the link between Azure DevOps Pipelines and GitHub.&lt;/li&gt;
&lt;li&gt;Re-enter any variables that are declared via the Pipelines UI and save the changes&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Your pipeline should start to be triggered again&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently hit a problem that some of my Azure DevOps YAML pipelines, that I use to build my Azure DevOps Pipeline Extensions, are not triggering on a new PR being created on GitHub.</p>
<p>I did not get to the bottom of why this is happening, but I found a fix.</p>
<ul>
<li>Check and make a note of any UI-declared variables in the Azure DevOps YAML Pipeline that is not triggering</li>
<li>Delete the pipeline</li>
<li>Re-add the pipeline, linking to the YAML file hosted on GitHub. You might be asked to re-authorise the link between Azure DevOps Pipelines and GitHub.</li>
<li>Re-enter any variables that are declared via the Pipelines UI and save the changes</li>
</ul>
<p>Your pipeline should start to be triggered again</p>
]]></content:encoded>
    </item>
    <item>
      <title>Enriching the data available in my Azure DevOps Pipelines Cross Platform Release Notes Task</title>
      <link>https://blog.richardfennell.net/posts/enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/</link>
      <pubDate>Sat, 04 Apr 2020 12:06:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/enriching-the-data-available-in-my-azure-devops-pipelines-cross-platform-release-notes-task/</guid>
      <description>&lt;p&gt;A common request for my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Generate Release Notes Tasks&lt;/a&gt; is to enrich the data available beyond basic build, work item and commit/changeset details. I have resisted these requests as it felt like a never ending journey to start. However, I have now relented and added the option to see any pull request information available.&lt;/p&gt;
&lt;p&gt;This feature is limited: you obviously have to be using artifacts that are linked to a Git repo, and the Git repo has to be an Azure DevOps hosted repository. This won’t meet everyone’s needs but it is a start.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A common request for my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Generate Release Notes Tasks</a> is to enrich the data available beyond basic build, work item and commit/changeset details. I have resisted these requests as it felt like a never ending journey to start. However, I have now relented and added the option to see any pull request information available.</p>
<p>This feature is limited: you obviously have to be using artifacts that are linked to a Git repo, and the Git repo has to be an Azure DevOps hosted repository. This won’t meet everyone’s needs but it is a start.</p>
<h3 id="what-was-already-available">What was already available</h3>
<p>It turns out there was already a means to get a limited set of PR details from a build. You use the form</p>
<pre tabindex="0"><code>**Build Trigger PR Number**: ${buildDetails.triggerInfo[&#39;pr.number&#39;]}
</code></pre><p>or in handlebars format</p>
<pre tabindex="0"><code>**Build Trigger PR Number**: {{lookup buildDetails.triggerInfo &#39;pr.number&#39;}}
</code></pre><h3 id="the-improvements">The improvements</h3>
<p>That said, I have improved the options. There is now a new <code>prDetails</code> object available to the template.</p>
<p>If you use the dump option</p>
<pre tabindex="0"><code>${JSON.stringify(prDetails)}      
</code></pre><p>You can see the fields available</p>
<pre tabindex="0"><code>{
     &#34;repository&#34;: {
         &#34;id&#34;: &#34;bebd0ae2-405d-4c0a-b9c5-36ea94c1bf59&#34;,
         &#34;name&#34;: &#34;VSTSBuildTaskValidation&#34;,
         &#34;url&#34;: &#34;https://richardfennell.visualstudio.com/670b3a60-2021-47ab-a88b-d76ebd888a2f/_apis/git/repositories/bebd0ae2-405d-4c0a-b9c5-36ea94c1bf59&#34;,
         &#34;project&#34;: {
             &#34;id&#34;: &#34;670b3a60-2021-47ab-a88b-d76ebd888a2f&#34;,
             &#34;name&#34;: &#34;GitHub&#34;,
             &#34;description&#34;: &#34;A container for GitHub CI/CD processes&#34;,
             &#34;url&#34;: &#34;https://richardfennell.visualstudio.com/_apis/projects/670b3a60-2021-47ab-a88b-d76ebd888a2f&#34;,
             &#34;state&#34;: &#34;wellFormed&#34;,
             &#34;revision&#34;: 411511726,
             &#34;visibility&#34;: 2,
             &#34;lastUpdateTime&#34;: &#34;2019-10-10T20:35:51.85Z&#34;
         },
         &#34;size&#34;: 9373557,
         &#34;remoteUrl&#34;: &#34;https://richardfennell.visualstudio.com/DefaultCollection/GitHub/_git/VSTSBuildTaskValidation&#34;,
         &#34;sshUrl&#34;: &#34;richardfennell@vs-ssh.visualstudio.com:v3/richardfennell/GitHub/VSTSBuildTaskValidation&#34;,
         &#34;webUrl&#34;: &#34;https://richardfennell.visualstudio.com/DefaultCollection/GitHub/_git/VSTSBuildTaskValidation&#34;
     },
     &#34;pullRequestId&#34;: 4,
     &#34;codeReviewId&#34;: 4,
     &#34;status&#34;: 1,
     &#34;createdBy&#34;: {
         &#34;displayName&#34;: &#34;Richard Fennell (Work MSA)&#34;,
         &#34;url&#34;: &#34;https://spsprodeus24.vssps.visualstudio.com/Ac0efb61e-a937-42a0-9658-649757d55d46/_apis/Identities/b1fce0e9-fbf4-4202-bc09-a290def3e98b&#34;,
         &#34;_links&#34;: {
             &#34;avatar&#34;: {
                 &#34;href&#34;: &#34;https://richardfennell.visualstudio.com/_apis/GraphProfile/MemberAvatars/aad.NzQzY2UyODUtN2Q0Ny03YjNkLTk0ZGUtN2Q0YjA1ZGE5NDdj&#34;
             }
         },
         &#34;id&#34;: &#34;b1fce0e9-fbf4-4202-bc09-a290def3e98b&#34;,
         &#34;uniqueName&#34;: &#34;bm-richard.fennell@outlook.com&#34;,
         &#34;imageUrl&#34;: &#34;https://richardfennell.visualstudio.com/_api/_common/identityImage?id=b1fce0e9-fbf4-4202-bc09-a290def3e98b&#34;,
         &#34;descriptor&#34;: &#34;aad.NzQzY2UyODUtN2Q0Ny03YjNkLTk0ZGUtN2Q0YjA1ZGE5NDdj&#34;
     },
     &#34;creationDate&#34;: &#34;2020-04-04T10:44:59.566Z&#34;,
     &#34;title&#34;: &#34;Added test.txt&#34;,
     &#34;description&#34;: &#34;Added test.txt&#34;,
     &#34;sourceRefName&#34;: &#34;refs/heads/branch2&#34;,
     &#34;targetRefName&#34;: &#34;refs/heads/master&#34;,
     &#34;mergeStatus&#34;: 3,
     &#34;isDraft&#34;: false,
     &#34;mergeId&#34;: &#34;f76a6556-8b4f-44eb-945a-9350124f067b&#34;,
     &#34;lastMergeSourceCommit&#34;: {
         &#34;commitId&#34;: &#34;f43fa4de163c3ee0b4f17b72a659eac0d307deb8&#34;,
         &#34;url&#34;: &#34;https://richardfennell.visualstudio.com/670b3a60-2021-47ab-a88b-d76ebd888a2f/_apis/git/repositories/bebd0ae2-405d-4c0a-b9c5-36ea94c1bf59/commits/f43fa4de163c3ee0b4f17b72a659eac0d307deb8&#34;
     },
     &#34;lastMergeTargetCommit&#34;: {
         &#34;commitId&#34;: &#34;829ab2326201c7a5d439771eef5a57f58f94897d&#34;,
         &#34;url&#34;: &#34;https://richardfennell.visualstudio.com/670b3a60-2021-47ab-a88b-d76ebd888a2f/_apis/git/repositories/bebd0ae2-405d-4c0a-b9c5-36ea94c1bf59/commits/829ab2326201c7a5d439771eef5a57f58f94897d&#34;
     },
     &#34;lastMergeCommit&#34;: {
         &#34;commitId&#34;: &#34;53f393cae4ee3b901bb69858c4ee86cc8b466d6f&#34;,
         &#34;author&#34;: {
             &#34;name&#34;: &#34;Richard Fennell (Work MSA)&#34;,
             &#34;email&#34;: &#34;bm-richard.fennell@outlook.com&#34;,
             &#34;date&#34;: &#34;2020-04-04T10:44:59.000Z&#34;
         },
         &#34;committer&#34;: {
             &#34;name&#34;: &#34;Richard Fennell (Work MSA)&#34;,
             &#34;email&#34;: &#34;bm-richard.fennell@outlook.com&#34;,
             &#34;date&#34;: &#34;2020-04-04T10:44:59.000Z&#34;
         },
         &#34;comment&#34;: &#34;Merge pull request 4 from branch2 into master&#34;,
         &#34;url&#34;: &#34;https://richardfennell.visualstudio.com/670b3a60-2021-47ab-a88b-d76ebd888a2f/_apis/git/repositories/bebd0ae2-405d-4c0a-b9c5-36ea94c1bf59/commits/53f393cae4ee3b901bb69858c4ee86cc8b466d6f&#34;
     },
     &#34;reviewers&#34;: [],
     &#34;url&#34;: &#34;https://richardfennell.visualstudio.com/670b3a60-2021-47ab-a88b-d76ebd888a2f/_apis/git/repositories/bebd0ae2-405d-4c0a-b9c5-36ea94c1bf59/pullRequests/4&#34;,
     &#34;supportsIterations&#34;: true,
     &#34;artifactId&#34;: &#34;vstfs:///Git/PullRequestId/670b3a60-2021-47ab-a88b-d76ebd888a2f%2fbebd0ae2-405d-4c0a-b9c5-36ea94c1bf59%2f4&#34;
}

</code></pre><p>In templates this new object can be used as</p>
<pre tabindex="0"><code>**PR Title**: ${prDetails.title}
</code></pre><p>or in handlebars format</p>
<pre tabindex="0"><code>**PR Details**: {{prDetails.title}}
</code></pre><p>It will be interesting to hear feedback from the real world as opposed to test harnesses.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences setting up Azure Active Directory single sign-on (SSO) integration with GitHub Enterprise</title>
      <link>https://blog.richardfennell.net/posts/experiences-setting-up-azure-active-directory-single-sign-on-sso-integration-with-github-enterprise/</link>
      <pubDate>Mon, 30 Mar 2020 16:29:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-setting-up-azure-active-directory-single-sign-on-sso-integration-with-github-enterprise/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;GitHub is a great system for individuals and OSS communities for both public and private projects. However, corporate customers commonly want more control over their system than the standard GitHub offering. It is for this reason GitHub offers &lt;a href=&#34;https://github.com/enterprise&#34;&gt;GitHub Enterprise&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For most corporates, the essential feature that GitHub Enterprise offers is the use of Single Sign-On (SSO), i.e. allowing users to log in to GitHub using their corporate directory accounts.&lt;/p&gt;
&lt;p&gt;I wanted to see how easy this was to setup when you are using Azure Active Directory (AAD).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>GitHub is a great system for individuals and OSS communities for both public and private projects. However, corporate customers commonly want more control over their system than the standard GitHub offering. It is for this reason GitHub offers <a href="https://github.com/enterprise">GitHub Enterprise</a>.</p>
<p>For most corporates, the essential feature that GitHub Enterprise offers is the use of Single Sign-On (SSO), i.e. allowing users to log in to GitHub using their corporate directory accounts.</p>
<p>I wanted to see how easy this was to setup when you are using Azure Active Directory (AAD).</p>
<p>Luckily there is a <a href="https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/github-tutorial">step by step tutorial from Microsoft</a> on how to set this up. However, though detailed, this tutorial has a strange structure in that it shows the default values, not the correct values. Hence, the tutorial requires close reading; don’t just look at the pictures!</p>
<p>Even with close reading, I still hit a problem, all of my own making, as I went through this tutorial.</p>
<h3 id="the-issue--a-stray--in-a-url">The Issue – a stray / in a URL</h3>
<p>I entered all the AAD URLs and certs as instructed (or so I thought) by the tutorial into the Security page of GitHub Enterprise.</p>
<p>When I pressed the ‘Validate’ button in GitHub, to test the SSO settings, I got an error</p>
<p><em>‘The client has not listed any permissions for &lsquo;AAD Graph&rsquo; in the requested permissions in the client&rsquo;s application registration’</em></p>
<p>This sent me down a rabbit hole looking at user permissions. That wasted a lot of time.</p>
<p>However, it turns out the issue was that I had a // in a URL when it should have been a  /. This was because I had made a cut and paste error when editing the tutorial’s sample URL and adding my organisation details.</p>
<p>Once I fixed this typo the validation worked, I was able to complete the setup, and then I could invite my AAD users to my GitHub Enterprise organisation.</p>
<h3 id="summary">Summary</h3>
<p>So the summary is: if you follow the tutorial, setting up SSO from AAD to GitHub Enterprise is easy enough to do, just be careful over the detail.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A major new feature for my Cross-platform Release Notes Azure DevOps Pipelines Extension&amp;ndash;Handlebars Templating Support</title>
      <link>https://blog.richardfennell.net/posts/a-major-new-feature-for-my-cross-platform-release-notes-azure-devops-pipelines-extension-handlebars-templating-support/</link>
      <pubDate>Wed, 11 Mar 2020 11:48:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-major-new-feature-for-my-cross-platform-release-notes-azure-devops-pipelines-extension-handlebars-templating-support/</guid>
      <description>&lt;p&gt;I recently got a very interesting PR for my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Cross-platform Release Notes Azure DevOps Pipelines Extension&lt;/a&gt; from &lt;a href=&#34;https://github.com/KennethScott&#34;&gt;Kenneth Scott&lt;/a&gt;. He had added a new templating engine to the task, &lt;a href=&#34;https://handlebarsjs.com/&#34;&gt;Handlebars&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Prior to this PR, the templating in the task was done with a line by line evaluation of a template that used my own mark-up. This method worked but has limitations, mostly due to the line by line evaluation model. With Kenneth’s PR, the option was added to write your templates in Handlebars, or stay with my previous templating engine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently got a very interesting PR for my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross-platform Release Notes Azure DevOps Pipelines Extension</a> from <a href="https://github.com/KennethScott">Kenneth Scott</a>. He had added a new templating engine to the task, <a href="https://handlebarsjs.com/">Handlebars</a>.</p>
<p>Prior to this PR, the templating in the task was done with a line by line evaluation of a template that used my own mark-up. This method worked but has limitations, mostly due to the line by line evaluation model. With Kenneth’s PR, the option was added to write your templates in Handlebars, or stay with my previous templating engine.</p>
<h3 id="using-handlebars">Using Handlebars</h3>
<p>If you use Handlebars, the template becomes something like</p>
<pre tabindex="0"><code>## Notes for release  {{releaseDetails.releaseDefinition.name}}
**Release Number**  : {{releaseDetails.name}}
**Release completed** : {{releaseDetails.modifiedOn}}
**Build Number**: {{buildDetails.id}}
**Compared Release Number**  : {{compareReleaseDetails.name}}

### Associated Work Items ({{workItems.length}})
{{#each workItems}}
*  **{{this.id}}**  {{lookup this.fields &#39;System.Title&#39;}}
   - **WIT** {{lookup this.fields &#39;System.WorkItemType&#39;}}
   - **Tags** {{lookup this.fields &#39;System.Tags&#39;}}
{{/each}}

### Associated commits ({{commits.length}})
{{#each commits}}
* **ID {{this.id}}**
   -  **Message:** {{this.message}}
   -  **Committed by:** {{this.author.displayName}}
{{/each}}
</code></pre><p>The whole template is evaluated by the Handlebars engine using its own mark-up to provide a means for looping across arrays and the like.</p>
<p>This seemed a great enhancement to the task. However, we soon realised that it could be better. Handlebars is extensible, so why not allow the extensibility to be used?</p>
<h3 id="using-handlebars-extensions">Using Handlebars Extensions</h3>
<p>I have added extensibility in two ways. Firstly, I have added support for the common <a href="https://github.com/helpers/handlebars-helpers">handlebars-helpers</a> extensions; this adds over 150 helpers. These are accessed in a template as follows</p>
<pre tabindex="0"><code>## To confirm that handlebars-helpers works
The year is {{year}}
We can capitalize &#34;foo bar baz&#34; {{capitalizeAll &#34;foo bar baz&#34;}}
</code></pre><p>I have also added the ability to provide a block of JavaScript as a task parameter that is loaded as a custom Handlebars extension. So, if you add the following block in the task’s <strong>customHandlebarsExtensionCode</strong> parameter</p>
<pre tabindex="0"><code>module.exports = {foo: function () {return &#39;Returns foo&#39;;}};
</code></pre><p>You can access it in templates as</p>
<pre tabindex="0"><code>## To confirm our custom extension works
We can call our custom extension {{foo}}
</code></pre><p>It will be interesting to see how popular this alternative way of templating will be.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where did all my test results go?</title>
      <link>https://blog.richardfennell.net/posts/where-did-all-my-test-results-go/</link>
      <pubDate>Thu, 05 Mar 2020 13:57:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-did-all-my-test-results-go/</guid>
      <description>&lt;h3 id=&#34;problem&#34;&gt;Problem&lt;/h3&gt;
&lt;p&gt;I recently tripped myself up whilst adding SonarQube analysis to a rather complex Azure DevOps build.&lt;/p&gt;
&lt;p&gt;The build has two VsTest steps, both using the same folder for their test result files. When the first VsTest task ran it created the expected .TRX and .COVERAGE files and then published its results to Azure DevOps, but when the second VsTest task ran it overwrote this folder, deleting the files already present, before it generated and published its results.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="problem">Problem</h3>
<p>I recently tripped myself up whilst adding SonarQube analysis to a rather complex Azure DevOps build.</p>
<p>The build has two VsTest steps, both using the same folder for their test result files. When the first VsTest task ran it created the expected .TRX and .COVERAGE files and then published its results to Azure DevOps, but when the second VsTest task ran it overwrote this folder, deleting the files already present, before it generated and published its results.</p>
<p>This meant that the build itself had all the test results published, but when SonarQube looked for the files for analysis only the second set of tests was present, so its analysis was incorrect.</p>
<h3 id="solution">Solution</h3>
<p>The solution was easy, use different folders for each set of test results.</p>
<p>This gave me a build (the key items are shown below) where one VsTest step does not overwrite the previous results before they can be processed by any 3rd party tasks such as SonarQube.</p>
<pre tabindex="0"><code>steps:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: &#39;Prepare analysis on SonarQube&#39;
  inputs:
    SonarQube: Sonarqube
    projectKey: &#39;Services&#39;
    projectName: &#39;Services&#39;
    projectVersion: &#39;$(major).$(minor)&#39;
    extraProperties: |
      # Additional properties that will be passed to the scanner
      sonar.cs.vscoveragexml.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.coveragexml
      sonar.cs.vstest.reportsPaths=$(System.DefaultWorkingDirectory)/**/*.trx

# … other build steps

- task: VSTest@2
  displayName: &#39;VsTest - Internal Services&#39;
  inputs:
    testAssemblyVer2: |
      **\*.unittests.dll
      !**\obj\**
    searchFolder: &#39;$(System.DefaultWorkingDirectory)/src/Services&#39;
    resultsFolder: &#39;$(System.DefaultWorkingDirectory)\TestResultsServices&#39;
    overrideTestrunParameters: &#39;-DeploymentEnabled false&#39;
    codeCoverageEnabled: true
    testRunTitle: &#39;Services Unit Tests&#39;
    diagnosticsEnabled: True
  continueOnError: true

- task: VSTest@2
  displayName: &#39;VsTest - External&#39;
  inputs:
    testAssemblyVer2: |
      **\*.unittests.dll
      !**\obj\**
    searchFolder: &#39;$(System.DefaultWorkingDirectory)/src/ExternalServices&#39;
    resultsFolder: &#39;$(System.DefaultWorkingDirectory)\TestResultsExternalServices&#39;
    vsTestVersion: 15.0
    codeCoverageEnabled: true
    testRunTitle: &#39;External Services Unit Tests&#39;
    diagnosticsEnabled: True
  continueOnError: true

- task: BlackMarble.CodeCoverage-Format-Convertor-Private.CodeCoverageFormatConvertor.CodeCoverage-Format-Convertor@1
  displayName: &#39;CodeCoverage Format Convertor&#39;
  inputs:
    ProjectDirectory: &#39;$(System.DefaultWorkingDirectory)&#39;

- task: SonarSource.sonarqube.6D01813A-9589-4B15-8491-8164AEB38055.SonarQubeAnalyze@4
  displayName: &#39;Run Code Analysis&#39;

- task: SonarSource.sonarqube.291ed61f-1ee4-45d3-b1b0-bf822d9095ef.SonarQubePublish@4
  displayName: &#39;Publish Quality Gate Result&#39;
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>You need to pass a GitHub PAT to create Azure DevOps Agent Images using Packer</title>
      <link>https://blog.richardfennell.net/posts/you-need-to-pass-a-github-pat-to-create-azure-devops-agent-images-using-packer/</link>
      <pubDate>Mon, 02 Mar 2020 11:52:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-need-to-pass-a-github-pat-to-create-azure-devops-agent-images-using-packer/</guid>
      <description>&lt;p&gt;I wrote recently about &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2019/12/21/creating-hyper-v-hosted-azure-devops-private-agents-based-on-the-same-vm-images-as-used-by-microsoft-for-their-hosted-agents/&#34;&gt;Creating Hyper-V hosted Azure DevOps Private Agents based on the same VM images as used by Microsoft for their Hosted Agent&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As discussed in that post, using this model you will recreate your build agent VMs on a regular basis, as opposed to patching them. When I came to do this recently I found that the Packer image generation was failing with errors related to accessing packages.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I wrote recently about <a href="https://blogs.blackmarble.co.uk/rfennell/2019/12/21/creating-hyper-v-hosted-azure-devops-private-agents-based-on-the-same-vm-images-as-used-by-microsoft-for-their-hosted-agents/">Creating Hyper-V hosted Azure DevOps Private Agents based on the same VM images as used by Microsoft for their Hosted Agent</a>.</p>
<p>As discussed in that post, using this model you will recreate your build agent VMs on a regular basis, as opposed to patching them. When I came to do this recently I found that the Packer image generation was failing with errors related to accessing packages.</p>
<p>Initially, I did not read the error message too closely and just assumed it was an intermittent issue, as I had found you sometimes get random timeouts with this process. However, when the problem did not go away after repeated retries I realised I had a more fundamental problem, so read the log properly!</p>
<p>It turns out the issue is that you now have to pass a GitHub PAT that has at least read access to the packages feed, to allow Packer to authenticate with GitHub to read packages.</p>
<p>The process to create the required PAT is as follows</p>
<ol>
<li>In a browser login to GitHub</li>
<li>Click your profile (top right)</li>
<li>Select Settings</li>
<li>Pick Developer Settings</li>
<li>Pick Personal Access Tokens and create a new one that has <strong>read:packages</strong> enabled</li>
</ol>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/03/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/03/image_thumb.png" title="image"></a></p>
<p>Once created, this PAT needs to be passed into Packer. If using the settings JSON file this is just another variable</p>
<pre tabindex="0"><code>{
  &#34;client_id&#34;: &#34;Azure Client ID&#34;,
  &#34;client_secret&#34;: &#34;Client Secret&#34;,
  &#34;tenant_id&#34;: &#34;Azure Tenant ID&#34;,
  &#34;subscription_id&#34;: &#34;Azure Sub ID&#34;,
  &#34;object_id&#34;: &#34;The object ID for the AAD SP&#34;,
  &#34;location&#34;: &#34;Azure location to use&#34;,
  &#34;resource_group&#34;: &#34;Name of resource group that contains Storage Account&#34;,
  &#34;storage_account&#34;: &#34;Name of the storage account&#34;,
  &#34;ssh_password&#34;: &#34;A password&#34;,
  &#34;install_password&#34;: &#34;A password&#34;,
  &#34;commit_url&#34;: &#34;A URL to be saved in a text file on the VHD, usually the URL of the commit the VHD is based on&#34;,
  &#34;github_feed_token&#34;: &#34;A PAT&#34;
}
</code></pre><p>If you are running Packer within a build pipeline, as the other blog post discusses, then the PAT will be another build variable.</p>
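<p>For illustration, a pipeline fragment along these lines could map that secret variable into the Packer run. Note this is a sketch: the variable name, template file name and exact Packer invocation are examples, not taken from the original pipeline.</p>

```yaml
# Illustrative only: GitHubPackagesPAT is assumed to be defined as a
# secret pipeline variable, and windows2019.json is an example template.
steps:
- script: |
    packer build -var "github_feed_token=$(GitHubPackagesPAT)" windows2019.json
  displayName: 'Run Packer image build'
```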
<p>Once this change was made I was able to get Packer to run to completion, as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Registration is open for the Global DevOps Bootcamp 2020 @ Black Marble</title>
      <link>https://blog.richardfennell.net/posts/registration-is-open-for-the-global-devops-bootcamp-2020-black-marble/</link>
      <pubDate>Sat, 01 Feb 2020 14:54:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registration-is-open-for-the-global-devops-bootcamp-2020-black-marble/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://www.eventbrite.com/e/gdbc2020-global-devops-bootcamp-blackmarble-tickets-91833453331&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/02/image.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I am really pleased to say that we at Black Marble are again hosting a venue for this year’s edition of the Global DevOps Bootcamp on Saturday May 30th 2020.&lt;/p&gt;
&lt;p&gt;For those who have not been to a previous GDBC event at Black Marble, or any of the other 70+ venues across the world, what can you expect on the day?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A video keynote from an Industry Leader in the DevOps field&lt;/li&gt;
&lt;li&gt;A local keynote developing the topics of the bootcamp&lt;/li&gt;
&lt;li&gt;The remainder of the day is made up of team-based hands-on exercises.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Last year&amp;rsquo;s content can be &lt;a href=&#34;https://globaldevopsbootcamp.com/home/impressions2019&#34;&gt;seen here&lt;/a&gt;; this year&amp;rsquo;s will be all new.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://www.eventbrite.com/e/gdbc2020-global-devops-bootcamp-blackmarble-tickets-91833453331"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/02/image.png" title="image"></a></p>
<p>I am really pleased to say that we at Black Marble are again hosting a venue for this year’s edition of the Global DevOps Bootcamp on Saturday May 30th 2020.</p>
<p>For those who have not been to a previous GDBC event at Black Marble, or any of the other 70+ venues across the world, what can you expect on the day?</p>
<ul>
<li>A video keynote from an Industry Leader in the DevOps field</li>
<li>A local keynote developing the topics of the bootcamp</li>
<li>The remainder of the day is made up of team-based hands-on exercises.</li>
</ul>
<p>Last year’s content can be <a href="https://globaldevopsbootcamp.com/home/impressions2019">seen here</a>; this year’s will be all new.</p>
<p>It is worth stressing that this event is not a competition. It is a day of learning for people of all levels of experience. We encourage the forming of teams that are cross-skilled and include all levels of experience. The key aims for the day are that everyone learns and has a great time.</p>
<p>Oh, and did I mention it is a FREE event and lunch will be provided.</p>
<p>For more details have a look at the <a href="https://globaldevopsbootcamp.com/">central GDBC 2020 site</a>.</p>
<p>We do have limited spaces so if you are interested in booking your place please <a href="https://www.eventbrite.com/e/gdbc2020-global-devops-bootcamp-blackmarble-tickets-91833453331">register here</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio Online is back and it is an editor this time!</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-online-is-back-and-it-is-an-editor-this-time/</link>
      <pubDate>Fri, 17 Jan 2020 10:07:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-online-is-back-and-it-is-an-editor-this-time/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://visualstudio.microsoft.com/services/visual-studio-online/&#34;&gt;Visual Studio Online&lt;/a&gt; is back. Unlike the previous usage of this name, which was an incarnation of what is now &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devops/&#34;&gt;Azure DevOps Services&lt;/a&gt;, this is actually an editor for code. Just like you might expect it to be!&lt;/p&gt;
&lt;p&gt;The new VSO, which is currently in preview, is a service running in Azure that allows you to in effect run &lt;a href=&#34;https://code.visualstudio.com/&#34;&gt;Visual Studio Code&lt;/a&gt; on a Linux VM. &lt;/p&gt;
&lt;p&gt;Once you have signed into VSO with an MSA and it has created the required Resource Group and VSO Plan in your Azure subscription, you create one or more ‘environments’ that defines the size of the VM to use and which GitHub hosted repo the environment will edit.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://visualstudio.microsoft.com/services/visual-studio-online/">Visual Studio Online</a> is back. Unlike the previous usage of this name, which was an incarnation of what is now <a href="https://azure.microsoft.com/en-gb/services/devops/">Azure DevOps Services</a>, this is actually an editor for code. Just like you might expect it to be!</p>
<p>The new VSO, which is currently in preview, is a service running in Azure that allows you to in effect run <a href="https://code.visualstudio.com/">Visual Studio Code</a> on a Linux VM. </p>
<p>Once you have signed into VSO with an MSA and it has created the required Resource Group and VSO Plan in your Azure subscription, you create one or more ‘environments’ that defines the size of the VM to use and which GitHub hosted repo the environment will edit.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/01/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/01/image_thumb.png" title="image"></a></p>
<p>You then start your environment and get the editor experience you would expect from Visual Studio Code running on a Linux instance, but in your browser.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/01/image-1.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2020/01/image_thumb-1.png" title="image"></a></p>
<p>This certainly opens up more use-cases for editing code that is too complex for the GitHub in-browser editing experience, but for which you don’t want to keep a full local development setup.</p>
<p>Only time will tell how much I use it, but it looks interesting.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A technique for porting PowerShell based Azure DevOps Extensions to Node so they can be run cross-platform without a complete re-write</title>
      <link>https://blog.richardfennell.net/posts/a-technique-for-porting-powershell-based-azure-devops-extensions-to-node-so-they-can-be-run-cross-platform-without-a-complete-re-write/</link>
      <pubDate>Sat, 28 Dec 2019 16:11:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-technique-for-porting-powershell-based-azure-devops-extensions-to-node-so-they-can-be-run-cross-platform-without-a-complete-re-write/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;I&amp;rsquo;ve written a &lt;a href=&#34;https://marketplace.visualstudio.com/search?term=fennell&amp;amp;target=AzureDevOps&amp;amp;category=All%20categories&amp;amp;sortBy=Relevance&#34;&gt;good few extensions for Azure DevOps Services.&lt;/a&gt; Most of the early ones I wrote were written in PowerShell, but of late I have tended to use Typescript (targeting Node.JS) for the added cross-platform support.&lt;/p&gt;
&lt;p&gt;This has led me to consider whether it was worth the effort to convert all my legacy extensions to support cross-platform usage.&lt;/p&gt;
&lt;p&gt;This is of course assuming the tasks the extension contains are useful on a non-Windows platform. There is no point porting a Windows-only tool away from PowerShell.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>I&rsquo;ve written a <a href="https://marketplace.visualstudio.com/search?term=fennell&amp;target=AzureDevOps&amp;category=All%20categories&amp;sortBy=Relevance">good few extensions for Azure DevOps Services.</a> Most of the early ones I wrote were written in PowerShell, but of late I have tended to use Typescript (targeting Node.JS) for the added cross-platform support.</p>
<p>This has led me to consider whether it was worth the effort to convert all my legacy extensions to support cross-platform usage.</p>
<p>This is of course assuming the tasks the extension contains are useful on a non-Windows platform. There is no point porting a Windows-only tool away from PowerShell.</p>
<p>Assuming a conversion is a useful thing, there are two obvious ways to go about it:</p>
<ul>
<li>Completely re-write the task in TypeScript, but I would like to avoid this effort if possible.</li>
<li>Use PowerShell Core; this is the option I decided to experiment with.</li>
</ul>
<h3 id="a-solution">A Solution</h3>
<p>You might think the answer is to just alter the task’s manifest to run PSCore as opposed to PowerShell3. The problem is that the Azure DevOps Agent does not provide support for PSCore, only Node or PowerShell3 execution of scripts.</p>
<p>However, there is a way around this limitation. You can shell a PSCore session from Node, <a href="https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/PowerShellV2/powershell.ts">as is done with the Microsoft PowerShell/PSCore script runner tasks</a>.</p>
<p>I had previously experimented with this technique with my Pester Test Runner. The process I followed was</p>
<ol>
<li>Alter the PowerShell script to accept all the task parameters it previously got via the SDK calls as script parameters</li>
<li>Alter the task manifest to run a Node script</li>
<li>In the new Node wrapper script get all the Azure DevOps variables and then run the old script via a PSCore shell with the variables passed as parameters</li>
</ol>
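<p>As a minimal sketch of step 3 (the script and parameter names here are illustrative; the real runner tasks linked above are more involved), the Node wrapper can build a <code>pwsh</code> argument list from the gathered inputs and then shell the legacy script:</p>

```javascript
// Sketch of a Node wrapper forwarding task inputs to a legacy
// PowerShell script via PowerShell Core. Names are illustrative.

// Build the pwsh argument list from a map of script parameters
function buildPwshArgs(scriptPath, params) {
  const args = ['-NoLogo', '-NoProfile', '-File', scriptPath];
  for (const [name, value] of Object.entries(params)) {
    args.push(`-${name}`, value);
  }
  return args;
}

const args = buildPwshArgs('Invoke-LegacyTask.ps1', {
  scriptFolder: 'src/tests',
  resultsFile: 'results.xml',
});
console.log(args.join(' '));
// → -NoLogo -NoProfile -File Invoke-LegacyTask.ps1 -scriptFolder src/tests -resultsFile results.xml

// The wrapper would then run the script and check the exit code:
// const { spawnSync } = require('child_process');
// const run = spawnSync('pwsh', args, { stdio: 'inherit' });
// if (run.status !== 0) { /* mark the task as failed */ }
```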
<p>This had worked surprisingly well; the only negative was that all log messages seem to gain an extra line break, but I can live with that. Oh, and yes, before you ask, there is a new cross-platform version of the Pester test runner on the way, but it is moving home. More on that soon.</p>
<p>However, when I tried the same technique on another extension, specifically my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-BuildUpdating-Tasks">Build Updating one</a>, I hit a problem.</p>
<p>All the Pester task’s operations are against the file system, there is no communication back to the Azure DevOps Server. This is not true for the Build tasks. They needed to talk to the Azure DevOps API. To do this they have to get the agent’s access token. This was done using the PowerShell Azure DevOps SDK, which in this new way of working is not loaded (the agent previously did it automatically when executing a script via PowerShell3).</p>
<p>After a bit of experimentation trying to load the PowerShell Azure DevOps SDK inside my PowerShell script inside a Node wrapper (a bad idea), I found the best option was to use the Azure DevOps Node SDK to get the token in the wrapper script and pass it into the PowerShell script as an extra parameter (it is then passed into all the functions as needed). This is more of an edit than I wanted, but not too much work, far easier than a complete rewrite.</p>
<p>You can see an example of a <a href="https://github.com/rfennell/AzurePipelines/blob/master/Extensions/BuildUpdating/BuildRetensionTask/src/BuildRetensionTask.ts">wrapper here</a>.</p>
<h3 id="in-summary">In Summary</h3>
<p>So I now have a mechanism to port extensions for cross-platform usage without a complete re-write, hence adding value to what has already been created. I guess I have found some OSS work for 2020.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creating Hyper-V hosted Azure DevOps Private Agents based on the same VM images as used by Microsoft for their Hosted Agents</title>
      <link>https://blog.richardfennell.net/posts/creating-hyper-v-hosted-azure-devops-private-agents-based-on-the-same-vm-images-as-used-by-microsoft-for-their-hosted-agents/</link>
      <pubDate>Sat, 21 Dec 2019 15:36:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/creating-hyper-v-hosted-azure-devops-private-agents-based-on-the-same-vm-images-as-used-by-microsoft-for-their-hosted-agents/</guid>
      <description>&lt;h3 id=&#34;introduction&#34;&gt;Introduction&lt;/h3&gt;
&lt;p&gt;There are times when you need to run Private Azure DevOps agents as opposed to using one of the hosted ones provided by Microsoft. This could be for a variety of reasons, including needing to access resources inside your corporate network or needing to have a special hardware specification or set of software installed on the agent.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/12/image.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/12/image_thumb.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If using such private agents, you really need to have an easy way to provision them. This is so that all your agents are standardised and easily re-creatable. Firstly you don’t want build agents with software on them you can’t remember installing or patching. This is just another form of the “works on one developer’s machine but not another” problem. Also if you have the means to replace the agents very regularly and reliably you can avoid the need to patch them; you can just replace them with newer VMs created off latest patched base Operating System images and software releases.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>There are times when you need to run Private Azure DevOps agents as opposed to using one of the hosted ones provided by Microsoft. This could be for a variety of reasons, including needing to access resources inside your corporate network or needing to have a special hardware specification or set of software installed on the agent.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/12/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/12/image_thumb.png" title="image"></a></p>
<p>If using such private agents, you really need to have an easy way to provision them. This is so that all your agents are standardised and easily re-creatable. Firstly you don’t want build agents with software on them you can’t remember installing or patching. This is just another form of the “works on one developer’s machine but not another” problem. Also if you have the means to replace the agents very regularly and reliably you can avoid the need to patch them; you can just replace them with newer VMs created off latest patched base Operating System images and software releases.</p>
<p>Microsoft uses <a href="https://packer.io/">Packer</a> to build the VM images into Azure Storage. Luckily, Microsoft have open sourced their build tooling process and configuration; you can find the resources on <a href="https://github.com/actions/virtual-environments">GitHub</a>.</p>
<p>A fellow MVP, Wouter de Kort, has done an <a href="https://wouterdekort.com/2018/02/25/build-your-own-hosted-vsts-agent-cloud-part-1-build/">excellent series of posts</a> on how to use these Packer tools to build your own Azure hosted Private Agents.</p>
<p>I don’t propose to go over that again. In this post, I will discuss what needs to be done to use these tools to create private agents on your own Hyper-V hardware.</p>
<p>By this point you are probably thinking ‘could this be done with containers? They are designed to allow the easy provisioning of things like agents’.</p>
<p>Well, the answer is yes, that is an option. Microsoft provides both container and VM based agents and has only recently split the <a href="https://github.com/microsoft/azure-pipelines-agent">repo</a> to separate the container creation logic from the VM creation logic. The container logic remains in the <a href="https://github.com/microsoft/azure-pipelines-agent">old GitHub home</a>. However, in this post I am focusing on VMs, so will be working against the <a href="https://github.com/microsoft/azure-pipelines-agent">new home for the VM logic</a>.</p>
<h3 id="preparation--getting-ready-to-run-packer">Preparation – Getting Ready to run Packer</h3>
<h4 id="copy-the-microsoft-repo">Copy the Microsoft Repo</h4>
<p>Microsoft’s needs are not ours, so we wanted to make some small changes to the way that Packer builds VMs. The key changes are:</p>
<ul>
<li>We want to add some scripts to the repo to help automate our process.</li>
<li>We don’t, at this time, make much use of Docker, so don’t bother to pre-cache the Docker images in the agent. This speeds up the image generation and keeps the VMs VHD smaller.</li>
</ul>
<p>The way we manage these changes is to <a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/import-git-repository?view=azure-devops">import</a> the Microsoft repo into our Azure DevOps Services instance. We can keep our copy <a href="https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork">up to date</a> by setting an upstream remote reference and from time to time merging in Microsoft’s changes, but more on that later.</p>
<p>All our changes are made on our own long-lived branch; we PR any revisions we make into this branch.</p>
<p>The aim is to not alter the main Microsoft Packer JSON definition, as sorting out a three-way merge when both their version and our version of the main JSON file have been updated is harder than I would like. Rather, if we don’t want a feature installed we add <em>’return $true’</em> at the start of the PowerShell script that installs the feature, thus allowing Packer to call the script but skip the actions in it, without the need to edit the controlling JSON file.</p>
<p>This way of working allows us to update the master branch from the upstream repo to get the Microsoft changes, and then to regularly rebase our changes onto the updated master.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/12/image-1.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/12/image_thumb-1.png" title="image"></a></p>
<h4 id="a-local-test-of-packer">A local test of Packer</h4>
<p>It is a good idea to test out the Packer build from a development PC to make sure you have all the Azure settings correct. This is done using a command along the lines of</p>
<pre tabindex="0"><code>packer.exe build -var-file=&#34;azurepackersettings.json&#34; -on-error=ask &#34;Windows2016-Azure.json&#34;
</code></pre><p>Where <em>‘Windows2016-Azure.json’</em> is the Packer definition and <em>‘azurepackersettings.json’</em> is the user configuration containing the following values. See the <a href="https://www.packer.io/docs/builders/azure-arm.html">Packer documentation for more details</a></p>
<pre tabindex="0"><code>{
&#34;client_id&#34;: &#34;Azure Client ID&#34;,
&#34;client_secret&#34;: &#34;Client Secret&#34;,
&#34;tenant_id&#34;: &#34;Azure Tenant ID&#34;,
&#34;subscription_id&#34;: &#34;Azure Sub ID&#34;,
&#34;object_id&#34;: &#34;The object ID for the AAD SP&#34;,
&#34;location&#34;: &#34;Azure location to use&#34;,
&#34;resource_group&#34;: &#34;Name of resource group that contains Storage Account&#34;,
&#34;storage_account&#34;: &#34;Name of the storage account&#34;,
&#34;ssh_password&#34;: &#34;A password&#34;,
&#34;install_password&#34;: &#34;A password&#34;,
&#34;commit_url&#34;: &#34;A URL to be saved in a text file on the VHD, usually the URL of the commit the VHD is based on&#34;
}
</code></pre><p>If all goes well you should end up with a SysPrep’d VHD in your storage account after a few hours.</p>
<p><strong>Note:</strong> You might wonder why we don’t try to build the VM locally straight onto our Hyper-V infrastructure. Packer does have a <a href="https://www.packer.io/docs/builders/hyperv-iso.html">Hyper-V ISO builder</a> but I could not get it working. Firstly, finding an up to date patched Operating System ISO is not that easy, and I wanted to avoid having to run Windows Update as this really slows the creation process. Also the process kept stalling as it could not seem to get a WinRM session; when I looked, this seemed to be <a href="https://github.com/taliesins/packer-baseboxes/issues/16">something to do with Hyper-V VNet switches</a>. In the end, I decided it was easier just to build to Azure storage. This also had the advantage of requiring fewer changes to the Microsoft Packer definitions, making it easier to keep our branch up to date.</p>
<h3 id="pipeline-process--preparation-stages">Pipeline Process – Preparation Stages</h3>
<p>The key aim was to automate the updating of the build images. So we aimed to do all the work required inside an <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;tabs=yaml">Azure DevOps multistage pipeline</a>. How you might choose to implement such a pipeline will depend on your needs, but I suspect it will follow a similar flow to ours.</p>
<ol>
<li>Generate a Packer VHD</li>
<li>Copy the VHD locally</li>
<li>Create a new agent VM from the VHD</li>
<li>Repeat step 3. a few times</li>
</ol>
<p>There is a ‘which comes first, the chicken or the egg’ question here: how do we create the agent on which to run the agent creation?</p>
<p>In our case, we have a special manually created agent that is run on the Hyper-V host that the new agents will be created on. This has some special configuration which I will discuss further below.</p>
<h4 id="stage-1--update-our-repo">Stage 1 – Update our repo</h4>
<p>As the pipeline’s source is our copy of the repo (and it targets our branch), the pipeline will automatically get the latest version of our Packer configuration. However, there is a very good chance Microsoft will have updated their upstream repo. We could of course update our repo manually as mentioned above, and we do this from time to time. However, just to make sure we are up to date, the pipeline also fetches, merges and rebases our branch on its local copy. To do this it does the following</p>
<ol>
<li>Adds the Microsoft repo as an upstream remote</li>
<li>Fetches the latest upstream/master changes and merges them onto origin/master</li>
<li>Rebases our working branch onto the updated origin/master</li>
</ol>
<p>Assuming this all works, and we have not messed up a commit causing a three-way merge that blocks the scripts, we should have all Microsoft’s latest settings, e.g. packages, base images etc., plus our customisations.</p>
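<p>The three sync steps can be sketched with plain Git commands. The following is a runnable illustration only: it uses throw-away local repos as stand-ins for the Microsoft (upstream) and imported (origin) repos, and all branch, file and commit names are invented for the sketch.</p>

```shell
# Sketch of the Stage 1 repo sync. Local throw-away repos stand in for the
# Microsoft repo (upstream) and our imported copy, so the steps can be run
# anywhere; names and contents are illustrative only.
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for the Microsoft repo
git init -q upstream
git -C upstream symbolic-ref HEAD refs/heads/master
(cd upstream \
  && git config user.name demo && git config user.email demo@example.com \
  && echo 'install tooling v1' > install-tools.ps1 \
  && git add . && git commit -q -m 'microsoft: base image scripts')

# Our imported copy, with a long-lived customisation branch
git clone -q upstream ours
cd ours
git config user.name demo && git config user.email demo@example.com
git checkout -q -b our-customisations
echo 'return $true' > skip-docker-cache.ps1   # the feature-skip trick
git add . && git commit -q -m 'ours: skip docker image caching'

# Meanwhile Microsoft updates their repo...
(cd ../upstream \
  && echo 'install tooling v2' > install-tools.ps1 \
  && git commit -q -am 'microsoft: new toolset versions')

# 1. Add the Microsoft repo as an upstream remote
git remote add upstream ../upstream
# 2. Fetch the latest upstream/master and merge it onto origin's master
git fetch -q upstream
git checkout -q master
git merge -q --ff-only upstream/master
# 3. Rebase our working branch onto the updated master
git checkout -q our-customisations
git rebase -q master
git log --oneline
```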
<h4 id="stage-2--run-packer">Stage 2 – Run Packer</h4>
<p>Next, we need to run Packer to generate the VHD. Luckily there is <a href="https://marketplace.visualstudio.com/items?itemName=riezebosch.Packer">a Packer extension in the Marketplace</a>. This provides two tasks that we use to</p>
<ol>
<li>Install the Packer executable</li>
<li>Run Packer, passing in all the values used in the <em>azurepackersettings.json</em> file for a local test (stored securely as Azure DevOps pipeline variables) plus the details of an Azure subscription.</li>
</ol>
<p>Being run within a pipeline does nothing to speed this up, so this stage is still slow, taking hours. However, once it has completed we don’t need to run it again, so we have this stage set for conditional execution based on a pipeline variable, allowing us to skip the step if it has already completed. Very useful for testing.</p>
<h4 id="stage-3--copy-the-vhd-to-a-local-file-share">Stage 3 – Copy the VHD to a Local File Share</h4>
<p>As we are building local private agents we need the VHD file stored locally i.e. copied down to a local UNC share. This is done with some PowerShell that runs the <a href="https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest">Azure CLI</a>. It finds the newest VHD in the Azure Storage account and copies it locally; we assume we are the only thing creating VHDs in the storage account and that the previous stage has just completed.</p>
<p>Again this is slow; it can take many hours depending on how fast your internet connection is. Once the VHD file is downloaded, we create a metadata file containing the name of the profile it can be used with, e.g. for a VS2017 or VS2019 agent, and a calculated VHD file checksum; more details on both of these below.</p>
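<p>The metadata step can be sketched as follows. This is an illustration only: our actual script is PowerShell, and the file names, JSON layout and choice of SHA-256 here are assumptions for the sketch, not the exact format we use.</p>

```shell
# Sketch of the Stage 3 metadata step: after the VHD download, record the
# profile the VHD supports and a checksum that can later be used to detect
# updates. File names and metadata layout are illustrative only.
set -e
dir=$(mktemp -d)
cd "$dir"

VHD="VS2019-Agent.vhd"
printf 'stand-in VHD content' > "$VHD"   # stand-in for the multi-GB download

# Calculate the checksum and write it plus the profile name alongside the VHD
CHECKSUM=$(sha256sum "$VHD" | cut -d' ' -f1)
cat > "${VHD}.metadata.json" <<EOF
{
  "profile": "VS2019",
  "checksum": "$CHECKSUM"
}
EOF
cat "${VHD}.metadata.json"
```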
<p>Now again, as this stage is slow and, once completed, does not need to run again, we have conditional execution based on a second build variable so we can skip the step if it is not needed.</p>
<p>If all runs OK, then at this point we have a local copy of a SysPrep’d VHD and the preparation phase can be considered over. These stages need to be completed only once for any given generation of an agent.</p>
<h3 id="pipeline-process--deployment-stages">Pipeline Process – Deployment Stages</h3>
<p>At this point we have a SysPrep’d VHD, but we don’t want to have to generate each agent by hand, completing the post-SysPrep mini setup and installing the Azure DevOps agent.</p>
<p>To automate this configuration process we use <a href="https://github.com/VirtualEngine/Lability">Lability</a>. This is a PowerShell tool that wraps PowerShell’s <a href="https://docs.microsoft.com/en-us/powershell/scripting/dsc/overview/overview?view=powershell-6">Desired State Configuration (DSC)</a>. Our usage of Lability and the wrapper scripts we use are discussed in <a href="https://blogs.blackmarble.co.uk/rhepworth/2017/03/02/define-once-deploy-everywhere-sort-of/">this post</a> by a colleague and fellow MVP, Rik Hepworth. However, the short summary is that Lability allows you to create an ‘environment’ which can include one or more VMs. In our case, we have a single VM in our environment, so the terms are interchangeable in this post.</p>
<p>Each VM in an environment is based on one or more master disk images. Each instance of a VM uses its own Hyper-V diff disk off its master disk, thus greatly reducing the disk space required. This is very useful when adding multiple virtually identical agent VMs to a single Hyper-V host.</p>
<p>A Lability environment allows us to have a definition of what a build VM is, i.e. what its base VHD image is, how much memory it has, whether there are any extra disks, how many CPU cores it has, and so on. It also allows us to install software, in our case the Azure DevOps agent.</p>
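<p>As a rough illustration, a single-node Lability configuration data file covering the kind of settings described above might look like the following. The <em>Lability_*</em> property names follow Lability’s node configuration conventions, but the node name, media id and values here are invented for this sketch rather than taken from our definitions; see the Lability documentation for the full list of properties.</p>

```powershell
@{
    AllNodes = @(
        @{
            NodeName                = 'B1BMAgent2019'
            # Base VHD media (the Packer-built image registered with Lability)
            Lability_Media          = 'BMAgent2019'
            # VM hardware settings
            Lability_StartupMemory  = 8GB
            Lability_ProcessorCount = 4
            # Hyper-V VNet switch the agent is attached to
            Lability_SwitchName     = 'B1-Switch'
        }
    )
}
```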
<p>All the Lability definitions are stored in a separate Git repo. We have to make sure the current Lability definitions are already installed along with the Lability tools on the Azure DevOps agent that will be running these stages of the deployment pipeline. We do this by hand on our one ‘special agent’, but it could be automated.</p>
<p>Remember, in our case, this ‘special agent’ is actually domain-joined, unlike all the agents we are about to create, and runs on the Hyper-V host where we will be deploying the new VMs. As it is domain joined it can get to the previously downloaded SysPrep’d VHD and metadata file on a network UNC share. We are not too worried about the effort of keeping the Lability definitions up to date as they very rarely change; all changes tend to be in the Packer generated base VHD.</p>
<p>It should be remembered that this part of the deployment is a repeatable process, but we don’t just want to keep registering more and more agents. Before we add a new generation agent we want to remove an old generation one, cycling old agents out of the system and keeping things tidy.</p>
<p>We have experimented with the naming of Lability environments to make it easier to keep things tidy. Currently, we provide two parameters to our Lability configuration</p>
<ul>
<li>Prefix – A short code to identify the role of the agent, e.g. ‘B’ for generic build agents and ‘BT’ for ones with the base features plus BizTalk</li>
<li>Index – This number does two jobs. The first is to identify the environment in a set of environments with the same Prefix. It is also used to work out which Hyper-V VNet the new environment should be attached to on the Hyper-V host. Lability automatically deals with the creation of these VNets if not present.</li>
</ul>
<p>So on our system, for example, a VM will end up with a name in the form <em>B1BMAgent2019</em>, which means</p>
<ul>
<li>B - It is a generic agent</li>
<li>1 – It is on the subnet 192.168.254.0, and is the first of the B group of agents</li>
<li>BMAgent2019 – It is based on our VS2019 VHD image</li>
</ul>
<p><strong>Note:</strong> When an Azure DevOps Agent is registered with Azure DevOps, we also append a random number, based on the time, to the end of the agent name. This allows two VMs with the same prefix and index, but on different Hyper-V hosts, to be registered at the same time, or multiple agents to run on the same VM. In reality, we have not used this feature; we have ended up using a unique prefix and index across our agent estate with a single agent per VM.</p>
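<p>The naming scheme boils down to simple string composition. A minimal sketch, with all values illustrative:</p>

```shell
# Sketch of the agent naming scheme: the Hyper-V VM name is
# prefix + index + image name, and the registered Azure DevOps agent name
# gets a time-based suffix so duplicate prefix/index pairs cannot collide.
PREFIX='B'            # role short code, e.g. B = generic build agent
INDEX=1               # position in the group, also selects the VNet
IMAGE='BMAgent2019'   # base VHD image the VM was created from

VMNAME="${PREFIX}${INDEX}${IMAGE}"    # -> B1BMAgent2019
AGENTNAME="${VMNAME}-$(date +%s)"     # e.g. B1BMAgent2019-1575712345
echo "$AGENTNAME"
```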
<h4 id="stage-4--disable-the-old-agent-then-remove-it">Stage 4 – Disable the old agent then remove it</h4>
<p>The first deployment step is done with a PowerShell script. We check to see if there is an agent registered with the current Prefix and Index. If there is, we disable it via the Azure DevOps REST API. This will not stop the current build, but will stop the agent picking up a new one when the current build completes.</p>
<p>Once the agent is disabled we keep polling it, via the API, until we see it go idle. Once the agent is idle we can use the Azure DevOps API to delete the agent’s registration on Azure DevOps.</p>
<script src="https://gist.github.com/rfennell/fcf23a63d19d76a71ead4ec58096d44f.js"></script>
<h4 id="stage-5--remove-the-old-environment">Stage 5 – Remove the old Environment</h4>
<p>Once the agent is no longer registered with Azure DevOps we can remove the environment running it. This is a Lability command that we wrap in PowerShell scripts</p>
<script src="https://gist.github.com/rfennell/d558549b853297db5a1ea904b39cc566.js"></script>
<p>This completely removes the Hyper-V VM and its diff disks that store its data, a very tidy process.</p>
<h4 id="stage-6--update-lability-settings">Stage 6 – Update Lability Settings</h4>
<p>I said previously that we rarely need to update the Lability definitions. There is one exception, that is the reference to the base VHD. We need to update this to point to the copy of the Packer generated SysPrep’d VHD on the local UNC file share.</p>
<p>We use another PowerShell script to handle this. It scans the UNC share for metadata files to find the one containing the requested media type, e.g. VS2017 or VS2019 (we only keep one of each type there). It then registers this VHD in Lability using the VHD file path and the previously calculated checksum. Lability uses the checksum to work out if the file has been updated.</p>
<script src="https://gist.github.com/rfennell/f1a0240639e3064f266de0d41f46aaf7.js"></script>
<h4 id="stage-7---deploy-new-environment">Stage 7 - Deploy New Environment</h4>
<p>So all we have to do at this point is ask Lability to create a new environment, passing the parameters i.e. environment definition, prefix and any override parameters (the Azure DevOps Agent configuration) into a wrapper script.</p>
<script src="https://gist.github.com/rfennell/fb19cb0cd4c6dbd32d041b020a237093.js"></script>
<p>When this script is run, it triggers Lability to create a new VM using an environment configuration.</p>
<script src="https://gist.github.com/rfennell/70f4013a09e9c111d766861f01878b83.js"></script>
<p>Lability’s first step is to create the VNet if not already present.</p>
<p>It then checks, using the checksum, whether the base SysPrep’d VHD has already been copied to the Hyper-V host. If it has not, this is done before continuing. This can take a while but is only done once.</p>
<p>Next, the environment (our agent VM) is created, firstly the VM settings are set e.g. CPU &amp; Memory and then the Windows mini setup is handled by DSC. This sets the following</p>
<ul>
<li>Administrator user Account and Password</li>
<li>Networking; here we have to rename an Ethernet adapter. We have seen the name of the first Ethernet adapter change across different versions of the Packer image, so to make our lives easier we rename the primary adapter connected to the VNet to a known value.</li>
<li>Swap disk; we set this to let the Operating System manage it, as the default on the Packer image is to use a dedicated D: drive, which we don’t have.</li>
<li>Create a dedicated drive E: for the agent.</li>
<li>Download, install and configure the Azure DevOps agent</li>
</ul>
<p>DSC handles any reboots required.</p>
<p>After a few minutes, you should see a new registered agent in the requested Azure DevOps Agent Pool.</p>
<h4 id="stage-8--add-capabilities">Stage 8 – Add Capabilities</h4>
<p>Our builds make use of Azure DevOps <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&amp;tabs=browser#capabilities">user capabilities</a> to target them at the correct type of agent. We use yet another PowerShell script that waits until the new agent has been registered and then adds our custom capabilities from a comma-separated string parameter.</p>
<script src="https://gist.github.com/rfennell/7bb96ce9de54425bed083919945bd615.js"></script>
<p>A little tip here: a side effect of our Lability configuration is that all the agents have the same machine name. This can make finding the host they are on awkward, especially if you have a few Hyper-V hosts. To address this we add a capability containing the Hyper-V host’s name, purely to make finding the VM easier if we have to.</p>
<h4 id="stage-9--copy-capabilities">Stage  9 – Copy Capabilities</h4>
<p>We have seen that some of the Azure DevOps tasks we use have demands that are not met by the System Capabilities. The classic is a task requiring a value for the capability ‘java’ or ‘jdk’ when the one present on the agent is ‘JAVA_HOME’.</p>
<p>To address this, rather than adding our own capability, which might not point to the correct location, we copy an existing capability that has the correct value. Again this is done with a PowerShell script that takes a string parameter</p>
<script src="https://gist.github.com/rfennell/bec7899a85f5079aadde4b405d515934.js"></script>
<h3 id="so-what-do-we-end-up-with">So what do we end up with?</h3>
<p>When all this completes we have a running private agent that has all the features of the Microsoft hosted ones. As Microsoft adds new functionality or patches its agent images, as long as we regenerate our Packer images, we get the same features.</p>
<p>At this point in time, we have chosen to add any extra software we require after the end of this process, as opposed to within it. In our case, this is basically BizTalk 2013 or BizTalk 2016, each on a single agent in our pool. Again we do this with a series of scripts, but run manually this time. We would like to fully automate the process, but BizTalk does not lend itself to easy installation automation. So, after a good bit of experimentation, we decided the best option for now was to keep our basic build process as close to the Microsoft Packer images as possible to minimise merge issues, and worry about BizTalk later. As we only have one BizTalk 2013 and one BizTalk 2016 agent, the cost of finishing off manually was not too high.</p>
<h3 id="where-do-we-go-from-here">Where do we go from here?</h3>
<p>We now have a process that is automated end to end. It can be ‘a little brittle’, but as all the stages tidy up after themselves, rerunning jobs costs nothing but time.</p>
<p>We still have not decided on a final workflow for the replacement of agents. At this time we use manual approvals before deploying an agent. I am sure this will change as we allow this process to mature.</p>
<p>It is a good starting point.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A fix for Lability &amp;lsquo;Datafile not found&amp;rsquo; error</title>
      <link>https://blog.richardfennell.net/posts/a-fix-for-lability-datafile-not-found-error/</link>
      <pubDate>Sat, 07 Dec 2019 15:36:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-fix-for-lability-datafile-not-found-error/</guid>
      <description>&lt;h3 id=&#34;issue&#34;&gt;Issue&lt;/h3&gt;
&lt;p&gt;I have been busy automating the provision of our private Azure DevOps Agents using &lt;a href=&#34;https://packer.io/&#34;&gt;Packer&lt;/a&gt; and &lt;a href=&#34;https://github.com/VirtualEngine/Lability&#34;&gt;Lability&lt;/a&gt;; a more detailed blog post is on the way. All has been going OK on my test rig, but when it came to running the automation pipeline on our main build Hyper-V host I got an error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&amp;gt; Get-VmcConfigurationData : Datafile Environments\BuildAgent-VS2017\BuildAgent-VS2017.psd1 NOT FOUND. Exiting&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;But the file was there!&lt;/p&gt;
&lt;p&gt;I checked the default Lability paths, but all these looked OK, and none pointed to my environment location on C: anyway&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="issue">Issue</h3>
<p>I have been busy automating the provision of our private Azure DevOps Agents using <a href="https://packer.io/">Packer</a> and <a href="https://github.com/VirtualEngine/Lability">Lability</a>; a more detailed blog post is on the way. All has been going OK on my test rig, but when it came to running the automation pipeline on our main build Hyper-V host I got an error</p>
<p><em>&gt; Get-VmcConfigurationData : Datafile Environments\BuildAgent-VS2017\BuildAgent-VS2017.psd1 NOT FOUND. Exiting</em></p>
<p>But the file was there!</p>
<p>I checked the default Lability paths, but all these looked OK, and none pointed to my environment location on C: anyway</p>
<p><em>&gt; Get-LabHostDefault</em></p>
<p><em>ConfigurationPath : D:\Virtualisation\Configuration<br>
DifferencingVhdPath : D:\Virtualisation\VMVirtualHardDisks<br>
HotfixPath : D:\Virtualisation\Hotfix<br>
IsoPath : D:\Virtualisation\ISOs<br>
ModuleCachePath : D:\Virtualisation\Modules<br>
ParentVhdPath : D:\Virtualisation\MasterVirtualHardDisks<br>
RepositoryUri :</em> <em><a href="https://server/nuget/PowerShell/package">https://server/nuget/PowerShell/package</a></em><br>
<em>ResourcePath : D:\Virtualisation\Resources<br>
ResourceShareName : Resources<br>
DisableLocalFileCaching : False<br>
DisableSwitchEnvironmentName : True<br>
EnableCallStackLogging : False<br>
DismPath : C:\Windows\System32\WindowsPowerShell\v1.0\Modules\Dism\Microsoft.Dism.PowerShell.dll</em></p>
<h3 id="solution">Solution</h3>
<p>After a bit of digging in the Lability PSM files I found the problem was the call</p>
<p><em>&gt; Get-PSFConfigValue -Fullname &ldquo;VMConfig.VMConfigsPath&rdquo;</em></p>
<p>This returned nothing. A check on my development system showed this should return <em>C:\VmConfigs</em>, so I had a broken Lability install.</p>
<p>So I tried the obvious fix, which was to set the missing value</p>
<p><em>&gt; Set-PSFConfig -FullName &ldquo;VMConfig.VMConfigsPath&rdquo; -Value C:\VmConfigs</em></p>
<p>And it worked; my Lability installs ran without a problem.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Major enhancements to my Azure DevOps Cross Platform Release Notes Extension</title>
      <link>https://blog.richardfennell.net/posts/major-enhancements-to-my-azure-devops-cross-platform-release-notes-extension/</link>
      <pubDate>Wed, 04 Dec 2019 10:16:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/major-enhancements-to-my-azure-devops-cross-platform-release-notes-extension/</guid>
      <description>&lt;p&gt;Over the past few days I have published two major enhancements to my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Azure DevOps Cross Platform Release Notes Extension&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;added-support-for-builds&#34;&gt;Added Support for Builds&lt;/h3&gt;
&lt;p&gt;Prior to version 2.17.x this extension could only be used in Releases. This was because it used Release specific calls provided in the Microsoft API to work out the work items and changesets/commits associated with the Release. This is unlike my older &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task&#34;&gt;PowerShell based Release Notes Extension&lt;/a&gt; which was initially developed for Builds and only later enhanced to  work in Releases, but achieved this using my own logic to iterate across Builds associated with Releases to work out the associations. With the advent of &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;amp;tabs=yaml&#34;&gt;YAML multistage Pipelines&lt;/a&gt; the difference between a Build and a Release is blurring, so I thought it high time to add Build support to my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Cross Platform Release Notes Extension&lt;/a&gt;. Which it now does.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the past few days I have published two major enhancements to my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Azure DevOps Cross Platform Release Notes Extension</a>.</p>
<h3 id="added-support-for-builds">Added Support for Builds</h3>
<p>Prior to version 2.17.x this extension could only be used in Releases. This was because it used Release specific calls provided in the Microsoft API to work out the work items and changesets/commits associated with the Release. This is unlike my older <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">PowerShell based Release Notes Extension</a> which was initially developed for Builds and only later enhanced to  work in Releases, but achieved this using my own logic to iterate across Builds associated with Releases to work out the associations. With the advent of <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&amp;tabs=yaml">YAML multistage Pipelines</a> the difference between a Build and a Release is blurring, so I thought it high time to add Build support to my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross Platform Release Notes Extension</a>. Which it now does.</p>
<h3 id="adding-tag-filters">Adding Tag Filters</h3>
<p>In the <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross Platform Release Notes Extension</a> you have been able to filter the work items returned in a generated document for a good while, but the filter was limited to a logical AND, i.e. if the filter was</p>
<blockquote>
<p>@@WILOOP:TAG1:TAG2@@</p></blockquote>
<p>All work items matched would have to have both the TAG1 and TAG2 tags set. Since 2.18.x there is now the option of a logical AND or an OR.</p>
<ul>
<li>@@WILOOP:TAG1:TAG2@@ matches work items that have all tags (legacy behaviour for backward compatibility)</li>
<li>@@WILOOP[ALL]:TAG1:TAG2@@ matches work items that have all tags</li>
<li>@@WILOOP[ANY]:TAG1:TAG2@@ matches work items that have any of the tags</li>
</ul>
<p><em>Update 5th Dec</em> In 2.19.x there is also the option to filter on any field in a work item as well as tags</p>
<ul>
<li>@@WILOOP[ALL]:System.Title=This is a title:TAG 1@@</li>
</ul>
<p><a href="https://github.com/rfennell/AzurePipelines/wiki/GenerateReleaseNotes---Node-based-Cross-Platform-Task">For more details see the extension’s WIKI page</a></p>
<h3 id="futures">Futures</h3>
<p>My plan is to at some point deprecate my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">PowerShell based Release Notes Extension</a>. I have updated the documentation for this older extension to state as much and to recommend the use of the newer <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Cross Platform Release Notes Extension</a>. At this time there is little that the older extension can do that cannot be done by the newer one. Moving to it, I think, makes sense for everyone for the…</p>
<ul>
<li>Cross platform support</li>
<li>Use of the same means to find the associated items as the Microsoft UI to avoid confusion</li>
<li>Enhanced work item filtering</li>
</ul>
<p>Let’s see if the new features and updated advisory documentation affect the two extensions’ relative download statistics.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot queue a new build on Azure DevOps Server 2019.1 due to the way a SQL cluster was setup</title>
      <link>https://blog.richardfennell.net/posts/cannot-queue-a-new-build-on-azure-devops-server-2019-1-due-to-the-way-a-sql-cluster-was-setup/</link>
      <pubDate>Thu, 17 Oct 2019 19:24:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-queue-a-new-build-on-azure-devops-server-2019-1-due-to-the-way-a-sql-cluster-was-setup/</guid>
      <description>&lt;p&gt;I have recently been doing a TFS 2015 to Azure DevOps Server 2019.1 upgrade for a client, the first for a while; I have been working mostly with Azure DevOps Services of late. Anyway, I saw an issue I had never seen before with any version of TFS, and I could find no information on the Internet.&lt;/p&gt;
&lt;h3 id=&#34;the-problem&#34;&gt;The Problem&lt;/h3&gt;
&lt;p&gt;The error occurred when I tried to queue a new build after the upgrade, the build instantly failed with the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been doing a TFS 2015 to Azure DevOps Server 2019.1 upgrade for a client, the first for a while; I have been working mostly with Azure DevOps Services of late. Anyway, I saw an issue I had never seen before with any version of TFS, and I could find no information on the Internet.</p>
<h3 id="the-problem">The Problem</h3>
<p>The error occurred when I tried to queue a new build after the upgrade, the build instantly failed with the error</p>
<p><em>‘The module being executed is not trusted. Either the owner of the database of the module needs to be granted authenticate permission, or the module needs to be digitally signed. Warning: Null value is eliminated by an aggregate or other SET operation. The statement has been terminated.’</em></p>
<h3 id="the-solution">The Solution</h3>
<p>It turns out the issue was that the client was using an enterprise-wide SQL cluster to host the <em>tfs_</em> databases. After the Azure DevOps upgrade the DBAs had enabled a trigger-based logging system to monitor the databases, and this was causing the error.</p>
<p>As soon as this logging was switched off everything worked as expected.</p>
<p>I would not recommend using such a logging tool on any ‘out of the box’ database for a product such as TFS/Azure DevOps Server, where the DBA team don’t own the database schema changes, as changes to these databases will only occur when the product is upgraded.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Release of my video on &#39;Introduction to GitHub Actions&#39;</title>
      <link>https://blog.richardfennell.net/posts/release-of-my-video-on-introduction-to-github-actions/</link>
      <pubDate>Thu, 10 Oct 2019 11:45:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/release-of-my-video-on-introduction-to-github-actions/</guid>
      <description>&lt;p&gt;I recently posted on &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2019/09/10/a-first-look-at-github-action-converting-my-azure-devops-tasks-to-github-actions/&#34;&gt;my initial experiences with GitHub Actions&lt;/a&gt;. I had hoped to deliver a session on this subject a &lt;a href=&#34;https://developerdeveloperdeveloper.com/&#34;&gt;DDD 14 in Reading&lt;/a&gt; , I even got so far as to propose a session.&lt;/p&gt;
&lt;p&gt;However, life happened and I found I could not make it to the event. So I decided to do the next best thing and record a video of the session. I even went as far as trying to get the &amp;lsquo;DDD event feel&amp;rsquo; by recording in front of a &amp;lsquo;live audience&amp;rsquo; at Black Marble&amp;rsquo;s offices.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently posted on <a href="https://blogs.blackmarble.co.uk/rfennell/2019/09/10/a-first-look-at-github-action-converting-my-azure-devops-tasks-to-github-actions/">my initial experiences with GitHub Actions</a>. I had hoped to deliver a session on this subject at <a href="https://developerdeveloperdeveloper.com/">DDD 14 in Reading</a>; I even got so far as to propose a session.</p>
<p>However, life happened and I found I could not make it to the event. So I decided to do the next best thing and record a video of the session. I even went as far as trying to get the &lsquo;DDD event feel&rsquo; by recording in front of a &lsquo;live audience&rsquo; at Black Marble&rsquo;s offices.</p>
<p><a href="https://www.youtube.com/watch?v=e_F_4OB9Mg4&amp;feature=youtu.be">https://www.youtube.com/watch?v=e_F_4OB9Mg4&amp;feature=youtu.be</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>A first look at GitHub Action - converting my Azure DevOps tasks to GitHub Actions</title>
      <link>https://blog.richardfennell.net/posts/a-first-look-at-github-action-converting-my-azure-devops-tasks-to-github-actions/</link>
      <pubDate>Tue, 10 Sep 2019 21:17:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-first-look-at-github-action-converting-my-azure-devops-tasks-to-github-actions/</guid>
      <description>&lt;h3 id=&#34;introduction&#34;&gt;Introduction&lt;/h3&gt;
&lt;p&gt;&lt;a href=&#34;https://github.com/features/actions&#34;&gt;GitHub Actions&lt;/a&gt; open up an interesting new way to provide CI/CD automation for your GitHub projects, other than the historic options of &lt;a href=&#34;https://jenkins.io/&#34;&gt;Jenkins&lt;/a&gt;, &lt;a href=&#34;https://www.atlassian.com/software/bamboo&#34;&gt;Bamboo&lt;/a&gt;, &lt;a href=&#34;https://www.jetbrains.com/teamcity/&#34;&gt;Team City&lt;/a&gt;, &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devops/pipelines/&#34;&gt;Azure DevOps Pipelines&lt;/a&gt; etc. No longer do you have to leave the realm of GitHub to create a powerful CI/CD process or provision a separate system.&lt;/p&gt;
&lt;p&gt;For people familiar with Azure DevOps YAML based Pipelines you will notice some common concepts in GitHub Actions. However, GitHub Action’s YAML syntax is different, and Actions are not Tasks. You can’t just re-use your old Azure DevOps tasks.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p><a href="https://github.com/features/actions">GitHub Actions</a> open up an interesting new way to provide CI/CD automation for your GitHub projects, other than the historic options of <a href="https://jenkins.io/">Jenkins</a>, <a href="https://www.atlassian.com/software/bamboo">Bamboo</a>, <a href="https://www.jetbrains.com/teamcity/">Team City</a>, <a href="https://azure.microsoft.com/en-gb/services/devops/pipelines/">Azure DevOps Pipelines</a> etc. No longer do you have to leave the realm of GitHub to create a powerful CI/CD process or provision a separate system.</p>
<p>For people familiar with Azure DevOps YAML based Pipelines you will notice some common concepts in GitHub Actions. However, GitHub Action’s YAML syntax is different and Actions are not Tasks.You can’t just re-use your old Azure DevOps tasks.</p>
<p>So my mind quickly went to the question ‘how much work is involved to allow me to re-use my Azure DevOps Pipeline Tasks?’. I know I probably won’t be moving them to Docker, but surely I can reuse my Node based ones somehow?</p>
<h3 id="the-migration-process">The Migration Process</h3>
<p>The first thing to consider is ‘are they of any use?’.</p>
<p>Any task that used the Azure DevOps API was going to need loads of work, if it was even relevant on GitHub. But my <a href="https://github.com/rfennell/AzurePipelines/wiki/Version-Assemblies-and-Packages-Tasks">Versioning</a> tasks seemed good candidates. They still needed some edits, such as removing the logic that extracts a version number from the build number. This is because GitHub Actions have no concept of build numbers (it is recommended that <a href="https://github.com/actions/toolkit/blob/master/docs/action-versioning.md">versioning is done using SemVer and branching</a>).</p>
<p>Given all this I picked one for migration, my <a href="https://github.com/rfennell/AzurePipelines/tree/master/Extensions/Versioning/VersionJSONFileTask">JSONVersioner</a>.</p>
<p>The first step was to create a new empty GitHub repo for my new Action. I did this using the <a href="https://github.com/actions/javascript-template">JavaScript template</a> and followed the <a href="https://github.com/actions/toolkit/blob/master/docs/javascript-action.md">Getting Started instructions</a>. This allowed me to make sure I had a working starting point.</p>
<p>I then copied my <a href="https://github.com/rfennell/AzurePipelines/tree/master/Extensions/Versioning/VersionJSONFileTask">JSON file versioner task</a> into the repo bit by bit:</p>
<ul>
<li>
<p>Renamed my entry file <a href="https://github.com/rfennell/AzurePipelines/blob/master/Extensions/Versioning/VersionJSONFileTask/src/ApplyVersionToJSONFile.ts">ApplyVersionToJSONFile.ts</a> to <a href="https://github.com/rfennell/JSONFileVersioner/blob/master/src/main.ts">main.ts</a> to keep in line with the template standard</p>
</li>
<li>
<p>Copied over the AppyVersionToJSONFuncitons.js</p>
</li>
<li>
<p>I removed any Azure DevOps specific code that was not needed.</p>
</li>
<li>
<p>In both files swapped the references to &ldquo;vsts-task-lib/task&rdquo; to &ldquo;@actions/core&rdquo; and updated the related function calls to use</p>
<ul>
<li>core.getInput()</li>
<li>core.debug()</li>
<li>core.warning()</li>
<li>core.setFailed()</li>
</ul>
</li>
<li>
<p>Altered my handling of input variables defaults to use the <a href="https://help.github.com/en/articles/virtual-environments-for-github-actions#environment-variables">GitHub Actions</a> as opposed to <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&amp;tabs=yaml">Azure DevOps Pipeline variables</a> (to find the current working folder)</p>
</li>
<li>
<p>Migrated the manifest from the <a href="https://github.com/rfennell/AzurePipelines/blob/master/Extensions/Versioning/VersionJSONFileTask/task/task.json">task.json</a> to the <a href="https://github.com/rfennell/JSONFileVersioner/blob/master/action.yml">action.yaml</a></p>
</li>
<li>
<p>Updated the readme.md with suitable usage details.</p>
</li>
</ul>
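<p>A note on what the swap to core.getInput() relies on: the Actions runner passes each workflow input to the process as an INPUT_* environment variable. The following is a minimal, hypothetical stand-in (simplified from what @actions/core actually does) just to show the idea:</p>

```javascript
// Hypothetical, simplified stand-in for @actions/core's getInput():
// the runner exposes each 'with:' input as an INPUT_<NAME> environment variable.
function getInput(name) {
  const key = `INPUT_${name.replace(/ /g, '_').toUpperCase()}`;
  return (process.env[key] || '').trim();
}

// Simulate the runner having set an input from a workflow's 'with:' block
process.env.INPUT_FILENAMEPATTERN = '.json';
console.log(getInput('FilenamePattern')); // '.json'
```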
<p>And that was basically it; the Action just worked, and I could call it from a test workflow in another GitHub repo.</p>
<p>However, I did decide to do a bit more work:</p>
<ul>
<li>I moved my <a href="https://mochajs.org/">Mocha</a>/<a href="https://www.chaijs.com/">Chai</a> based tests over to <a href="https://jestjs.io/docs/en/getting-started">Jest</a>, again to keep in line with the template example. This was actually the main area of effort for me. Jest runs its tests async, and this caused me problems with my temporary file handling, which had to be reworked. I also took the chance to improve the tests’ handling of the JSON comparison, making it more resilient for cross-platform testing.</li>
<li>I also added <a href="https://palantir.github.io/tslint/">TSLint</a> to the <a href="https://github.com/rfennell/JSONFileVersioner/blob/master/package.json">npm build process</a>, something I do for all my TypeScript based projects to keep up code quality.</li>
</ul>
<h3 id="summary">Summary</h3>
<p>So the basic act of migration from Azure DevOps Pipeline Extension to GitHub Actions is not that hard if you take it step by step.</p>
<p>The difficulty will be with what your tasks do: are they even relevant to GitHub Actions? And are any APIs you need available?</p>
<p>So migration of Azure DevOps Extension Tasks to GitHub Actions is not an impossible task. Have a look at my source at <a href="https://github.com/rfennell/JSONFileVersioner">JSONFileVersioner</a>, or at the actual Action in the <a href="https://github.com/marketplace/actions/apply-version-to-json-file">GitHub Marketplace</a>, with the usage</p>
<pre tabindex="0"><code>jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.x]
    steps:
      - uses: actions/checkout@v1
      - uses: rfennell/JSONFileVersioner@v1
        with:
          Path: ''
          Field: 'version'
          FilenamePattern: '.json'
          Recursion: 'true'
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm test
        env:
          CI: true
</code></pre><hr>
<p><em>There is a nice series of posts on Actions from Microsoft’s Abel Wang -</em> <a href="http://bit.ly/2Zkr9VQ"><em>Github Actions 2.0 Is Here!!!</em></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Strange issue with multiple calls to the same REST WebClient in PowerShell</title>
      <link>https://blog.richardfennell.net/posts/strange-issue-with-multiple-calls-to-the-same-rest-webclient-in-powershell/</link>
      <pubDate>Thu, 29 Aug 2019 11:56:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/strange-issue-with-multiple-calls-to-the-same-rest-webclient-in-powershell/</guid>
      <description>&lt;p&gt;Hit a strange problem today trying to do a simple Work Item update via the Azure DevOps REST API.&lt;/p&gt;
&lt;p&gt;To do a &lt;a href=&#34;https://docs.microsoft.com/en-us/rest/api/azure/devops/wit/work%20items/update?view=azure-devops-rest-5.1&#34;&gt;WI update you need to call the REST API&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using the verb PATCH&lt;/li&gt;
&lt;li&gt;With the Header “Content-Type” set to “application/json-patch+json”&lt;/li&gt;
&lt;li&gt;Include in the Body the current WI update revision (to make sure you are updating the current version)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So the first step is to get the current WI values to find the current revision.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Hit a strange problem today trying to do a simple Work Item update via the Azure DevOps REST API.</p>
<p>To do a <a href="https://docs.microsoft.com/en-us/rest/api/azure/devops/wit/work%20items/update?view=azure-devops-rest-5.1">WI update you need to call the REST API</a></p>
<ul>
<li>Using the verb PATCH</li>
<li>With the Header “Content-Type” set to “application/json-patch+json”</li>
<li>Include in the Body the current WI update revision (to make sure you are updating the current version)</li>
</ul>
<p>So the first step is to get the current WI values to find the current revision.</p>
<p>So my update logic was along the lines of</p>
<ol>
<li>Create new WebClient with the Header “Content-Type” set to “application/json-patch+json”</li>
<li>Do a Get call to API to get the current work item</li>
<li>Build the update payload with my updated fields and the current revision.</li>
<li>Do a PATCH call to API using the client created in step 1 to update the current work item</li>
</ol>
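<p>The payload built in step 3 can be sketched as follows (a hypothetical example in Node; the field values are illustrative, and the ‘test’ op on /rev is what makes the service reject an update against a stale revision):</p>

```javascript
// Hypothetical sketch of the JSON-Patch body sent by the PATCH in step 4.
// The 'test' op pins the revision fetched in step 2, so the update fails
// rather than silently overwriting a work item someone else has changed.
const currentRev = 7; // revision returned by the GET in step 2 (illustrative)

const patchBody = JSON.stringify([
  { op: 'test', path: '/rev', value: currentRev },
  { op: 'add', path: '/fields/System.Title', value: 'New title' }
]);

console.log(patchBody);
```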
<p>The problem was that at Step 4 I got a 400 error. A general error, not too helpful.</p>
<p>After much debugging I spotted that after Step 2 my WebClient’s headers had changed; I had lost the content type – no idea why.</p>
<p>It all started to work if I recreated my WebClient after Step 2, so something like this (in PowerShell)</p>
<pre tabindex="0"><code>Function Get-WebClient
{
    param
    (
        [string]$pat,
        [string]$ContentType = &#34;application/json&#34;
    )

    $wc = New-Object System.Net.WebClient
    # PAT basic auth uses an empty user name and the PAT as the password
    $pair = &#34;:${pat}&#34;
    $bytes = [System.Text.Encoding]::ASCII.GetBytes($pair)
    $base64 = [System.Convert]::ToBase64String($bytes)
    $wc.Headers.Add(&#34;Authorization&#34;, &#34;Basic $base64&#34;)
    $wc.Headers[&#34;Content-Type&#34;] = $ContentType
    $wc
}

function Update-WorkItemTitle
{
    param
    (
        $baseUri,
        $teamproject,
        $workItemID,
        $pat,
        $title
    )

    $wc = Get-WebClient -pat $pat -ContentType &#34;application/json-patch+json&#34;
    $uri = &#34;$($baseUri)/$teamproject/_apis/wit/workitems/$($workItemID)?api-version=5.1&#34;

    # you can only update a work item if you also pass in the rev, this makes sure you are updating the latest version
    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json

    # recreate the WebClient, as its Content-Type header is lost after the GET call
    $wc = Get-WebClient -pat $pat -ContentType &#34;application/json-patch+json&#34;

    $data = @(
        @{
            op    = &#34;test&#34;;
            path  = &#34;/rev&#34;;
            value = $jsondata.Rev
        },
        @{
            op    = &#34;add&#34;;
            path  = &#34;/fields/System.Title&#34;;
            value = $title
        }
    ) | ConvertTo-Json

    $jsondata = $wc.UploadString($uri, &#34;PATCH&#34;, $data) | ConvertFrom-Json
    $jsondata
}
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Authentication loops swapping organisations in Azure DevOps</title>
      <link>https://blog.richardfennell.net/posts/authentication-loops-swapping-organisations-in-azure-devops/</link>
      <pubDate>Tue, 13 Aug 2019 11:04:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/authentication-loops-swapping-organisations-in-azure-devops/</guid>
      <description>&lt;p&gt;I have recently been getting a problem swapping between different organisations in Azure DevOps. It happens when I swap between Black Mable ones and customer ones, where each is back by different Azure Active Directory (AAD) but I am using the same credentials; because I am either a member of that AAD or a guest.&lt;/p&gt;
&lt;p&gt;The problem is I get into an authentication loop. It happens to me in Chrome, but you might find the same problem in other browsers.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been getting a problem swapping between different organisations in Azure DevOps. It happens when I swap between Black Marble ones and customer ones, where each is backed by a different Azure Active Directory (AAD) but I am using the same credentials, because I am either a member of that AAD or a guest.</p>
<p>The problem is I get into an authentication loop. It happens to me in Chrome, but you might find the same problem in other browsers.</p>
<p>It seems to be a recent issue, maybe related to MFA changes in AAD?</p>
<p>I used to be re-prompted for my ID when I swapped organisations in a browser tab, but not asked for further authentication.</p>
<p>However, now the following happens</p>
<ul>
<li>I login to an organisation without a problem e.g. <a href="https://dev.azure.com/someorg">https://dev.azure.com/someorg</a> using ID, password and MFA</li>
<li>In the same browser window, when I connect to another organisation e.g. <a href="https://dev.azure.com/someotherorg">https://dev.azure.com/someotherorg</a> </li>
<li>I am asked to pick an account, then there is the MFA challenge, but then I go back to the login</li>
<li>…. and repeat.</li>
</ul>
<p>The fix is to go, in the browser tab, to <a href="https://dev.azure.com">https://dev.azure.com</a>. As you are already authenticated you will be able to sign out; then all is OK and you can login again.</p>
<p>The other option is to make even more use of <a href="https://support.google.com/chrome/answer/2364824?co=GENIE.Platform%3DDesktop&amp;hl=en">Chrome People</a>; one ‘person’ per customer, as opposed to my current usage of one ‘person’ per ID</p>
]]></content:encoded>
    </item>
    <item>
      <title>You can&amp;rsquo;t use Azure DevOps Pipeline Gates to check services behind a firewall</title>
      <link>https://blog.richardfennell.net/posts/you-cant-use-azure-devops-pipeline-gates-to-check-services-behind-a-firewall/</link>
      <pubDate>Thu, 25 Jul 2019 19:55:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-cant-use-azure-devops-pipeline-gates-to-check-services-behind-a-firewall/</guid>
      <description>&lt;p&gt;I have recently be working on a release pipeline that deploys to a server behind a corporate firewall. This is done using an &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&#34;&gt;Azure DevOps private build agent&lt;/a&gt; and works fine.&lt;/p&gt;
&lt;p&gt;As the service is a basic REST service and takes a bit of time to start-up, I thought a &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/gates?view=azure-devops&#34;&gt;gate&lt;/a&gt; was a perfect way to pause the release pipeline until the service was ready for the automated tests.&lt;/p&gt;
&lt;p&gt;However, I hit a problem, the gates always failed as the internal server could not be resolved.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been working on a release pipeline that deploys to a server behind a corporate firewall. This is done using an <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops">Azure DevOps private build agent</a> and works fine.</p>
<p>As the service is a basic REST service and takes a bit of time to start-up, I thought a <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/gates?view=azure-devops">gate</a> was a perfect way to pause the release pipeline until the service was ready for the automated tests.</p>
<p>However, I hit a problem, the gates always failed as the internal server could not be resolved.</p>
<p>After a bit of thought I realised why. Gates are actually <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&amp;tabs=yaml">agentless</a> tasks; they don’t run on the agent but on the server, so they are outside the firewall. They could never connect to the private service without ports being opened, which was never going to happen.</p>
<p>So at this point in time I can’t use gates on this pipeline. Any similar logic to do the same job would have to be developed as scripts I can run on an agent.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Review of &#39;Azure DevOps Server 2019 Cookbook&#39; - well worth getting</title>
      <link>https://blog.richardfennell.net/posts/review-of-azure-devops-server-2019-cookbook-well-worth-getting/</link>
      <pubDate>Thu, 06 Jun 2019 08:07:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/review-of-azure-devops-server-2019-cookbook-well-worth-getting/</guid>
      <description>&lt;p&gt;It always amazes me that people find time to write tech books whilst having a full time job. So given the effort I know it will have been, it is great to see an update to  Tarun Arora and Utkarsh Sigihalli&amp;rsquo;s book &lt;a href=&#34;https://amzn.to/2KQi5zA&#34;&gt;&amp;lsquo;Azure DevOps Server 2019 Cookbook&amp;rsquo;&lt;/a&gt;. &lt;img alt=&#34;Azure DevOps Server 2019 Cookbook - Second Edition&#34; loading=&#34;lazy&#34; src=&#34;https://www.packtpub.com/media/catalog/product/cache/e4d64343b1bc593f1c5348fe05efa4a6/c/o/cov-b09762.png&#34;&gt; I do like their format of ‘recipes’  that walk through common requirements. I find it particularly interesting that for virtually each recipes there is an associated Azure DevOps Extension that enhances the experience. It speaks well of the research the authors have done and the richness and variety of the 3rd party extensions in the Azure DevOps Marketplaces I think because of this format there is something in this book for everyone, whether new to Azure DevOps Server 2019 or someone who has been around the product since the days of TFS 2005. In my opinion, it is well worth having a copy on your shelf, whether physical or virtual&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It always amazes me that people find time to write tech books whilst having a full time job. So given the effort I know it will have been, it is great to see an update to Tarun Arora and Utkarsh Sigihalli&rsquo;s book <a href="https://amzn.to/2KQi5zA">&lsquo;Azure DevOps Server 2019 Cookbook&rsquo;</a>. <img alt="Azure DevOps Server 2019 Cookbook - Second Edition" loading="lazy" src="https://www.packtpub.com/media/catalog/product/cache/e4d64343b1bc593f1c5348fe05efa4a6/c/o/cov-b09762.png"> I do like their format of ‘recipes’ that walk through common requirements. I find it particularly interesting that for virtually every recipe there is an associated Azure DevOps Extension that enhances the experience. It speaks well of the research the authors have done, and of the richness and variety of the 3rd party extensions in the Azure DevOps Marketplace. I think because of this format there is something in this book for everyone, whether new to Azure DevOps Server 2019 or someone who has been around the product since the days of TFS 2005. In my opinion, it is well worth having a copy on your shelf, whether physical or virtual.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Azure DevOps Repos branch build policies not triggering when expected in PRs  - Solved</title>
      <link>https://blog.richardfennell.net/posts/azure-devops-repos-branch-build-policies-not-triggering-when-expected-in-prs-solved/</link>
      <pubDate>Thu, 23 May 2019 15:01:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/azure-devops-repos-branch-build-policies-not-triggering-when-expected-in-prs-solved/</guid>
      <description>&lt;p&gt;I recently hit a problem with builds triggered by &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops&#34;&gt;branch policies in Azure DevOps Repos&lt;/a&gt;. With the help of Microsoft I found out the problem and I thought it worth writing up uncase others hit the issue.&lt;/p&gt;
&lt;h3 id=&#34;setup&#34;&gt;Setup&lt;/h3&gt;
&lt;h4 id=&#34;folders&#34;&gt;Folders&lt;/h4&gt;
&lt;p&gt;Assume you have a Git repo with source for the UI, backend Services and common code in sub folders&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;/ [root]&lt;br&gt;
     UI&lt;br&gt;
     Services&lt;br&gt;
     Common&lt;/p&gt;&lt;/blockquote&gt;
&lt;h4 id=&#34;branch-policies&#34;&gt;Branch Policies&lt;/h4&gt;
&lt;p&gt;On the Master branch there are a policies of running&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently hit a problem with builds triggered by <a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops">branch policies in Azure DevOps Repos</a>. With the help of Microsoft I found out the problem and I thought it worth writing up in case others hit the issue.</p>
<h3 id="setup">Setup</h3>
<h4 id="folders">Folders</h4>
<p>Assume you have a Git repo with source for the UI, backend Services and common code in sub folders</p>
<blockquote>
<p>/ [root]<br>
     UI<br>
     Services<br>
     Common</p></blockquote>
<h4 id="branch-policies">Branch Policies</h4>
<p>On the Master branch there are a policies of running</p>
<ul>
<li>one build for anything in the UI folder/project or common folder/project</li>
<li>and a different build for anything in the Services folder/project or common folder/project</li>
</ul>
<p>These builds were filtered by path using the filters</p>
<blockquote>
<p>/UX; /Common<br>
/Services; /Common</p></blockquote>
<h3 id="the-issue">The Issue</h3>
<p>I discovered the problem by doing the following</p>
<ul>
<li>Create a PR for some work that effects the UI project</li>
<li>As expected the UI build triggers</li>
<li>Update the PR with a second commit for the Services code</li>
<li>The Service build is <strong>not</strong> triggered</li>
</ul>
<h3 id="the-solution">The Solution</h3>
<p>The fix, it turns out, was simple: remove the spaces from the filter paths so they become</p>
<blockquote>
<p>/UX;/Common<br>
/Services;/Common</p></blockquote>
<p>Once this was done the builds triggered as expected.</p>
<p>Thanks again to the Azure DevOps Product Group for the help</p>
]]></content:encoded>
    </item>
    <item>
      <title>Regex issues in Node</title>
      <link>https://blog.richardfennell.net/posts/regex-issues-in-node/</link>
      <pubDate>Thu, 23 May 2019 13:13:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/regex-issues-in-node/</guid>
      <description>&lt;p&gt;I have been trying to use Regex to select a block of an XML based .NET Core CSPROJ file, and yes before you say know I could use XPATH, but why am not is another story.&lt;/p&gt;
&lt;p&gt;I was trying to use the Regex&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;content.match(/&amp;lt;PropertyGroup&amp;gt;((.|\n)*)&amp;lt;\/PropertyGroup&amp;gt;/gmi)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The strange thing was this selection string worked in online Regex testers and in online Javascript IDEs, but failed inside my Node based Azure DevOps Pipeline extension.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to use Regex to select a block of an XML based .NET Core CSPROJ file, and yes, before you say anything, I know I could use XPath, but why I am not is another story.</p>
<p>I was trying to use the Regex</p>
<pre tabindex="0"><code>content.match(/&lt;PropertyGroup&gt;((.|\n)*)&lt;\/PropertyGroup&gt;/gmi)
</code></pre><p>The strange thing was this selection string worked in online Regex testers and in online Javascript IDEs, but failed inside my Node based Azure DevOps Pipeline extension.</p>
<p>After much experimentation I found that the following line worked</p>
<pre tabindex="0"><code>content.match(/&lt;PropertyGroup&gt;([\s\S]*?)&lt;\/PropertyGroup&gt;/gmi)
</code></pre><p>Well, that’s a good few hours of my life I won’t get back. No idea why Node handles the wildcards differently</p>
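<p>Whatever caused the original failure, the working pattern also behaves differently once a file contains more than one PropertyGroup block: the greedy dot-based form runs on to the last closing tag, while the lazy character-class form stops at the first. A quick Node sketch (the XML fragment is a hypothetical example):</p>

```javascript
// Hypothetical two-block fragment, built line by line to include newlines
const content = [
  '<PropertyGroup>',
  '  <Version>1.2.3</Version>',
  '</PropertyGroup>',
  '<PropertyGroup>',
  '  <Authors>Example</Authors>',
  '</PropertyGroup>'
].join('\n');

// Greedy: (.|\n)* runs to the LAST closing tag, so both blocks come back as one match
const greedy = content.match(/<PropertyGroup>((.|\n)*)<\/PropertyGroup>/gmi);

// Lazy: [\s\S]*? stops at the FIRST closing tag, giving one match per block
const lazy = content.match(/<PropertyGroup>([\s\S]*?)<\/PropertyGroup>/gmi);

console.log(greedy.length); // 1
console.log(lazy.length);   // 2
```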
]]></content:encoded>
    </item>
    <item>
      <title>A fix for Error: SignerSign() failed.&amp;quot; (-2146958839/0x80080209) with SignTool.exe</title>
      <link>https://blog.richardfennell.net/posts/a-fix-for-error-signersign-failed-2146958839-0x80080209-with-signtool-exe/</link>
      <pubDate>Tue, 30 Apr 2019 15:56:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-fix-for-error-signersign-failed-2146958839-0x80080209-with-signtool-exe/</guid>
      <description>&lt;p&gt;I have spent too long recently trying to sign a UWP .MSIXBUNDLE generated from an Azure DevOps build using the SignTool.exe and our code signing certificate. I kept getting the error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Done Adding Additional Store  
Error information: &amp;#34;Error: SignerSign() failed.&amp;#34; (-2146958839/0x80080209)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;From past experience, SignTool errors are usually due to the publisher details in the XML manifest files (in this case unpack the bundle with MakeAppx.exe and look in AppxMetadata\AppxBundleManifest.xml, and also check the manifest in the bundled .MSIX files) not matching the subject details for the PFX file being used for signing.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have spent too long recently trying to sign a UWP .MSIXBUNDLE generated from an Azure DevOps build using the SignTool.exe and our code signing certificate. I kept getting the error</p>
<pre tabindex="0"><code>Done Adding Additional Store  
Error information: &#34;Error: SignerSign() failed.&#34; (-2146958839/0x80080209)
</code></pre><p>From past experience, SignTool errors are usually due to the publisher details in the XML manifest files (in this case unpack the bundle with MakeAppx.exe and look in AppxMetadata\AppxBundleManifest.xml, and also check the manifest in the bundled .MSIX files) not matching the subject details for the PFX file being used for signing.</p>
<p>Or so I thought…..</p>
<p>Turns out you can get this error too if you use the wrong version of the SignTool, but it gives no clue to this fact.</p>
<p>So the top tip is …</p>
<p>Make sure you use the SignTool.exe from the same folder as the MakeAppx.exe tool. In my case in “C:\Program Files (x86)\Windows Kits\10\bin\10.0.17763.0\x64”</p>
<p>Once I did this, after of course updating all the manifest files with the correct publisher details, I was able to sign my bundle as I wanted.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Migrating a GUI based build to YAML in Azure DevOps Pipelines</title>
      <link>https://blog.richardfennell.net/posts/migrating-a-gui-based-build-to-yaml-in-azure-devops-pipelines/</link>
      <pubDate>Fri, 26 Apr 2019 14:02:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/migrating-a-gui-based-build-to-yaml-in-azure-devops-pipelines/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I use Azure DevOps Pipelines for the build and release of my Azure DevOps Pipeline extensions, &lt;a href=&#34;https://github.com/rfennell/AzurePipelines/wiki/Outlining-my-VSTS-CI&#34;&gt;I previously detailed my process here&lt;/a&gt; .&lt;/p&gt;
&lt;p&gt;For a good few months now &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/create-first-pipeline?view=azure-devops&amp;amp;tabs=tfs-2018-2&#34;&gt;YAML builds&lt;/a&gt; have been available. These provide the key advantage that the build is defined in a YAML text file that is stored with your product’s source code, thus allowing you to more easily track build changes. Also bulk editing becomes easier as a simple text editor can be used.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="introduction">Introduction</h2>
<p>I use Azure DevOps Pipelines for the build and release of my Azure DevOps Pipeline extensions, <a href="https://github.com/rfennell/AzurePipelines/wiki/Outlining-my-VSTS-CI">I previously detailed my process here</a> .</p>
<p>For a good few months now <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/create-first-pipeline?view=azure-devops&amp;tabs=tfs-2018-2">YAML builds</a> have been available. These provide the key advantage that the build is defined in a YAML text file that is stored with your product’s source code, thus allowing you to more easily track build changes. Also bulk editing becomes easier as a simple text editor can be used.</p>
<p>I had been putting off moving my current GUI based builds as there is a bit of work involved; this post documents the steps I took.</p>
<h2 id="process">Process</h2>
<h3 id="getting-the-old-build-content">Getting the old build content</h3>
<p>First I created a new branch in my local copy of the GitHub repo that stores the source for my extensions.</p>
<p>I then created an empty file <strong>azure-pipelines-build.yaml</strong> in the root folder of the extension whose build I was replacing. I created the file manually because, although the current 'create new build' UI lets you pick an existing file or create one for you, if it creates one you get no control over where it is placed or how it is named.</p>
<p>In your existing build, I then clicked the pipeline level ‘View YAML’ link.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/04/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2019/04/image_thumb.png" title="image"></a> </p>
<p><strong>Note:</strong>  Initially I found this link disabled, but if you click around the UI, into the task details, variables etc, it eventually becomes enabled. I have no idea why.</p>
<p>I copied this YAML into the newly created <strong>azure-pipelines-build.yaml</strong> file, committed the file and pushed it to GitHub on the new branch.</p>
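<p>For reference, the YAML exported at this point covers just the agent pool and the steps; it takes roughly this shape (the tasks and values here are illustrative, not my actual build definition):</p>

```yaml
# Illustrative shape of the YAML that the 'View YAML' link produces
pool:
  vmImage: 'vs2017-win2016'

steps:
- task: Npm@1
  displayName: 'npm install'
  inputs:
    command: install

- task: Npm@1
  displayName: 'npm run package'
  inputs:
    command: custom
    customCommand: 'run package'
```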
<h3 id="creating-the-yaml-build">Creating the YAML build</h3>
<p>I then created a new YAML based build, picking in my case GitHub as the source host, the correct branch, and correct file.</p>
<p>This YAML contains the core of what is needed, but the build was missing some items such as triggers, the build number format and variables.</p>
<p>I added</p>
<ul>
<li>the name (build number)</li>
<li>the PR triggers</li>
</ul>
<p>to the .YAML file, but decided to declare my variables within the build definition in Azure DevOps, as they contained secrets.</p>
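<p>Those additions take roughly this form at the top of the file (the build number format and branch name here are illustrative):</p>

```yaml
# Illustrative additions: a build number format and a PR trigger
name: $(Major).$(Minor)$(Rev:.r)

trigger: none

pr:
  branches:
    include:
    - master
```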
<p><a href="https://github.com/rfennell/AzurePipelines/blob/master/Extensions/DevTestLab/azure-pipelines-build.yml">The final YAML file can be viewed here</a></p>
<h3 id="what-i-fixed-in-passing">What I fixed in passing</h3>
<p>In the past I used to package up my extensions twice, once as private (for testing) and once as public. This was due to limitations of the Azure DevOps Marketplace and the release tasks I was using at the time. In passing I took the chance to build only the public VSIX package, and updated my release pipeline process to dynamically inject the settings for private testing. This was done using the newer <a href="https://marketplace.visualstudio.com/items?itemName=ms-devlabs.vsts-developer-tools-build-tasks&amp;targetId=e59fd111-3faf-4b90-be81-9def48f2947b&amp;utm_source=vstsproduct&amp;utm_medium=ExtHubManageList">Azure DevOps Extensions Tasks</a>.</p>
<p>As a side note, I had to upgrade to these newer release tasks anyway, as the older ones had ceased to work due to using old API calls.</p>
<h3 id="swapping-in-the-new-build-into-the-release-process">Swapping in the new build into the release process</h3>
<p>To replace the old GUI build with the new YAML build I did the following</p>
<ul>
<li>Renamed my old GUI build and disabled it (disabling is vital, else the build continues to be triggered by the GitHub PRs, even if the triggers are removed in the build definition)</li>
<li>Renamed my new YAML build to the old GUI build name (not vital, but it felt neater)</li>
<li>Updated my release pipeline to pick the new YAML build as opposed to the old GUI build. Even though the names were the same, their internal IDs are not, so this needs to be swapped. I made sure my ‘source alias’ did not change, so I did not have to make other changes to my release pipeline. </li>
</ul>
<p>Once this was done I triggered a new GitHub PR and everything worked as expected.</p>
<h2 id="what-next">What Next</h2>
<p>I have kept the old build around just in case there is a problem I have not spotted, but I intend to delete it soon.</p>
<p>I now need to make the same changes for all my other builds. The only difference from this process will be for builds that make use of Task Groups, such as all those for Node based extensions. Task Groups cannot be exported as YAML at this time, so I will have to manually rebuild those steps in a text editor. That is more prone to human error, but I think it needs to be done.</p>
<p>So a nice back burner project. I will probably update the builds as I release new versions of the extensions.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A task for documenting your Azure DevOps Pipeline extensions for YAML usage</title>
      <link>https://blog.richardfennell.net/posts/a-task-for-documenting-your-azure-devops-pipeline-extensions-for-yaml-usage/</link>
      <pubDate>Tue, 18 Dec 2018 22:13:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-task-for-documenting-your-azure-devops-pipeline-extensions-for-yaml-usage/</guid>
      <description>&lt;p&gt;I have posted in the past a &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2018/10/25/yaml-documentation-for-my-azure-pipeline-tasks-and-how-i-generated-it/&#34;&gt;quick script&lt;/a&gt; to generate some markdown documentation for the YAML usage of Azure DevOps Pipeline extensions. Well I decided that having this script as a task itself would be a good idea, so a wrote it, and please to say have just &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-YAMLGenerator&#34;&gt;release it to the marketplace&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The YAML Documenter task scans an extension’s &lt;strong&gt;vss-extension.json&lt;/strong&gt; and &lt;strong&gt;task.json&lt;/strong&gt; files to find the details it needs to build the markdown documentation on the YAML usage. It can also, optionally, copy the extension’s &lt;strong&gt;readme.md&lt;/strong&gt; as the extension’s primary documentation.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have posted in the past a <a href="https://blogs.blackmarble.co.uk/rfennell/2018/10/25/yaml-documentation-for-my-azure-pipeline-tasks-and-how-i-generated-it/">quick script</a> to generate some markdown documentation for the YAML usage of Azure DevOps Pipeline extensions. Well I decided that having this script as a task itself would be a good idea, so I wrote it, and I am pleased to say I have just <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-YAMLGenerator">released it to the marketplace</a>.</p>
<p>The YAML Documenter task scans an extension’s <strong>vss-extension.json</strong> and <strong>task.json</strong> files to find the details it needs to build the markdown documentation on the YAML usage. It can also, optionally, copy the extension’s <strong>readme.md</strong> as the extension’s primary documentation.</p>
<p>I am starting to use this extension, with my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks">WIKIUpdater extension</a>, in my release pipelines to make sure <a href="https://github.com/rfennell/AzurePipelines/wiki">my extension’s GitHub Wiki</a> is up to date.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/12/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/12/image_thumb.png" title="image"></a></p>
<p>It is going to take a bit of work to update all my pipelines, but the eventual plan is to use the YAML document generator in the builds, adding the readme and YAML markdown files to the build as artefacts. Then deploying these files to the wiki in a later stage of the pipeline.</p>
<p>Hope some of you find it of use.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Programmatically adding User Capabilities to Azure DevOps Agents</title>
      <link>https://blog.richardfennell.net/posts/programmatically-adding-user-capabilities-to-azure-devops-agents/</link>
      <pubDate>Thu, 06 Dec 2018 14:56:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/programmatically-adding-user-capabilities-to-azure-devops-agents/</guid>
<description>&lt;p&gt;I am automating the process by which we keep our build agents up to date. The basic process is to use a fork of the standard Microsoft Azure DevOps Pipeline agent that has the additional code we need included, &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2018/02/27/building-private-vsts-build-agents-using-the-microsoft-packer-based-agent-image-creation-model/&#34;&gt;notably BizTalk&lt;/a&gt;. Once I have the Packer created VM up and running, I need to install the agent. This is well documented; just run .config.cmd –help for details. However, there is no option to add user capabilities to the agent. I know I could set them via environment variables, but I don’t want the same user capabilities on each agent on a VM (we use multiple agents on a single VM). There was no documented Azure DevOps API I could find to add capabilities, but a bit of hacking around with Chrome Dev tools and Postman got me a solution, which I have provided as a gist.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I am automating the process by which we keep our build agents up to date. The basic process is to use a fork of the standard Microsoft Azure DevOps Pipeline agent that has the additional code we need included, <a href="https://blogs.blackmarble.co.uk/rfennell/2018/02/27/building-private-vsts-build-agents-using-the-microsoft-packer-based-agent-image-creation-model/">notably BizTalk</a>. Once I have the Packer created VM up and running, I need to install the agent. This is well documented; just run <em>.config.cmd –help</em> for details. However, there is no option to add user capabilities to the agent. I know I could set them via environment variables, but I don’t want the same user capabilities on each agent on a VM (we use multiple agents on a single VM). There was no documented Azure DevOps API I could find to add capabilities, but a bit of hacking around with Chrome Dev tools and Postman got me a solution, which I have provided as a gist:</p>
<script src="https://gist.github.com/rfennell/13b014fc816822cc9007ae26cc2cb43f.js"></script>
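<p>For anyone who cannot load the gist, the call takes roughly the following form. This is a sketch, not the gist itself: the <strong>usercapabilities</strong> endpoint and the api-version were found via the browser dev tools rather than official documentation, so verify the exact URL shape against your own traffic capture before relying on it.</p>

```python
import base64
import json
import urllib.request

def build_capabilities_request(organisation, pool_id, agent_id, capabilities, pat):
    """Build a PUT request that replaces an agent's user capabilities.

    The 'usercapabilities' URL and api-version below are assumptions
    based on what the browser sends, as the API was undocumented.
    """
    url = (
        f"https://dev.azure.com/{organisation}/_apis/distributedtask/pools/"
        f"{pool_id}/agents/{agent_id}/usercapabilities?api-version=5.0-preview.1"
    )
    # A PAT is passed as the password half of a basic auth header
    token = base64.b64encode(f":{pat}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(capabilities).encode(),  # e.g. {"BizTalk": "true"}
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )

# urllib.request.urlopen(build_capabilities_request(...)) would then send it
```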
]]></content:encoded>
    </item>
    <item>
      <title>Azure Pipeline YAML support on VSCode</title>
      <link>https://blog.richardfennell.net/posts/azure-pipeline-yaml-support-on-vscode/</link>
      <pubDate>Thu, 06 Dec 2018 12:45:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/azure-pipeline-yaml-support-on-vscode/</guid>
      <description>&lt;p&gt;A major problem when moving from the graphic editing of Azure Pipeline builds to YAML has been the difficulty in knowing the options available, and of course making typos.&lt;/p&gt;
&lt;p&gt;Microsoft have just released a VSCode extension to help address this problem – it is called &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=ms-azure-devops.azure-pipelines&#34;&gt;Azure Pipelines&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://pbs.twimg.com/media/Dtm59RqWoAEE1qA.jpg&#34;&gt;&lt;/p&gt;
&lt;p&gt;I have yet to give it a really good workout, but first impressions are good.&lt;/p&gt;
&lt;p&gt;It does not remove the need for good documentation of task options, there is a need for my &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2018/10/25/yaml-documentation-for-my-azure-pipeline-tasks-and-how-i-generated-it/&#34;&gt;script to generate YAML documentation from a task.json file&lt;/a&gt;, but anything extra to ease editing helps.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A major problem when moving from the graphic editing of Azure Pipeline builds to YAML has been the difficulty in knowing the options available, and of course making typos.</p>
<p>Microsoft have just released a VSCode extension to help address this problem – it is called <a href="https://marketplace.visualstudio.com/items?itemName=ms-azure-devops.azure-pipelines">Azure Pipelines</a></p>
<p><img loading="lazy" src="https://pbs.twimg.com/media/Dtm59RqWoAEE1qA.jpg"></p>
<p>I have yet to give it a really good workout, but first impressions are good.</p>
<p>It does not remove the need for good documentation of task options, there is a need for my <a href="https://blogs.blackmarble.co.uk/rfennell/2018/10/25/yaml-documentation-for-my-azure-pipeline-tasks-and-how-i-generated-it/">script to generate YAML documentation from a task.json file</a>, but anything extra to ease editing helps.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Keeping Azure DevOps organisations inherited process templates in sync</title>
      <link>https://blog.richardfennell.net/posts/keeping-azure-devops-organisations-inherited-process-templates-in-sync/</link>
      <pubDate>Thu, 29 Nov 2018 17:27:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/keeping-azure-devops-organisations-inherited-process-templates-in-sync/</guid>
      <description>&lt;h3 id=&#34;the-problem&#34;&gt;The problem&lt;/h3&gt;
&lt;p&gt;If you are like me, for historic reasons you have multiple Azure DevOps organisations (instances) backed by the same Azure Active Directory (AAD). In my case, for example, one was created when Azure DevOps was first released as &lt;em&gt;TFSPreview.com&lt;/em&gt; and another is from our migration from on-prem TFS using the &lt;a href=&#34;https://azure.microsoft.com/en-us/services/devops/migrate/&#34;&gt;DB Migration Tools&lt;/a&gt; method; and I have others. I make active use of all of these for different purposes, though one is primary with the majority of work done on it, and so I want to make sure the &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/organizations/settings/work/manage-process?view=vsts&amp;amp;tabs=new-nav&#34;&gt;inherited process templates&lt;/a&gt; are the same on each of them, using the primary organisation as the master customisation. &lt;strong&gt;Note&lt;/strong&gt; I have already &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/reference/on-premises-xml-process-model?view=vsts&#34;&gt;converted all my old on-premises XML process models&lt;/a&gt; to inherited process templates. There is no out of the box way to keep processes in sync, but it is possible using a few tools. The main one is the &lt;a href=&#34;https://github.com/Microsoft/process-migrator&#34;&gt;Microsoft Process Migrator for Node&lt;/a&gt; on GitHub.&lt;/p&gt;
      <content:encoded><![CDATA[<h3 id="the-problem">The problem</h3>
<p>If you are like me, for historic reasons you have multiple Azure DevOps organisations (instances) backed by the same Azure Active Directory (AAD). In my case, for example, one was created when Azure DevOps was first released as <em>TFSPreview.com</em> and another is from our migration from on-prem TFS using the <a href="https://azure.microsoft.com/en-us/services/devops/migrate/">DB Migration Tools</a> method; and I have others. I make active use of all of these for different purposes, though one is primary with the majority of work done on it, and so I want to make sure the <a href="https://docs.microsoft.com/en-us/azure/devops/organizations/settings/work/manage-process?view=vsts&amp;tabs=new-nav">inherited process templates</a> are the same on each of them, using the primary organisation as the master customisation. <strong>Note</strong> I have already <a href="https://docs.microsoft.com/en-us/azure/devops/reference/on-premises-xml-process-model?view=vsts">converted all my old on-premises XML process models</a> to inherited process templates. There is no out of the box way to keep processes in sync, but it is possible using a few tools. The main one is the <a href="https://github.com/Microsoft/process-migrator">Microsoft Process Migrator for Node</a> on GitHub.</p>
<h3 id="the-solution">The Solution</h3>
<p>Firstly I cloned the Microsoft Process Migrator and built it as per the instructions on the repo. I created a config file and then ran the tool. On one organisation it ran fine. However on another I had errors like: [ERROR] [2018-11-26T14:35:44.880Z] Process import validation failed. Process with same name already exists on target account. <em>[ERROR] [2018-11-26T14:39:54.206Z] Import failed, see log file for details. Create field &lsquo;Location&rsquo; failed, see log for details</em> This was because I had in the past manually duplicated the inherited process template onto this organisation, so there was a process with the same name and fields of the same names. The first error was easy to fix, import the template with a new (temporary) name. The second is more problematic. I had two choice</p>
<ul>
<li>A manual fix</li>
<li>An automated fix using <a href="https://marketplace.visualstudio.com/items?itemName=nkdagility.vsts-sync-migration">Migration Tools for Azure DevOps</a> from <a href="https://marketplace.visualstudio.com/publishers/nkdagility">Martin Hinshelwood</a></li>
</ul>
<p>As I only had a few duplicated unused fields on a single organisation I picked the former. If I had many organisations to sort out I would have picked the latter. So my process ended up being:</p>
<ol>
<li>Run the Microsoft Process Migrator to migrate ‘My Process’ on the source organisation to ‘My Process 1’ on the target organisation</li>
<li>It gave an error, providing the name of the duplicated field</li>
<li>I checked on the target organisation using a work item query that the field was empty or only had defaulted data (if it had not been I would have used Martin’s tool to migrate the data to a temporary field and then deleted the problem field, moving the data back to the correct field from the temporary field when the import of the process template was completed)</li>
<li>I deleted the field from the work item type that referenced it</li>
<li>I deleted the field</li>
<li>I deleted the process template ‘My Process 1’, as a failed import leaves a half-created process</li>
<li>I went back to step 1 and repeated until the import completed without error</li>
<li>I tested my migrated inherited process was OK</li>
<li>On the target organisation I then renamed ‘My Process’ to ‘My Process – Old’</li>
<li>I then renamed ‘My Process 1’ to ‘My Process’</li>
<li>In my case I also made ‘My Process’ the default; you might not do this if another process is the default, but step 13 does require that the process template is not the default</li>
<li>I moved all the team projects using the process template now called ‘My Process – Old’ to ‘My Process’</li>
<li>I was then able to delete the process template ‘My Process – Old’ as it has no associated team projects and was not the default</li>
</ol>
<p>As I customise my primary organisation’s process templates I can repeat this process to keep the processes in sync between organisations. Note that in future migrations I won’t have to do steps 2..6, as there are no longer any manually created duplicated fields, so it should be more straightforward. So a valid solution until any similar functionality is built into Azure DevOps, and there is no sign of that on the roadmap.</p>
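<p>For reference, the Process Migrator configuration file I used took roughly this form. The field names below are from my reading of the repo’s README at the time, so treat them as illustrative and check the current documentation before relying on them:</p>

```json
{
  "sourceAccountUrl": "https://dev.azure.com/primary-org",
  "sourceAccountToken": "<PAT for the source organisation>",
  "targetAccountUrl": "https://dev.azure.com/secondary-org",
  "targetAccountToken": "<PAT for the target organisation>",
  "sourceProcessName": "My Process",
  "targetProcessName": "My Process 1"
}
```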
]]></content:encoded>
    </item>
    <item>
      <title>DPI problems after upgrading from Camtasia 8 to 2018</title>
      <link>https://blog.richardfennell.net/posts/dpi-problems-after-upgrading-from-camtasia-8-to-2018/</link>
      <pubDate>Mon, 26 Nov 2018 14:18:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dpi-problems-after-upgrading-from-camtasia-8-to-2018/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is another of those posts I do so I don’t forget how I fixed something.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I have a requirement to record videos for a client in 720p resolution. As I use a SurfaceBook with a High-Res screen, I have found the best way to do this is to set my Windows screen resolution to 1280x720 and do all my recording at this native resolution. Any attempt to record smaller portions of a screen, or to scale video in production, has led to quality problems, especially as remote desktops within remote desktops are required.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This is another of those posts I do so I don’t forget how I fixed something.</em></p>
<p>I have a requirement to record videos for a client in 720p resolution. As I use a SurfaceBook with a High-Res screen, I have found the best way to do this is to set my Windows screen resolution to 1280x720 and do all my recording at this native resolution. Any attempt to record smaller portions of a screen, or to scale video in production, has led to quality problems, especially as remote desktops within remote desktops are required.</p>
<p>This had been working fine with Camtasia 8, but when I upgraded to Camtasia 2018.0.7 I got problems. The whole UI of the tool was unusable; it ignored the resizing/DPI changes.</p>
<p>The only fix I could find was to create a desktop shortcut to the EXE and set the Properties &gt; Compatibility &gt; Change high DPI settings &gt; and check the ‘Override high DPI scaling behaviour’ and set this to ‘System’.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/11/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/11/image_thumb.png" title="image"></a></p>
<p>Even after doing this I still found the preview in the editing screen a little blurred, but usable. The final produced MP4s were OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Just released a new Azure Pipelines Extension to update Git based WIKIs</title>
      <link>https://blog.richardfennell.net/posts/just-released-a-new-azure-pipelines-extension-to-update-git-based-wikis/</link>
      <pubDate>Tue, 20 Nov 2018 10:18:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/just-released-a-new-azure-pipelines-extension-to-update-git-based-wikis/</guid>
<description>&lt;p&gt;I have just released a new &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks&#34;&gt;Azure DevOps Pipelines extension to update a page in a Git based WIKI&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It has been tested against:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Azure DevOps WIKI – running as the build agent (so the same Team Project)&lt;/li&gt;
&lt;li&gt;Azure DevOps WIKI – using provided credentials (so any Team Project)&lt;/li&gt;
&lt;li&gt;GitHub – using provided credentials&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It takes a string (markdown) input and writes it to a new page, or updates it if it already exists. It is designed to be used with my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;Generate Release Notes Extension&lt;/a&gt;, but you will no doubt find other uses&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have just released a new <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks">Azure DevOps Pipelines extension to update a page in a Git based WIKI</a>.</p>
<p>It has been tested against:</p>
<ul>
<li>Azure DevOps WIKI – running as the build agent (so the same Team Project)</li>
<li>Azure DevOps WIKI – using provided credentials (so any Team Project)</li>
<li>GitHub – using provided credentials</li>
</ul>
<p>It takes a string (markdown) input and writes it to a new page, or updates the page if it already exists. It is designed to be used with my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">Generate Release Notes Extension</a>, but you will no doubt find other uses.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Azure DevOps Services &amp;amp; Server Alerts DSL - an alternative to TFS Aggregator?</title>
      <link>https://blog.richardfennell.net/posts/azure-devops-services-server-alerts-dsl-an-alternative-to-tfs-aggregator/</link>
      <pubDate>Tue, 30 Oct 2018 21:10:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/azure-devops-services-server-alerts-dsl-an-alternative-to-tfs-aggregator/</guid>
      <description>&lt;p&gt;Whilst listening to a recent  &lt;a href=&#34;http://www.radiotfs.com/Show/167/TasksandReleaseGateswithJesseHouwing&#34;&gt;Radio TFS&lt;/a&gt; it was mentioned that &lt;a href=&#34;https://tfsaggregator.github.io/&#34;&gt;TFS Aggregator&lt;/a&gt; uses the C# SOAP based Azure DevOps APIs; hence needed a major re-write as these &lt;a href=&#34;https://blogs.msdn.microsoft.com/devops/2018/05/21/announcing-the-deprecation-of-the-wit-and-test-client-om-at-jan-1-2020-2/&#34;&gt;APIs are being deprecated.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Did you know that there was a REST API alternative to TFS Aggregator?&lt;/p&gt;
&lt;p&gt;My &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-ServiceHooks-DSL#overview&#34;&gt;Azure DevOps Services &amp;amp; Server Alerts DSL&lt;/a&gt; is out there, and has been for a while, but I don’t think used by many people. It aims to do the same as TFS Aggregator, but is based around Python scripting.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst listening to a recent  <a href="http://www.radiotfs.com/Show/167/TasksandReleaseGateswithJesseHouwing">Radio TFS</a> it was mentioned that <a href="https://tfsaggregator.github.io/">TFS Aggregator</a> uses the C# SOAP based Azure DevOps APIs; hence needed a major re-write as these <a href="https://blogs.msdn.microsoft.com/devops/2018/05/21/announcing-the-deprecation-of-the-wit-and-test-client-om-at-jan-1-2020-2/">APIs are being deprecated.</a></p>
<p>Did you know that there was a REST API alternative to TFS Aggregator?</p>
<p>My <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-ServiceHooks-DSL#overview">Azure DevOps Services &amp; Server Alerts DSL</a> is out there, and has been for a while, but I don’t think used by many people. It aims to do the same as TFS Aggregator, but is based around Python scripting.</p>
<p>However, I do have to say it is more limited in flexibility, as it has only been developed for my needs (and those of a few of my clients), but it’s an alternative that is based on the REST APIs. </p>
<p>Scripts are of the following form; this one sets a parent work item to &lsquo;Done&rsquo; when all its child work items are done.</p>
<pre tabindex="0"><code>import sys
# Expect 2 args: the event type and a unique ID for the work item
if sys.argv[0] == &#34;workitem.updated&#34;:
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi == None:
        LogInfoMessage(&#34;Work item &#39;&#34; + str(wi.id) + &#34;&#39; has no parent&#34;)
    else:
        LogInfoMessage(&#34;Work item &#39;&#34; + str(wi.id) + &#34;&#39; has parent &#39;&#34; + str(parentwi.id) + &#34;&#39;&#34;)

        results = [c for c in GetChildWorkItems(parentwi) if c[&#34;fields&#34;][&#34;System.State&#34;] != &#34;Done&#34;]
        if len(results) == 0:
            LogInfoMessage(&#34;All child work items are &#39;Done&#39;&#34;)
            parentwi[&#34;fields&#34;][&#34;System.State&#34;] = &#34;Done&#34;
            UpdateWorkItem(parentwi)
            msg = &#34;Work item &#39;&#34; + str(parentwi.id) + &#34;&#39; has been set as &#39;Done&#39; as all its child work items are done&#34;
            SendEmail(&#34;richard@blackmarble.co.uk&#34;,&#34;Work item &#39;&#34; + str(parentwi.id) + &#34;&#39; has been updated&#34;, msg)
            LogInfoMessage(msg)
        else:
            LogInfoMessage(&#34;Not all child work items are &#39;Done&#39;&#34;)
else:
    LogErrorMessage(&#34;Was not expecting to get here&#34;)
    LogErrorMessage(sys.argv)
</code></pre><p>I have recently done a fairly major update to the project. The key changes are:</p>
<ul>
<li>
<p>Rename of project, repo, and namespaces to reflect Azure DevOps (the namespace change is a breaking change for existing users)</p>
</li>
<li>
<p>The scripts that are run can now be selected by:</p>
<ul>
<li>a fixed file name for the web instance running the service</li>
<li>the event type sent to the service</li>
<li>the subscription ID, thus allowing many scripts (new)</li>
</ul>
</li>
<li>
<p>A single instance of the web site running the events processor can now handle calls from many Azure DevOps instances.</p>
</li>
<li>
<p>Improved installation process on Azure (well at least tried to make the documentation clearer and sort out a couple of MSDeploy issues)</p>
</li>
</ul>
<p>Full details of the project can be seen on the solution’s <a href="https://github.com/rfennell/VSTSServiceHookDsl/wiki">WIKI</a>; maybe you will find it of use. Let me know if the documentation is good enough.</p>
]]></content:encoded>
    </item>
    <item>
      <title>YAML documentation for my Azure Pipeline Tasks (and how I generated it)</title>
      <link>https://blog.richardfennell.net/posts/yaml-documentation-for-my-azure-pipeline-tasks-and-how-i-generated-it/</link>
      <pubDate>Thu, 25 Oct 2018 16:42:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/yaml-documentation-for-my-azure-pipeline-tasks-and-how-i-generated-it/</guid>
      <description>&lt;p&gt;There is a general move in &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?toc=/azure/devops/pipelines/toc.json&amp;amp;bc=/azure/devops/boards/pipelines/breadcrumb/toc.json&amp;amp;view=vsts&#34;&gt;Azure DevOps Pipelines&lt;/a&gt; to using &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-yaml?view=vsts&#34;&gt;YAML&lt;/a&gt;, as opposed to the designer, to define your pipelines. This is particularly enforced when using them via the new &lt;a href=&#34;https://github.com/marketplace/azure-pipelines&#34;&gt;GitHub Marketplace Azure Pipelines&lt;/a&gt; method where YAML appears to be the only option.&lt;/p&gt;
&lt;p&gt;This has shown up a hole in my &lt;a href=&#34;https://github.com/rfennell/AzurePipelines/wiki&#34;&gt;Pipeline Tasks documentation&lt;/a&gt;, I had nothing on YAML!&lt;/p&gt;
&lt;p&gt;So I have added a YAML usage page for each set of tasks in each of my extensions, e.g. &lt;a href=&#34;https://github.com/rfennell/AzurePipelines/wiki/File-Copier-Tasks-YAML&#34;&gt;the file utilities tasks&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a general move in <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started/what-is-azure-pipelines?toc=/azure/devops/pipelines/toc.json&amp;bc=/azure/devops/boards/pipelines/breadcrumb/toc.json&amp;view=vsts">Azure DevOps Pipelines</a> to using <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-yaml?view=vsts">YAML</a>, as opposed to the designer, to define your pipelines. This is particularly enforced when using them via the new <a href="https://github.com/marketplace/azure-pipelines">GitHub Marketplace Azure Pipelines</a> method where YAML appears to be the only option.</p>
<p>This has shown up a hole in my <a href="https://github.com/rfennell/AzurePipelines/wiki">Pipeline Tasks documentation</a>, I had nothing on YAML!</p>
<p>So I have added a YAML usage page for each set of tasks in each of my extensions, e.g. <a href="https://github.com/rfennell/AzurePipelines/wiki/File-Copier-Tasks-YAML">the file utilities tasks</a>.</p>
<p>Now, like most developers, I am lazy, so I was not going to type all that information. Instead I wrote a <a href="https://github.com/rfennell/AzurePipelines/blob/master/scripts/Generate-YAMLDocumation.ps1">script</a> to generate the markdown from the respective <strong>task.json</strong> files in the repo. This script will need some work for others to use, as it relies on some special handling due to quirks of my directory structure, but I hope it will be of use to others.</p>
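<p>The core idea is simple enough to sketch. This Python outline is not my actual PowerShell script, and it assumes a deliberately simplified subset of the task.json schema (name, version.Major and inputs with name/required/defaultValue), but it shows the shape of the generation:</p>

```python
import json

def yaml_usage_markdown(task):
    """Render a markdown YAML-usage snippet for one pipeline task.

    Sketch only: assumes a simplified subset of the task.json schema
    (name, version.Major, inputs with name/required/defaultValue).
    """
    lines = [f"### {task['name']}", "", "YAML snippet:", ""]
    # Indented four spaces so the snippet renders as a code block
    lines.append(f"    - task: {task['name']}@{task['version']['Major']}")
    lines.append("      inputs:")
    for inp in task.get("inputs", []):
        note = "required" if inp.get("required") else "optional"
        default = inp.get("defaultValue", "")
        lines.append(f"         # {inp['name']} - ({note}) default: {default}")
    return "\n".join(lines)

def render_task_json(path):
    # Hypothetical helper: render a single task.json file to markdown
    with open(path) as f:
        return yaml_usage_markdown(json.load(f))
```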
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft post root cause analysis on recent Azure DevOps Issues</title>
      <link>https://blog.richardfennell.net/posts/microsoft-post-root-cause-analysis-on-recent-azure-devops-issues/</link>
      <pubDate>Wed, 17 Oct 2018 10:34:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-post-root-cause-analysis-on-recent-azure-devops-issues/</guid>
      <description>&lt;p&gt;Azure DevOps has had some serious issues over the past couple of weeks with availability here in Europe.&lt;/p&gt;
&lt;p&gt;A really good, open and detailed &lt;a href=&#34;https://blogs.msdn.microsoft.com/vsoservice/?p=17665&#34;&gt;root cause analysis has just been posted&lt;/a&gt; by the Azure DevOps team at Microsoft. It also covers the mitigations they are putting in place to make sure the same issues do not occur again.&lt;/p&gt;
&lt;p&gt;We all have to remember that the cloud is not magic. Cloud service providers will have problems like any on-premises service; but trying to hide them does nothing to build confidence. So I for one applaud posts like this. I just wish all cloud service providers were as open when problems occur.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Azure DevOps has had some serious issues over the past couple of weeks with availability here in Europe.</p>
<p>A really good, open and detailed <a href="https://blogs.msdn.microsoft.com/vsoservice/?p=17665">root cause analysis has just been posted</a> by the Azure DevOps team at Microsoft. It also covers the mitigations they are putting in place to make sure the same issues do not occur again.</p>
<p>We all have to remember that the cloud is not magic. Cloud service providers will have problems like any on-premises service; but trying to hide them does nothing to build confidence. So I for one applaud posts like this. I just wish all cloud service providers were as open when problems occur.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Paths in PR Triggers on an Azure DevOps Pipelines Builds</title>
      <link>https://blog.richardfennell.net/posts/using-paths-in-pr-triggers-on-an-azure-devops-pipelines-builds/</link>
      <pubDate>Tue, 02 Oct 2018 19:49:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-paths-in-pr-triggers-on-an-azure-devops-pipelines-builds/</guid>
      <description>&lt;p&gt;When I started creating OSS extensions for Azure DevOps Pipelines (starting on TFSPreview, then VSO, then VSTS, and now named Azure DevOps) I made the mistake of putting all my extensions in a single GitHub repo. I thought this would make life easier; I was wrong, it should have been a repo per extension.&lt;/p&gt;
&lt;p&gt;I have considered splitting the GitHub repo, but as a number of people have forked it, over 100 at the last count, I did not want to start a chain of chaos for loads of people.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When I started creating OSS extensions for Azure DevOps Pipelines (starting on TFSPreview, then VSO, then VSTS, and now named Azure DevOps) I made the mistake of putting all my extensions in a single GitHub repo. I thought this would make life easier; I was wrong, it should have been a repo per extension.</p>
<p>I have considered splitting the GitHub repo, but as a number of people have forked it, over 100 at the last count, I did not want to start a chain of chaos for loads of people.</p>
<p>This initial choice has meant that until very recently I could not use the Pull Request triggers in Azure DevOps Pipelines against my GitHub repo. This was because all builds associated with the repo triggered on any extension PR. So, I had to trigger builds manually, providing the branch name by hand. A bit of a pain, and prone to error.</p>
<p>I am pleased to say that with the roll out of <a href="https://docs.microsoft.com/en-us/azure/devops/release-notes/2018/sprint-140-update">Sprint 140</a> we now get the option to add a path filter to PR triggers on builds linked to a GitHub repo; something we have had for Azure DevOps hosted Git repos since <a href="https://docs.microsoft.com/en-us/azure/devops/release-notes/2017/nov-28-vsts">Sprint 126</a>.</p>
<p>So now my <a href="https://blogs.blackmarble.co.uk/rfennell/2018/03/20/using-vsts-gates-to-help-improve-my-deployment-pipeline-of-vsts-extensions-to-the-visual-studio-marketplace/">release process</a> is improved. If I add a path filter as shown below, my build and hence release process trigger on a PR just as I need.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/10/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/10/image_thumb.png" title="image"></a></p>
<p>It is just a shame that the GitHub PR only checks the build, not the whole release, before saying all is OK. I hope we see linking to complete Azure DevOps Pipelines in the future.</p>
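<p>As an aside, for pipelines defined in YAML rather than the designer, the same effect is achieved with a <code>pr</code> trigger block; the branch and path values below are purely illustrative:</p>

```yaml
# Only trigger a PR build when files under one extension's folder change
pr:
  branches:
    include:
      - master
  paths:
    include:
      - Extensions/FileUtilities/*
```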
]]></content:encoded>
    </item>
    <item>
      <title>Registration open for free Black Marble events on modern process adoption using the cloud</title>
      <link>https://blog.richardfennell.net/posts/registration-open-for-free-black-marble-events-on-modern-process-adoption-using-the-cloud/</link>
      <pubDate>Mon, 01 Oct 2018 16:41:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registration-open-for-free-black-marble-events-on-modern-process-adoption-using-the-cloud/</guid>
      <description>&lt;p&gt;Registration for the new season of &lt;a href=&#34;http://blackmarble.com/events/&#34;&gt;Black Marble events&lt;/a&gt; has just been opened. If you can make it to Yorkshire, why not come to an event (or two)?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://blackmarble.com/events/384&#34;&gt;Azure DevOps: So VSTS has Changed&lt;/a&gt; - 24 Oct 2018&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blackmarble.com/events/386&#34;&gt;Next Generation Business Productivity with Office 365 and Teams&lt;/a&gt; - 25 Oct 2018&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blackmarble.com/events/375&#34;&gt;Moving your Development Process to the Cloud with Azure DevOps&lt;/a&gt; - 14 Nov 2018&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blackmarble.com/events/373&#34;&gt;Modernising Enterprise Information Architecture&lt;/a&gt; - 21 Nov 2018&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blackmarble.com/events/387&#34;&gt;Next Generation Business Productivity with Office 365 and Teams&lt;/a&gt; - 05 Dec 2018&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blackmarble.com/events/376&#34;&gt;Architecture Forum in the North – 11&lt;/a&gt; - 12 Dec 2018&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blackmarble.com/events/383&#34;&gt;Moving your Development Process to the Cloud with Azure DevOps&lt;/a&gt; - 16 Jan 2019&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you are stuck in the grim south, why not look out for us at &lt;a href=&#34;http://blackmarble.com/news/black-marble-at-future-decoded-2018/&#34;&gt;Future Decoded&lt;/a&gt; in London at the end of the month?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Registration for the new season of <a href="http://blackmarble.com/events/">Black Marble events</a> has just been opened. If you can make it to Yorkshire, why not come to an event (or two)?</p>
<ul>
<li><a href="http://blackmarble.com/events/384">Azure DevOps: So VSTS has Changed</a> - 24 Oct 2018</li>
<li><a href="http://blackmarble.com/events/386">Next Generation Business Productivity with Office 365 and Teams</a> - 25 Oct 2018</li>
<li><a href="http://blackmarble.com/events/375">Moving your Development Process to the Cloud with Azure DevOps</a> - 14 Nov 2018</li>
<li><a href="http://blackmarble.com/events/373">Modernising Enterprise Information Architecture</a> - 21 Nov 2018</li>
<li><a href="http://blackmarble.com/events/387">Next Generation Business Productivity with Office 365 and Teams</a> - 05 Dec 2018</li>
<li><a href="http://blackmarble.com/events/376">Architecture Forum in the North – 11</a> - 12 Dec 2018</li>
<li><a href="http://blackmarble.com/events/383">Moving your Development Process to the Cloud with Azure DevOps</a> - 16 Jan 2019</li>
</ul>
<p>If you are stuck in the grim south, why not look out for us at <a href="http://blackmarble.com/news/black-marble-at-future-decoded-2018/">Future Decoded</a> in London at the end of the month?</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2018 Update 3 Released</title>
      <link>https://blog.richardfennell.net/posts/tfs-2018-update-3-released/</link>
      <pubDate>Mon, 17 Sep 2018 13:15:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2018-update-3-released/</guid>
      <description>&lt;p&gt;Whilst I was off work last week TFS 2018 Update 3 was released. As stated in the &lt;a href=&#34;https://docs.microsoft.com/en-us/visualstudio/releasenotes/tfs2018-update3&#34;&gt;2018.3 release notes&lt;/a&gt; this is the final bug fix update release of TFS 2018.&lt;/p&gt;
&lt;p&gt;The next major release of TFS will not be named TFS 2019 as you might have expected, but will use the new name of &lt;a href=&#34;https://azure.microsoft.com/en-us/blog/introducing-azure-devops/&#34;&gt;Azure DevOps Server&lt;/a&gt;. You can see the features planned for this next release in the &lt;a href=&#34;https://docs.microsoft.com/en-us/azure/devops/release-notes/&#34;&gt;Azure DevOps Features Timeline&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst I was off work last week TFS 2018 Update 3 was released. As stated in the <a href="https://docs.microsoft.com/en-us/visualstudio/releasenotes/tfs2018-update3">2018.3 release notes</a> this is the final bug fix update release of TFS 2018.</p>
<p>The next major release of TFS will not be named TFS 2019 as you might have expected, but will use the new name of <a href="https://azure.microsoft.com/en-us/blog/introducing-azure-devops/">Azure DevOps Server</a>. You can see the features planned for this next release in the <a href="https://docs.microsoft.com/en-us/azure/devops/release-notes/">Azure DevOps Features Timeline</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Postmortem published by the Microsoft VSTS Team on last week&#39;s Azure outage</title>
      <link>https://blog.richardfennell.net/posts/postmortem-published-by-the-microsoft-vsts-team-on-last-weeks-azure-outage/</link>
      <pubDate>Tue, 11 Sep 2018 07:40:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/postmortem-published-by-the-microsoft-vsts-team-on-last-weeks-azure-outage/</guid>
      <description>&lt;p&gt;The Azure DevOps (VSTS) team have published the &lt;a href=&#34;https://blogs.msdn.microsoft.com/vsoservice/?p=17485&#34;&gt;promised postmortem&lt;/a&gt; on the outage on the 4th of September. It gives good detail on what actually happened to the South Central Azure Datacenter and how it affected VSTS (as it was then called). More interestingly, it provides a discussion of the mitigations they plan to put in place to stop a single datacentre failure having such a serious effect in the future. Great openness, as always, from the team.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The Azure DevOps (VSTS) team have published the <a href="https://blogs.msdn.microsoft.com/vsoservice/?p=17485">promised postmortem</a> on the outage on the 4th of September. It gives good detail on what actually happened to the South Central Azure Datacenter and how it affected VSTS (as it was then called). More interestingly, it provides a discussion of the mitigations they plan to put in place to stop a single datacentre failure having such a serious effect in the future. Great openness, as always, from the team.</p>
]]></content:encoded>
    </item>
    <item>
      <title>VSTS becomes Azure DevOps</title>
      <link>https://blog.richardfennell.net/posts/vsts-becomes-azure-devops/</link>
      <pubDate>Mon, 10 Sep 2018 15:49:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vsts-becomes-azure-devops/</guid>
      <description>&lt;p&gt;Today Microsoft made a big announcement: VSTS is now Azure DevOps. The big change is they have split VSTS into 5 services you can use together or independently, including Azure Pipelines for CI/CD - free for open source and available in the GitHub CI marketplace. An important thing to note is that &lt;strong&gt;IT IS NOT JUST FOR AZURE.&lt;/strong&gt; Don&amp;rsquo;t be afraid of the name. There are a wide range of connectors to other cloud providers such as AWS and Google Cloud, as well as to many other DevOps tools. &lt;a href=&#34;https://azure.microsoft.com/en-us/blog/introducing-azure-devops/&#34;&gt;Learn more in the official post&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today Microsoft made a big announcement: VSTS is now Azure DevOps. The big change is they have split VSTS into 5 services you can use together or independently, including Azure Pipelines for CI/CD - free for open source and available in the GitHub CI marketplace. An important thing to note is that <strong>IT IS NOT JUST FOR AZURE.</strong> Don&rsquo;t be afraid of the name. There are a wide range of connectors to other cloud providers such as AWS and Google Cloud, as well as to many other DevOps tools. <a href="https://azure.microsoft.com/en-us/blog/introducing-azure-devops/">Learn more in the official post</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Videos do not play in VSTS WIKI via relative links - workaround</title>
      <link>https://blog.richardfennell.net/posts/videos-do-not-play-in-vsts-wiki-via-relative-links-workaround/</link>
      <pubDate>Fri, 31 Aug 2018 12:01:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/videos-do-not-play-in-vsts-wiki-via-relative-links-workaround/</guid>
      <description>&lt;h3 id=&#34;the-problem&#34;&gt;The Problem&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&#34;https://docs.microsoft.com/en-us/vsts/project/wiki/markdown-guidance?view=vsts&#34;&gt;documentation&lt;/a&gt; for the VSTS WIKI suggests you can embed a video in a VSTS WIKI using the markdown/HTML&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;video src=&amp;#34;_media/vstswiki_mid.mp4&amp;#34; width=400 controls&amp;gt;
&amp;lt;/video&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Problem is that this does not seem to work; the MP4 just does not appear, you get an empty video player. However, if you swap to a full URL it does work e.g.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;video src=&amp;#34;https://sec.ch9.ms/ch9/7247/7c8ddc1a-348b-4ba9-ab61-51fded6e7247/vstswiki_high.mp4&amp;#34; width=400 controls&amp;gt;
&amp;lt;/video&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;the-workaround&#34;&gt;The Workaround&lt;/h3&gt;
&lt;p&gt;The workaround is to either place the MP4 file in some URL accessible location e.g. some Azure web space (not really addressing the problem), or more usefully use the &lt;a href=&#34;https://docs.microsoft.com/en-us/rest/api/vsts/git/items/get?view=vsts-rest-4.1&#34;&gt;VSTS API to get the file&lt;/a&gt; out of the repo that backs the WIKI. The format of the HTML tag becomes&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;video src=&amp;#34;https://vstsinstance.visualstudio.com/MyTeamProject/_apis/git/repositories/MyTeamProject.wiki/Items?path=_media%2Fvstswiki_high.mp4&amp;#34; width=400 controls&amp;gt;
&amp;lt;/video&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will get the current version of the file on the default branch; you can add extra parameters to specify versions and branches if required, as per the &lt;a href=&#34;https://docs.microsoft.com/en-us/rest/api/vsts/git/?view=vsts-rest-4.1&#34;&gt;API documentation&lt;/a&gt;. So not a perfect solution, as you have to think about branches and versions, they are not handled automatically, but at least it does work.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="the-problem">The Problem</h3>
<p>The <a href="https://docs.microsoft.com/en-us/vsts/project/wiki/markdown-guidance?view=vsts">documentation</a> for the VSTS WIKI suggests you can embed a video in a VSTS WIKI using the markdown/HTML</p>
<pre tabindex="0"><code>&lt;video src=&#34;_media/vstswiki_mid.mp4&#34; width=400 controls&gt;
&lt;/video&gt;
</code></pre>
<p>Problem is that this does not seem to work; the MP4 just does not appear, you get an empty video player. However, if you swap to a full URL it does work e.g.</p>
<pre tabindex="0"><code>&lt;video src=&#34;https://sec.ch9.ms/ch9/7247/7c8ddc1a-348b-4ba9-ab61-51fded6e7247/vstswiki_high.mp4&#34; width=400 controls&gt;
&lt;/video&gt;
</code></pre>
<h3 id="the-workaround">The Workaround</h3>
<p>The workaround is to either place the MP4 file in some URL accessible location e.g. some Azure web space (not really addressing the problem), or more usefully use the <a href="https://docs.microsoft.com/en-us/rest/api/vsts/git/items/get?view=vsts-rest-4.1">VSTS API to get the file</a> out of the repo that backs the WIKI. The format of the HTML tag becomes</p>
<pre tabindex="0"><code>&lt;video src=&#34;https://vstsinstance.visualstudio.com/MyTeamProject/_apis/git/repositories/MyTeamProject.wiki/Items?path=_media%2Fvstswiki_high.mp4&#34; width=400 controls&gt;
&lt;/video&gt;
</code></pre>
<p>This will get the current version of the file on the default branch; you can add extra parameters to specify versions and branches if required, as per the <a href="https://docs.microsoft.com/en-us/rest/api/vsts/git/?view=vsts-rest-4.1">API documentation</a>. So not a perfect solution, as you have to think about branches and versions, they are not handled automatically, but at least it does work.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Registering an agent with VSTS and getting the message &amp;quot;Agent pool not found&amp;quot;</title>
      <link>https://blog.richardfennell.net/posts/registering-an-agent-with-vsts-and-getting-the-message-agent-pool-not-found/</link>
      <pubDate>Thu, 16 Aug 2018 20:58:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registering-an-agent-with-vsts-and-getting-the-message-agent-pool-not-found/</guid>
      <description>&lt;p&gt;When you want to register a build agent with VSTS, you use the VSTS instance’s URL and a user’s Personal Access Token (PAT). Whilst doing this today I connected to the VSTS instance OK, but got the error &amp;ldquo;Agent pool not found&amp;rdquo; when I was asked to pick the agent pool to add the new agent to.&lt;/p&gt;
&lt;p&gt;As the user whose PAT I was using was a Build Administrator, I was a bit confused, but then I remembered to check their user access level. It was set to Stakeholder; once this was changed to Basic I was able to register the agent without issue.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you want to register a build agent with VSTS, you use the VSTS instance’s URL and a user’s Personal Access Token (PAT). Whilst doing this today I connected to the VSTS instance OK, but got the error &ldquo;Agent pool not found&rdquo; when I was asked to pick the agent pool to add the new agent to.</p>
<p>As the user whose PAT I was using was a Build Administrator, I was a bit confused, but then I remembered to check their user access level. It was set to Stakeholder; once this was changed to Basic I was able to register the agent without issue.</p>
<p>Also, so as not to use up a Basic license when I did not need to, I swapped the user back to being a Stakeholder once the agent was registered. This can be done as the token used for the actual build is not the one used to register the agent, but one assigned at build time by VSTS.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences migrating TFS XML Team Project Templates to Inherited Team Project Templates</title>
      <link>https://blog.richardfennell.net/posts/experiences-migrating-tfs-xml-team-project-templates-to-inherited-team-project-templates/</link>
      <pubDate>Thu, 02 Aug 2018 20:20:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-migrating-tfs-xml-team-project-templates-to-inherited-team-project-templates/</guid>
      <description>&lt;p&gt;You have always been able to customise your Team Projects in TFS, &lt;a href=&#34;https://docs.microsoft.com/en-us/vsts/work/customize/reference/process-templates/customize-process?view=vsts&#34;&gt;by editing a host of XML files&lt;/a&gt;, but it was not a pleasant experience. In VSTS a far more pleasant web-based &lt;a href=&#34;https://docs.microsoft.com/en-us/vsts/organizations/settings/work/customize-process?view=vsts&#34;&gt;inherited customisation model&lt;/a&gt; was added, much to, I think, most administrators’ relief.&lt;/p&gt;
&lt;p&gt;If you used the &lt;a href=&#34;https://docs.microsoft.com/en-us/vsts/articles/migration-overview?view=vsts&#34;&gt;TFS DB migration service&lt;/a&gt; you ended up with a VSTS instance full of the XML-style team projects, and you were stuck there, with no way to change these to the new inherited model; that is, until now, as &lt;a href=&#34;https://blogs.msdn.microsoft.com/devops/2018/07/06/moving-from-hosted-xml-to-inheritance/&#34;&gt;Microsoft have released a preview of a conversion tool&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You have always been able to customise your Team Projects in TFS, <a href="https://docs.microsoft.com/en-us/vsts/work/customize/reference/process-templates/customize-process?view=vsts">by editing a host of XML files</a>, but it was not a pleasant experience. In VSTS a far more pleasant web-based <a href="https://docs.microsoft.com/en-us/vsts/organizations/settings/work/customize-process?view=vsts">inherited customisation model</a> was added, much to, I think, most administrators’ relief.</p>
<p>If you used the <a href="https://docs.microsoft.com/en-us/vsts/articles/migration-overview?view=vsts">TFS DB migration service</a> you ended up with a VSTS instance full of the XML-style team projects, and you were stuck there, with no way to change these to the new inherited model; that is, until now, as <a href="https://blogs.msdn.microsoft.com/devops/2018/07/06/moving-from-hosted-xml-to-inheritance/">Microsoft have released a preview of a conversion tool</a>.</p>
<p>I have been trying this tool to migrate all our active XML based team projects to Inherited equivalents, and it has worked very well for me, but I have to say, we don’t have any hugely complex customisations, so your mileage may vary depending on how much you modified your XML based team project templates.</p>
<p>However, I wanted to go further than the basic process as <a href="https://blogs.msdn.microsoft.com/devops/2018/07/06/moving-from-hosted-xml-to-inheritance/">documented</a>.</p>
<p>Since moving to VSTS all our new team projects have been created using an inherited template based on Scrum. I wanted to move all our active XML-based team projects to this standardised template. This took a little extra work.</p>
<p>The basic process to change the Inherited Process is easy:</p>
<ol>
<li>In the instance admin page <code>https://[instance].visualstudio.com/_settings/process</code> pick the target template</li>
<li>Click on the ellipse (…) and pick ‘Change team project to use ….’</li>
<li>Pick the team project you wish to migrate and you are done</li>
</ol>
<p><strong>REMEMBER</strong>: A really nice touch (with XML and Inherited templates) is that you can just switch back if you don’t like the result, no data is lost when you swap templates, but some of it might be hidden as fields are not shown on the new work item types.</p>
<p>The problem I had was that our old XML-based team project templates had different customisations from our current inherited standard. To address this I needed to:</p>
<ul>
<li>Add a few fields to our standard inherited template based on Scrum, for critical legacy customisations</li>
<li>Handle one case where a work item type in use in the XML template had no match in our current template. Here I changed the work item type to its equivalent (we had used a variety of ‘flavours of PBI’, so this was not a major problem), adding a tag to the work items so they could be identified, then deleting the offending work item type.</li>
<li>I could conceive of a need for more complex remapping of work item types and fields, but I was able to avoid this.</li>
</ul>
<p>Once this was done, I was able to migrate all the team projects to my target process template.</p>
<p>So a very nice experience, and one that means we can now make sure all our team projects use the same set of customisations. No longer do we need to worry about maintaining customisations in both the XML and Inherited models.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Hello sign in does not work on my first generation Surface Book after a rebuild - fixed</title>
      <link>https://blog.richardfennell.net/posts/hello-sign-in-does-not-work-on-my-first-generation-surface-book-after-a-rebuild-fixed/</link>
      <pubDate>Thu, 02 Aug 2018 16:13:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/hello-sign-in-does-not-work-on-my-first-generation-surface-book-after-a-rebuild-fixed/</guid>
      <description>&lt;p&gt;I have just rebuilt my first generation Surface Book from our company standard Windows image. These images are used by all our staff all the time without issue (on Surface and Lenovo devices), so I was not expecting any problems.&lt;/p&gt;
&lt;p&gt;I used to rebuild my PC every 6 months or so, but got out of the habit when I moved to a model I could not swap the hard drive out of as a backup during the process (using a new disk for the new install). I got around this by using &lt;a href=&#34;https://docs.microsoft.com/en-us/sysinternals/downloads/disk2vhd&#34;&gt;Disk2VHD&lt;/a&gt;; not quite as good, as I can’t just swap the disk back in, but I won’t have lost any stray files, even though I always aim to keep data in OneDrive, Source Control or SharePoint, so it should not have been an issue anyway.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just rebuilt my first generation Surface Book from our company standard Windows image. These images are used by all our staff all the time without issue (on Surface and Lenovo devices), so I was not expecting any problems.</p>
<p>I used to rebuild my PC every 6 months or so, but got out of the habit when I moved to a model I could not swap the hard drive out of as a backup during the process (using a new disk for the new install). I got around this by using <a href="https://docs.microsoft.com/en-us/sysinternals/downloads/disk2vhd">Disk2VHD</a>; not quite as good, as I can’t just swap the disk back in, but I won’t have lost any stray files, even though I always aim to keep data in OneDrive, Source Control or SharePoint, so it should not have been an issue anyway.</p>
<p>Anyway, the rebuild went fine, no issues until I tried to enable <a href="https://support.microsoft.com/en-gb/help/17215/windows-10-what-is-hello">Hello</a> to login using the camera. The process seemed to start OK, the wizard ran, but after a reboot there was no Hello login option.</p>
<p>After a bit of digging I found that in device manager there were no imaging devices at all – strange as the Camera App and Skype worked OK.</p>
<p>The Internet proved of little help, suggesting the usual set of ‘you have a virus, install our tool’ answers, but after much more digging around I found that the cameras were all under ‘System Devices’ in device manager. So I then…</p>
<ol>
<li>Uninstalled them all (front, front IR and back)</li>
<li>Scanned for hardware changes (they reappeared, still in ‘System Devices’)</li>
<li>Ran a Windows Update and some new Intel Drivers were downloaded</li>
<li>I could then run the Hello setup wizard again (it seems all the old settings were lost)</li>
</ol>
<p>That was all much more complex than hoped for. My guess is that the system rebuild changed some firmware in a strange way that caused the misdetection of the cameras.</p>
<p>Anyway it is working now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting Remote Desktop Manager 2.7 working sanely with mixed high DPI screens</title>
      <link>https://blog.richardfennell.net/posts/getting-remote-desktop-manager-2-7-working-sanely-with-mixed-high-dpi-screens/</link>
      <pubDate>Fri, 08 Jun 2018 10:00:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-remote-desktop-manager-2-7-working-sanely-with-mixed-high-dpi-screens/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 3 July 2018&lt;/strong&gt; - A colleague, &lt;a href=&#34;https://blogs.blackmarble.co.uk/adavidson/&#34;&gt;Andy Davidson&lt;/a&gt;, suggested &lt;a href=&#34;https://mremoteng.org/&#34;&gt;mRemoteNG&lt;/a&gt; as an alternative tool to address this issue. mRemoteNG also has the advantage that it supports most major remoting technologies, not just RDP, so I am giving that a try for a while.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This is one of those posts I do mostly for myself so I don’t forget how I did something; it is all based on answers on &lt;a href=&#34;https://superuser.com/questions/891413/remote-connection-desktop-manager-2-7-does-not-support-dpi-scaling-anymore&#34;&gt;SuperUser.com&lt;/a&gt;, I can claim no credit.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I have a SurfaceBook (first generation) and when I am in the office it is linked to an external monitor, with a different, lower DPI, via a dock. If I use Remote Desktop (MSTSC) as built into Windows 10, I can drag sessions between the two monitors and the DPI shift is handled OK. However, if I use my preferred tool &lt;a href=&#34;https://www.microsoft.com/en-gb/download/details.aspx?id=44989&#34;&gt;Remote Desktop Manager 2.7&lt;/a&gt; (as it allows me to store all my commonly used RDP settings) I am in DPI hell. I either get huge fonts or microscopic ones. This is bad whether working on the single high DPI laptop screen or with an external screen.&lt;/p&gt;
&lt;p&gt;As the &lt;a href=&#34;https://superuser.com/questions/891413/remote-connection-desktop-manager-2-7-does-not-support-dpi-scaling-anymore&#34;&gt;&lt;em&gt;SuperUser.com&lt;/em&gt;&lt;/a&gt; post states, the answer is to change the compatibility settings for the manager by right clicking on the file &amp;ldquo;C:\Program Files (x86)\Microsoft\Remote Desktop Connection Manager\RDCMan.exe&amp;rdquo;, selecting compatibility, changing the high DPI settings, and unchecking the high DPI scaling override &lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/06/image.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/06/image_thumb.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Once this was done, I have readable resolutions on all screens. Why did I not do a better search months ago?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 3 July 2018</strong> - A colleague, <a href="https://blogs.blackmarble.co.uk/adavidson/">Andy Davidson</a>, suggested <a href="https://mremoteng.org/">mRemoteNG</a> as an alternative tool to address this issue. mRemoteNG also has the advantage that it supports most major remoting technologies, not just RDP, so I am giving that a try for a while.</p>
<p><em>This is one of those posts I do mostly for myself so I don’t forget how I did something; it is all based on answers on <a href="https://superuser.com/questions/891413/remote-connection-desktop-manager-2-7-does-not-support-dpi-scaling-anymore">SuperUser.com</a>, I can claim no credit.</em></p>
<p>I have a SurfaceBook (first generation) and when I am in the office it is linked to an external monitor, with a different, lower DPI, via a dock. If I use Remote Desktop (MSTSC) as built into Windows 10, I can drag sessions between the two monitors and the DPI shift is handled OK. However, if I use my preferred tool <a href="https://www.microsoft.com/en-gb/download/details.aspx?id=44989">Remote Desktop Manager 2.7</a> (as it allows me to store all my commonly used RDP settings) I am in DPI hell. I either get huge fonts or microscopic ones. This is bad whether working on the single high DPI laptop screen or with an external screen.</p>
<p>As the <a href="https://superuser.com/questions/891413/remote-connection-desktop-manager-2-7-does-not-support-dpi-scaling-anymore"><em>SuperUser.com</em></a> post states, the answer is to change the compatibility settings for the manager by right clicking on the file &ldquo;C:\Program Files (x86)\Microsoft\Remote Desktop Connection Manager\RDCMan.exe&rdquo;, selecting compatibility, changing the high DPI settings, and unchecking the high DPI scaling override <a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/06/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/06/image_thumb.png" title="image"></a></p>
<p>Once this was done, I have readable resolutions on all screens. Why did I not do a better search months ago?</p>
]]></content:encoded>
    </item>
    <item>
      <title>A workaround for the error &amp;lsquo;TF14061: The workspace ws_1_18;Project Collection Build Service does not exist&amp;rsquo; when mapping a TFVC workspace</title>
      <link>https://blog.richardfennell.net/posts/a-workaround-for-the-error-tf14061-the-workspace-ws_1_18project-collection-build-service-does-not-exist-when-mapping-a-tfvc-workspace/</link>
      <pubDate>Wed, 30 May 2018 12:17:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-workaround-for-the-error-tf14061-the-workspace-ws_1_18project-collection-build-service-does-not-exist-when-mapping-a-tfvc-workspace/</guid>
      <description>&lt;p&gt;Whilst writing some training material for VSTS I hit a problem creating a TFVC workspace. I was using VS2017, linking a TFVC Repo to a local folder. I was connecting to the VSTS instance using an MSA.&lt;/p&gt;
&lt;p&gt;In Team Explorer, when I came to do a ‘Map &amp;amp; Get’ to map the source locations I got a ‘TF14061: The workspace ws_1_18;Project Collection Build Service does not exist’ error&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/05/image.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/05/image_thumb.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Strange error, which I could see no obvious reason for. Turns out the workaround was just to press the ‘Advanced’ link/button and accept the defaults&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst writing some training material for VSTS I hit a problem creating a TFVC workspace. I was using VS2017, linking a TFVC Repo to a local folder. I was connecting to the VSTS instance using an MSA.</p>
<p>In Team Explorer, when I came to do a ‘Map &amp; Get’ to map the source locations I got a ‘TF14061: The workspace ws_1_18;Project Collection Build Service does not exist’ error</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/05/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/05/image_thumb.png" title="image"></a></p>
<p>Strange error, which I could see no obvious reason for. Turns out the workaround was just to press the ‘Advanced’ link/button and accept the defaults</p>
]]></content:encoded>
    </item>
    <item>
      <title>Still a few spaces left at the Yorkshire Global DevOps BootCamp Venue hosted at Black Marble</title>
      <link>https://blog.richardfennell.net/posts/still-a-few-spaces-left-at-the-yorkshire-global-devops-bootcamp-venue-hosted-at-black-marble/</link>
      <pubDate>Fri, 25 May 2018 14:54:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/still-a-few-spaces-left-at-the-yorkshire-global-devops-bootcamp-venue-hosted-at-black-marble/</guid>
      <description>&lt;p&gt;There are still a few spaces left at the Yorkshire &lt;a href=&#34;https://globaldevopsbootcamp.com/&#34;&gt;Global DevOps BootCamp&lt;/a&gt; Venue hosted at Black Marble &lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/2018/05/GDBC-3-Twitter_preview-300x150.png&#34;&gt; Come and learn about all things cool in DevOps, including&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Video keynote by Microsoft&lt;/li&gt;
&lt;li&gt;Local keynote: Breaking down the Monolith&lt;/li&gt;
&lt;li&gt;Hackathon/HandsOn DevOps challenges. The hands-on part will be based on a common application where we try to solve as many challenges as possible, including ideas like
&lt;ul&gt;
&lt;li&gt;How to containerize an existing application&lt;/li&gt;
&lt;li&gt;How to add telemetry (app insights) to the application and gather hypothesis information&lt;/li&gt;
&lt;li&gt;How to use telemetry to monitor availability&lt;/li&gt;
&lt;li&gt;How to use feature toggles to move application into production without disrupting end users&lt;/li&gt;
&lt;li&gt;How to use release gates&lt;/li&gt;
&lt;li&gt;How to make DB schema changes&lt;/li&gt;
&lt;li&gt;How to use Blue-Green Deployments&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And there is free lunch too! &lt;a href=&#34;https://www.eventbrite.com/e/global-devops-bootcamp-2018-black-marble-tickets-42496702782&#34;&gt;To register click here&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There are still a few spaces left at the Yorkshire <a href="https://globaldevopsbootcamp.com/">Global DevOps BootCamp</a> Venue hosted at Black Marble <img loading="lazy" src="/wp-content/uploads/sites/2/2018/05/GDBC-3-Twitter_preview-300x150.png"> Come and learn about all things cool in DevOps, including</p>
<ul>
<li>Video keynote by Microsoft</li>
<li>Local keynote: Breaking down the Monolith</li>
<li>Hackathon/HandsOn DevOps challenges. The hands-on part will be based on a common application where we try to solve as many challenges as possible, including ideas like
<ul>
<li>How to containerize an existing application</li>
<li>How to add telemetry (app insights) to the application and gather hypothesis information</li>
<li>How to use telemetry to monitor availability</li>
<li>How to use feature toggles to move application into production without disrupting end users</li>
<li>How to use release gates</li>
<li>How to make DB schema changes</li>
<li>How to use Blue-Green Deployments</li>
</ul>
</li>
</ul>
<p>And there is free lunch too! <a href="https://www.eventbrite.com/e/global-devops-bootcamp-2018-black-marble-tickets-42496702782">To register click here</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Where do I put my testing effort?</title>
      <link>https://blog.richardfennell.net/posts/where-do-i-put-my-testing-effort/</link>
      <pubDate>Thu, 17 May 2018 20:28:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-do-i-put-my-testing-effort/</guid>
      <description>&lt;p&gt;In the past I have blog on the subject of &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2013/02/26/for-those-hard-to-mock-moments-microsoft-fakes-or-typemock-isolator/&#34;&gt;using advanced unit test mocking tools to ‘mock the unmockable’&lt;/a&gt;. It is an interesting question to revisit; how important today are units tests where this form of complex mocking is required?&lt;/p&gt;
&lt;p&gt;Of late I have certainly seen a bit of a move towards using more functional style tests; still using unit test frameworks, but relying on APIs as access points with real backend systems such as DBs and WebServices being deployed as test environments.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In the past I have blogged on the subject of <a href="https://blogs.blackmarble.co.uk/rfennell/2013/02/26/for-those-hard-to-mock-moments-microsoft-fakes-or-typemock-isolator/">using advanced unit test mocking tools to ‘mock the unmockable’</a>. It is an interesting question to revisit; how important today are unit tests where this form of complex mocking is required?</p>
<p>Of late I have certainly seen a bit of a move towards using more functional style tests; still using unit test frameworks, but relying on APIs as access points with real backend systems such as DBs and WebServices being deployed as test environments.</p>
<p>This practice is made far easier than in the past due to cloud services such as <a href="https://azure.microsoft.com/en-gb/">Azure</a> and tools to treat the creation of complex environments as code, such as <a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview">Azure Resource Manager</a> and <a href="https://azure.microsoft.com/en-gb/services/devtest-lab/">Azure DevTest Labs</a>. Both <a href="https://blogs.blackmarble.co.uk/rfennell/tag/devtest-labs/">myself</a> and my colleague <a href="https://blogs.blackmarble.co.uk/rhepworth/category/lability/">Rik Hepworth</a> have posted widely on the provisioning of such systems.</p>
<p>However, this type of functional testing is still fairly slow; the environments have to be provisioned from scratch, or spun up from saved images, and it all takes time. Hence, there is still a place for fast unit tests, and sometimes, usually due to limitations of legacy codebases that were not designed for testing, there is still a need to ‘mock the unmockable’.</p>
<p>This is where tools like <a href="https://www.typemock.com/">Typemock Isolator</a> and <a href="https://docs.microsoft.com/en-us/visualstudio/test/isolating-code-under-test-with-microsoft-fakes">Microsoft Fakes</a> are still needed. </p>
<p>It has to be said, both are premium products; you need the top Enterprise SKU of Visual Studio to get Fakes, or a Typemock license for Isolator, but when you need their functionality they are the only option. Whether this be to <a href="https://blogs.blackmarble.co.uk/rfennell/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator/">mock out a product like SharePoint for faster development cycles</a>, or to provide a great base on which to write <a href="http://www.everydayunittesting.com/2014/09/from-legacy-code-to-testable-code-introduction.html">unit tests for a legacy code base prior to refactoring</a>.</p>
<p><a href="https://blogs.blackmarble.co.uk/rfennell/2013/02/26/for-those-hard-to-mock-moments-microsoft-fakes-or-typemock-isolator/">As I have said before</a>, for me Typemock Isolator easily has the edge over Microsoft Fakes; the syntax is so much easier to use. Hence, it is great to see Typemock Isolator being further extended with updated versions for <a href="https://www.typemock.com/isolatorpp-product-page/">C++ and now Linux</a>.</p>
<p>So in answer to my own question, testing is a layered process. Where you put your investment is going to be down to your system’s needs. It is true, I think we are all going to invest a bit more in functional testing on ‘cheap to build and run’ cloud test labs. But you can’t beat the speed of tools like Typemock for those particularly nasty legacy code bases where it is hard to create a copy of the environment in a modern test lab.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Making sure when you use VSTS build numbers to version Android Packages they can be uploaded to the Google Play Store</title>
      <link>https://blog.richardfennell.net/posts/making-sure-when-you-use-vsts-build-numbers-to-version-android-packages-they-can-be-uploaded-to-the-google-play-store/</link>
      <pubDate>Sat, 12 May 2018 15:13:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/making-sure-when-you-use-vsts-build-numbers-to-version-android-packages-they-can-be-uploaded-to-the-google-play-store/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;I have a &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task&#34;&gt;VSTS build extension that can apply a VSTS generated build number to Android APK packages&lt;/a&gt;. This takes a VSTS build number and generates, and applies, the Version Name (a string) and Version Code (an integer) to the APK file manifest.&lt;/p&gt;
&lt;p&gt;The default parameters mean that the behaviour of this task is to assume (using a regular expression) the VSTS build number has at least three fields &lt;strong&gt;major.minor.patch&lt;/strong&gt; e.g. 1.2.3, and uses the 1.2 as the Version Name and the 3 as the Version Code.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>I have a <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">VSTS build extension that can apply a VSTS generated build number to Android APK packages</a>. This takes a VSTS build number and generates, and applies, the Version Name (a string) and Version Code (an integer) to the APK file manifest.</p>
<p>The default parameters mean that the behaviour of this task is to assume (using a regular expression) the VSTS build number has at least three fields <strong>major.minor.patch</strong> e.g. 1.2.3, and uses the 1.2 as the Version Name and the 3 as the Version Code.</p>
<p>Now, it is important to note that the Version Code must be an integer between 1 and 2100000000 and for the <a href="https://developer.android.com/studio/publish/versioning">Google Play Store it must increase between versions</a>.</p>
<p>So maybe these default parameter values for this task are not the best options?</p>
<h3 id="the-problem-the-way-we-use-the-task">The problem with the way we use the task</h3>
<p>When we use the Android Manifest Versioning task for our <a href="http://tuserv.com/">tuServ Android packages</a> we use different parameter values, but we recently found these values still cause a problem.</p>
<p>Our VSTS build generates build numbers with four parts <strong>$(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r)</strong></p>
<ul>
<li>$(Major) – set as a VSTS variable e.g. 1</li>
<li>$(Minor) – set as a VSTS variable e.g. 2</li>
<li>$(Year:yy)$(DayOfYear) – the two-digit year followed by the day of the year e.g. 18101</li>
<li>$(rev:r) – the build count for the build definition for the day e.g. 1</li>
</ul>
<p>So we end up with build numbers in the form 1.2.18101.1</p>
<p>The Android version task is set in the build to make</p>
<ul>
<li>the Version Name {1}.{2}.{3}.{4} – 1.2.18101.1</li>
<li>the Version Code {1}{2}{3}{4} – 12181011</li>
</ul>
<p>The problem is that if we do more than 9 builds in a day, which is likely given our continuous integration process, and release one of the later builds to the Google Play store, then the next day any build with a single-digit revision cannot be released to the store, as its Version Code is lower than the previously published one e.g.</p>
<ul>
<li>day 1 the published build is 1.2.18101.11 so the Version Code is 121810111</li>
<li>day 2 the published build is 1.2.18102.1 so the Version Code is 12181021</li>
</ul>
<p>So the second Version Code is 10x smaller, hence the package cannot be published.</p>
<h3 id="the-solution">The Solution</h3>
<p>The answer in the end was straightforward and was found by one of our engineers, <a href="https://twitter.com/sarkimedes">Peter (@sarkimedes)</a>. It was to change the final block of the VSTS build number to <strong>$(rev:rrr)</strong>, as detailed in the <a href="https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/build/options?view=vsts">VSTS documentation</a>, thus zero padding the revision from .1 to .001. This allows up to 999 builds per day before the Version Code order-of-magnitude problem occurs. Obviously, if you think you might do more than that many internal builds in a day you could zero pad as many digits as you want.</p>
<p>So using the new build version number</p>
<ul>
<li>day 1 the published build is 1.2.18101.011 so the Version Code is 1218101011</li>
<li>day 2 the published build is 1.2.18102.001 so the Version Code is 1218102001</li>
</ul>
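<p>To make the ordering issue concrete, here is a minimal Python sketch (not the extension’s actual code, which edits the APK manifest) of how concatenating the build number fields forms the Version Code, and why zero padding the revision keeps it increasing across days:</p>

```python
def version_code(major, minor, yyddd, rev, rev_width=1):
    """Build an Android Version Code by concatenating the build number
    fields, zero padding the revision to rev_width digits."""
    return int(f"{major}{minor}{yyddd}{str(rev).zfill(rev_width)}")

# Unpadded revision ($(rev:r)): day 1 publishes revision 11, day 2 revision 1.
assert version_code(1, 2, 18101, 11) == 121810111
assert version_code(1, 2, 18102, 1) == 12181021    # smaller, so the store rejects it

# Zero padded revision ($(rev:rrr)): ordering is preserved across days.
assert version_code(1, 2, 18101, 11, rev_width=3) == 1218101011
assert version_code(1, 2, 18102, 1, rev_width=3) == 1218102001  # larger, accepted
```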
<p>So a nice fix without any need to alter the Android Manifest Versioning task’s code. However, changing the default Version Code parameter to {1}{2}{3} is probably advisable.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Major new release of my VSTS Cross Platform Extension to build Release Notes</title>
      <link>https://blog.richardfennell.net/posts/major-new-release-of-my-vsts-cross-platform-extension-to-build-release-notes/</link>
      <pubDate>Fri, 27 Apr 2018 13:18:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/major-new-release-of-my-vsts-cross-platform-extension-to-build-release-notes/</guid>
      <description>&lt;p&gt;Today I have released a major new release, V2, of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes&#34;&gt;VSTS Cross Platform Extension to build release notes&lt;/a&gt;. This new version is all down to the efforts of &lt;a href=&#34;https://github.com/gregpakes&#34;&gt;Greg Pakes&lt;/a&gt; who has completely re-written the task to use newer VSTS APIs.&lt;/p&gt;
&lt;p&gt;A minor issue is that this re-write has introduced a couple of breaking changes, as detailed below and on the &lt;a href=&#34;https://github.com/rfennell/vNextBuild/wiki/GenerateReleaseNotes---Node-based-Cross-Platform-Task&#34;&gt;project wiki&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;oAuth script access has to be enabled on the agent running the task&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/04/image.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/04/image_thumb.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today I have released a major new release, V2, of my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">VSTS Cross Platform Extension to build release notes</a>. This new version is all down to the efforts of <a href="https://github.com/gregpakes">Greg Pakes</a> who has completely re-written the task to use newer VSTS APIs.</p>
<p>A minor issue is that this re-write has introduced a couple of breaking changes, as detailed below and on the <a href="https://github.com/rfennell/vNextBuild/wiki/GenerateReleaseNotes---Node-based-Cross-Platform-Task">project wiki</a></p>
<ul>
<li>oAuth script access has to be enabled on the agent running the task</li>
</ul>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/04/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/04/image_thumb.png" title="image"></a></p>
<ul>
<li>There are minor changes in the template format, but for the good, as it means both TFVC and GIT based releases now use a common template format. Samples can be found in the <a href="https://github.com/rfennell/vNextBuild/tree/master/SampleTemplates/XplatGenerateReleaseNotes%20%28Node%20based%29">project repo</a></li>
</ul>
<p>Because of the breaking changes, we made the decision to release both V1 and V2 of the task in the same extension package, so not forcing anyone to update unless they wish to. This is a technique I had not tried before, but it seems to work well in testing.</p>
<p>Hope people still find the task of use and thanks again to Greg for all the work on the extension</p>
]]></content:encoded>
    </item>
    <item>
      <title>Backing up your TFVC and Git Source from VSTS</title>
      <link>https://blog.richardfennell.net/posts/backing-up-your-tfvc-and-git-source-from-vsts/</link>
      <pubDate>Fri, 20 Apr 2018 12:49:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/backing-up-your-tfvc-and-git-source-from-vsts/</guid>
      <description>&lt;h3 id=&#34;the-issue&#34;&gt;The Issue&lt;/h3&gt;
&lt;p&gt;Azure is a highly resilient service, and &lt;a href=&#34;https://azure.microsoft.com/en-gb/support/legal/sla/visual-studio-team-services/v1_2/&#34;&gt;VSTS has excellent SLAs&lt;/a&gt;. However, a question that is often asked is ‘How do I backup my VSTS instance?’. The simple answer is you don’t. Microsoft handle keeping the instance up, patched and serviceable. Hence, there is no built in means for you to get a local copy of all your source code, work items or CI/CD definitions. &lt;a href=&#34;https://visualstudio.uservoice.com/forums/330519-visual-studio-team-services/suggestions/5339461-provide-a-backup-service-for-visual-studio-team-se&#34;&gt;Though there have been requests for such a service&lt;/a&gt;. This can be an issue for some organisations, particularly for source control, where there can be a need to have a way to keep a private copy of source code for escrow, DR or similar purposes.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="the-issue">The Issue</h3>
<p>Azure is a highly resilient service, and <a href="https://azure.microsoft.com/en-gb/support/legal/sla/visual-studio-team-services/v1_2/">VSTS has excellent SLAs</a>. However, a question that is often asked is ‘How do I backup my VSTS instance?’. The simple answer is you don’t. Microsoft handle keeping the instance up, patched and serviceable. Hence, there is no built in means for you to get a local copy of all your source code, work items or CI/CD definitions. <a href="https://visualstudio.uservoice.com/forums/330519-visual-studio-team-services/suggestions/5339461-provide-a-backup-service-for-visual-studio-team-se">Though there have been requests for such a service</a>. This can be an issue for some organisations, particularly for source control, where there can be a need to have a way to keep a private copy of source code for escrow, DR or similar purposes.</p>
<h3 id="a-solution">A Solution</h3>
<p>To address this issue I decided to write a <a href="https://gist.github.com/rfennell/dc9e49a40d98d77c26698574d298dc53">PowerShell script</a> to download all the Git and TFVC source in a VSTS instance. The following tactics were used:</p>
<ul>
<li>Using the REST API to download each project’s TFVC code as a ZIP file. The use of a ZIP file avoids any long file path issues, a common problem with larger Visual Studio solutions with complex names</li>
<li>Clone each Git repo. I could download single Git branches as ZIP files via the API as per TFVC, but this seemed a poorer solution given how important branches are in Git.</li>
</ul>
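<p>To illustrate the two tactics above, here is a short Python sketch; the working implementation is the linked PowerShell script, and the exact shape of the TFVC REST URL below is my assumption based on the ‘items’ API, so treat it as illustrative only:</p>

```python
from urllib.parse import quote

def tfvc_zip_url(instance, project):
    # Download a whole TFVC project as a single ZIP file; one archive means
    # no long file path issues on disk. (Hypothetical URL shape.)
    return (f"https://{instance}.visualstudio.com/{quote(project)}"
            f"/_apis/tfvc/items?path={quote('$/' + project)}&$format=zip")

def git_clone_command(remote_url, target_dir):
    # Clone rather than ZIP a Git repo, so every branch is preserved.
    return ["git", "clone", remote_url, target_dir]

url = tfvc_zip_url("myinstance", "MyProject")
cmd = git_clone_command("https://myinstance.visualstudio.com/_git/MyRepo", "backup/MyRepo")
```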
<p>So that the process was run on a regular basis, I designed it to be run within a VSTS build. Again here I had choices:</p>
<ul>
<li>To pass in a Personal Access Token (PAT) to provide access rights to read the required source to be backed up. This has the advantage that the script can be run inside or outside of a VSTS build. It also means that a single VSTS build can back up other VSTS instances as long as it has a suitable PAT for access</li>
<li>To use the <a href="https://stackoverflow.com/questions/38670306/executing-git-commands-inside-a-build-job-in-visual-studio-team-services-was-vs">System Token already available to the build agent</a>. This makes the script very neat, and PATs won’t expire, but it means the script only works within a VSTS build, and can only back up the VSTS instance the build is running on.</li>
</ul>
<p>I chose the former, so a single scheduled build could back up all my VSTS instances by running the script a number of times with different parameters</p>
<script src="https://gist.github.com/rfennell/dc9e49a40d98d77c26698574d298dc53.js"></script>
<p>To use this script you just pass in</p>
<ul>
<li>
<p>The name of the instance to backup</p>
</li>
<li>
<p>A valid PAT for the named instance</p>
</li>
<li>
<p>The path to back up to, which can be a UNC share, assuming the build agent has rights to the location</p>
</li>
</ul>
<h3 id="whats-next">What’s Next</h3>
<p>The obvious next step is to convert the PowerShell script to a VSTS extension, at which point it would make sense to make it optional whether to use a provided PAT or the System Access Token. I could also add code to allow a number of cycling backups to be taken e.g. keep the last 3 backups. These are maybe something for the future, but it does not really seem a good return on investment at this time to package up a working script as an extension just for a single VSTS instance.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Oops, I made that test VSTS extension public by mistake, what do I do now?</title>
      <link>https://blog.richardfennell.net/posts/opps-i-made-that-test-vsts-extension-public-by-mistake-what-do-i-do-now/</link>
      <pubDate>Sat, 14 Apr 2018 13:50:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/opps-i-made-that-test-vsts-extension-public-by-mistake-what-do-i-do-now/</guid>
      <description>&lt;p&gt;I recently, whilst changing a &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/09/major-update-to-my-cicd-process-for-vsts-extensions/&#34;&gt;CI/CD release pipeline&lt;/a&gt;, updated what was previously a private version of a VSTS extension in the &lt;a href=&#34;https://marketplace.visualstudio.com/&#34;&gt;VSTS Marketplace&lt;/a&gt; with a version of the VSIX package set to be public.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note, in my CI/CD process I have a private and public version of each extension (set of tasks), the former is used for functional testing within the CD process, the latter is the one everyone can see&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;So, this meant I had two public versions of the same extension, which was confusing.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently, whilst changing a <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/09/major-update-to-my-cicd-process-for-vsts-extensions/">CI/CD release pipeline</a>, updated what was previously a private version of a VSTS extension in the <a href="https://marketplace.visualstudio.com/">VSTS Marketplace</a> with a version of the VSIX package set to be public.</p>
<p><em>Note, in my CI/CD process I have a private and public version of each extension (set of tasks), the former is used for functional testing within the CD process, the latter is the one everyone can see</em>.</p>
<p>So, this meant I had two public versions of the same extension, which was confusing.</p>
<p>Turns out you can’t change a public extension back to be private, either via the UI or by uploading a corrected VSIX. Also you can’t delete any public extension that has ever been downloaded, and my previously private one had been downloaded once, by me for testing.</p>
<p>So my only option was to un-publish the previously private extension so only the correct version was visible in the public marketplace.</p>
<p>This meant I had to also alter my CI/CD process to change the extensionID of my private extension so I could publish a new private version of the extension.</p>
<p>Luckily, as none of the GUIDs for the tasks within the extension changed, once I had installed the new version of the previously mispublished extension in my test VSTS instance my pipeline still worked.</p>
<p>The only downside is I am left with an un-published ‘dead’ version listed in my private view of the marketplace. This is not a problem, it just does not look ‘neat and tidy’</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for 2755 and 1632 &amp;lsquo;The Temp folder is on a drive that is full or is inaccessible&amp;rsquo; errors</title>
      <link>https://blog.richardfennell.net/posts/fix-for-2755-and-1632-the-temp-folder-is-on-a-drive-that-is-full-or-is-inaccessible-errors/</link>
      <pubDate>Wed, 21 Mar 2018 17:01:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-2755-and-1632-the-temp-folder-is-on-a-drive-that-is-full-or-is-inaccessible-errors/</guid>
      <description>&lt;p&gt;Whilst trying to install an MSI package we kept getting the errors 2755 and 1632 ‘The Temp folder is on a drive that is full or is inaccessible’.&lt;/p&gt;
&lt;p&gt;After much fiddling we found the problem was that the &lt;strong&gt;%systemroot%\installer&lt;/strong&gt; folder was missing. Once this was manually re-added, the MSIs installed without a problem. The actual TEMP folder setting was a red herring&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst trying to install an MSI package we kept getting the errors 2755 and 1632 ‘The Temp folder is on a drive that is full or is inaccessible’.</p>
<p>After much fiddling we found the problem was that the <strong>%systemroot%\installer</strong> folder was missing. Once this was manually re-added, the MSIs installed without a problem. The actual TEMP folder setting was a red herring</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using VSTS Gates to help improve my deployment pipeline of VSTS Extensions to the Visual Studio Marketplace</title>
      <link>https://blog.richardfennell.net/posts/using-vsts-gates-to-help-improve-my-deployment-pipeline-of-vsts-extensions-to-the-visual-studio-marketplace/</link>
      <pubDate>Tue, 20 Mar 2018 11:30:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-vsts-gates-to-help-improve-my-deployment-pipeline-of-vsts-extensions-to-the-visual-studio-marketplace/</guid>
      <description>&lt;p&gt;My existing &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/09/major-update-to-my-cicd-process-for-vsts-extensions/&#34;&gt;VSTS CI/CD process&lt;/a&gt; has a problem: the deployment of a VSTS extension, from the moment it is uploaded to when its tasks are available to a build agent, is not instantaneous. The process can potentially take a few minutes to roll out. This delay makes a perfect candidate for using &lt;a href=&#34;https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/release/approvals/gates&#34;&gt;VSTS Release Gates&lt;/a&gt;: using a gate to make sure the expected version of a task is available to an agent before running the next stage of the CD pipeline, e.g. waiting after deploying a private build of an extension before trying to run functional tests. The problem is how to achieve this with the current VSTS gate options.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My existing <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/09/major-update-to-my-cicd-process-for-vsts-extensions/">VSTS CI/CD process</a> has a problem: the deployment of a VSTS extension, from the moment it is uploaded to when its tasks are available to a build agent, is not instantaneous. The process can potentially take a few minutes to roll out. This delay makes a perfect candidate for using <a href="https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/release/approvals/gates">VSTS Release Gates</a>: using a gate to make sure the expected version of a task is available to an agent before running the next stage of the CD pipeline, e.g. waiting after deploying a private build of an extension before trying to run functional tests. The problem is how to achieve this with the current VSTS gate options.</p>
<h3 id="what-did-not-work">What did not work</h3>
<p>My first thought was to use the <a href="https://docs.microsoft.com/en-us/vsts/build-release/tasks/utility/http-rest-api">Invoke HTTP REST API gate</a>, calling the VSTS API <strong>https://<your vsts instance name>.visualstudio.com/_apis/distributedtask/tasks/<GUID of Task></strong>. This API call returns a block of JSON containing details about the deployed task visible to the specified VSTS instance. In theory you can parse this data with a <a href="http://goessner.net/articles/JsonPath/">JSONPATH</a> query in the gate’s success criteria parameter to make sure the correct version of the task is deployed e.g. <strong>eq($.value[?(@.name == &ldquo;BuildRetensionTask&rdquo;)].contributionVersion, &ldquo;1.2.3&rdquo;)</strong>. However, there is a problem: at this time the Invoke HTTP REST API gate task does not support the <strong>==</strong> equality operator in its success criteria field. I understand this will be addressed in the future, but the fact it is currently missing is a blocker for my current needs.</p>
<p>Next I thought I could write a custom VSTS gate. These are basically ‘run on server’ tasks with a suitably crafted JSON manifest. The problem here is that this type of task does not allow any code (Node.JS or PowerShell) to be run; they only have a limited capability to invoke HTTP APIs or write messages to a service bus. So I could not implement the code I needed to process the API response. Another dead end.</p>
<h3 id="what-did-work">What did work</h3>
<p>The answer, after a suggestion from the VSTS Release Management team at Microsoft, was to try the <a href="https://docs.microsoft.com/en-us/vsts/build-release/tasks/utility/azure-function">Azure Function gate</a>. To do this I created <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function">a new Azure Function</a>. I did this using the Azure Portal, picking the consumption billing model, C# and securing the function with a function key, basically the default options. I then added <a href="https://github.com/rfennell/vNextBuild/blob/master/AzureFunctions/VSTSExtensionGate/run.csx">the C# function code (stored in GitHub)</a>, to my newly created Azure Function. This function code takes</p>
<ul>
<li>The name of the VSTS instance</li>
<li>A personal access token (PAT) to access the VSTS instance</li>
<li>The GUID of the task to check for</li>
<li>And the version to check for</li>
</ul>
<p>It then returns a JSON block containing true or false based on whether the required task version can be found; if any of the parameters are invalid an API error is returned. By passing in this set of arguments, my idea was that a single Azure Function could be used to check for the deployment of all my tasks.</p>
<p><strong>Note:</strong> I do realise I could also have created a release pipeline for the Azure Function itself, but I chose to just create it via the Azure Portal. I know this is not best practice, but this was just a proof of concept. As usual, the danger here is that this proof of concept might be one of those that is too useful and lives forever!</p>
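<p>As an illustration of the contract involved, the sketch below (Python, purely illustrative; the real implementation is the C# <strong>run.csx</strong> linked above, and the exact response shape is an assumption) shows the check the function performs: look for the task GUID at the required version in the <strong>distributedtask/tasks</strong> response and reply with a JSON block carrying a <strong>Deployed</strong> flag.</p>

```python
# Illustrative sketch only - the real gate logic is the C# run.csx linked above.
# The response shape (a "value" list whose entries carry "id" and
# "contributionVersion") is an assumption based on the JSONPATH query discussed
# earlier in the post.
def check_task_deployed(tasks_response: dict, task_guid: str, version: str) -> dict:
    deployed = any(
        task.get("id") == task_guid and task.get("contributionVersion") == version
        for task in tasks_response.get("value", [])
    )
    # The gate later tests this flag with eq(root['Deployed'], 'true')
    return {"Deployed": deployed}
```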
<h3 id="to-use-the-azure-function">To use the Azure Function</h3>
<p>Using the Azure Function is simple</p>
<ul>
<li>Add an Azure Function <a href="https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/release/approvals/gates">gate to a VSTS release</a></li>
<li>Set the <strong>URL</strong> parameter for the Azure Function. This value can be found from the Azure Portal. Note that you don’t need the Function Code query parameter in the URL as this is provided with the next gate parameter. I chose to use a <a href="https://docs.microsoft.com/en-us/vsts/build-release/concepts/library/variable-groups">variable group variable</a> for this parameter so it was easy to reuse between many CD pipelines</li>
<li>Set the <strong>Function Key</strong> parameter for the Azure Function, again you get this from the Azure Portal. This time I used a secure variable group variable</li>
<li>Set the <strong>Method</strong> parameter to POST</li>
<li>Set the <strong>Header</strong> content type as JSON</li>
</ul>
<pre tabindex="0"><code>{
      &#34;Content-Type&#34;: &#34;application/json&#34;
}
</code></pre><ul>
<li>Set the <strong>Body</strong> to contain the details of the VSTS instance and task to check. This time I used a mixture of variable group variables, release specific variables (the GUID) and environment build/release variables. The key here is that I got the version from the primary release artifact <strong>$(BUILD.BUILDNUMBER)</strong>, so the correct version of the task is tested for automatically</li>
</ul>
<pre tabindex="0"><code>{
     &#34;instance&#34;: &#34;$(instance)&#34;,
     &#34;pat&#34;: &#34;$(pat)&#34;,
     &#34;taskguid&#34;: &#34;$(taskGuid)&#34;,
     &#34;version&#34;: &#34;$(BUILD.BUILDNUMBER)&#34;
}
</code></pre><ul>
<li>Finally, set the <strong>Advanced/Completion Event</strong> to ApiResponse with the success criteria of
<pre tabindex="0"><code>eq(root[&#39;Deployed&#39;], &#39;true&#39;)
</code></pre></li>
</ul>
<p>Once this was done I was able to use the Azure Function as a VSTS gate as required.</p>
<h3 id="image"><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/03/image-1.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/03/image_thumb-1.png" title="image"></a></h3>
<h3 id="summary">Summary</h3>
<p>So I now have a gate that makes sure that, for a given VSTS instance, a task of a given version has been deployed. If you need this functionality all you need to do is create your own Azure Function instance, drop in <a href="https://github.com/rfennell/vNextBuild/blob/master/AzureFunctions/VSTSExtensionGate/run.csx">my code</a> and configure the VSTS gate appropriately. When the <strong>==</strong> equality operator becomes available for JSONPATH in the REST API gate I might consider swapping back to a basic REST call, as it is less complex to set up, but we shall see. The Azure Function model does appear to work well.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fixing a &amp;lsquo;git-lfs filter-process: gif-lfs: command not found&amp;rsquo; error in Visual Studio 2017</title>
      <link>https://blog.richardfennell.net/posts/fixing-a-git-lfs-filter-process-gif-lfs-command-not-found-error-in-visual-studio-2017/</link>
      <pubDate>Fri, 09 Mar 2018 12:02:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-a-git-lfs-filter-process-gif-lfs-command-not-found-error-in-visual-studio-2017/</guid>
      <description>&lt;p&gt;I am currently looking at the best way to migrate a large legacy codebase from TFVC to Git. There are a number of ways I could do this, as I have &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/05/10/options-migrating-tfs-to-vsts/&#34;&gt;posted about before&lt;/a&gt;. Obviously, I have ruled out anything that tries to migrate history as ‘that way hell lies’; if people need to see history they will be able to look at the archived TFVC instance. TFVC and Git are just too different in the way they work to make history migrations worth the effort in my opinion.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am currently looking at the best way to migrate a large legacy codebase from TFVC to Git. There are a number of ways I could do this, as I have <a href="https://blogs.blackmarble.co.uk/rfennell/2017/05/10/options-migrating-tfs-to-vsts/">posted about before</a>. Obviously, I have ruled out anything that tries to migrate history as ‘that way hell lies’; if people need to see history they will be able to look at the archived TFVC instance. TFVC and Git are just too different in the way they work to make history migrations worth the effort in my opinion.</p>
<p>So as part of this migration and re-structuring I am looking at using <a href="https://git-scm.com/docs/git-submodule">Git Submodules</a> and <a href="https://git-lfs.github.com/">Git Large File System (LFS)</a> to help divide the monolithic code base into front-end, back-end and shared service modules; using LFS to manage large media files used in integration test cases.</p>
<p>From the PowerShell command prompt, using Git 2.16.2, all my trials were successful; I could achieve what I wanted. However, when I tried accessing my trial repos using Visual Studio 2017 I saw issues.</p>
<h3 id="submodules">Submodules</h3>
<p>Firstly there are known limitations with Git submodules in Visual Studio Team Explorer. At this time you can clone a repo that has submodules, but you cannot manage the relationships between repos or commit to a submodule from inside Visual Studio.</p>
<p>This is unlike the Git command line, which allows actions to span a parent and child repo with a single command, Git just works it out if <a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules">you pass the right parameters</a></p>
<p>There is a request on <a href="https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/8960629-handle-multiple-git-repositories-in-a-visual-studi">UserVoice</a> to add these functions to Visual Studio, vote for it if you think it is important, I have.</p>
<h3 id="large-file-system">Large File System</h3>
<p>The big problem I had was with LFS, which is meant to work in Visual Studio since 2015.2.</p>
<p>Again from the command line operations were seamless, I just installed Git 2.16.2 via <a href="https://chocolatey.org/search?q=git">Chocolaty</a> and got LFS support without installing anything else. So I was able to enable LFS support on a repo</p>
<pre tabindex="0"><code>git lfs install
git lfs track &#39;*.bin&#39;
git add .gitattributes
</code></pre><p>and manage standard and large (.bin) files without any problems.</p>
<p>However, when I tried to make use of this cloned LFS enabled repo from inside Visual Studio by staging a new large .bin file I got an error ‘git-lfs filter-process: gif-lfs: command not found’</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/03/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/03/image_thumb.png" title="image"></a></p>
<p>On reading around this error it was suggested that the separate <a href="https://git-lfs.github.com/">git-lfs package</a> needed to be installed. I did this, making sure that the path to <strong>git-lfs.exe</strong> (C:\Program Files\Git LFS) was in my PATH, but I still had the problem.</p>
<p>This is where I got stuck and hence needed to get some help from the Microsoft Visual Studio support team.</p>
<p>After a good deal of tracing they spotted the problem. The path to <strong>git-lfs.exe</strong> was at the end of my rather long PATH list. It seems Visual Studio was truncating this list of paths, so, as the error suggested, Visual Studio could not find <strong>git-lfs.exe</strong>.</p>
<p>It is unclear to me whether the command prompt just did not suffer this PATH length issue, or was using a different means to resolve the LFS feature. It should be noted that from the command line LFS commands were available as soon as I installed Git 2.16.2; I did not have to add the Git LFS package.</p>
<p>So the fix was simple: move the entry for ‘C:\Program Files\Git LFS’ to the start of my PATH list and everything worked in Visual Studio.</p>
<p>It should be noted I really need to look at whether I need everything in my somewhat long PATH list. It’s been too long since I re-paved my laptop; there are a lot of strange bits installed.</p>
<p>Thanks again to the Visual Studio Support team for getting me unblocked on this.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Building private VSTS build agents using the Microsoft Packer based agent image creation model</title>
      <link>https://blog.richardfennell.net/posts/building-private-vsts-build-agents-using-the-microsoft-packer-based-agent-image-creation-model/</link>
      <pubDate>Tue, 27 Feb 2018 18:48:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/building-private-vsts-build-agents-using-the-microsoft-packer-based-agent-image-creation-model/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;Having automated builds is essential to any good development process. Irrespective of the build engine in use, VSTS, Jenkins etc. you need to have a means to create the VMs that are running the builds.&lt;/p&gt;
&lt;p&gt;You can of course do this by hand, but in many ways you are just extending the old ‘it works on my PC – the developer can build it only on their own PC’ problem i.e. it is hard to be sure what version of tools are in use. This is made worse by the fact it is too tempting for someone to remote onto the build VM to update some SDK or tool without anyone else’s knowledge.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>Having automated builds is essential to any good development process. Irrespective of the build engine in use, VSTS, Jenkins etc. you need to have a means to create the VMs that are running the builds.</p>
<p>You can of course do this by hand, but in many ways you are just extending the old ‘it works on my PC – the developer can build it only on their own PC’ problem i.e. it is hard to be sure what version of tools are in use. This is made worse by the fact it is too tempting for someone to remote onto the build VM to update some SDK or tool without anyone else’s knowledge.</p>
<p>In an endeavour to address this problem we need a means to create our build VMs in a consistent, standardised manner i.e. a configuration as code model.</p>
<p><a href="https://blogs.blackmarble.co.uk/rhepworth/2017/03/02/define-once-deploy-everywhere-sort-of/">At Black Marble we have been using Lability</a> to build our lab environments and there is no reason we could not use the same system to create our VSTS build agent VMs.</p>
<ul>
<li>Creating base VHDs disk images with patched copies of Windows installed (which we update on a regular basis)</li>
<li>Use <a href="https://github.com/VirtualEngine/Lability">Lability</a> to provision all the required tools – this would need to include all the associated reboots these installers would require. Noting that rebooting and restarting at the correct place, for non DSC based resources, is not Lability’s strongest feature i.e. you have to do all the work in custom code</li>
</ul>
<p>However, there is an alternative. Microsoft have made their <a href="https://www.packer.io/">Packer</a> based method of creating VSTS Azure hosted agents available on <a href="https://github.com/Microsoft/vsts-image-generation">GitHub</a>. Hence, it made sense to me to base our build agent creation system on this standardised image; thus allowing easier migration of builds between private and hosted build agent pools whether in the cloud or on premises, due to the fact they had the same tools installed.</p>
<h3 id="the-basic-process">The Basic Process</h3>
<p>To enable this way of working I <a href="https://github.com/blackmarble/vsts-image-generation">forked the Microsoft repo</a> and modified the Packer JSON configuration file to build Hyper-V based images as opposed to Azure ones. I aimed to make as few changes as possible to ease the process of keeping my forked repo in sync with future changes to the Microsoft standard build agent; in effect replacing the <strong>builder</strong> section of the Packer configuration and leaving the <strong>provisioners</strong> unaltered.</p>
<p>So, in doing this I learnt a few things</p>
<h3 id="which-iso-to-use">Which ISO to use?</h3>
<p>Make sure you use a current Operating System ISO. First, it saves time as it is already patched; but more importantly, the provisioner scripts in the Microsoft configuration assume certain Windows features are available for installation (Containers with Docker support, specifically) that were not present on the 2016 RTM ISO.</p>
<h3 id="building-an-answeriso">Building an Answer.ISO</h3>
<p>In the <a href="https://www.packer.io/docs/builders/hyperv-iso.html">sample I found</a> for the Packer <strong>hyperv-iso</strong> builder, the <strong>AutoUnattended.XML</strong> answers file is provided on an ISO (as opposed to a virtual floppy, as floppies are not supported on Gen2 HyperV VMs). This means when you edit the answers file you need to rebuild the ISO prior to running Packer.</p>
<p>The sample script to do this has lines to ‘Enable UEFI and disable Non UEFI’; I found that if these lines of PowerShell were run, the answers file on the ISO was ignored, so I had to comment them out. It seems an <strong>AutoUnattended.XML</strong> answers file edited in VSCode is in the correct encoding by default.</p>
<p>I also found that if I ran the PowerShell script to create the ISO from within VSCode’s integrated terminal, the ISO builder <strong>mkisofs.exe</strong> failed with an internal error. However, it worked fine from a default PowerShell window.</p>
<h3 id="installing-the-net-35-feature">Installing the .NET 3.5 Feature</h3>
<p>When a provisioner tried to install the .NET 3.5 feature using the command</p>
<p><strong>Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature</strong></p>
<p>it failed.</p>
<p>Seems this is a bug in Windows 2016 and the workaround is to specify the -Source location on the install media</p>
<p><strong>Install-WindowsFeature -Name NET-Framework-Features -IncludeAllSubFeature -Source &ldquo;D:\sources\sxs&rdquo;</strong></p>
<p>Once the script was modified in this manner it ran without error</p>
<h3 id="well-how-long-does-it-take">Well how long does it take?</h3>
<p>The Packer process is slow; Microsoft say that for an Azure VM it can take over 8 hours, and a HyperV VM is no faster.</p>
<p>I also found the process a bit brittle. I had to restart the process a good few times as….</p>
<ul>
<li>I ran out of disk space (not surprisingly, this broke the process)</li>
<li>The new VM did not get a DHCP assigned IP address when connected to the network via the HyperV Default Switch. A reboot of my HyperV host PC fixed this.</li>
<li>Packer decided the VM had rebooted when it had not – usually due to a slow install of some feature or network issues</li>
<li>My Laptop went to sleep and caused one of the above problems</li>
</ul>
<h3 id="so-i-have-a-sysprepd-vhd-now-what-do-i-do-with-it-now">So I have a SysPrep’d VHD now what do I do with it now?</h3>
<p>At this point I have options of what to do with this new exported HyperV image. I could manually create build agent VM instances.</p>
<p>However, it appeals to me to use this new VHD as a base image for Lability, replacing our default ‘empty patched Operating System’ image creation system, so I have a nice consistent way to provision VMs onto our Hyper-V servers.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Yorkshire Venue for the Global DevOps BootCamp 2018</title>
      <link>https://blog.richardfennell.net/posts/yorkshire-venue-for-the-global-devops-bootcamp-2018/</link>
      <pubDate>Thu, 22 Feb 2018 18:15:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/yorkshire-venue-for-the-global-devops-bootcamp-2018/</guid>
      <description>&lt;p&gt;I am really pleased that we at Black Marble are again the first UK location announcing that we are hosting an event on June 16th as part of the 2018 &lt;a href=&#34;https://globaldevopsbootcamp.com/&#34;&gt;Global DevOps BootCamp&lt;/a&gt;. As the event’s site says…&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“The Global DevOps Bootcamp takes place once a year on venues all over the world. The more people joining in, the better it gets! The Global DevOps Bootcamp is a free one-day event hosted by local passionate DevOps communities around the globe and centrally organized by Xpirit &amp;amp; Solidify and sponsored by Microsoft. This event is all about DevOps on the Microsoft Stack. It shows the latest DevOps trends and insights in modern technologies. It is an amazing combination between getting your hands dirty and sharing experience and knowledge in VSTS, Azure, DevOps with other community members.”&lt;/em&gt; &lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really pleased that we at Black Marble are again the first UK location announcing that we are hosting an event on June 16th as part of the 2018 <a href="https://globaldevopsbootcamp.com/">Global DevOps BootCamp</a>. As the event’s site says…</p>
<p><em>“The Global DevOps Bootcamp takes place once a year on venues all over the world. The more people joining in, the better it gets! The Global DevOps Bootcamp is a free one-day event hosted by local passionate DevOps communities around the globe and centrally organized by Xpirit &amp; Solidify and sponsored by Microsoft. This event is all about DevOps on the Microsoft Stack. It shows the latest DevOps trends and insights in modern technologies. It is an amazing combination between getting your hands dirty and sharing experience and knowledge in VSTS, Azure, DevOps with other community members.”</em> </p>
<p><img alt="Global DevOps Bootcamp 2018 @ Black Marble" loading="lazy" src="https://cdn.evbuc.com/images/40011454/196466870134/1/original.jpg"></p>
<p>For more details of the planned event content see the <a href="https://globaldevopsbootcamp.com/">central Global DevOps Bootcamp site</a> and <a href="https://www.eventbrite.com/e/global-devops-bootcamp-2018-black-marble-tickets-42496702782">to register for the Black Marble hosted venue click here</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>New release of my Generate Parameters.xml tools to add support for app.config files</title>
      <link>https://blog.richardfennell.net/posts/new-release-of-my-generate-parameters-xml-tools-to-add-support-for-app-config-files/</link>
      <pubDate>Tue, 13 Feb 2018 09:39:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-release-of-my-generate-parameters-xml-tools-to-add-support-for-app-config-files/</guid>
      <description>&lt;p&gt;I recently released an updated version of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=RichardFennellMVP.ParametersXmlGenerator&#34;&gt;Generate Parameters.XML tool for Visual Studio&lt;/a&gt;. This release adds support for generating &lt;strong&gt;parameters.xml&lt;/strong&gt; files from &lt;strong&gt;app.config&lt;/strong&gt; files as well as &lt;strong&gt;web.config&lt;/strong&gt; files Why you might ask why add support for &lt;strong&gt;app.config&lt;/strong&gt; files when the &lt;strong&gt;parameters.xml&lt;/strong&gt; model is only part of &lt;a href=&#34;https://www.iis.net/downloads/microsoft/web-deploy&#34;&gt;WebDeploy&lt;/a&gt;? Well, at Black Marble we like the model of updating a single file using a tokenised set of parameters from within our DevOps CI/CD pipelines. It makes it easy to take release variables and write them, at deploy time, into a &lt;strong&gt;parameters.xml&lt;/strong&gt; file to be injected into a machine’s configuration. We wanted to extend this to configuring services and the like where for example a DLL based service is configured with a &lt;strong&gt;mycode.dll.config&lt;/strong&gt; file The injection process of the &lt;strong&gt;parameters.xml&lt;/strong&gt; into a &lt;strong&gt;web.config&lt;/strong&gt; file is automatically done as part of the WebDeploy process (or a VSTS extension wrapping WebDeploy), but if you want to use a similar model for &lt;strong&gt;app.config&lt;/strong&gt; files then you need some PowerShell. For example, if we have the &lt;strong&gt;app.config&lt;/strong&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently released an updated version of my <a href="https://marketplace.visualstudio.com/items?itemName=RichardFennellMVP.ParametersXmlGenerator">Generate Parameters.XML tool for Visual Studio</a>. This release adds support for generating <strong>parameters.xml</strong> files from <strong>app.config</strong> files as well as <strong>web.config</strong> files Why you might ask why add support for <strong>app.config</strong> files when the <strong>parameters.xml</strong> model is only part of <a href="https://www.iis.net/downloads/microsoft/web-deploy">WebDeploy</a>? Well, at Black Marble we like the model of updating a single file using a tokenised set of parameters from within our DevOps CI/CD pipelines. It makes it easy to take release variables and write them, at deploy time, into a <strong>parameters.xml</strong> file to be injected into a machine’s configuration. We wanted to extend this to configuring services and the like where for example a DLL based service is configured with a <strong>mycode.dll.config</strong> file The injection process of the <strong>parameters.xml</strong> into a <strong>web.config</strong> file is automatically done as part of the WebDeploy process (or a VSTS extension wrapping WebDeploy), but if you want to use a similar model for <strong>app.config</strong> files then you need some PowerShell. For example, if we have the <strong>app.config</strong></p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="cp">&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34; ?&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;configuration&gt;</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&lt;applicationSettings&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;Service.Properties.Settings&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;setting</span> <span class="na">name=</span><span class="s">&#34;Directory1&#34;</span> <span class="na">serializeAs=</span><span class="s">&#34;String&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">         <span class="nt">&lt;value&gt;</span>C:\ABC1111<span class="nt">&lt;/value&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;/setting&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;setting</span> <span class="na">name=</span><span class="s">&#34;Directory2&#34;</span> <span class="na">serializeAs=</span><span class="s">&#34;String&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">         <span class="nt">&lt;value&gt;</span>C:\abc2222<span class="nt">&lt;/value&gt;</span>
</span></span><span class="line"><span class="cl">       <span class="nt">&lt;/setting&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;/Service.Properties.Settings&gt;</span>
</span></span><span class="line"><span class="cl">   <span class="nt">&lt;/applicationSettings&gt;</span>
</span></span><span class="line"><span class="cl">   <span class="nt">&lt;appSettings&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;add</span> <span class="na">key=</span><span class="s">&#34;AppSetting1&#34;</span> <span class="na">value=</span><span class="s">&#34;123&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;add</span> <span class="na">key=</span><span class="s">&#34;AppSetting2&#34;</span> <span class="na">value=</span><span class="s">&#34;456&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">   <span class="nt">&lt;/appSettings&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;startup&gt;</span> 
</span></span><span class="line"><span class="cl">         <span class="nt">&lt;supportedRuntime</span> <span class="na">version=</span><span class="s">&#34;v4.0&#34;</span> <span class="na">sku=</span><span class="s">&#34;.NETFramework,Version=v4.6&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;/startup&gt;</span>
</span></span><span class="line"><span class="cl"><span class="nt">&lt;/configuration&gt;</span>
</span></span></code></pre></div><p>My extension generates a tokenised <strong>parameters.xml</strong> file</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-xml" data-lang="xml"><span class="line"><span class="cl"><span class="nt">&lt;parameters&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;parameter</span> <span class="na">name=</span><span class="s">&#34;AppSetting1&#34;</span> <span class="na">description=</span><span class="s">&#34;Description for AppSetting1&#34;</span> <span class="na">defaultvalue=</span><span class="s">&#34;__APPSETTING1__&#34;</span> <span class="na">tags=</span><span class="s">&#34;&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;parameterentry</span> <span class="na">kind=</span><span class="s">&#34;XmlFile&#34;</span> <span class="na">scope=</span><span class="s">&#34;\\App.config$&#34;</span> <span class="na">match=</span><span class="s">&#34;/configuration/appSettings/add[@key=&#39;AppSetting1&#39;]/@value&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/parameter&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;parameter</span> <span class="na">name=</span><span class="s">&#34;AppSetting2&#34;</span> <span class="na">description=</span><span class="s">&#34;Description for AppSetting2&#34;</span> <span class="na">defaultvalue=</span><span class="s">&#34;__APPSETTING2__&#34;</span> <span class="na">tags=</span><span class="s">&#34;&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;parameterentry</span> <span class="na">kind=</span><span class="s">&#34;XmlFile&#34;</span> <span class="na">scope=</span><span class="s">&#34;\\App.config$&#34;</span> <span class="na">match=</span><span class="s">&#34;/configuration/appSettings/add[@key=&#39;AppSetting2&#39;]/@value&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/parameter&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;parameter</span> <span class="na">name=</span><span class="s">&#34;Directory1&#34;</span> <span class="na">description=</span><span class="s">&#34;Description for Directory1&#34;</span> <span class="na">defaultvalue=</span><span class="s">&#34;__DIRECTORY1__&#34;</span> <span class="na">tags=</span><span class="s">&#34;&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;parameterentry</span> <span class="na">kind=</span><span class="s">&#34;XmlFile&#34;</span> <span class="na">scope=</span><span class="s">&#34;\\App.config$&#34;</span> <span class="na">match=</span><span class="s">&#34;/configuration/applicationSettings/Service.Properties.Settings/setting[@name=&#39;Directory1&#39;]/value/text()&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/parameter&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;parameter</span> <span class="na">name=</span><span class="s">&#34;Directory2&#34;</span> <span class="na">description=</span><span class="s">&#34;Description for Directory2&#34;</span> <span class="na">defaultvalue=</span><span class="s">&#34;__DIRECTORY2__&#34;</span> <span class="na">tags=</span><span class="s">&#34;&#34;</span><span class="nt">&gt;</span>
</span></span><span class="line"><span class="cl">     <span class="nt">&lt;parameterentry</span> <span class="na">kind=</span><span class="s">&#34;XmlFile&#34;</span> <span class="na">scope=</span><span class="s">&#34;\\App.config$&#34;</span> <span class="na">match=</span><span class="s">&#34;/configuration/applicationSettings/Service.Properties.Settings/setting[@name=&#39;Directory2&#39;]/value/text()&#34;</span> <span class="nt">/&gt;</span>
</span></span><span class="line"><span class="cl">  <span class="nt">&lt;/parameter&gt;</span>
</span></span><span class="line"><span class="cl"> <span class="nt">&lt;/parameters&gt;</span>
</span></span></code></pre></div><p>The values in this <strong>parameters.xml</strong> file can be updated using a CI/CD replace tokens task; we use the <a href="https://github.com/colindembovsky/cols-agent-tasks/tree/master/Tasks/ReplaceTokens">Replace Tokens</a> task from Colin&rsquo;s <a href="https://marketplace.visualstudio.com/items?itemName=colinsalmcorner.colinsalmcorner-buildtasks">ALM Corner Build &amp; Release Tools</a>, in exactly the same way as we would for a <strong>web.config</strong>. Finally, the following PowerShell can be used to update the <strong>app.config</strong> from this <strong>parameters.xml</strong></p>
<script src="https://gist.github.com/rfennell/01de24bd01fe44a5b6d67be2f719a876.js"></script>
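<p>If you cannot view the embedded gist, the core of the approach can be sketched as follows. This is a Python equivalent for illustration only (the function name <strong>apply_parameters</strong> is hypothetical; the supported route is the PowerShell above): read each <strong>parameterentry</strong>, take its <strong>match</strong> XPath, and write the parameter’s value into the <strong>app.config</strong>.</p>

```python
import xml.etree.ElementTree as ET

# Hypothetical Python sketch of applying a parameters.xml to an app.config
# (illustration only; the post's supported route is the PowerShell gist above).
# It handles the two match shapes the generator emits:
#   .../@value       -> set an attribute
#   .../value/text() -> set an element's text
def apply_parameters(parameters_path: str, config_path: str) -> None:
    params = ET.parse(parameters_path).getroot()
    config = ET.parse(config_path)
    root = config.getroot()  # the <configuration> element
    for param in params.findall("parameter"):
        value = param.get("defaultvalue")  # token already replaced at release time
        match = param.find("parameterentry").get("match")
        if match.endswith("/@value"):
            # e.g. /configuration/appSettings/add[@key='AppSetting1']/@value
            xpath = "." + match[len("/configuration"):-len("/@value")]
            root.find(xpath).set("value", value)
        elif match.endswith("/value/text()"):
            # e.g. .../setting[@name='Directory1']/value/text()
            xpath = "." + match[len("/configuration"):-len("/text()")]
            root.find(xpath).text = value
    config.write(config_path)
```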
<p>Thus giving a consistent way of updating configuration files for both <strong>web.config</strong> and <strong>app.config</strong> files</p>
]]></content:encoded>
    </item>
    <item>
      <title>Versioning your ARM templates within a VSTS CI/CD pipeline with Semantic Versioning</title>
      <link>https://blog.richardfennell.net/posts/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline-with-semantic-versioning/</link>
      <pubDate>Sat, 03 Feb 2018 15:23:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline-with-semantic-versioning/</guid>
      <description>&lt;p&gt;I wrote a post recently &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2018/01/22/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline/&#34;&gt;Versioning your ARM templates within a VSTS CI/CD pipeline&lt;/a&gt;. I realised since writing it that it does not address the issue of if you wish to version your ARM Templates using &lt;a href=&#34;https://semver.org/&#34;&gt;Semantic Versioning&lt;/a&gt;. My JSON versioning task I used did not support the option of not extracting a numeric version number e.g. 1.2.3.4 from a VSTS build number. To address this limitation I have modified &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task&#34;&gt;my Version JSON file task&lt;/a&gt; to address. This change to my task allows it to be used with the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=gittools.gitversion#overview&#34;&gt;GitVersion VSTS task&lt;/a&gt; to manage the semantic versioning. For more details on &lt;a href=&#34;http://gitversion.readthedocs.io/en/latest/&#34;&gt;GitVersion see the project documentation&lt;/a&gt;. Hence, I my now able to generate a version number using GitVersion and pass this in to the versioning task directly using a build variable.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I wrote a post recently <a href="https://blogs.blackmarble.co.uk/rfennell/2018/01/22/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline/">Versioning your ARM templates within a VSTS CI/CD pipeline</a>. I realised since writing it that it does not address the case where you wish to version your ARM Templates using <a href="https://semver.org/">Semantic Versioning</a>. The JSON versioning task I used did not support the option of not extracting a numeric version number e.g. 1.2.3.4 from a VSTS build number. To address this limitation I have modified <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">my Version JSON file task</a>. This change allows it to be used with the <a href="https://marketplace.visualstudio.com/items?itemName=gittools.gitversion#overview">GitVersion VSTS task</a> to manage the semantic versioning. For more details on <a href="http://gitversion.readthedocs.io/en/latest/">GitVersion see the project documentation</a>. Hence, I am now able to generate a version number using GitVersion and pass this into the versioning task directly using a build variable.</p>
<ul>
<li>Add the GitVersion task at the start of the build, with its default parameters</li>
<li>Add my JSON versioning task with default parameters apart from
<ul>
<li><strong>Version Number</strong> set to <strong>$(GitVersion.SemVer)</strong></li>
<li><strong>Use Version Number without Processing (Advanced)</strong> checked</li>
<li><strong>Filename Pattern (Advanced)</strong> set to <strong>azuredeploy.json</strong></li>
<li><strong>Field to update (Advanced)</strong> set to <strong>contentVersion</strong></li>
</ul>
</li>
</ul>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/02/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/02/image_thumb.png" title="image"></a> In the logs you see output similar to the following:</p>
<pre tabindex="0"><code>Source Directory: E:\Build2\_work\361\s
Filename Pattern: azuredeploy.json
Version Number/Build Number: 0.1.0-unstable.843
Use Build Number Directly: true
Version Filter to extract build number: \d+\.\d+\.\d+\.\d+
Version Format for JSON File: {1}.{2}.{3}
Field to update (all if empty): contentVersion
Output: Version Number Parameter Name: OutputedVersion
Using the provided build number without any further processing
JSON Version Name will be: 0.1.0-unstable.843
Will apply 0.1.0-unstable.843 to 12 files.
Updating the field &#39;contentVersion&#39; version
Existing Tag: contentVersion&#34;: &#34;1.0.0.0&#34;
Replacement Tag: contentVersion&#34;: &#34;0.1.0-unstable.843&#34;
...
</code></pre>
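<p>What the task does to each matched file can be sketched as follows (an illustrative script, not the task&rsquo;s actual source; the version string and pattern are example values):</p>

```powershell
# Sketch: find each matching ARM template under the source directory and
# overwrite its contentVersion with the supplied version string.
# Illustrative only - not the VSTS task's real implementation.
param(
    [string]$SourceDirectory = $Env:BUILD_SOURCESDIRECTORY,
    [string]$VersionNumber   = '0.1.0-unstable.843',
    [string]$FilenamePattern = 'azuredeploy.json'
)
Get-ChildItem -Path $SourceDirectory -Filter $FilenamePattern -Recurse |
    ForEach-Object {
        $template = Get-Content -Path $_.FullName -Raw | ConvertFrom-Json
        $template.contentVersion = $VersionNumber
        $template | ConvertTo-Json -Depth 100 | Set-Content -Path $_.FullName
        Write-Host "Applied $VersionNumber to $($_.FullName)"
    }
```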
]]></content:encoded>
    </item>
    <item>
      <title>What I wish I had known when I started developing Lability DevTest Lab Environments</title>
      <link>https://blog.richardfennell.net/posts/what-i-wish-i-had-known-when-i-started-developing-lability-devtest-lab-environments/</link>
      <pubDate>Wed, 31 Jan 2018 12:33:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-i-wish-i-had-known-when-i-started-developing-lability-devtest-lab-environments/</guid>
      <description>&lt;p&gt;At Black Marble we have been migrating our DevTest labs from on-premises &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/02/06/how-you-can-keep-using-lab-management-after-a-move-to-vsts-after-a-fashion/&#34;&gt;TFS Lab Management&lt;/a&gt; to a mixture of on-premises and Azure hosted &lt;a href=&#34;https://github.com/VirtualEngine/Lability&#34;&gt;Lability&lt;/a&gt; defined Labs &lt;a href=&#34;https://blogs.blackmarble.co.uk/rhepworth/2017/03/02/define-once-deploy-everywhere-sort-of/&#34;&gt;as discussed by Rik Hepworth on his blog&lt;/a&gt;. I have only been tangentially involved in this effort until recently, consuming the labs but not creating the definitions.&lt;/p&gt;
&lt;p&gt;So this post is one of those I do where I don’t want to forget things I learnt the hard way, or to put it another way, learnt by asking &lt;a href=&#34;https://blogs.blackmarble.co.uk/rhepworth&#34;&gt;Rik&lt;/a&gt; or &lt;a href=&#34;https://chrislgardner.github.io/&#34;&gt;Chris&lt;/a&gt; after watching a 2 hour environment deploy fail for the Xth time.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At Black Marble we have been migrating our DevTest labs from on-premises <a href="https://blogs.blackmarble.co.uk/rfennell/2017/02/06/how-you-can-keep-using-lab-management-after-a-move-to-vsts-after-a-fashion/">TFS Lab Management</a> to a mixture of on-premises and Azure hosted <a href="https://github.com/VirtualEngine/Lability">Lability</a> defined Labs <a href="https://blogs.blackmarble.co.uk/rhepworth/2017/03/02/define-once-deploy-everywhere-sort-of/">as discussed by Rik Hepworth on his blog</a>. I have only been tangentially involved in this effort until recently, consuming the labs but not creating the definitions.</p>
<p>So this post is one of those I do where I don’t want to forget things I learnt the hard way, or to put it another way, learnt by asking <a href="https://blogs.blackmarble.co.uk/rhepworth">Rik</a> or <a href="https://chrislgardner.github.io/">Chris</a> after watching a 2 hour environment deploy fail for the Xth time.</p>
<ul>
<li>
<p>You can’t log too much. The log files are your friends, both the DSC ones and any generated by tools triggered by DSC. This is because most of the configuration process is done during boots, so there is no UI to watch.</p>
</li>
<li>
<p>The DSC log is initially created in the working folder the .MOF file is in on the target VM; but after a reboot (e.g. after joining a domain) the next and subsequent DSC log files are created in <em>C:\Windows\System32\Configuration\ConfigurationStatus</em>.</p>
</li>
<li>
<p>Make sure you specify the full path for any bespoke logging you do; relative paths make it too easy to lose the log file.</p>
</li>
<li>
<p>Stupid typos get you every time. Many will be spotted when the MOF file is generated, but too many, such as ones in command lines or arguments, are only spotted when you deploy an environment. Also, too many of these don’t actually cause error messages; they just mean nothing happens. So if you expect a script/tool to be run and it isn’t, check the log and the definition for mismatches in names.</p>
</li>
<li>
<p>If you are using the <a href="https://docs.microsoft.com/en-us/powershell/dsc/packageresource">Package DSC Resource</a> to install an EXE or MSI there are a couple of gotchas</p>
</li>
<li>
<p>For MSIs the <strong>ProductName</strong> parameter must exactly match the one in the MSI definition, and the <strong>ProductId</strong> <strong>must match</strong> the GUID <strong>ProductCode</strong>. Both of these can be found using the <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/aa370557%28v=vs.85%29.aspx">Orca tool</a></p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image-3.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image_thumb-3.png" title="image"></a></p>
</li>
</ul>
<pre tabindex="0"><code>Package MongoDb {
    PsDscRunAsCredential = $DomainCredentialsAtDomain
    DependsOn = &#39;[Package]VCRedist&#39;
    Ensure = &#39;Present&#39;
    Arguments = &#34;/qn /l*v c:\bootstrap\MongoDBInstall.log INSTALLLOCATION=`&#34;C:\Program Files\MongoDB\Server\3.6`&#34;&#34;
    Name = &#34;MongoDB 3.6.2 2008R2Plus SSL (64 bit)&#34;
    Path = &#34;c:\bootstrap\mongodb-win32-x86_64-2008plus-ssl-3.6.2-signed.msi&#34;
    ProductId = &#34;88B5F0D8-0692-4D86-8FF4-FB3CDBC6B40F&#34;
    ReturnCode = 0
}
</code></pre><ul>
<li>
<p>For EXEs the ProductName does not appear to be as critical, but you still need the Product ID. You can get this with PowerShell on a machine that already has the EXE installed:</p>
<pre tabindex="0"><code>Get-WmiObject Win32_Product | Format-Table IdentifyingNumber, Name, Version
</code></pre>
</li>
<li>
<p>I had network issues; they could mostly be put down to incorrect <a href="https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/setup-nat-network">Network Address Translation</a>. In my case this should have been set up when Lability was initially configured; the commands ran OK, creating a virtual switch and NetNat, but I ended up with a Windows fallback network address of 169.x.x.x when I should have had an address of 192.168.x.x on my virtual switch. So if in doubt, check the settings on your virtual switch in the Windows &lsquo;Network and Sharing Center&rsquo; before you start doubting your environment definitions.</p>
</li>
</ul>
<p>Hope these pointers help others, as well as myself, next time Lability definitions are written.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Versioning your ARM templates within a VSTS CI/CD pipeline</title>
      <link>https://blog.richardfennell.net/posts/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline/</link>
      <pubDate>Mon, 22 Jan 2018 14:55:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 3 Feb 2018&lt;/strong&gt; - &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2018/02/03/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline-with-semantic-versioning/&#34;&gt;Also see Versioning your ARM templates within a VSTS CI/CD pipeline with Semantic Versioning&lt;/a&gt; Azure Resource Templates (ARM) allow your DevOps infrastructure deployments to be treated as ‘content as code’. So infrastructure definitions can be stored in source control. As with any code it is really useful to know which version you have out in production. Now a CI/CD process and its usage logs can help here, but just having a version string stored somewhere accessible on the production systems is always useful. In an ARM Template this can be achieved using the ‘content version’ field in the template (&lt;a href=&#34;https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates&#34;&gt;see documentation for more detail on this file&lt;/a&gt;). The question becomes how best to update this field with a version number? The solution I used was a &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task&#34;&gt;VSTS JSON Versioning Task&lt;/a&gt; I had already created to update the template’s .JSON definition file. I popped this task at the start of my ARM templates CI build process and it set the value prior to the storage of the template as a build artifact used within the CD pipeline &lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image-2.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image_thumb-2.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 3 Feb 2018</strong> - <a href="https://blogs.blackmarble.co.uk/rfennell/2018/02/03/versioning-your-arm-templates-within-a-vsts-ci-cd-pipeline-with-semantic-versioning/">Also see Versioning your ARM templates within a VSTS CI/CD pipeline with Semantic Versioning</a> Azure Resource Templates (ARM) allow your DevOps infrastructure deployments to be treated as ‘content as code’. So infrastructure definitions can be stored in source control. As with any code it is really useful to know which version you have out in production. Now a CI/CD process and its usage logs can help here, but just having a version string stored somewhere accessible on the production systems is always useful. In an ARM Template this can be achieved using the ‘content version’ field in the template (<a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates">see documentation for more detail on this file</a>). The question becomes how best to update this field with a version number? The solution I used was a <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">VSTS JSON Versioning Task</a> I had already created to update the template’s .JSON definition file. I popped this task at the start of my ARM templates CI build process and it set the value prior to the storage of the template as a build artifact used within the CD pipeline <a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image-2.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image_thumb-2.png" title="image"></a></p>
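<p>For reference, the field the task updates is the top-level <strong>contentVersion</strong> in the template; a minimal, illustrative azuredeploy.json skeleton looks like this:</p>

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": []
}
```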
]]></content:encoded>
    </item>
    <item>
      <title>Creating test data for my Generate Release Notes Extension for use in CI/CD process</title>
      <link>https://blog.richardfennell.net/posts/creating-test-data-for-my-generate-release-notes-extension-for-use-in-ci-cd-process/</link>
      <pubDate>Fri, 19 Jan 2018 20:13:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/creating-test-data-for-my-generate-release-notes-extension-for-use-in-ci-cd-process/</guid>
      <description>&lt;p&gt;As part of the continued improvement to my CI/CD process I needed to provide a means so that whenever I test my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task&#34;&gt;Generate Release Notes Task&lt;/a&gt;, within its CI/CD process, new commits and work item associations are made. This is required because the task only picks up new commits and work items since the last successful running of a given build. So if the last release of the task extension was successful then the next set of tests have no associations to go in the release notes, not exactly exercising all the code paths! In the past I added this test data by hand, a new manual commit to the repo prior to a release; but why have a dog and bark yourself? Better to automate the process. This can be done using a PowerShell file, run inline or stored in the build&amp;rsquo;s source repo and run within a VSTS build. The code is shown below; you can pass in the required parameters, but I set sensible defaults for my purposes.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As part of the continued improvement to my CI/CD process I needed to provide a means so that whenever I test my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">Generate Release Notes Task</a>, within its CI/CD process, new commits and work item associations are made. This is required because the task only picks up new commits and work items since the last successful running of a given build. So if the last release of the task extension was successful then the next set of tests have no associations to go in the release notes, not exactly exercising all the code paths! In the past I added this test data by hand, a new manual commit to the repo prior to a release; but why have a dog and bark yourself? Better to automate the process. This can be done using a PowerShell file, run inline or stored in the build’s source repo and run within a VSTS build. The code is shown below; you can pass in the required parameters, but I set sensible defaults for my purposes.</p>
<script src="https://gist.github.com/rfennell/2cc36232158518b3b36866bfd321644d.js"></script>
<p>For this PowerShell code to work you do need make some security changes to allow the build agent service user to write to the Git repo. <a href="https://docs.microsoft.com/en-gb/vsts/build-release/actions/scripts/git-commands">This is documented by Microsoft</a>. The PowerShell task to run this code is placed in a build as the only task <a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image_thumb.png" title="image"></a> This build is then triggered as part of the release process <a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image-1.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2018/01/image_thumb-1.png" title="image"></a> Note that the triggering of this build has to be such that it runs on a non-blocking build agent <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/23/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/">as discussed in my previous posts</a>. In my case I trigger the build to add the extra commits and work items just before triggering the validation build on my private Azure hosted agent. Now, there is no reason you can’t just run the PowerShell directly within the release if you wanted to. I chose to use a build so that the build could be reused between different VSTS extension CI/CD pipelines; remember I have two Generate Release Note Extensions, PowerShell and NodeJS Based. So another step to fully automating the whole release process.</p>
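<p>The shape of such a script is roughly as follows (a simplified sketch with placeholder values, not the actual code in the gist above):</p>

```powershell
# Sketch: create a trivial commit whose message references a work item
# (the '#123' token) so the next release-notes run has new associations.
# All names and the work item number are placeholders.
param(
    [string]$RepoPath = $Env:BUILD_SOURCESDIRECTORY,
    [int]$WorkItemId = 123
)
Set-Location -Path $RepoPath
Add-Content -Path 'testdata.txt' -Value "Release notes test data $(Get-Date -Format o)"
git add testdata.txt
# The '#<id>' token in the commit message is what associates the work item
git commit -m "Adding release notes test data #$WorkItemId"
# Needs the build agent account granted 'Contribute' on the repo (see above)
git push origin HEAD
```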
]]></content:encoded>
    </item>
    <item>
      <title>How I fixed my problem that my VSTS Build Extension was too big to upload to the Marketplace</title>
      <link>https://blog.richardfennell.net/posts/how-i-fixed-my-problem-that-my-vsts-build-extension-was-too-big-to-upload-to-the-marketplace/</link>
      <pubDate>Fri, 05 Jan 2018 14:48:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-i-fixed-my-problem-that-my-vsts-build-extension-was-too-big-to-upload-to-the-marketplace/</guid>
      <description>&lt;p&gt;Whilst adding a couple of new tasks to my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task&#34;&gt;VSTS Manifest Versioning Extension&lt;/a&gt; I hit the problem that the VSIX package became too big to upload to the Marketplace. The error I saw in my CI/CD VSTS pipeline was:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;##vso[task.logissue type=error;]error:
Failed Request: Bad Request(400) -
The extension package size &amp;#39;23255292 bytes&amp;#39; exceeds the
maximum package size &amp;#39;20971520 bytes&amp;#39;
&lt;/code&gt;&lt;/pre&gt;</description>
      <content:encoded><![CDATA[<p>Whilst adding a couple of new tasks to my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">VSTS Manifest Versioning Extension</a> I hit the problem that the VSIX package became too big to upload to the Marketplace. The error I saw in my CI/CD VSTS pipeline was:</p>
<pre tabindex="0"><code>##vso[task.logissue type=error;]error:
Failed Request: Bad Request(400) -
The extension package size &#39;23255292 bytes&#39; exceeds the
maximum package size &#39;20971520 bytes&#39;
</code></pre>
<p>This is because, as all my versioning tasks follow the same basic process:</p>
<ol>
<li>They get a list of files</li>
<li>Extract a version number from the build number</li>
<li>Then apply this to one or more files in a product/task specific manner</li>
</ol>
<p>there has been some cut and paste coding. This means that I have NPM modules in the tasks&rsquo; <strong>package.json</strong> files that were not needed for a given task. I could manually address this but there is an NPM module to help, DepCheck. First install the <a href="https://www.npmjs.com/package/depcheck">DepCheck</a> module:</p>
<pre tabindex="0"><code>npm install depcheck -g
</code></pre>
<p>then run <strong>depcheck</strong> from the command line whilst within your task&rsquo;s folder. This returns a list of modules listed in the <strong>package.json</strong> that are not referenced in the code files. These can then be removed from the <strong>package.json</strong>. e.g. I saw:</p>
<pre tabindex="0"><code>Unused dependencies
* @types/node
* @types/q
* Buffer
* fs
* request
* tsd
Unused devDependencies
* @types/chai
* @types/mocha
* @types/node
* mocha-junit-reporter
* ts-loader
* ts-node
* typings
</code></pre>
<p>The important ones to focus on are the first block (non-development references), as these are the ones that are packaged with the production code in the VSIX; I was already pruning the <strong>node_module</strong> folder of development dependencies prior to creating the VSIX using the command:</p>
<pre tabindex="0"><code>npm prune --production
</code></pre>
<p>I did find some of the listed modules strange, as I knew they really were needed, and a quick test of removing them did show the code failed if they were missing. These are what <a href="https://www.npmjs.com/package/depcheck">depcheck&rsquo;s documentation calls false alerts</a>. I found I could remove the <strong>@types/xxx</strong> and <strong>tsd</strong> references, which were the big ones, as they are only needed in development when working in TypeScript. Once these were removed for all four of my NodeJS based tasks my VSIX dropped in size from 22Mb to 7Mb. So problem solved.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Added a new JSON version task to my VSTS Version Extension</title>
      <link>https://blog.richardfennell.net/posts/added-a-new-json-version-task-to-my-vsts-version-extension/</link>
      <pubDate>Fri, 05 Jan 2018 11:04:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/added-a-new-json-version-task-to-my-vsts-version-extension/</guid>
      <description>&lt;p&gt;In response to requests on the VSTS Marketplace I have added a pair of tasks to add/edit entries in .JSON format files.&lt;/p&gt;
&lt;p&gt;The first is for adding a version to a file like a package.json file e.g.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;{  
&amp;#34;name&amp;#34;: &amp;#34;myapp&amp;#34;,  
&amp;#34;version&amp;#34;: &amp;#34;1.0.0&amp;#34;,  
&amp;#34;license&amp;#34;: &amp;#34;MIT&amp;#34;  
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The second is designed for the Angular environment.ts file e.g.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;export const environment = {  
production: true,  
version: &amp;#39;1.0.0.0&amp;#39;  
};
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;But I bet people find other uses, they always do.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In response to requests on the VSTS Marketplace I have added a pair of tasks to add/edit entries in .JSON format files.</p>
<p>The first is for adding a version to a file like a package.json file e.g.</p>
<pre tabindex="0"><code>{  
&#34;name&#34;: &#34;myapp&#34;,  
&#34;version&#34;: &#34;1.0.0&#34;,  
&#34;license&#34;: &#34;MIT&#34;  
}
</code></pre><p>The second is designed for the Angular environment.ts file e.g.</p>
<pre tabindex="0"><code>export const environment = {  
production: true,  
version: &#39;1.0.0.0&#39;  
};
</code></pre><p>But I bet people find other uses, they always do.</p>
<p>You can find the extension in the <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">marketplace</a>; you need 1.31.x or later to see the new versioner tasks.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Announcing a new VSTS Extension for Starting and Stopping Azure DevTest Labs VMs</title>
      <link>https://blog.richardfennell.net/posts/announcing-a-new-vsts-extension-for-starting-and-stopping-azure-devtest-labs-vms/</link>
      <pubDate>Thu, 30 Nov 2017 12:13:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcing-a-new-vsts-extension-for-starting-and-stopping-azure-devtest-labs-vms/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;I have &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/23/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/&#34;&gt;recently been posting&lt;/a&gt; on using Azure to host private VSTS build/release agents to avoid agent queue deadlocking issues with more complex release pipelines. One of the areas discussed is reducing the cost of running a private agent in Azure by only running the private agent within a limited time range, when you guess it might be needed. I have done this using DevTest Labs &lt;a href=&#34;https://azure.microsoft.com/en-gb/updates/azure-devtest-labs-schedule-vm-auto-start/&#34;&gt;Auto Start&lt;/a&gt; and &lt;a href=&#34;https://azure.microsoft.com/en-gb/updates/azure-devtest-labs-set-auto-shutdown-for-a-single-lab-vm/&#34;&gt;Auto Stop&lt;/a&gt; features. This works, but is it not better to only start the agent VM when it is actually needed, not when you guess it might be? I need this private agent only when working on my VSTS extensions, not something I do every day. Why waste CPU cycles that are never used?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>I have <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/23/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/">recently been posting</a> on using Azure to host private VSTS build/release agents to avoid agent queue deadlocking issues with more complex release pipelines. One of the areas discussed is reducing the cost of running a private agent in Azure by only running the private agent within a limited time range, when you guess it might be needed. I have done this using DevTest Labs <a href="https://azure.microsoft.com/en-gb/updates/azure-devtest-labs-schedule-vm-auto-start/">Auto Start</a> and <a href="https://azure.microsoft.com/en-gb/updates/azure-devtest-labs-set-auto-shutdown-for-a-single-lab-vm/">Auto Stop</a> features. This works, but is it not better to only start the agent VM when it is actually needed, not when you guess it might be? I need this private agent only when working on my VSTS extensions, not something I do every day. Why waste CPU cycles that are never used?</p>
<h3 id="new-vsts-extension">New VSTS Extension</h3>
<p>I had expected there would already be a VSTS extension to Start and Stop DevTest Lab VMs, but the <a href="https://marketplace.visualstudio.com/items?itemName=ms-azuredevtestlabs.tasks">Microsoft provided extension for DevTest Labs</a> only provides tasks for the creation and deletion of VMs within a lab. So I am pleased to announce the release of my new <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-DevTestLab">DevTest Labs VSTS Extension</a> to fill this gap, adding tasks to start and stop a DevTest Lab VM on demand from within a build or a release.</p>
<h3 id="my-usage">My Usage</h3>
<p>I have been able to use the tasks in this extension to start my private Azure hosted agent only when I need it for functional tests within a release. However, they could equally be used for a variety of different testing scenarios where any form of pre-built/configured VM needs to be started or stopped, as opposed to the slower process of creating/deploying a new DevTest Lab VM. In my case I added an extra agent phase to my release pipeline to start the VM prior to it being needed. <a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image-2.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image_thumb-2.png" title="image"></a> I could also have used another agent phase to stop the VM once the tests were completed. However, I made the call to leave the VM running and let DevTest Labs&rsquo; Auto Stop shut it down at the end of the day. The reason for this is that VM start up and shutdown are still fairly slow, a minute or two, and I often find I need to run a set of functional tests a few times during my development cycle; so it is a bit more efficient to leave the VM running until the end of the day, only taking the start-up cost once. You may of course have different needs, hence my providing both the Start and Stop tasks.</p>
<h3 id="development">Development</h3>
<p>This new extension aims to act as a supplement to the <a href="https://marketplace.visualstudio.com/items?itemName=ms-azuredevtestlabs.tasks">Microsoft provided Azure DevTest Lab Extension</a>. Hence to make development and adoption easier, it uses exactly the same <a href="https://github.com/Azure/azure-devtestlab/tree/master/VstsTasks">source code structure</a> and task parameters as the Microsoft provided extension. The task parameters being:</p>
<ul>
<li><strong>Azure RM Subscription</strong> - Azure Resource Manager subscription to configure before running.</li>
<li><strong>Source Lab VM ID</strong> - Resource ID of the source lab VM. The source lab VM must be in the selected lab, as the custom image will be created using its VHD file. You can use any variable such as <em>$(labVMId)</em>, the output of calling Create Azure DevTest Labs VM, that contains a value in the form <em>/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.DevTestLab/labs/{labName}/virtualMachines/{vmName}</em>.</li>
</ul>
<p>The issue I had was that the DevTest Labs PowerShell API did not provide a command to start or stop a VM in a lab. I needed to load the Azure PowerShell library to use the <strong>Invoke-AzureRmResourceAction</strong> command, and this requires that you first call <strong>Login-AzureRmAccount</strong> to authenticate. That meant a bit of extra code to get the AzureRM endpoint and extract the authentication details from it.</p>
<pre tabindex="0"><code># Get the parameters
$ConnectedServiceName = Get-VstsInput -Name &#34;ConnectedServiceName&#34;

# Get the end point from the name passed as a parameter
$Endpoint = Get-VstsEndpoint -Name $ConnectedServiceName -Require

# Get the authentication details
$clientID = $Endpoint.Auth.parameters.serviceprincipalid
$key = $Endpoint.Auth.parameters.serviceprincipalkey
$tenantId = $Endpoint.Auth.parameters.tenantid
$SecurePassword = $key | ConvertTo-SecureString -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $clientID, $SecurePassword

# Authenticate
Login-AzureRmAccount -Credential $cred -TenantId $tenantId -ServicePrincipal
</code></pre>
<p>It is important to note that for this code to work you have to set the task&rsquo;s <strong>task.json</strong> to run <strong>PowerShell3</strong> and package the PowerShell VSTS API module in with the task.</p>
<pre tabindex="0"><code>&#34;execution&#34;: {
    &#34;PowerShell3&#34;: {
        &#34;target&#34;: &#34;$(currentDirectory)\\StartVM.ps1&#34;,
        &#34;argumentFormat&#34;: &#34;&#34;,
        &#34;workingDirectory&#34;: &#34;$(currentDirectory)&#34;
    }
}</code></pre>
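<p>For completeness, once authenticated, the start or stop itself is a single resource action call. The following is a minimal sketch, not the extension&rsquo;s exact code, assuming a lab VM resource ID in the form described earlier; DevTest Labs VMs expose <strong>Start</strong> and <strong>Stop</strong> actions.</p>
<pre tabindex="0"><code># Illustrative only: in the real task $labVMId comes from the &#39;Source Lab VM ID&#39; parameter
$labVMId = &#34;/subscriptions/{subId}/resourceGroups/{rgName}/providers/Microsoft.DevTestLab/labs/{labName}/virtualMachines/{vmName}&#34;

# Invoke the &#39;Start&#39; action on the lab VM (the matching stop task uses &#39;Stop&#39;)
Invoke-AzureRmResourceAction -ResourceId $labVMId -Action Start -Force
</code></pre>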
<h3 id="in-summary">In Summary</h3>
<p>I have certainly found this extension useful, and I have learnt more than I expected I would about VSTS endpoints and Azure authentication. I hope it is useful to you too.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI  - Using Artifacts</title>
      <link>https://blog.richardfennell.net/posts/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui-using-artifacts/</link>
      <pubDate>Tue, 28 Nov 2017 20:34:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui-using-artifacts/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/23/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/&#34;&gt;In my last post I discussed creating a private VSTS build agent within an Azure DevTest Lab on a VM with no GUI&lt;/a&gt;. It was pointed out to me today, by &lt;a href=&#34;https://blogs.blackmarble.co.uk/rhepworth&#34;&gt;Rik Hepworth&lt;/a&gt;, that I had overlooked an obvious alternative way to get the VSTS agent onto the VM i.e. not having to use a series of commands at an RDP connected command prompt. The alternative I missed is to use a DevTest Lab Artifact; in fact there is such an artifact available within the standard set in DevTest Labs. You just provide a few parameters and you are good to go. &lt;a href=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image-1.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image_thumb-1.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt; Well you should be good to go, but there is an issue. The PowerShell used to extract the downloaded Build Agent ZIP file does not work on a non-UI based Windows VM. &lt;a href=&#34;http://www.codewrecks.com/blog/index.php/2016/05/27/avoid-using-shell-command-in-powershell-scipts/&#34;&gt;The basic issue here is discussed in this post by my fellow ALM MVP Ricci Gian Maria&lt;/a&gt;. Luckily the fix is simple; I just used the same code to do the extraction of the ZIP file that I used in my &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/23/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/&#34;&gt;previous post&lt;/a&gt;. 
I have submitted this fix as a &lt;a href=&#34;https://github.com/Azure/azure-devtestlab/pull/319&#34;&gt;Pull Request to the DevTest Lab Team&lt;/a&gt; so hopefully the standard repository will have the fix soon and you won’t need to do a fork to create a private artifacts repo as I have. &lt;strong&gt;Update 1st December 2017&lt;/strong&gt; The &lt;a href=&#34;https://github.com/Azure/azure-devtestlab/pull/319&#34;&gt;Pull Request to the DevTest Lab Team&lt;/a&gt; with the fixed code has been accepted and the fix is now in the master branch of the public artifact repo, so automatically available to all&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/23/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/">In my last post I discussed creating a private VSTS build agent within an Azure DevTest Lab on a VM with no GUI</a>. It was pointed out to me today, by <a href="https://blogs.blackmarble.co.uk/rhepworth">Rik Hepworth</a>, that I had overlooked an obvious alternative way to get the VSTS agent onto the VM i.e. not having to use a series of commands at an RDP connected command prompt. The alternative I missed is to use a DevTest Lab Artifact; in fact there is such an artifact available within the standard set in DevTest Labs. You just provide a few parameters and you are good to go. <a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image-1.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image_thumb-1.png" title="image"></a> Well you should be good to go, but there is an issue. The PowerShell used to extract the downloaded Build Agent ZIP file does not work on a non-UI based Windows VM. <a href="http://www.codewrecks.com/blog/index.php/2016/05/27/avoid-using-shell-command-in-powershell-scipts/">The basic issue here is discussed in this post by my fellow ALM MVP Ricci Gian Maria</a>. Luckily the fix is simple; I just used the same code to do the extraction of the ZIP file that I used in my <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/23/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/">previous post</a>. I have submitted this fix as a <a href="https://github.com/Azure/azure-devtestlab/pull/319">Pull Request to the DevTest Lab Team</a> so hopefully the standard repository will have the fix soon and you won’t need to do a fork to create a private artifacts repo as I have. 
<strong>Update 1st December 2017</strong> The <a href="https://github.com/Azure/azure-devtestlab/pull/319">Pull Request to the DevTest Lab Team</a> with the fixed code has been accepted and the fix is now in the master branch of the public artifact repo, so it is automatically available to all.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI</title>
      <link>https://blog.richardfennell.net/posts/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/</link>
      <pubDate>Thu, 23 Nov 2017 12:38:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updates&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;28th Nov 2017&lt;/strong&gt;: Also see this second post &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/28/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui-using-artifacts/&#34;&gt;Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI - Using Artifacts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;30th Nov 2017&lt;/strong&gt;: Also see associated post &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/30/announcing-a-new-vsts-extension-for-starting-and-stopping-azure-devtest-labs-vms/&#34;&gt;Announcing a new VSTS Extension for Starting and Stopping Azure DevTest Labs VMs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;p&gt;As I &lt;a href=&#34;https://blogs.blackmarble.co.uk/rfennell/2017/11/09/major-update-to-my-cicd-process-for-vsts-extensions/&#34;&gt;posted recently&lt;/a&gt; I have been trying to add more functional tests to the VSTS based release CI/CD pipeline for my &lt;a href=&#34;https://marketplace.visualstudio.com/search?term=fennell&amp;amp;target=VSTS&amp;amp;category=All%20categories&amp;amp;sortBy=Relevance&#34;&gt;VSTS Extensions&lt;/a&gt;, and as I noted depending on how you want to run your tests e.g. trigger sub-builds, you can end up with scheduling deadlocks where a single build agent is scheduling the release and trying to run a new build. The answer is to use a second build agent in a different agent pool e.g. if the release is running on the Hosted build agent use a private build agent for the sub-build, or of course just pay for more hosted build instances. The problem with a private build agent is where to run it. As my extensions are a personal project I don’t have a corporate Hyper-V server to run any extra private agents on, as I would have for an company projects. My MVP MSDN Azure benefits are the obvious answer, but I want any agents to be cheap to run, so I don’t burn through all my MSDN credits for a single build agent. To this end I created a Windows Server 2016 VM in &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devtest-lab/&#34;&gt;DevLabs&lt;/a&gt; (I prefer to create my VMs in DevLabs as it makes it easier tidying up of my Azure account) using an A0 sizing VM. This is tiny so cheap; I don’t intend to ever do a build on this agent, just schedule releases, so need to install few if any tools, so the size should not be an issue. To further reduce costs I used the auto start and stop features on the VM so it is only running during the hours I might be working. So I get an admittedly slow and limited private build agent but for less that $10 a month. As the VM is small it makes sense to not run a GUI. 
This means when you RDP to the new VM you just get a command prompt. So how do you get the agent onto the VM and setup? You can’t just open a browser to VSTS or cut and paste a file via RDP, and I wanted to avoid the complexity of having to open up PowerShell remoting on the VM. The process I used was as follows:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updates</strong></p>
<ul>
<li><strong>28th Nov 2017</strong>: Also see this second post <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/28/creating-a-vsts-build-agent-on-an-azure-devlabs-windows-server-vm-with-no-gui-using-artifacts/">Creating a VSTS build agent on an Azure DevLabs Windows Server VM with no GUI - Using Artifacts</a></li>
<li><strong>30th Nov 2017</strong>: Also see associated post <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/30/announcing-a-new-vsts-extension-for-starting-and-stopping-azure-devtest-labs-vms/">Announcing a new VSTS Extension for Starting and Stopping Azure DevTest Labs VMs</a></li>
</ul>
<hr>
<p>As I <a href="https://blogs.blackmarble.co.uk/rfennell/2017/11/09/major-update-to-my-cicd-process-for-vsts-extensions/">posted recently</a> I have been trying to add more functional tests to the VSTS based release CI/CD pipeline for my <a href="https://marketplace.visualstudio.com/search?term=fennell&amp;target=VSTS&amp;category=All%20categories&amp;sortBy=Relevance">VSTS Extensions</a>, and as I noted, depending on how you want to run your tests e.g. trigger sub-builds, you can end up with scheduling deadlocks where a single build agent is scheduling the release and trying to run a new build. The answer is to use a second build agent in a different agent pool e.g. if the release is running on the Hosted build agent use a private build agent for the sub-build, or of course just pay for more hosted build instances. The problem with a private build agent is where to run it. As my extensions are a personal project I don’t have a corporate Hyper-V server to run any extra private agents on, as I would have for a company project. My MVP MSDN Azure benefits are the obvious answer, but I want any agents to be cheap to run, so I don’t burn through all my MSDN credits for a single build agent. To this end I created a Windows Server 2016 VM in <a href="https://azure.microsoft.com/en-gb/services/devtest-lab/">DevLabs</a> (I prefer to create my VMs in DevLabs as it makes tidying up my Azure account easier) using an A0 sized VM. This is tiny so cheap; I don’t intend to ever do a build on this agent, just schedule releases, so I need to install few if any tools, and the size should not be an issue. To further reduce costs I used the auto start and stop features on the VM so it is only running during the hours I might be working. So I get an admittedly slow and limited private build agent, but for less than $10 a month. As the VM is small it makes sense to not run a GUI. This means when you RDP to the new VM you just get a command prompt. 
So how do you get the agent onto the VM and set it up? You can’t just open a browser to VSTS or cut and paste a file via RDP, and I wanted to avoid the complexity of having to open up PowerShell remoting on the VM. The process I used was as follows:</p>
<ol>
<li>In VSTS I created a new Agent Pool for my Azure hosted build agents</li>
<li>In the Azure portal, in DevLabs, I created a new Windows Server 2016 (1709) VM</li>
<li>I then RDP’d to my new Azure VM and, in the open Command Prompt, launched PowerShell with <strong>powershell</strong></li>
<li>As I was in my user’s home directory, I cd’d into the <strong>downloads</strong> folder with <strong>cd downloads</strong></li>
<li>I then ran the following PowerShell command to download the agent (you can get the current URI for the agent from your VSTS Agent Pool ‘Download Agent’ feature, but an old version will do as it will auto update): <strong>invoke-webrequest -UseBasicParsing -uri <a href="https://github.com/Microsoft/vsts-agent/releases/download/v2.124.0/vsts-agent-win7-x64-2.124.0.zip">https://github.com/Microsoft/vsts-agent/releases/download/v2.124.0/vsts-agent-win7-x64-2.124.0.zip</a> -OutFile vsts-agent-win7-x64-2.124.0.zip</strong></li>
<li>You can then follow the standard agent setup instructions from the VSTS Agent Pool ‘Download Agent’ feature: <strong>mkdir agent ; cd agent</strong> followed by <strong>Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory(&quot;$HOME\Downloads\vsts-agent-win7-x64-2.124.0.zip&quot;, &quot;$PWD&quot;)</strong></li>
<li>I then configured the agent to run as a service; I exited back to the command prompt to do this, so the commands were <strong>exit</strong> followed by <strong>config.cmd</strong></li>
</ol>
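<p>Put together, the whole RDP session from the steps above looks like this (the agent version shown is the one I used; substitute whatever version your VSTS Agent Pool &lsquo;Download Agent&rsquo; feature currently offers):</p>
<pre tabindex="0"><code># From the VM&#39;s command prompt, start PowerShell
powershell
cd $HOME\Downloads

# Download the agent ZIP (an old version is fine, it will auto update)
invoke-webrequest -UseBasicParsing -uri https://github.com/Microsoft/vsts-agent/releases/download/v2.124.0/vsts-agent-win7-x64-2.124.0.zip -OutFile vsts-agent-win7-x64-2.124.0.zip

# Unpack with the .NET ZipFile class (the Shell COM approach does not work on a GUI-less server)
mkdir agent ; cd agent
Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory(&#34;$HOME\Downloads\vsts-agent-win7-x64-2.124.0.zip&#34;, &#34;$PWD&#34;)

# Back at the command prompt, configure the agent (choosing to run it as a service)
exit
config.cmd
</code></pre>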
<p>I now had an other build agent pool to use in my CI/CD pipelines at a reasonable cost, and the performance was not too bad either.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Future of Reporting on VSTS with VSTS Analytics</title>
      <link>https://blog.richardfennell.net/posts/future-of-reporting-on-vsts-with-vsts-analytics/</link>
      <pubDate>Thu, 16 Nov 2017 10:25:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/future-of-reporting-on-vsts-with-vsts-analytics/</guid>
      <description>&lt;p&gt;Reporting has always been important for software development, simply put the ability to know what has been done, and what remains to be done. For many teams the out the box reporting within TFS/VSTS dashboards has been enough e.g. sprint burndowns and kanban charts etc. Also TFS has always had SQL Reporting Services (SSRS) to provide rich reporting on a whole host of areas; though in my experience few clients use the out the box reports or customise their own reports. The lack of SSRS based reporting on VSTS has been a blocking limitation for some clients, preventing their move to VSTS. Also irrespective of peoples past use of custom reports, most people would like an easier way, than SSRS, to produce custom reports and charts. So enter &lt;a href=&#34;http://aka.ms/VSTSAnalytics&#34;&gt;VSTS Analytics&lt;/a&gt; Microsoft’s new free reporting option for VSTS that provide a host of reporting options for dashboards, Power BI and OData. For a great introduction have a look at Gregg Boer’s Channel9 video &lt;a href=&#34;https://channel9.msdn.com/Events/connect/2017/T251&#34;&gt;Visual Studio Team Services Reporting: Dashboards, Power BI, and OData&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Reporting has always been important for software development, simply put the ability to know what has been done, and what remains to be done. For many teams the out the box reporting within TFS/VSTS dashboards has been enough e.g. sprint burndowns and kanban charts etc. Also TFS has always had SQL Reporting Services (SSRS) to provide rich reporting on a whole host of areas; though in my experience few clients use the out the box reports or customise their own reports. The lack of SSRS based reporting on VSTS has been a blocking limitation for some clients, preventing their move to VSTS. Also irrespective of peoples past use of custom reports, most people would like an easier way, than SSRS, to produce custom reports and charts. So enter <a href="http://aka.ms/VSTSAnalytics">VSTS Analytics</a> Microsoft’s new free reporting option for VSTS that provide a host of reporting options for dashboards, Power BI and OData. For a great introduction have a look at Gregg Boer’s Channel9 video <a href="https://channel9.msdn.com/Events/connect/2017/T251">Visual Studio Team Services Reporting: Dashboards, Power BI, and OData</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Major update to my CI/CD process for VSTS extensions</title>
      <link>https://blog.richardfennell.net/posts/major-update-to-my-cicd-process-for-vsts-extensions/</link>
      <pubDate>Thu, 09 Nov 2017 12:51:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/major-update-to-my-cicd-process-for-vsts-extensions/</guid>
      <description>&lt;p&gt;As time passes I have found there is a need for more cross platform VSTS extensions as there is more uptake of VSTS beyond it’s historic Microsoft platform based roots.&lt;/p&gt;
&lt;p&gt;Historically most of my extensions have been Powershell based. Now this is not a fundamental problem for cross platform usage. this is due to the availability of  &lt;a href=&#34;https://blogs.msdn.microsoft.com/powershell/2017/06/09/getting-started-with-powershell-core-on-windows-mac-and-linux/&#34;&gt;Powershell Core&lt;/a&gt;. However, Typescript and Node.JS is a better fit I think in many cases. This has caused me to revise the way I structure my repo and build my VSTS extensions to provide a consistent understandable process. My old Gulp based process  for Typescript was too complex and inconsistent between tasks, it even confused me!  &lt;a href=&#34;https://github.com/rfennell/vNextBuild/wiki/Building-the-VSTS-Extensions&#34;&gt;My process revisions have been documented in my vNextBuild Github’s WIKI&lt;/a&gt;, so I don’t propose too repeat the bulk of the content here.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As time passes I have found there is a need for more cross platform VSTS extensions as there is more uptake of VSTS beyond it’s historic Microsoft platform based roots.</p>
<p>Historically most of my extensions have been Powershell based. Now this is not a fundamental problem for cross platform usage. this is due to the availability of  <a href="https://blogs.msdn.microsoft.com/powershell/2017/06/09/getting-started-with-powershell-core-on-windows-mac-and-linux/">Powershell Core</a>. However, Typescript and Node.JS is a better fit I think in many cases. This has caused me to revise the way I structure my repo and build my VSTS extensions to provide a consistent understandable process. My old Gulp based process  for Typescript was too complex and inconsistent between tasks, it even confused me!  <a href="https://github.com/rfennell/vNextBuild/wiki/Building-the-VSTS-Extensions">My process revisions have been documented in my vNextBuild Github’s WIKI</a>, so I don’t propose too repeat the bulk of the content here.</p>
<p>That said, it is worth touching on why I did not use <a href="http://donovanbrown.com/post/yo-vsts">YO VSTS</a>. If I were starting this project again I would certainly look at this tool to build out my basic repo structure. Also, I think I would look at a separate repo per VSTS extension, as opposed to putting them all in one repo. However, this project pre-dates the availability of <a href="http://donovanbrown.com/post/yo-vsts">YO VSTS</a>, hence the structure I have is different. Also, as people have forked the repo, I don’t intend to introduce breaking changes by splitting it. That all said, my TypeScript/Node.js process is heavily influenced by the structures and NPM scripts used in <a href="http://donovanbrown.com/post/yo-vsts">YO VSTS</a>, so people should find the core processes familiar i.e.</p>
<ol>
<li>At the command prompt</li>
<li>CD into an extension folder you wish to build (this contains the <strong>package.json</strong> file)</li>
<li>Get all the NPM packages, as defined in the <strong>package.json</strong>, with <strong>npm install</strong></li>
<li>Run TSLint and transpile the TypeScript to Node.js. There is a single command <strong>npm run build</strong> to do this, but the steps can be rerun individually using <strong>npm run lint</strong> and <strong>npm run transpile</strong></li>
<li>Run any unit tests you have with <strong>npm run test-no-logging</strong> (note that the <strong>npm run test</strong> script is used by the CI/CD process to dump the test results for uploading to VSTS logging; I tend to find when working locally that just dumping the test results to the console is enough)</li>
<li>Package the resultant .JS files with <strong>npm run package</strong>. This script does two jobs: it removes any NPM modules that were installed with --save-dev i.e. only used in development and not production, and it copies the required files to the correct locations to be packaged into a VSIX file. <strong>NOTE</strong>: after this command is run you need to rerun <strong>npm install</strong> to reinstall the development NPM packages prior to being able to run tests etc. locally</li>
</ol>
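<p>So the local inner loop for an extension, per the steps above, boils down to a handful of NPM scripts (the folder name here is just an example):</p>
<pre tabindex="0"><code>cd Extensions/SomeExtension   # any folder containing a package.json
npm install                   # restore all the NPM packages
npm run build                 # runs both &#39;npm run lint&#39; and &#39;npm run transpile&#39;
npm run test-no-logging       # unit tests, results dumped to the console
npm run package               # strip dev-only modules and stage files for the VSIX
npm install                   # reinstall dev packages before further local work
</code></pre>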
<p>This is complemented by an enhanced VSTS CI/CD process that has a far greater focus on automated testing, as well as packaging the VSIX files and releasing them to the VSTS marketplace.</p>
<p><a href="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image.png"><img alt="image" loading="lazy" src="https://blogs.blackmarble.co.uk/wp-content/uploads/sites/2/2017/11/image_thumb.png" title="image"></a></p>
<p>  <a href="https://github.com/rfennell/vNextBuild/wiki/Building-the-VSTS-Extensions">Check the vNextBuild Github’s WIKI for more details</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving BM-Bloggers from BlogEngine.NET to WordPress</title>
      <link>https://blog.richardfennell.net/posts/moving-bm-bloggers-from-blogengine-net-to-wordpress/</link>
      <pubDate>Wed, 18 Oct 2017 10:52:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-bm-bloggers-from-blogengine-net-to-wordpress/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://github.com/rxtur/BlogEngine.NET&#34;&gt;BlogEngine.Net&lt;/a&gt; has served us well as a blogging platform for a good few years. However, it is &lt;a href=&#34;https://github.com/rxtur/BlogEngine.NET/issues/164&#34;&gt;no longer under active support&lt;/a&gt;, so it is time to move on, too much risk of future security issues to ignore the lack of support. After a bit of thought we decided on WordPress as a replacement. OK this has had its own history of problems, but it has an active community and is well supported and in the Azure Marketplace.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://github.com/rxtur/BlogEngine.NET">BlogEngine.Net</a> has served us well as a blogging platform for a good few years. However, it is <a href="https://github.com/rxtur/BlogEngine.NET/issues/164">no longer under active support</a>, so it is time to move on, too much risk of future security issues to ignore the lack of support. After a bit of thought we decided on WordPress as a replacement. OK this has had its own history of problems, but it has an active community and is well supported and in the Azure Marketplace.</p>
<p>The <a href="http://laumania.net/2014/10/16/migrating-blogengine-net-wordpress-4-0-2014/">process to move content from BlogEngine to WordPress</a> requires a few steps, and the available community documentation is a bit out of date, mostly due to changes in multi-blog support in BlogEngine.NET. So these are the steps I followed.</p>
<h2 id="steps">Steps</h2>
<h3 id="setting-up-a-wordpress-network">Setting up a WordPress Network</h3>
<p>The first step is to create a standard WordPress App Service site on Azure. I used the option to create a MySQL DB in the App Service instance.</p>
<p>Once this was created I needed to make some WordPress setting changes to enable multi blog (network) usage.</p>
<ul>
<li>
<p>First run the standard WordPress setup to create a single site</p>
</li>
<li>
<p>Apply any upgrades available</p>
</li>
<li>
<p>Next I needed to update the settings, this involves editing text files on the instance so I used FTP (Filezilla) and a text editor to edit the required files</p>
</li>
<li>
<p>First I needed to update the PHP <a href="https://blogs.msdn.microsoft.com/hosamshobak/2015/03/20/azure-websites-how-to-change-php-ini-settings/">execution timeout</a> to give the content import enough time to run for our larger blogs; this means a custom config file. (Actually I am not sure this is 100% required, as the import retry, discussed below, would probably have been enough)</p>
</li>
<li>
<p>In the Azure portal set the AppSetting <strong>PHP_INI_SCAN_DIR</strong> to <strong>D:\home\site\wwwroot</strong></p>
</li>
<li>
<p>In the root of the site, create a text file <strong>phpconfig.ini</strong></p>
</li>
<li>
<p>In the file set <strong>max_execution_time = 600</strong> and save and upload the file</p>
</li>
<li>
<p>As the linked post notes, you can check these edits have worked by checking with the <strong>phpinfo()</strong> function on a PHP page on the site.</p>
</li>
<li>
<p>Next <a href="https://codex.wordpress.org/Create_A_Network">create the WordPress network</a>: edit <strong>wp-config.php</strong> and add <strong>define( 'WP_ALLOW_MULTISITE', true );</strong></p>
</li>
<li>
<p>Once the site reloads you can create the network. As the wizard completes it gives instructions to edit <strong>wp-config.php</strong> and <strong>web.config</strong>; however, I found that the web.config settings the documentation/wizard gives are missing one element, so I needed to use the following file</p>
</li>
</ul>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;  
&lt;configuration&gt;
    &lt;system.webServer&gt;
        &lt;rewrite&gt;
            &lt;rules&gt;
                &lt;rule name=&#34;WordPress Rule 1 Identical&#34; stopProcessing=&#34;true&#34;&gt; &lt;match url=&#34;^index.php$&#34; ignoreCase=&#34;false&#34; /&gt;
                    &lt;action type=&#34;None&#34; /&gt;
                &lt;/rule&gt;
                &lt;rule name=&#34;WordPress Rule 3 Identical&#34; stopProcessing=&#34;true&#34;&gt;
                    &lt;match url=&#34;^(\[\_0-9a-zA-Z-\]+/)?wp-admin$&#34; ignoreCase=&#34;false&#34; /&gt;
                    &lt;action type=&#34;Redirect&#34; url=&#34;{R:1}wp-admin/&#34; redirectType=&#34;Permanent&#34; /&gt;
                &lt;/rule&gt;
                &lt;rule name=&#34;WordPress Rule 4 Identical&#34; stopProcessing=&#34;true&#34;&gt;
                    &lt;match url=&#34;^&#34; ignoreCase=&#34;false&#34; /&gt;
                    &lt;conditions logicalGrouping=&#34;MatchAny&#34;&gt;
                        &lt;add input=&#34;{REQUEST\_FILENAME}&#34; matchType=&#34;IsFile&#34; ignoreCase=&#34;false&#34; /&gt;
                        &lt;add input=&#34;{REQUEST\_FILENAME}&#34; matchType=&#34;IsDirectory&#34; ignoreCase=&#34;false&#34; /&gt;
                    &lt;/conditions&gt;
                    &lt;action type=&#34;None&#34; /&gt;
                &lt;/rule&gt;
                &lt;rule name=&#34;WordPress Rule 5 R2&#34; stopProcessing=&#34;true&#34;&gt;
                    &lt;match url=&#34;^(\[\_0-9a-zA-Z-\]+/)?(wp-(content|admin|includes).\*)&#34; ignoreCase=&#34;false&#34; /&gt;
                    &lt;action type=&#34;Rewrite&#34; url=&#34;{R:2}&#34; /&gt;
                &lt;/rule&gt;
                &lt;rule name=&#34;WordPress Rule 6 Shorter&#34; stopProcessing=&#34;true&#34;&gt;
                    &lt;match url=&#34;^(\[\_0-9a-zA-Z-\]+/)?(.\*.php)$&#34; ignoreCase=&#34;false&#34; /&gt;
                    &lt;action type=&#34;Rewrite&#34; url=&#34;{R:2}&#34; /&gt;
                &lt;/rule&gt;
                &lt;rule name=&#34;WordPress Rule 7 Identical&#34; stopProcessing=&#34;true&#34;&gt;
                    &lt;match url=&#34;.&#34; ignoreCase=&#34;false&#34; /&gt;
                    &lt;action type=&#34;Rewrite&#34; url=&#34;index.php&#34; /&gt;
                &lt;/rule&gt;
            &lt;/rules&gt;
        &lt;/rewrite&gt;
    &lt;/system.webServer&gt;
&lt;/configuration&gt; 
</code></pre><p>Once this is all done you should have a WordPress network and can start to import the old BlogEngine content into sub sites.</p>
<h3 id="create-sites">Create Sites</h3>
<p>Next step is to login as a WordPress network admin (the original account you created in the wizard) and create a sub site.</p>
<p>When you do this a numeric folder will be created in the form <strong>/site/wwwroot/wp-content/uploads/sites/123</strong>; make a note of this number as you need it for fixing the imported content in the next step. You might need to create a test post with an image to force this folder creation.</p>
<h3 id="import-the-blogengine-contents">Import the BlogEngine Contents</h3>
<p>Next we needed to import the content of our BlogEngine.NET blogs. This is done using the <a href="http://laumania.net/2014/10/16/migrating-blogengine-net-wordpress-4-0-2014/">basic process documented in this blog post</a>, but it needs a few updates due to the post&rsquo;s age and the fact we had a multi-blog setup.</p>
<p>The only export option in BlogEngine.NET is BlogML, and if you are running in <a href="https://github.com/rxtur/BlogEngine.NET/issues/92">multisite mode this appears to be broken</a>. The fix is to edit <strong>/admin/themes/standard/sidebar.cshtml</strong> around line 90 to remove the if-test logic blocking the export options from showing in multisite mode. Once this is done you can log into a sub blog and export its contents as a BlogML XML file.</p>
<p><strong>Note:</strong> This is not without its problems. When you are logged into the sub site as an admin and select the <strong>settings&gt;advanced&gt;export</strong> option you get an error, as it tries to load the page <a href="http://yoursite/blogml.axd" title="http://localhost/blogs/blogml.axd">http://yoursite/blogml.axd</a>; this is due to the simple hack used to enable the export feature. You need to manually edit this URL to <a href="http://yoursite/[blogname]/blogml.axd" title="http://localhost/blogs/blogml.axd">http://yoursite/[blogname]/blogml.axd</a> and the export then works OK</p>
<p>You now move the media files associated with the blog posts. The only difference from a single blog setup is that you need to place them under the <strong>/site/wwwroot/wp-content/uploads/sites/123</strong> folder previously created. I suggest creating a folder for all the historic post media, e.g. <strong>/site/wwwroot/wp-content/uploads/sites/123/historic</strong>, and FTPing up all your old images from <strong>blogengine/App_Data/blogs/[name]/files</strong></p>
<p>I next hit the major issue: the BlogML import plugin (which you need to install as the WordPress network administrator) is 7 years old and won&rsquo;t activate on current versions of WordPress. The problem is changes in the PHP language. The fix is to use the edit option for the plugin and replace all the references to <strong>break $parseBlock</strong> with <strong>break 1</strong> in the file <strong>xpath.class.php</strong>. Once this is done the plugin activates at the network level, so it can be used in each sub site</p>
<p>But before we try the import we need to edit the exported BlogML file, as the linked blog post says. However, we can improve on the documented process. The blog post says the tags and categories are lost in the import process; this is true for the tags, but it is possible to fix the categories. To do this, and to fix the image paths, I have written some PowerShell to do the required updates. It is ugly, but it works, opening the file as text and as XML separately</p>
<pre tabindex="0"><code>param
(
    $filein = &#34;C:\Users\fez\Downloads\BlogML (5).xml&#34;,
    $outfile = &#34;C:\Users\fez\Downloads\BlogML rfennell.xml&#34;,
    $blogname = &#34;rfennell&#34;,
    $blogid = &#34;2&#34;
)

# Fix the image paths
[string]$document = Get-Content -Path $filein

$oldstring = &#34;http://blogs.blackmarble.co.uk/blogs/$blogname/image.axd?picture=&#34;

write-output &#34;Replacing name in $oldstring with BlogId $blogid&#34;

$document = $document -Replace [regex]::Escape($oldstring), &#34;/wp-content/uploads/sites/$blogid/historic/&#34;

# Some image URLs have missing slashes in the path, take the chance to fix them
$oldstring = &#34;http://blogs.blackmarble.co.uk/blogs/$($blogname)image.axd?picture=&#34;
$document = $document -Replace [regex]::Escape($oldstring), &#34;/wp-content/uploads/sites/$blogid/historic/&#34;

Set-Content -Value $document -Path $outfile

[xml]$XmlDocument = Get-Content -Path $outfile

# Fix the categories block
foreach ($item in $XmlDocument.blog.categories.category) {
    try {
        Write-Output &#34;Setting $($item.id) to $($item.title.&#39;#cdata-section&#39;)&#34;
        # Fix all the category references on the posts
        # (the typed variable casts the resulting string straight back to [xml])
        $XmlDocument = $XmlDocument.OuterXml.Replace($($item.id), $($item.title.&#39;#cdata-section&#39;))

        # Fix the categories block itself
        $item.id = $item.title.&#39;#cdata-section&#39;
    } catch {}
}

$XmlDocument.Save($outfile)
</code></pre><p>Make sure the parameters are correct for your blog export and target site then process you BlogEngine Export.</p>
<p>You can now log in to your newly created WordPress sub site and use the <strong>Tools/Import</strong> option to run the BlogML wizard. You are prompted to remap the authors of the posts as needed; I unified them to the site owner, then started the import. We did edit the PHP timeout earlier, but I found that for my largest 7Mb export file I still got Error 500 timeouts. The good news is that you can just rerun the import (maybe a few times) and it is clever enough to pick up where it left off and will eventually finish, with no duplications. There may be a different timeout that needs setting, but I did not find it.</p>
<p>You should now have the imported post content in your sub site. Unfortunately, you will have to handle static pages manually.</p>
<h3 id="finishing-up">Finishing Up</h3>
<p>You are now in the realm of WordPress, so you can add users, plug-ins and themes as needed to style your set of blogs.</p>
<p>One I found very useful was <a href="https://en-gb.wordpress.org/plugins/wds-multisite-aggregate/">WDS Multisite Aggregate</a>, which allowed the root site to be an aggregation of all the sub sites, just the same as I had on the BlogEngine.NET multisite.</p>
<p>Also, as I was running on Azure, I needed some <a href="https://blogs.msdn.microsoft.com/azureossds/2017/04/20/an-example-of-setting-wordpress-email-with-office-365-smtp/">special handling</a> for email to use SMTP, via the plug-in <a href="https://wordpress.org/plugins/wp-mail-smtp/">WP Mail SMTP</a>. Once this change was done I could configure the network root site’s email to allow user password resets. For comments, each individual sub site’s email needs configuring.</p>
<p>I had had concerns over links in old posts (as the URL structure had changed), but WordPress seems to sort most of this out; the remainder were sorted with redirection rules in the web.config.</p>
<h3 id="conclusion">Conclusion</h3>
<p>This whole process took some experimentation, but once that was done the rest was a ‘handle turning’ process. Let’s hope WordPress works as well for us as BlogEngine.NET did in the past.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Restarting VS Code fixed NPM INSTALL intermittent EPERM issues</title>
      <link>https://blog.richardfennell.net/posts/restarting-vs-code-fixed-npm-install-intermittent-eperm-issues/</link>
      <pubDate>Mon, 26 Jun 2017 15:07:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/restarting-vs-code-fixed-npm-install-intermittent-eperm-issues/</guid>
      <description>&lt;p&gt;Whilst doing some NPM build work for VSTS Extensions I kept getting &lt;a href=&#34;https://github.com/npm/npm/issues/12059&#34;&gt;intermittent EPERM errors about renaming Windows files during NPM install (as discussed on GitHub)&lt;/a&gt;. When you get this it completely blocks any development.&lt;/p&gt;
&lt;p&gt;As the GitHub issue discusses, there are many possible reasons for this issue and many proposed potential solutions. However, the only one that worked for me was to restart VS Code, as it appeared to be locking the &lt;strong&gt;node_modules&lt;/strong&gt; folder somehow, even though I could delete the folder via Windows Explorer without any problems.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst doing some NPM build work for VSTS Extensions I kept getting <a href="https://github.com/npm/npm/issues/12059">intermittent EPERM errors about renaming Windows files during NPM install (as discussed on GitHub)</a>. When you get this it completely blocks any development.</p>
<p>As the GitHub issue discusses, there are many possible reasons for this issue and many proposed potential solutions. However, the only one that worked for me was to restart VS Code, as it appeared to be locking the <strong>node_modules</strong> folder somehow, even though I could delete the folder via Windows Explorer without any problems.</p>
<p>A quick restart of VS Code and all was good again for a while, good enough to work with.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Duplicate project GUID blocking SonarQube analysis of Windows 10 Universal Projects</title>
      <link>https://blog.richardfennell.net/posts/duplicate-project-guid-blocking-sonarqube-analysis-of-windows-10-universal-projects/</link>
      <pubDate>Fri, 09 Jun 2017 16:17:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/duplicate-project-guid-blocking-sonarqube-analysis-of-windows-10-universal-projects/</guid>
      <description>&lt;p&gt;I have been working on getting a Windows 10 Universal application analysed with SonarQube 6.x as part of a VSTS build. The problem has been that when the VSTS task to complete the SonarQube analysis ran I kept getting an error in the form&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;WARNING: Duplicate project GUID: &amp;#34;8ace107e-8e3c-4a1b-9920-e76eb1db5e53&amp;#34;. Check that the project is only being built for a single platform/configuration and that that the project guid is unique. The project will not be analyzed by SonarQube. Project file: E:\Build1\_work\58\s\BlackMarble.Victory.Common.Module.csproj

… plus loads more similar lines.  
The exclude flag has been set so the project will not be analyzed by SonarQube. Project file: E:\Build1\_work\58\s\BlackMarble.Victory.Ux.Common.csproj  
… plus loads more similar lines. 

WARNING: Duplicate project GUID: &amp;#34;1e7b2f4e-6de2-40ab-bff9-a0c63db47ca2&amp;#34;. Check that the project is only being built for a single platform/configuration and that that the project guid is unique. The project will not be analyzed by SonarQube. 2017-06-09T15:50:41.9993583Z ##[error]No analysable projects were found but some duplicate project IDs were found. Possible cause: you are building multiple configurations (e.g. DEBUG|x86 and RELEASE|x64) at the same time, which is not supported by the SonarQube integration. Please build and analyse each configuration individually.  
Generation of the sonar-properties file failed. Unable to complete SonarQube analysis.  
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Turns out the issue was that even though my CI build was only set to create an x86|Debug build, the act of creating the .APPX package was causing both x64 and ARM builds to be built too; this was too much for SonarQube, as it thought I had a multi-platform build.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working on getting a Windows 10 Universal application analysed with SonarQube 6.x as part of a VSTS build. The problem has been that when the VSTS task to complete the SonarQube analysis ran I kept getting an error in the form</p>
<pre tabindex="0"><code>WARNING: Duplicate project GUID: &#34;8ace107e-8e3c-4a1b-9920-e76eb1db5e53&#34;. Check that the project is only being built for a single platform/configuration and that that the project guid is unique. The project will not be analyzed by SonarQube. Project file: E:\Build1\_work\58\s\BlackMarble.Victory.Common.Module.csproj

… plus loads more similar lines.  
The exclude flag has been set so the project will not be analyzed by SonarQube. Project file: E:\Build1\_work\58\s\BlackMarble.Victory.Ux.Common.csproj  
… plus loads more similar lines. 

WARNING: Duplicate project GUID: &#34;1e7b2f4e-6de2-40ab-bff9-a0c63db47ca2&#34;. Check that the project is only being built for a single platform/configuration and that that the project guid is unique. The project will not be analyzed by SonarQube. 2017-06-09T15:50:41.9993583Z ##[error]No analysable projects were found but some duplicate project IDs were found. Possible cause: you are building multiple configurations (e.g. DEBUG|x86 and RELEASE|x64) at the same time, which is not supported by the SonarQube integration. Please build and analyse each configuration individually.  
Generation of the sonar-properties file failed. Unable to complete SonarQube analysis.  
</code></pre><p>Turns out the issue was that even though my CI build was only set to create an x86|Debug build, the act of creating the .APPX package was causing both x64 and ARM builds to be built too; this was too much for SonarQube, as it thought I had a multi-platform build.</p>
<p>The answer was to pass a parameter into the Visual Studio build task to disable the creation of the .APPX package.</p>
<p>The parameter override required is <strong>/p:AppxBundle=Never</strong>. This overrides the setting of <strong>Always</strong> that was set in the .CSProj file.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_348.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_342.png" title="image"></a></p>
<p>Once this change was done analysis completed as expected. Just need to fix all the issues it found now!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Book your free place at a Global DevOps Bootcamp venue for the 17th June 2017 event</title>
      <link>https://blog.richardfennell.net/posts/book-your-free-place-at-a-global-devops-bootcamp-venue-for-the-17th-june-2017-event/</link>
      <pubDate>Thu, 11 May 2017 19:33:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/book-your-free-place-at-a-global-devops-bootcamp-venue-for-the-17th-june-2017-event/</guid>
      <description>&lt;p&gt;Are you enthused by all the news at &lt;a href=&#34;https://build.microsoft.com/&#34;&gt;Build 2017&lt;/a&gt;?&lt;/p&gt;
&lt;p&gt;Do you want to find out more about VSTS, DevOps and Continuous Delivery?&lt;/p&gt;
&lt;p&gt;Well why not take the chance to join us on June 17th at Black Marble, or one of the over 25 other venues around the world for the first Global DevOps Bootcamp?&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/gdb-logo_%28002%29_%28002%29.png&#34;&gt;&lt;img alt=&#34;gdb-logo (002) (002)&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/gdb-logo_(002)_(002)_thumb.png&#34; title=&#34;gdb-logo (002) (002)&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The Global DevOps Bootcamp is a free one-day event hosted by local passionate DevOps communities around the globe. Find your local venue on the &lt;a href=&#34;http://globaldevopsbootcamp.com/&#34;&gt;Global DevOps Bootcamp website&lt;/a&gt; or search for Global DevOps Bootcamp on &lt;a href=&#34;https://www.eventbrite.com/d/worldwide/global-devops-bootcamp/&#34;&gt;EventBrite&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Are you enthused by all the news at <a href="https://build.microsoft.com/">Build 2017</a>?</p>
<p>Do you want to find out more about VSTS, DevOps and Continuous Delivery?</p>
<p>Well why not take the chance to join us on June 17th at Black Marble, or one of the over 25 other venues around the world for the first Global DevOps Bootcamp?</p>
<p><a href="/wp-content/uploads/sites/2/historic/gdb-logo_%28002%29_%28002%29.png"><img alt="gdb-logo (002) (002)" loading="lazy" src="/wp-content/uploads/sites/2/historic/gdb-logo_(002)_(002)_thumb.png" title="gdb-logo (002) (002)"></a></p>
<p>The Global DevOps Bootcamp is a free one-day event hosted by local passionate DevOps communities around the globe. Find your local venue on the <a href="http://globaldevopsbootcamp.com/">Global DevOps Bootcamp website</a> or search for Global DevOps Bootcamp on <a href="https://www.eventbrite.com/d/worldwide/global-devops-bootcamp/">EventBrite</a></p>
<p>Learn about the latest DevOps trends, ‘get your hands dirty during the Hackathon’, gain insights into new technologies and share experiences with other community members. All based around the concept of &ldquo;From Server to Serverless in a DevOps world&rdquo;. The Global DevOps Bootcamp is all about DevOps on the Microsoft Stack.</p>
<p>Remember, places are limited at all venues, so make sure you get your name down soon to avoid disappointment.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Options migrating TFS to VSTS</title>
      <link>https://blog.richardfennell.net/posts/options-migrating-tfs-to-vsts/</link>
      <pubDate>Wed, 10 May 2017 14:11:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/options-migrating-tfs-to-vsts/</guid>
      <description>&lt;p&gt;I did an event yesterday on using the &lt;a href=&#34;https://www.visualstudio.com/en-us/articles/adopting-vsts&#34;&gt;TFS Database Import Service&lt;/a&gt; to do migrations from on premises TFS to VSTS. During the presentation I discussed some of the other migration options available. Not everyone needs a high fidelity migration, bringing everything over. Some teams may want to just bring over their current source or just a subset of their source. Maybe they are making a major change in work practices and want to start anew on VSTS. To try to give an idea of the options I have produced this flow chart to help with the choices&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I did an event yesterday on using the <a href="https://www.visualstudio.com/en-us/articles/adopting-vsts">TFS Database Import Service</a> to do migrations from on premises TFS to VSTS. During the presentation I discussed some of the other migration options available. Not everyone needs a high fidelity migration, bringing everything over. Some teams may want to just bring over their current source or just a subset of their source. Maybe they are making a major change in work practices and want to start anew on VSTS. To try to give an idea of the options I have produced this flow chart to help with the choices</p>
<p><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_341.png" title="image"></p>
<p><a href="/files/TFS-Migrate-Choices.pdf">Click for a PDF version</a>  </p>
<p>It mentions a few 3rd party tools in the flowchart, so here are some useful links</p>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=Willy-PSchaub.TeamFoundationServerIntegrationToolsMarch2012Relea">TFS Integration Platform</a> – can in theory move source and work items – but really try not to use it!</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=nkdagility.vsts-sync-migration">VSTS Sync Migration Tools</a> –  moves Work items</li>
<li><a href="http://www.timelymigration.com/">Timely Migration</a> – moves TFVC source (commercial product)</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=vs-publisher-1455028.OpsHubVisualStudioOnlineMigrationUtility">OpsHub</a> – moves Team Projects (free &amp; commercial versions)</li>
<li><a href="https://github.com/git-tfs/git-tfs">Git TFS</a> – move TFVC into Git</li>
<li><a href="https://www.visualstudio.com/en-us/docs/work/office/bulk-add-modify-work-items-excel">TFS Office Integration</a> – moves work items via Excel</li>
<li><a href="https://www.visualstudio.com/en-us/articles/adopting-vsts">TFS Database Import Service</a> – the full fidelity service</li>
</ul>
<p>Also, if you find yourself in the orange box at the bottom and don’t want to use the <a href="https://www.visualstudio.com/en-us/articles/adopting-vsts">TFS Database Import Service</a> for some reason, <a href="https://www.microsoft.com/en-gb/developers/articles/week02mar2014/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible/">have a look at this post I did on Microsoft’s UK Developers site</a>. It might give you some ideas</p>
]]></content:encoded>
    </item>
    <item>
      <title>Debugging Typescript in Visual Studio Code</title>
      <link>https://blog.richardfennell.net/posts/debugging-typescript-in-visual-studio-code/</link>
      <pubDate>Thu, 04 May 2017 19:34:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/debugging-typescript-in-visual-studio-code/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is one of those posts I do as a reminder to myself. I have struggled to get debugging working in VSCode for Typescript files. If I set breakpoints in the underlying generated JavaScript they worked, but did not work if they were set in the Typescript source file. There are loads of walkthroughs and answers on Stackoverflow, but all with that vital little bit (for me) missing. So this is what I needed to do for my usage scenario…&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This is one of those posts I do as a reminder to myself. I have struggled to get debugging working in VSCode for Typescript files. If I set breakpoints in the underlying generated JavaScript they worked, but did not work if they were set in the Typescript source file. There are loads of walkthroughs and answers on Stackoverflow, but all with that vital little bit (for me) missing. So this is what I needed to do for my usage scenario…</em></p>
<p>Whilst developing a Node based VSTS extension I have the following structure</p>
<pre tabindex="0"><code>Git repo root       (folder)
--- .vscode         (folder)
------ launch.json
--- mystuff.src     (the source .TS files)
------ script.ts
------ tsconfig.json
--- mystuff         (the target folder for the .JS files)
</code></pre><p>I develop my Typescript in the <strong>mystuff.src</strong> folder and use the <a href="https://www.typescriptlang.org/docs/handbook/compiler-options.html">TSC compiler</a> to generate the <strong>.JS</strong> files into the target folder, running the ‘<strong>tsc -p .</strong>’ command whilst sitting in the <strong>mystuff.src</strong> folder.</p>
<p>Note: I actually run this <strong>tsc</strong> command using Gulp, but this post does not need to go into the detail of that.</p>
<p>The key thing is to make sure the two <strong>.json</strong> files have the correct options</p>
<p>The <strong>tsconfig.json</strong> is</p>
<pre tabindex="0"><code>{
  &#34;compilerOptions&#34;: {
    &#34;target&#34;: &#34;ES6&#34;,
    &#34;module&#34;: &#34;commonjs&#34;,
    &#34;watch&#34;: false,
    &#34;outDir&#34;: &#34;../mystuff/&#34;,
    &#34;sourceMap&#34;: true
  },
  &#34;exclude&#34;: [
    &#34;node_modules&#34;
  ]
}
</code></pre><p>The important lines are <strong>outDir</strong> and <strong>sourceMap</strong></p>
<ul>
<li>The path to generate the <strong>.JS</strong> to – for me it was important to generate to a different folder, as this made it easier to create the VSTS extension packages without shipping the <strong>.TS</strong> files by accident. This was part of my debugging problem; if the <strong>.ts</strong> and <strong>.js</strong> files are in the same folder there should be no issues</li>
<li>Creating the source map files (<strong>.JS.MAP</strong>) which enable debugging</li>
</ul>
<p>The <strong>launch.json</strong> is</p>
<pre tabindex="0"><code>{
  &#34;version&#34;: &#34;0.2.0&#34;,
  &#34;configurations&#34;:
  [
    {
      &#34;type&#34;: &#34;node&#34;,
      &#34;request&#34;: &#34;launch&#34;,
      &#34;name&#34;: &#34;Node – my stuff&#34;,
      &#34;program&#34;: &#34;${workspaceRoot}/mystuff.src/script.ts&#34;,
      &#34;outFiles&#34;: [&#34;${workspaceRoot}/mystuff/*.js&#34;]
    }
  ]
}
</code></pre><p>The critical lines, and the ones I messed up, are</p>
<ul>
<li><strong>program</strong> must point at the <strong>.TS</strong> file</li>
<li><strong>outFiles</strong> must point to the location of the <strong>.JS</strong> and <strong>.JS.MAP</strong> files in the target folder <strong>and those square brackets [] are vital</strong></li>
</ul>
<p>Once I had all this in place I could set breakpoints in the <strong>.TS</strong> file and they worked</p>
]]></content:encoded>
    </item>
    <item>
      <title>Two new tasks in my Manifest Version VSTS Extension</title>
      <link>https://blog.richardfennell.net/posts/two-new-tasks-in-my-manifest-version-vsts-extension/</link>
      <pubDate>Tue, 02 May 2017 16:04:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/two-new-tasks-in-my-manifest-version-vsts-extension/</guid>
      <description>&lt;p&gt;I have just released a new version of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task&#34;&gt;VSTS Manifest Version Extension&lt;/a&gt; (1.5.22). This adds two new tasks to the set of versioning tasks in the extension. The complete set now is&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;VersionAssemblies - sets the version in the assemblyinfo.cs or .vb (used pre build)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VersionDotNetCoreAssemblies - sets the version in the *.csproj (used pre build)&lt;/strong&gt;  - cross platform&lt;/li&gt;
&lt;li&gt;VersionAPPX - sets the version in the Package.appxmanifest (used pre build)&lt;/li&gt;
&lt;li&gt;VersionVSIX - sets the version in the source.extension.vsixmanifest (used pre build)&lt;/li&gt;
&lt;li&gt;VersionDacpac - sets the version in a SQL DACPAC (used post build)&lt;/li&gt;
&lt;li&gt;VersionNuspec - sets the version in a Nuget Nuspec file (used pre packing)&lt;/li&gt;
&lt;li&gt;VersionSharePoint - sets the version in a SharePoint 2013/2016/O365 Add-In&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VersionWix - sets the version in a Wix Project&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As with all the other tasks, these new tasks aim to find files recursively to update with a version number extracted from the build number&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just released a new version of my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">VSTS Manifest Version Extension</a> (1.5.22). This adds two new tasks to the set of versioning tasks in the extension. The complete set now is</p>
<ul>
<li>VersionAssemblies - sets the version in the assemblyinfo.cs or .vb (used pre build)</li>
<li><strong>VersionDotNetCoreAssemblies - sets the version in the *.csproj (used pre build)</strong>  - cross platform</li>
<li>VersionAPPX - sets the version in the Package.appxmanifest (used pre build)</li>
<li>VersionVSIX - sets the version in the source.extension.vsixmanifest (used pre build)</li>
<li>VersionDacpac - sets the version in a SQL DACPAC (used post build)</li>
<li>VersionNuspec - sets the version in a Nuget Nuspec file (used pre packing)</li>
<li>VersionSharePoint - sets the version in a SharePoint 2013/2016/O365 Add-In</li>
<li><strong>VersionWix - sets the version in a Wix Project</strong></li>
</ul>
<p>As with all the other tasks, these new tasks aim to find files recursively to update with a version number extracted from the build number</p>
<h3 id="version-net-core-csproj-file">Version .NET Core CSPROJ file</h3>
<p>This is a cross platform task, written in Node.JS. It updates a version number in a .NET Core .CSPROJ file. There is a parameter so it can look for different XML node names</p>
<pre tabindex="0"><code>&lt;Project Sdk=&#34;Microsoft.NET.Sdk.Web&#34;&gt; 

&lt;PropertyGroup&gt; 

&lt;TargetFramework&gt;netcoreapp1.1&lt;/TargetFramework&gt; 

&lt;Version&gt;1.0.0.0&lt;/Version&gt; 

&lt;/PropertyGroup&gt; 

&lt;ItemGroup&gt; 

&lt;PackageReference Include=&#34;Microsoft.AspNetCore&#34; Version=&#34;1.1.1&#34; /&gt; 

&lt;PackageReference Include=&#34;Microsoft.AspNetCore.Mvc&#34; Version=&#34;1.1.2&#34; /&gt; 

&lt;PackageReference Include=&#34;Microsoft.AspNetCore.StaticFiles&#34; Version=&#34;1.1.1&#34; /&gt; 

&lt;PackageReference Include=&#34;Microsoft.Extensions.Logging.Debug&#34; Version=&#34;1.1.1&#34; /&gt; 

&lt;PackageReference Include=&#34;Microsoft.VisualStudio.Web.BrowserLink&#34; Version=&#34;1.1.0&#34; /&gt; 

&lt;/ItemGroup&gt; 

&lt;/Project&gt;
</code></pre><h3 id="version-a-wix-project">Version a Wix project</h3>
<p>As <a href="http://wixtoolset.org/">Wix</a> is a Windows technology, my new Wix task is only supported on Windows (PowerShell) build agents.</p>
<p>Due to the flexibility (complexity?) of Wix it does have to make some assumptions. <a href="https://github.com/rfennell/vNextBuild/wiki/Wix-Versioning">I have documented them in the VSTS extension’s project wiki</a>.</p>
<p>The summary is you need to put all the version number variable definitions in an <strong>installerversion.wxi</strong> file that is included into your Wix project. The task looks for and updates this file(s)</p>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;  
&lt;!-- Note that this file will be overridden by the build server. --&gt;  
&lt;Include&gt;  
  &lt;?define MajorVersion = &#34;1&#34; ?&gt;  
  &lt;?define MinorVersion = &#34;0&#34; ?&gt;  
  &lt;?define BuildNumber = &#34;0&#34; ?&gt;  
  &lt;?define Revision = &#34;0&#34; ?&gt;  
  &lt;?define FullVersion = &#34;1.0.0.0&#34; ?&gt;  
  &lt;!--WiX Installer Versions are Major.Minor.Revision --&gt;  
&lt;/Include&gt;
</code></pre><p>These variables can be used anywhere in the Wix project where needed e.g. to set the MSI version you could do the following</p>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;UTF-8&#34;?&gt;  
&lt;?include InstallerVersion.wxi ?&gt;   
&lt;Wix xmlns=&#34;http://schemas.microsoft.com/wix/2006/wi&#34;&gt;  
  &lt;Product Id=&#34;*&#34; Name=&#34;WixProject&#34; Language=&#34;1033&#34; Version=&#34;$(var.FullVersion)&#34; Manufacturer=&#34;Test&#34;  
           UpgradeCode=&#34;510ca227-7d91-43f4-881c-13319a07b299&#34;&gt;
</code></pre><p>If you want a different number format just build it from the individual parts.</p>
<p>Hope you find them useful, as usual asked question on <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">VSTS Manifest Version Extension Q&amp;A</a> or raise <a href="https://github.com/rfennell/vNextBuild/issues">issues on Github</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>401.1 Permission error with on-premises TFS when accessing the API with a PAT</title>
      <link>https://blog.richardfennell.net/posts/401-1-permission-error-with-on-premises-tfs-when-accessing-the-api-with-a-pat/</link>
      <pubDate>Sat, 29 Apr 2017 15:27:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/401-1-permission-error-with-on-premises-tfs-when-accessing-the-api-with-a-pat/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;If you are creating VSTS build extensions you will need to get the build or release’s PAT token if you wish to call the VSTS REST API.&lt;/p&gt;
&lt;p&gt;This is done using a call like this (Node)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;import tl = require(&amp;#39;vsts-task-lib/task&amp;#39;);

var auth = tl.getEndpointAuthorization(&amp;#39;SYSTEMVSSCONNECTION&amp;#39;, false);

if (auth.scheme === &amp;#39;OAuth&amp;#39;) {
    var token = auth.parameters[&amp;#39;AccessToken&amp;#39;];
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;or (PowerShell)&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$vssEndPoint = Get-ServiceEndPoint -Name &amp;#34;SystemVssConnection&amp;#34; -Context $distributedTaskContext 

$personalAccessToken = $vssEndpoint.Authorization.Parameters.AccessToken
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You pop the resultant PAT into the headers of your REST web request and you are away and running.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>If you are creating VSTS build extensions you will need to get the build or release’s PAT token if you wish to call the VSTS REST API.</p>
<p>This is done using a call like this (Node)</p>
<pre tabindex="0"><code>import tl = require(&#39;vsts-task-lib/task&#39;);

var auth = tl.getEndpointAuthorization(&#39;SYSTEMVSSCONNECTION&#39;, false);

if (auth.scheme === &#39;OAuth&#39;) {
    var token = auth.parameters[&#39;AccessToken&#39;];
}
</code></pre><p>or (PowerShell)</p>
<pre tabindex="0"><code>$vssEndPoint = Get-ServiceEndPoint -Name &#34;SystemVssConnection&#34; -Context $distributedTaskContext 

$personalAccessToken = $vssEndpoint.Authorization.Parameters.AccessToken
</code></pre><p>You pop the resultant PAT into the headers of your REST web request and you are away and running.</p>
<h3 id="the-problem">The Problem</h3>
<p>I hit a problem using this logic in VSTS extensions when they are run on TFS. On VSTS all was fine, but on TFS I got an unexpected 401.1 permission error on the first REST call, i.e. I could not access the VSTS REST API</p>
<p>I tried fiddling with the rights of my build user account; it was not that. I also tried setting the ‘Allow scripts to access OAuth token’ property for the build/release agent</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_345.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_339.png" title="image"></a></p>
<p>But this does not help either. This option just <a href="https://www.visualstudio.com/en-us/docs/build/scripts/index#use-the-oauth-token-to-access-the-rest-api">makes the PAT available as an environment variable</a>, so you don’t need to use the code shown above.</p>
<p>And anyway – it all worked on VSTS, so it could not have been that!</p>
<h3 id="solution">Solution</h3>
<p>The answer was that I had basic authentication enabled on my test TFS VM; as soon as this was disabled (the default) everything leapt into life.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_346.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_340.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Rik and my ‘Living the Dream - real world DevOps’ session available on YouTube</title>
      <link>https://blog.richardfennell.net/posts/rik-and-my-living-the-dream-real-world-devops-session-available-on-youtube/</link>
      <pubDate>Fri, 28 Apr 2017 10:18:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/rik-and-my-living-the-dream-real-world-devops-session-available-on-youtube/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/blogs/rhepworth&#34;&gt;Rik Hepworth&lt;/a&gt; and I have been doing our ‘Living the Dream – Real world DevOps with VSTS and ARM’ session at a good few events over the past few months.&lt;/p&gt;
&lt;p&gt;We now, at last, have a recording of a version of it up on &lt;a href=&#34;http://bit.ly/BMDevOpsPL&#34;&gt;Black Marble YouTube Channel’s DevOps Playlist&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Enjoy….&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="/blogs/rhepworth">Rik Hepworth</a> and I have been doing our ‘Living the Dream – Real world DevOps with VSTS and ARM’ session at a good few events over the past few months.</p>
<p>We now, at last, have a recording of a version of it up on the <a href="http://bit.ly/BMDevOpsPL">Black Marble YouTube Channel’s DevOps Playlist</a>.</p>
<p>Enjoy….</p>
]]></content:encoded>
    </item>
    <item>
      <title>Still spaces available for the Leeds venue of the Global DevOps Bootcamp</title>
      <link>https://blog.richardfennell.net/posts/still-spaces-available-for-the-leeds-venue-of-the-global-devops-bootcamp/</link>
      <pubDate>Tue, 25 Apr 2017 11:59:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/still-spaces-available-for-the-leeds-venue-of-the-global-devops-bootcamp/</guid>
      <description>&lt;p&gt;There are still spaces available at the Black Marble hosted venue for the &lt;a href=&#34;http://globaldevopsbootcamp.com&#34;&gt;Global DevOps Bootcamp&lt;/a&gt; on the 17th of June&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_342.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://www.eventbrite.com/e/global-devops-bootcamp-black-marble-tickets-32433470383?aff=es2&#34;&gt;If you are interested, register here&lt;/a&gt;&lt;/p&gt;
      <content:encoded><![CDATA[<p>There are still spaces available at the Black Marble hosted venue for the <a href="http://globaldevopsbootcamp.com">Global DevOps Bootcamp</a> on the 17th of June</p>
<p><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_342.png"></p>
<p><a href="https://www.eventbrite.com/e/global-devops-bootcamp-black-marble-tickets-32433470383?aff=es2">If you are interested register here</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Presenting at an event in Leeds - Making it easy to migrate your ALM process to the Cloud</title>
      <link>https://blog.richardfennell.net/posts/presenting-at-an-event-in-leeds-making-it-easy-to-migrate-your-alm-process-to-the-cloud/</link>
      <pubDate>Tue, 25 Apr 2017 11:54:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/presenting-at-an-event-in-leeds-making-it-easy-to-migrate-your-alm-process-to-the-cloud/</guid>
      <description>&lt;p&gt;Do you find your TFS server gets forgotten?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It is not owned by the IT department, and the development team don’t have the time to support it fully, so it never gets patched or upgraded?&lt;/li&gt;
&lt;li&gt;Or maybe you are adopting a cloud-first strategy for all your systems?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Well maybe it is time to consider moving your on-premises TFS instance to VSTS?&lt;/p&gt;
&lt;p&gt;On the 9th of May at the Crowne Plaza Hotel in Leeds I will be presenting at a Black Marble/Microsoft event where we will be looking at Microsoft’s new high fidelity &lt;a href=&#34;https://www.visualstudio.com/en-us/articles/migration-overview&#34;&gt;VSTS database import tools that can be used to move a TFS instance to VSTS&lt;/a&gt;.&lt;/p&gt;
      <content:encoded><![CDATA[<p>Do you find your TFS server gets forgotten?</p>
<ul>
<li>It is not owned by the IT department, and the development team don’t have the time to support it fully, so it never gets patched or upgraded?</li>
<li>Or maybe you are adopting a cloud-first strategy for all your systems?</li>
</ul>
<p>Well maybe it is time to consider moving your on-premises TFS instance to VSTS?</p>
<p>On the 9th of May at the Crowne Plaza Hotel in Leeds I will be presenting at a Black Marble/Microsoft event where we will be looking at Microsoft’s new high fidelity <a href="https://www.visualstudio.com/en-us/articles/migration-overview">VSTS database import tools that can be used to move a TFS instance to VSTS</a>.</p>
<p>I will also be considering the pros and cons of the other migration options available to you, hopefully making this a very useful session if you are considering a move to VSTS from TFS or any other source control ALM solution.</p>
<p>Hope to see you there – <a href="http://bit.ly/TFStoVSTS0905">to register, click here</a></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_344.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_338.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>On RadioTFS again</title>
      <link>https://blog.richardfennell.net/posts/on-radiotfs-again/</link>
      <pubDate>Thu, 20 Apr 2017 07:12:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/on-radiotfs-again/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/author/rik&#34;&gt;Rik&lt;/a&gt; and myself had a great time fun last night chatting all things ALM and DevOps with Greg Duncan on &lt;a href=&#34;http://radiotfs.com/Show/136&#34;&gt;RadioTFS show 136&lt;/a&gt;. I was amazed it was a year since I was last on.&lt;/p&gt;
&lt;p&gt;The podcast is now up at &lt;a href=&#34;https://t.co/BSBtF9eo5z&#34;&gt;http://radiotfs.com/Show/136&lt;/a&gt;, so why not have a listen?&lt;/p&gt;
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/author/rik">Rik</a> and myself had a great time fun last night chatting all things ALM and DevOps with Greg Duncan on <a href="http://radiotfs.com/Show/136">RadioTFS show 136</a>. I was amazed it was a year since I was last on.</p>
<p>The podcast is now up at <a href="https://t.co/BSBtF9eo5z">http://radiotfs.com/Show/136</a>, so why not have a listen?</p>
]]></content:encoded>
    </item>
    <item>
      <title>New Cross Platform version of my Generate Release Notes VSTS Extension</title>
      <link>https://blog.richardfennell.net/posts/new-cross-platform-version-of-my-generate-release-notes-vsts-extension/</link>
      <pubDate>Wed, 12 Apr 2017 17:17:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-cross-platform-version-of-my-generate-release-notes-vsts-extension/</guid>
      <description>&lt;p&gt;My &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task&#34;&gt;Generate Release Notes VSTS extension&lt;/a&gt; has been my most popular by a long way. I have enhanced it, with the help of others via pull requests, but there have been two repeating common questions that have not been resolved&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Is it cross platform?&lt;/li&gt;
&lt;li&gt;Why does it show different work items and commit associations to the VSTS Release Status UI?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Well the answer to the first is that the core of the logic for the extension came from a PowerShell script we used internally, so PowerShell was the obvious first platform, especially as, though my PowerShell skills are not great, my Node skills were weaker!&lt;/p&gt;
      <content:encoded><![CDATA[<p>My <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">Generate Release Notes VSTS extension</a> has been my most popular by a long way. I have enhanced it, with the help of others via pull requests, but there have been two repeating common questions that have not been resolved</p>
<ol>
<li>Is it cross platform?</li>
<li>Why does it show different work items and commit associations to the VSTS Release Status UI?</li>
</ol>
<p>Well the answer to the first is that the core of the logic for the extension came from a PowerShell script we used internally, so PowerShell was the obvious first platform, especially as, though my PowerShell skills are not great, my Node skills were weaker!</p>
<p>The second issue is due to my original extension and VSTS’s UI doing very different things. My old extension was based around inspecting build results, so when working in a release it finds all the builds between the current release and the last successful one and looks at the details of each build in turn, building a big list of changes. VSTS’s Release summary UI does not do this; it makes a few currently undocumented ‘compare this to that’ API calls to get the lists.</p>
<p>In an attempt to address both these questions I have, over the past few weeks, created a <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes">new Cross Platform Generate Release Notes Extension</a>. Now don’t worry, the old one is still there and supported; they do different jobs. This new extension is cross platform and tries to use the same API calls as the VSTS Release summary UI.</p>
<p>There are, of course, a few gotchas:</p>
<ul>
<li>I did have to adopt a workaround for TFVC changeset history, as Microsoft use an old internal API call, but that was the only place I had to do this. So apologies if there are any differences in the changesets returned.</li>
<li>The template format is very similar to that used in my original <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">Generate Release Notes VSTS extension</a>, but due to the change from PowerShell to Node I had to move from <strong>$($widetail.fields.&lsquo;System.Title&rsquo;)</strong> style to <strong>${widetail.fields[&lsquo;System.Title&rsquo;]}</strong></li>
</ul>
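<p>For anyone wondering why the syntax had to change: in Node the template text can be evaluated as a JavaScript template literal against the work item object, which is why the <strong>${widetail.fields[&lsquo;System.Title&rsquo;]}</strong> form is needed. An illustrative sketch only (not the extension&rsquo;s actual code):</p>

```typescript
// Illustrative sketch of Node-style template expansion (not the extension's
// real implementation): the template text is wrapped in backticks and
// evaluated as a template literal with `widetail` in scope.
function expandTemplate(template: string, widetail: any): string {
  return new Function('widetail', 'return `' + template + '`;')(widetail);
}

const widetail = { fields: { 'System.Title': 'Added the new feature' } };
console.log(expandTemplate("* ${widetail.fields['System.Title']}", widetail));
// → "* Added the new feature"
```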
<p>So I hope people find this new extension useful; I can now go off and happily close old issues on GitHub.</p>
]]></content:encoded>
    </item>
    <item>
      <title>VSTS Build Task Development - Now that is a misleading error message !</title>
      <link>https://blog.richardfennell.net/posts/vsts-build-task-development-now-that-is-a-misleading-error-message/</link>
      <pubDate>Sun, 02 Apr 2017 15:00:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vsts-build-task-development-now-that-is-a-misleading-error-message/</guid>
      <description>&lt;p&gt;I have been working on a new Node.JS based VSTS Build Extension, so loads of Node learning going on as it is a language I have not done much with in the past. I expect to get caught out in places, but I have just wasted a good few hours on this one!&lt;/p&gt;
&lt;p&gt;I am working in VS Code using Typescript to generate my Node based task. Whilst hacking around I have been just running the script in the VS Code debugger, getting the logic right before packaging it up and testing within a VSTS Build extension.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working on a new Node.JS based VSTS Build Extension, so loads of Node learning going on as it is a language I have not done much with in the past. I expect to get caught out in places, but I have just wasted a good few hours on this one!</p>
<p>I am working in VS Code using Typescript to generate my Node based task. Whilst hacking around I have been just running the script in the VS Code debugger, getting the logic right before packaging it up and testing within a VSTS Build extension.</p>
<p>Everything was working until I did a big refactor, basically moving functions to a module. I suddenly started getting the following error when trying to make a REST call:</p>
<blockquote>
<p><em>Exception occured</em></p>
<p><em>Error: Error: Hostname/IP doesn&rsquo;t match certificate&rsquo;s altnames: &ldquo;Host: richardfennell.vsrm.visualstudio.com. is not in the cert&rsquo;s altnames: DNS:*.vsrm.visualstudio.com, DNS:vsrm.visualstudio.com, DNS:*.app.vsrm.visualstudio.com, DNS:app.vsrm.visualstudio.com&rdquo;</em></p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_343.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_337.png" title="image"></a></p>
<p>I looked for ages to see how I had broken the REST call, all to no avail. In the end I rolled back and had to step through the refactor in small steps (smaller steps that I should probably have taken in the first place).</p>
<p>In the end I found the issue. The problem was that in my early testing I had hard-coded my input parameters, e.g.</p>
<blockquote>
<p><em>var templateFile = &quot;template.md&quot;;</em></p></blockquote>
<p>Whilst starting to wire up the code as a VSTS task I had begun to swap in calls to the VSTS Task Library</p>
<blockquote>
<p><em>import tl = require(&lsquo;vsts-task-lib/task&rsquo;);</em></p></blockquote>
<p>Correction – all the tl.xxx calls seem to cause problems; avoid them for local testing.</p>
<p>Now, for items such as logging, this works fine whether the logic is running in VS Code’s debugger or on a VSTS build agent, so I could use the following line in either environment.</p>
<blockquote>
<p>tl.debug(&quot;My Message&quot;);</p></blockquote>
<p>Where it does not work is for task inputs. I had assumed that</p>
<blockquote>
<p>var templateFile = tl.getInput(&quot;templatefile&quot;);</p></blockquote>
<p>would return null/undefined when running in the VS Code debugger, but no – it causes that strange exception.</p>
<p>Once I removed all the <em>getInput</em> calls my error went away.</p>
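<p>If you do want to keep debugging locally without stripping the calls out, one workaround (my own sketch, not part of the task library) is to fall back to a hard-coded value when the agent-supplied input is absent; the agent passes task inputs to the script as upper-cased <em>INPUT_*</em> environment variables:</p>

```typescript
// Workaround sketch: avoid calling tl.getInput() outside a build agent.
// The agent supplies task inputs as INPUT_* environment variables
// (input name upper-cased), so read those directly and fall back to a
// local default when debugging in VS Code.
function getInputOrDefault(name: string, localDefault: string): string {
  const fromAgent = process.env['INPUT_' + name.toUpperCase()];
  return fromAgent !== undefined ? fromAgent : localDefault;
}

const templateFile = getInputOrDefault('templatefile', 'template.md');
```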
<p>Hope this saves someone else some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Migrating projects from CodePlex to GitHub due to CodePlex shutting down at the end of year</title>
      <link>https://blog.richardfennell.net/posts/migrating-projects-from-codeplex-to-github-due-to-codeplex-shutting-down-at-the-end-of-year/</link>
      <pubDate>Sun, 02 Apr 2017 09:26:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/migrating-projects-from-codeplex-to-github-due-to-codeplex-shutting-down-at-the-end-of-year/</guid>
      <description>&lt;p&gt;It has just been &lt;a href=&#34;https://blogs.msdn.microsoft.com/bharry/2017/03/31/shutting-down-codeplex/&#34;&gt;announced by Microsoft that it’s Open Source service CodePlex is shutting down&lt;/a&gt; before the end of the year. The reality is that the Microsoft focused Open Source community, and Microsoft itself, have moved to GitHub a good while ago.&lt;/p&gt;
&lt;p&gt;I think I, like most developers, moved any &lt;a href=&#34;http://github.com/rfennell&#34;&gt;active Open Source projects to GitHub&lt;/a&gt; a good while ago, but I still had legacy ones on CodePlex.&lt;/p&gt;
&lt;p&gt;Microsoft have provided a &lt;a href=&#34;https://codeplex.codeplex.com/wikipage?title=Migrating%20to%20GitHub&#34;&gt;nicely documented process to move the key assets of projects&lt;/a&gt;, whether TFVC or Git based, to GitHub. This process worked for me. However, I will suggest a couple of changes/additions.&lt;/p&gt;
      <content:encoded><![CDATA[<p>It has just been <a href="https://blogs.msdn.microsoft.com/bharry/2017/03/31/shutting-down-codeplex/">announced by Microsoft that it’s Open Source service CodePlex is shutting down</a> before the end of the year. The reality is that the Microsoft focused Open Source community, and Microsoft itself, have moved to GitHub a good while ago.</p>
<p>I think I, like most developers, moved any <a href="http://github.com/rfennell">active Open Source projects to GitHub</a> a good while ago, but I still had legacy ones on CodePlex.</p>
<p>Microsoft have provided a <a href="https://codeplex.codeplex.com/wikipage?title=Migrating%20to%20GitHub">nicely documented process to move the key assets of projects</a>, whether TFVC or Git based, to GitHub. This process worked for me. However, I will suggest a couple of changes/additions:</p>
<ol>
<li>I would not export the wiki docs as detailed in the process. I don’t want my old CodePlex wiki pages in the new GitHub code repository as a folder; I think it is better to move each page over to a GitHub wiki. I only had a few pages, so I did this by hand. I used this <a href="https://domchristie.github.io/to-markdown/">nice little tool from Dom Christie to convert the CodePlex HTML based pages to Markdown</a>, which I cut and pasted into the new repo’s wiki, fixing URLs as I went.</li>
<li>I decided I needed to consider release downloads. The process does not address this area. I thought I should bring over at least the last set of release binaries for my projects as GitHub Releases. The reason is that, for any old inactive project on CodePlex, the chances are you won’t have the tools to hand to re-build the code easily, so just in case it is best to keep the last built version to hand as a release.</li>
<li>The process does not bring over issues, but this was not a problem for me; the projects I had have been superseded by active ones already on GitHub, so the issues are irrelevant.</li>
</ol>
<p>So if you have old CodePlex projects that you don’t want to disappear, think about moving them before the service is shut down – you have until December 2017.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Leeds venue for the Global DevOps Bootcamp</title>
      <link>https://blog.richardfennell.net/posts/leeds-venue-for-the-global-devops-bootcamp/</link>
      <pubDate>Thu, 16 Mar 2017 13:02:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/leeds-venue-for-the-global-devops-bootcamp/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://globaldevopsbootcamp.com/&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_342.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I’m really pleased to say that we at Black Marble are hosting a Yorkshire venue for the upcoming &lt;a href=&#34;http://globaldevopsbootcamp.com/&#34;&gt;Global DevOps Bootcamp&lt;/a&gt;, to quote the main bootcamp site…&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“Global DevOps Bootcamp is a global event that will be held on Saturday June 17th and is all about DevOps on the Microsoft Stack. Centrally organized by&lt;/em&gt; &lt;a href=&#34;https://xpirit.com/&#34;&gt;&lt;em&gt;Xpirit&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href=&#34;http://www.solidify.se/&#34;&gt;&lt;em&gt;Solidify&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and offered to you by&lt;/em&gt; &lt;a href=&#34;http://www.blackmarble.co.uk/&#34;&gt;&lt;em&gt;Black Marble&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. During this 1-day event we will join (Microsoft) DevOps communities all around the world to talk, learn and play with DevOps concepts.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://globaldevopsbootcamp.com/"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_342.png" title="image"></a></p>
<p>I’m really pleased to say that we at Black Marble are hosting a Yorkshire venue for the upcoming <a href="http://globaldevopsbootcamp.com/">Global DevOps Bootcamp</a>, to quote the main bootcamp site…</p>
<p><em>“Global DevOps Bootcamp is a global event that will be held on Saturday June 17th and is all about DevOps on the Microsoft Stack. Centrally organized by</em> <a href="https://xpirit.com/"><em>Xpirit</em></a> <em>and</em> <a href="http://www.solidify.se/"><em>Solidify</em></a> <em>and offered to you by</em> <a href="http://www.blackmarble.co.uk/"><em>Black Marble</em></a><em>. During this 1-day event we will join (Microsoft) DevOps communities all around the world to talk, learn and play with DevOps concepts.</em></p>
<p><em>Goals of the GlobalDevOpsBootcamp:<br>
* DevOps in general<br>
* Insights into where we are heading when it comes to DevOps and new technologies<br>
* Get people&rsquo;s hands dirty and let them play with all the good Microsoft DevOps stuff</em></p>
<p><em>This year’s theme will be &ldquo;From server to serverless in a DevOps world&rdquo;.<br>
We will kick off with an introduction of no one less than</em> <a href="http://www.donovanbrown.com/"><em>Donovan Brown</em></a><em>, followed by a keynote from the local partner about where we are heading with DevOps.</em></p>
<p><em>After that it’s time to get your hands dirty! Competing in groups you will transform an existing “traditional” application into a fully-fledged, containerized or Serverless application. Naturally you need to take care of all the DevOps practices like monitoring, pipelines and collaboration.”</em></p>
<p><a href="https://www.eventbrite.nl/e/global-devops-bootcamp-black-marble-tickets-32433470383">There is a specific registration site for the Black Marble venue</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at VS2017 launch event in Dublin on the 30th</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-vs2017-launch-event-in-dublin-on-the-30th/</link>
      <pubDate>Sat, 11 Mar 2017 17:08:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-vs2017-launch-event-in-dublin-on-the-30th/</guid>
      <description>&lt;p&gt;I am speaking “Any Developer, Any App, Any Platform” a Visual Studio 2017 Launch Event at Microsoft’s offices in Dublin on the 30th of this month.&lt;/p&gt;
&lt;p&gt;A bit of a departure for me – I will be speaking on Xamarin, which makes a change.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://bit.ly/VS2017LaunchMSIE&#34;&gt;To register see the event site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking “Any Developer, Any App, Any Platform” a Visual Studio 2017 Launch Event at Microsoft’s offices in Dublin on the 30th of this month.</p>
<p>A bit of a departure for me – I will be speaking on Xamarin, which makes a change.</p>
<p><a href="http://bit.ly/VS2017LaunchMSIE">To register see the event site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the SharePoint Usergroup (Leeds) on Modern ALM &amp;amp; DevOps</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-sharepoint-usergroup-leeds-on-modern-alm-devops/</link>
      <pubDate>Mon, 13 Feb 2017 17:05:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-sharepoint-usergroup-leeds-on-modern-alm-devops/</guid>
      <description>&lt;p&gt;Pleased to say I&amp;rsquo;m presenting on DevOps with VSTS at the SharePoint UG in Leeds on the 7th of March&lt;/p&gt;
&lt;p&gt;To register have a look at the &lt;a href=&#34;http://www.suguk.org/Event.aspx?id=31150012526&#34;&gt;Usergroup&amp;rsquo;s events site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Pleased to say I&rsquo;m presenting on DevOps with VSTS at the SharePoint UG in Leeds on the 7th of March</p>
<p>To register have a look at the <a href="http://www.suguk.org/Event.aspx?id=31150012526">Usergroup&rsquo;s events site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>You never know how people will use a tool</title>
      <link>https://blog.richardfennell.net/posts/you-never-know-how-people-will-use-a-tool/</link>
      <pubDate>Tue, 07 Feb 2017 19:57:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-never-know-how-people-will-use-a-tool/</guid>
      <description>&lt;p&gt;You never know how people will use a tool once it is out ‘in the wild’. I wrote my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task&#34;&gt;Generate Release Notes VSTS&lt;/a&gt; extension to generate markdown files, but people have attempted to use it in other ways.&lt;/p&gt;
&lt;p&gt;I realised, via an issue raised on &lt;a href=&#34;https://github.com/rfennell/vNextBuild/issues/73&#34;&gt;GitHub&lt;/a&gt;, that it can also be used, without any code changes, to generate other formats such as HTML. The only change required is to provide an HTML based template as opposed to a markdown one.&lt;/p&gt;
      <content:encoded><![CDATA[<p>You never know how people will use a tool once it is out ‘in the wild’. I wrote my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">Generate Release Notes VSTS</a> extension to generate markdown files, but people have attempted to use it in other ways.</p>
<p>I realised, via an issue raised on <a href="https://github.com/rfennell/vNextBuild/issues/73">GitHub</a>, that it can also be used, without any code changes, to generate other formats such as HTML. The only change required is to provide an HTML based template as opposed to a markdown one.</p>
<p>I have added suitable samples to the <a href="https://github.com/rfennell/vNextBuild/wiki/GenerateReleaseNotes%20-Tasks">wiki</a> and <a href="https://github.com/rfennell/vNextBuild/tree/master/SampleTemplates">repo</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How you can keep using Lab Management after a move to VSTS (after a fashion)</title>
      <link>https://blog.richardfennell.net/posts/how-you-can-keep-using-lab-management-after-a-move-to-vsts-after-a-fashion/</link>
      <pubDate>Mon, 06 Feb 2017 17:50:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-you-can-keep-using-lab-management-after-a-move-to-vsts-after-a-fashion/</guid>
      <description>&lt;p&gt;I have posted on previously how we used &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/?tag=Lab&amp;#43;Management&#34;&gt;TFS Lab Management&lt;/a&gt; to provision our test and development environments. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2017/01/03/A-nice-relaxing-Christmas-break-%28and-by-the-way-I-migrated-our-on-premises-TFS-to-VSTS-as-well%29&#34;&gt;With our move to VSTS&lt;/a&gt;, where Lab Management does not exist, we needed to look again at how to provision these labs. There are a few options…&lt;/p&gt;
&lt;h3 id=&#34;move-to-the-cloud--aka-stop-using-lab-management&#34;&gt;Move to the Cloud – aka stop using Lab Management&lt;/h3&gt;
&lt;p&gt;Arguably the best option is to move all your lab VMs up to the cloud. Microsoft even has a specific service to help with this, &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devtest-lab/&#34;&gt;Azure DevTest Labs&lt;/a&gt;. This service allows you to create single VMs, or sets of VMs for more complex scenarios, &lt;a href=&#34;https://azure.microsoft.com/en-gb/blog/announcing-azure-devtest-labs-support-for-creating-environment-with-arm-templates/&#34;&gt;using ARM templates&lt;/a&gt;.&lt;/p&gt;
      <content:encoded><![CDATA[<p>I have posted on previously how we used <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/?tag=Lab&#43;Management">TFS Lab Management</a> to provision our test and development environments. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2017/01/03/A-nice-relaxing-Christmas-break-%28and-by-the-way-I-migrated-our-on-premises-TFS-to-VSTS-as-well%29">With our move to VSTS</a>, where Lab Management does not exist, we needed to look again at how to provision these labs. There are a few options…</p>
<h3 id="move-to-the-cloud--aka-stop-using-lab-management">Move to the Cloud – aka stop using Lab Management</h3>
<p>Arguably the best option is to move all your lab VMs up to the cloud. Microsoft even has a specific service to help with this, <a href="https://azure.microsoft.com/en-gb/services/devtest-lab/">Azure DevTest Labs</a>. This service allows you to create single VMs, or sets of VMs for more complex scenarios, <a href="https://azure.microsoft.com/en-gb/blog/announcing-azure-devtest-labs-support-for-creating-environment-with-arm-templates/">using ARM templates</a>.</p>
<p>All good it seems, but the issue is that adoption of a cloud solution moves the cost of running the lab from a capital expenditure (buying the VM host server) to an operational cost (monthly cloud usage bill). This can potentially be a not insignificant sum; in our case we have up to 100 test VMs of various types running at any given time. A sizeable bill.</p>
<p>Also we need to consider that this is a different technology to Lab Management, so we would need to invest time in rebuilding our test environments using newer technologies such as ARM, DSC etc. This is something we should be doing, but I would like to avoid doing it for all our projects today.</p>
<p>Now it is fair to say that we might not need all the VMs running all the time; better VM management could help alleviate the costs, and DevTest Labs has tools to help here, but it won’t remove them entirely.</p>
<p>So is there a non-cloud way?</p>
<h3 id="move-to-systems-center">Move to Systems Center</h3>
<p>Microsoft’s current recommended on-premises solution is to use System Center, with tasks within your build and release pipeline triggering events via SC-VMM.</p>
<p>Now, as Lab Management also makes use of System Center SC-VMM, this might initially sound like a reasonable step. The problem is that the way Lab Management uses System Center is ‘special’; it does not really leverage any of the standard System Center tools. Chances are that anyone who has invested time in using Lab Management makes little or no direct use of System Center’s own tools.</p>
<p>So if you want to use System Center without Lab Management you need to work in a very different way; you are into the land of System Center orchestrations etc.</p>
<p>So again you are looking at a new technology. This might be appealing, especially if you are already using System Center to manage your on-premises IT estate, but it was not a route I wanted to take.</p>
<h3 id="keeping-lab-management-running">Keeping Lab Management running</h3>
<p>So the short-term answer for us was to keep our Lab Management system running: it does what we need (network isolation being the key factor for us), we have a library of ‘standard VMs’ built, and we have already paid for the Hyper-V hosts. So the question became how to bridge the gap to VSTS.</p>
<h3 id="step-1--leave-lab-management-running">Step 1 – Leave Lab Management Running</h3>
<p>When we moved to VSTS we made the conscious choice to leave our old TFS 2015.3 server running. We removed access for most users, only leaving access for those who needed to manage Lab Management. This provided us with a means to start, stop and deploy network-isolated lab environments.</p>
<p><strong>KEY POINT HERE</strong> – The only reason our on-premises TFS server is running is to allow a SC-VMM server and a Test Controller to connect to it to allow Lab Management operations.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_339.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_334.png" title="image"></a></p>
<p>Another important fact to remember is that network isolation in each lab is enabled by the Lab Test Agents running on the VMs in the lab; as well as communicating with the Test Controller, the agents in the environments also manage the reconfiguration of the VMs’ network adapters to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management">provide the isolation</a>. Anything we do at this point has to be careful not to ‘mess up’ this network configuration.</p>
<p>The problem is that you also use this Test Agent to run your tests; how do you make sure the Test Agent runs the right tests and sends the results to the right place?</p>
<p>We had already had to build some <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/09/27/Running-Test-Suites-within-a-network-Isolated-Lab-Management-environment-when-using-TFS-vNext-build-and-release-tooling">custom scripts to get these agents to work with TFS vNext build</a> against the on-premises TFS server. We were going to need something similar this time too. The key was that we needed to be able to trigger tests in the isolated environment and get the results back out and up to VSTS, all controlled within a build and release pipeline.</p>
<p>We came up with two options.</p>
<h3 id="option-1-scripts">Option 1 Scripts</h3>
<p>The first option is to do everything with PowerShell script tasks within the release process.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_340.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_335.png" title="image"></a></p>
<ol>
<li>Copy the needed files onto the VM using the built-in tasks</li>
<li>Use PowerShell remoting to run MSTest (previously installed on the target VM) – remember you have to delete any existing .TRX result file by hand, as it won’t be overwritten.</li>
<li>Copy the test results back from the VM (RoboCopy again)</li>
<li>Publish the test results TRX file using the standard VSTS build task for that job.</li>
</ol>
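<p>Steps 2 and 3 can be sketched in PowerShell along these lines. This is a sketch only, not our actual scripts: the VM name, paths, credentials and MSTest location are all illustrative, and it assumes PowerShell remoting is enabled on the lab VM and its admin share is reachable from the agent.</p>
<pre tabindex="0"><code># Run MSTest remotely on the lab VM, then copy the .trx results back
# so the standard publish task can pick them up.
# VM name, paths and MSTest location are illustrative.
$vm      = 'LAB-TEST-VM'
$cred    = Get-Credential
$mstest  = 'C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\MSTest.exe'
$results = 'C:\Tests\Results.trx'

Invoke-Command -ComputerName $vm -Credential $cred -ScriptBlock {
    param($mstest, $results)
    # MSTest will not overwrite an existing results file, so delete it first
    if (Test-Path $results) { Remove-Item $results }
    &amp; $mstest /testcontainer:C:\Tests\MyTests.dll /resultsfile:$results
} -ArgumentList $mstest, $results

# Copy the results back into the agent's working folder for publishing
Copy-Item "\\$vm\c$\Tests\Results.trx" $env:SYSTEM_DEFAULTWORKINGDIRECTORY
</code></pre>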
<p>There is nothing too complex here, just a couple of PowerShell scripts, and it certainly does not affect the network isolation.</p>
<p>However, there is a major issue if you want to run UX tests: MSTest is running on a background thread, so your test will fail because it cannot access the UI thread.</p>
<p>That said, this is a valid technique as long as either:</p>
<ul>
<li>Your tests are not UX based e.g. integration tests that hit an API</li>
<li>You can write your UX tests to use Selenium with the headless <a href="http://phantomjs.org/">PhantomJS</a> browser</li>
</ul>
<h3 id="option-2-do-it-the-proper-vsts-way">Option 2: do it the ‘proper’ VSTS way</h3>
<p>VSTS has tasks built in to deploy a Test Agent to a machine and run tests remotely, including UX tests. The problem was that I had assumed these tasks could not be used, as they would break the network isolation, but I thought I would give it a try anyway. That is what test labs are for!</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_341.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_336.png" title="image"></a></p>
<p>Inside my release pipeline I added</p>
<ol>
<li>Copy the needed files onto the VM using the built in tasks, as before</li>
<li>A deploy Test Agent Task</li>
<li>Run functional tests Task, which also handles the publish</li>
</ol>
<p>When this was run, the Deploy Test Agent task de-configures (and removes) the old TFS 2015 Test Agent put on by Lab Management and installs the current version. However, and this is important, it does not break the network isolation, as that is all set up during VM boot and/or repair. The lab will report itself as broken in the Lab Management UI, because the Test Agent is no longer reporting to the Test Controller, but it is still working.</p>
<p>Once the new agent is deployed, it can be used to run the tests, and the results get published back to VSTS, whether they are UX tests or not.</p>
<p>If you restart, redeploy, or repair the network isolated environment, the 2015 Test Agent gets put back in place, restoring the network isolation.</p>
<h3 id="conclusion">Conclusion</h3>
<p>So Option 2 seems to deliver what I needed for now:</p>
<ul>
<li>I can use the old tech to manage the deployment of the VMs</li>
<li>and use the new tech to run my tests and get the results published to the right place.</li>
</ul>
<p>Now this does not mean I should not be looking at DevTest Labs to replace some of my test environments; <a href="https://azure.microsoft.com/en-gb/overview/azure-stack/">Azure Stack</a> might also provide an answer in the future.</p>
<p>But for now I have a workable solution that protects my past investments while I move to a longer term future plan.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I’m on @techstringy podcast talking on #DevOps</title>
      <link>https://blog.richardfennell.net/posts/im-on-techstringy-podcast-talking-on-devops/</link>
      <pubDate>Mon, 06 Feb 2017 12:01:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/im-on-techstringy-podcast-talking-on-devops/</guid>
      <description>&lt;p&gt;I recorded a podcast on DevOps with &lt;a href=&#34;https://techstringy.wordpress.com&#34;&gt;Paul Stringfellow&lt;/a&gt; (&lt;a href=&#34;https://twitter.com/techstringy&#34;&gt;@techstringy&lt;/a&gt;) at last week’s Black Marble event. It has just been &lt;a href=&#34;https://techstringy.wordpress.com/2017/02/06/release-your-inner-dev-child-with-devops-richard-fennell-ep-14/&#34;&gt;published on his blog&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recorded a podcast on DevOps with <a href="https://techstringy.wordpress.com">Paul Stringfellow</a> (<a href="https://twitter.com/techstringy">@techstringy</a>) at last week’s Black Marble event. It has just been <a href="https://techstringy.wordpress.com/2017/02/06/release-your-inner-dev-child-with-devops-richard-fennell-ep-14/">published on his blog</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New version of my Parameters.Xml Generator Visual Studio add-in now supports VS2017 too</title>
      <link>https://blog.richardfennell.net/posts/new-version-of-my-parameters-xml-generator-visual-studio-add-in-now-supports-vs2017-too/</link>
      <pubDate>Sun, 29 Jan 2017 22:39:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-version-of-my-parameters-xml-generator-visual-studio-add-in-now-supports-vs2017-too/</guid>
      <description>&lt;p&gt;I have just published Version 1.5 of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=RichardFennellMVP.ParametersXmlGenerator&#34;&gt;Parameters.Xml Generator Visual Studio add-in&lt;/a&gt;. After much fiddling, this VSIX now supports VS2017 as well as VS2013 and VS2015.&lt;/p&gt;
&lt;p&gt;The complexity was that VS2017 uses a new VSIX format, V3. You have to make changes to the project that generates the VSIX and to the VSIX manifest too. The &lt;a href=&#34;https://docs.microsoft.com/en-us/visualstudio/extensibility/faq-2017#why-does-the-vsixinstaller-now-wait-for-processes-to-exit-before-installing-the-vsix&#34;&gt;FAQ&lt;/a&gt; says you can do this within VS2015 by hand, but I had no luck getting it right. The recommended option, and the method I used, is to &lt;a href=&#34;https://docs.microsoft.com/en-us/visualstudio/extensibility/how-to-migrate-extensibility-projects-to-visual-studio-2017&#34;&gt;upgrade your solution to VS2017&lt;/a&gt; (or the RC at the time of writing as the product has not RTM’d yet).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just published Version 1.5 of my <a href="https://marketplace.visualstudio.com/items?itemName=RichardFennellMVP.ParametersXmlGenerator">Parameters.Xml Generator Visual Studio add-in</a>. After much fiddling, this VSIX now supports VS2017 as well as VS2013 and VS2015.</p>
<p>The complexity was that VS2017 uses a new VSIX format, V3. You have to make changes to the project that generates the VSIX and to the VSIX manifest too. The <a href="https://docs.microsoft.com/en-us/visualstudio/extensibility/faq-2017#why-does-the-vsixinstaller-now-wait-for-processes-to-exit-before-installing-the-vsix">FAQ</a> says you can do this within VS2015 by hand, but I had no luck getting it right. The recommended option, and the method I used, is to <a href="https://docs.microsoft.com/en-us/visualstudio/extensibility/how-to-migrate-extensibility-projects-to-visual-studio-2017">upgrade your solution to VS2017</a> (or the RC at the time of writing as the product has not RTM’d yet).</p>
<p>This upgrade process is a one-way migration, and you do have to check/edit some key items:</p>
<ul>
<li>Get all your references right; as I was using the RC of VS2017, this meant enabling the use of preview packages from NuGet in the solution.</li>
<li>Make sure the install targets (of Visual Studio) match what you want to install to</li>
<li>Add prerequisites (this is the big new addition in the VSIX 3 format)</li>
<li><strong>And the one that stalled me for ages</strong> – make sure you reference the right version of the <em>Microsoft.VisualStudio.Shell.<VERSION>.dll</em>. You need to pick the one for the oldest version of Visual Studio you wish to target; in my case this was <em>Microsoft.VisualStudio.Shell.12.0.dll</em>. For some reason during the migration this got changed to <em>Microsoft.VisualStudio.Shell.14.0.dll</em>, which gave the strange effect that the VSIX installed on 2013, 2015 and 2017, but in 2013, though I could see the menu item, it did not work. This was fixed by referencing the 12.0 DLL.</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Can’t add users to a VSTS instance backed by an Azure Directory</title>
      <link>https://blog.richardfennell.net/posts/cant-add-users-to-a-vsts-instance-backed-by-an-azure-directory/</link>
      <pubDate>Thu, 26 Jan 2017 21:00:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cant-add-users-to-a-vsts-instance-backed-by-an-azure-directory/</guid>
      <description>&lt;p&gt;I have a VSTS instance that is &lt;a href=&#34;https://www.visualstudio.com/en-us/docs/setup-admin/team-services/manage-organization-access-for-your-account-vs&#34;&gt;backed by an Azure Directory&lt;/a&gt;. This is a great way to help secure a VSTS instance: only users in the Azure Directory can be added to VSTS, not just any old MSAs (LiveIDs). The directory can be shared with other Azure based services such as O365, centrally managed, and linked to an on-premises Active Directory.&lt;/p&gt;
&lt;p&gt;When I tried to add a user to VSTS, one that was a valid user in the Azure Directory, their account did not appear in the available users drop down.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have a VSTS instance that is <a href="https://www.visualstudio.com/en-us/docs/setup-admin/team-services/manage-organization-access-for-your-account-vs">backed by an Azure Directory</a>. This is a great way to help secure a VSTS instance: only users in the Azure Directory can be added to VSTS, not just any old MSAs (LiveIDs). The directory can be shared with other Azure based services such as O365, centrally managed, and linked to an on-premises Active Directory.</p>
<p>When I tried to add a user to VSTS, one that was a valid user in the Azure Directory, their account did not appear in the available users drop down.</p>
<p> <a href="/wp-content/uploads/sites/2/historic/image_337.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_332.png" title="image"></a></p>
<p>Turns out the problem was who I was logged in as. As you can see from the screenshot, I have three Richard accounts in the VSTS instance (and Azure Directory): a couple of MSAs and a guest work account from another Azure Directory. I was logged in as the guest work account.</p>
<p>All three IDs are administrators in VSTS, but it turned out I needed to be logged in as the MSA that owned the Azure subscription containing the Azure Directory. As soon as I used this account the dropdown populated as expected and I could add the users from the Azure Directory.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_338.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_333.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Version 2.0.x of my Generate Release Notes VSTS Task has been released with release rollup support</title>
      <link>https://blog.richardfennell.net/posts/version-2-0-x-of-my-generate-release-notes-vsts-task-has-been-released-with-release-rollup-support/</link>
      <pubDate>Thu, 05 Jan 2017 13:02:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/version-2-0-x-of-my-generate-release-notes-vsts-task-has-been-released-with-release-rollup-support/</guid>
      <description>&lt;p&gt;I have just released a major update to my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task&#34;&gt;Generate Release Notes VSTS Build extension&lt;/a&gt;. This V2 update adds support to look back into past releases to find when there was a successful release to a given stage/environment and creates a rollup set of build artifacts, and hence commits/changesets and workitems, in the release notes.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://cloud.githubusercontent.com/assets/3496701/17657482/0f35c0d0-6307-11e6-8d3e-afff88108348.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;This has been &lt;a href=&#34;https://github.com/rfennell/vNextBuild/issues/34&#34;&gt;a long running request on GitHub for this extension&lt;/a&gt; which I am pleased to have been able to address.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just released a major update to my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">Generate Release Notes VSTS Build extension</a>. This V2 update adds support to look back into past releases to find when there was a successful release to a given stage/environment and creates a rollup set of build artifacts, and hence commits/changesets and workitems, in the release notes.</p>
<p><img loading="lazy" src="https://cloud.githubusercontent.com/assets/3496701/17657482/0f35c0d0-6307-11e6-8d3e-afff88108348.png"></p>
<p>This has been <a href="https://github.com/rfennell/vNextBuild/issues/34">a long running request on GitHub for this extension</a> which I am pleased to have been able to address.</p>
<p>To aid backwards compatibility, the default behaviour of the task is as it was before: it can be used in a build or in a release, and if in a release it only considers the artifacts in the current release that ran the task.</p>
<p>If you want to use the new features you need to enable them; these are all in the advanced properties.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_336.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_331.png" title="image"></a></p>
<p>You get new properties to enable scanning past releases until the task finds a successful deployment to, by default, the same stage/environment that is currently being released to. You can override this stage name to allow more complex usage e.g. generating the release notes for what has changed since the last release to production whilst in a UAT environment.</p>
<p>This change also means there is a new variable that can be accessed in templates, <strong>$Releases</strong>, which contains all the releases being used to get build artifacts. This can be used in release notes to show the releases involved e.g.</p>
<pre tabindex="0"><code>**Release notes for release $defname**
**Release Number**  : $($release.name)
**Release completed** $(&#34;{0:dd/MM/yy HH:mm:ss}&#34; -f [datetime]$release.modifiedOn)
**Changes since last successful release to &#39;$stagename&#39;**
**Including releases:**
 $(($releases | select-object -ExpandProperty name) -join &#34;, &#34; )
</code></pre><p>This generates content like:</p>
<pre tabindex="0"><code>**Release notes for release Validate-ReleaseNotesTask.Master**
**Release Number**  : Release-69
**Release completed** 05/01/17 12:40:19
**Changes since last successful release to &#39;Environment 2&#39;**
**Including releases:**
 Release-69, Release-68, Release-67, Release-66
</code></pre><p>Hope you find this extension useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A nice relaxing Christmas break (and by the way I migrated our on-premises TFS to VSTS as well)</title>
      <link>https://blog.richardfennell.net/posts/a-nice-relaxing-christmas-break-and-by-the-way-i-migrated-our-on-premises-tfs-to-vsts-as-well/</link>
      <pubDate>Tue, 03 Jan 2017 20:12:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-nice-relaxing-christmas-break-and-by-the-way-i-migrated-our-on-premises-tfs-to-vsts-as-well/</guid>
      <description>&lt;p&gt;Over the Christmas break I migrated our on premises TFS 2015 instance to VSTS. The reason for the migration was multi-fold:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We were blocked on moving to TFS 2017 as we could not easily upgrade our SQL cluster to SQL 2014&lt;/li&gt;
&lt;li&gt;We wanted to be on the latest, greatest and newest features of VSTS/TFS&lt;/li&gt;
&lt;li&gt;We wanted to get away from having to perform on-premises updates every few months&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To do the migration we used &lt;a href=&#34;https://blogs.msdn.microsoft.com/visualstudioalm/2016/11/16/import-your-tfs-database-into-visual-studio-team-services/&#34;&gt;the public preview of the TFS to VSTS Migrator&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the Christmas break I migrated our on premises TFS 2015 instance to VSTS. The reason for the migration was multi-fold:</p>
<ul>
<li>We were blocked on moving to TFS 2017 as we could not easily upgrade our SQL cluster to SQL 2014</li>
<li>We wanted to be on the latest, greatest and newest features of VSTS/TFS</li>
<li>We wanted to get away from having to perform on-premises updates every few months</li>
</ul>
<p>To do the migration we used <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2016/11/16/import-your-tfs-database-into-visual-studio-team-services/">the public preview of the TFS to VSTS Migrator</a>.</p>
<p>So what did we learn?</p>
<p>The actual import was fairly quick, around 3 hours for just short of 200GB of TPC data. However, getting the data from our on-premises system up to Azure was much slower, constrained by the need to copy backups around our LAN and by our Internet bandwidth to get the files to Azure storage: a grand total of more like 16 hours. But remember this was mostly spent watching various progress bars after running various commands, so I was free to enjoy the Christmas break; I was not a slave to a PC.</p>
<p>This all makes it sound easy, and to be honest the actual production migration was, but only because of the hard work done during the dry run phase prior to the Christmas break. During the dry run we:</p>
<ul>
<li>Addressed the TFS customisations that needed to be altered/removed</li>
<li>Sorted the AD &gt; AAD sync mappings for user accounts</li>
<li>Worked out the backup/restore/copy process to get the TPC data to somewhere VSTS could import it from</li>
<li>Did the actual dry run migration</li>
<li>Tested the dry run instance after the migration to get a list of what else needed addressing and anything our staff would have to do to access the new VSTS instance</li>
<li>Documented (and scripted where possible) all the steps</li>
<li>Made sure we had fall back processes in place if the migration failed.</li>
</ul>
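<p>In outline, the get-the-data-to-Azure step involved packaging the detached collection database and copying it up to blob storage. A sketch of the sort of script involved follows; all server, path and storage names are illustrative, this is not our actual script, and the exact SqlPackage.exe flags should be taken from Microsoft&rsquo;s import documentation.</p>
<pre tabindex="0"><code># Package the detached TPC database as a DACPAC, then push it to
# Azure blob storage for the import service to read.
# Server, database, paths and storage details are all illustrative.
$sqlPackage = 'C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe'
&amp; $sqlPackage /Action:Extract `
    /SourceServerName:SQLCLUSTER `
    /SourceDatabaseName:Tfs_DefaultCollection `
    /TargetFile:E:\Export\Tfs_DefaultCollection.dacpac `
    /p:ExtractAllTableData=true /p:IgnorePermissions=true

# AzCopy (the v5.x Windows syntax of the time) to upload the package
&amp; 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' `
    /Source:E:\Export /Pattern:Tfs_DefaultCollection.dacpac `
    /Dest:https://mystorage.blob.core.windows.net/import /DestKey:$storageKey
</code></pre>
<p>Scripting these steps in the dry run is what made the timings repeatable and predictable for the production migration.</p>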
<p>And arguably most importantly, we discovered how long each step would take so we could set expectations. This was the prime reason for picking the Christmas break, as we knew we could have a number of days where there should be no TFS activity (we close for an extended period), hence de-risking the process to a great degree. We knew we could get the migration done over a weekend, but a week’s break was easier and more relaxed; Christmas seemed a timely choice.</p>
<p>You might ask the question ‘what did not migrate?’</p>
<p>Well, a better question might be ‘what needed changing due to the migration?’</p>
<p>It was not so much items did not migrate, just they are handled a bit differently in VSTS. The list of areas we needed to address were</p>
<ul>
<li>User licensing – we needed to make sure our users’ MSDN subscriptions were mapped to their work IDs.</li>
<li>Build/release licensing – we needed to decide how many private build agents we really needed (not just spin up more on a whim as we had done with our on-premises TFS); they cost money on VSTS.</li>
<li>Release pipelines – these don’t migrate as of the time of writing, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/12/20/Transform-tool-for-transferring-TFS-20153-Release-Templates-to-VSTS">but I wrote a quick tool to get 95% of their content moved.</a> After using this tool we did then need to edit the pipelines, re-entering ‘secrets’ which are not exported, before retesting them.</li>
</ul>
<p>But those were all the issues we had to address; everything else seems to be fine, with users just changing the URL they connect to from on-premises to VSTS.</p>
<p>So if you think migrating your TFS to VSTS seems like a good idea, why not have a look at <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2016/11/16/import-your-tfs-database-into-visual-studio-team-services/">the blog post and video on  the Microsoft ALM Blog about the migration tool</a>. Remember that this is a <a href="http://devopsms.com/#featuredPartners">Microsoft Gold DevOps Partner</a> led process, so please get in <a href="http://www.blackmarble.co.uk/contact/">touch with us at Black Marble</a> or me directly via this <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/contact">blog</a> if you want a chat about the migrations or other <a href="http://www.blackmarble.co.uk/specialisations/devops/">DevOps service we offer</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My TFSAlertsDSL project has moved to GitHub and become VSTSServiceHookDsl</title>
      <link>https://blog.richardfennell.net/posts/my-tfsalertsdsl-project-has-moved-to-github-and-become-vstsservicehookdsl/</link>
      <pubDate>Wed, 21 Dec 2016 20:51:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-tfsalertsdsl-project-has-moved-to-github-and-become-vstsservicehookdsl/</guid>
      <description>&lt;h3 id=&#34;introduction&#34;&gt;Introduction&lt;/h3&gt;
&lt;p&gt;A while ago I created the &lt;a href=&#34;https://tfsalertsdsl.codeplex.com&#34;&gt;TFSAlertsDSL project&lt;/a&gt; to provide a means to script responses to TFS alert SOAP messages using Python. The SOAP alert technology has been overtaken by time with the move to &lt;a href=&#34;https://www.visualstudio.com/en-us/docs/integrate/get-started/service-hooks/get-started&#34;&gt;Service Hooks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So I have taken the time to move this project over to the newer technology, which is supported both on TFS 2015 (onwards) and VSTS. I also took the chance to move from CodePlex to &lt;a href=&#34;https://github.com/rfennell/VSTSServiceHookDsl&#34;&gt;GitHub and renamed the project to VSTSServiceHookDsl&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>A while ago I created the <a href="https://tfsalertsdsl.codeplex.com">TFSAlertsDSL project</a> to provide a means to script responses to TFS alert SOAP messages using Python. The SOAP alert technology has been overtaken by time with the move to <a href="https://www.visualstudio.com/en-us/docs/integrate/get-started/service-hooks/get-started">Service Hooks</a>.</p>
<p>So I have taken the time to move this project over to the newer technology, which is supported both on TFS 2015 (onwards) and VSTS. I also took the chance to move from CodePlex to <a href="https://github.com/rfennell/VSTSServiceHookDsl">GitHub and renamed the project to VSTSServiceHookDsl</a>.</p>
<p><em><strong>Note:</strong> If you need the older SOAP alert based model stick with the project on</em> <a href="https://tfsalertsdsl.codeplex.com"><em>CodePlex</em></a><em>, I don’t intend to update it, but all the source is there if you need it.</em></p>
<h3 id="what-i-learnt-in-the-migration">What I learnt in the migration</h3>
<h4 id="supporting-wcf-and-service-hooks">Supporting WCF and Service Hooks</h4>
<p>I had intended to keep support for both SOAP Alerts and Service Hooks in the new project, but I quickly realised there was little point: you cannot even register SOAP based alerts via the UI anymore, and it added a lot of complexity. So I decided to remove all the WCF SOAP handling.</p>
<h4 id="c-or-rest-tfs-api">C# or REST TFS API</h4>
<p>The SOAP Alert version used the older TFS C# API, hence you had to distribute these DLLs with the web site. Whilst refactoring I decided to swap all the TFS calls to use the new <a href="https://www.visualstudio.com/en-us/docs/integrate/api/overview">REST API</a>. This provided a couple of advantages:</p>
<ul>
<li>I did not need to distribute the TFS DLLs</li>
<li>Many of the newer functions of VSTS/TFS are only available via the REST API</li>
</ul>
<h4 id="exposing-jobjects-to-python">Exposing JObjects to Python</h4>
<p>I revised the way that TFS data is handled in the Python scripts. In the past I hand crafted data transfer objects for consumption within the Python scripts. The problem with this way of working is that it cannot handle custom objects; customised work items are a particular issue, as you don’t know their shape.</p>
<p>I found the best solution was to just return the Newtonsoft JObjects that I got from the C# based REST calls. These are easily consumed in Python in the general form</p>
<pre tabindex="0"><code>workitem[&#34;fields&#34;][&#34;System.State&#34;]
</code></pre><p>The downside is that this change does mean that any scripts you had created for the old SOAP Alert version will need a bit of work when you transfer to the new Service Hook version.</p>
<h4 id="create-a-release-pipeline">Create a release pipeline</h4>
<p>As per all good projects, I created a release pipeline for my internal test deployment. My process was as follows</p>
<ul>
<li>A VSTS build that builds the code from GitHub; this
<ul>
<li>Compiles the code</li>
<li>Runs all the unit tests</li>
<li>Packages the site as an MSDeploy package</li>
</ul>
</li>
<li>Followed by a VSTS release that
<ul>
<li>Sets the web.config entries</li>
<li>Deploys the MSDeploy package to Azure</li>
<li>Then uses FTP to upload the DSL DLL to Azure, as it is not part of the package</li>
</ul>
</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_335.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_330.png" title="image"></a></p>
<h3 id="future-steps">Future Steps</h3>
<h4 id="add-support-for-more-triggers">Add support for more triggers</h4>
<p>At the moment the Service Hook project supports the same trigger events as the old SOAP project, with the addition of support for Git Push triggers.</p>
<p>I need to add in handlers for the other trigger types supported in VSTS/TFS, specifically the release related ones. I suspect these might be useful.</p>
<h4 id="create-an-arm-template">Create an ARM template</h4>
<p>At the moment the deployment relies on the user creating the web site. It would be good to add an Azure Resource Manager (ARM) template to allow this site to be created automatically as part of the release process.</p>
<h3 id="summary">Summary</h3>
<p>So we have a nice new Python and Service Hook based framework to help manage your responses to Service Hook triggers for TFS and VSTS.</p>
<p>If you think it might be useful to you why not have a look at <a href="https://github.com/rfennell/VSTSServiceHookDsl" title="https://github.com/rfennell/VSTSServiceHookDsl">https://github.com/rfennell/VSTSServiceHookDsl</a>.</p>
<p>Interested to hear your feedback.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Transform tool for transferring TFS 2015.3 Release Templates to VSTS</title>
      <link>https://blog.richardfennell.net/posts/transform-tool-for-transferring-tfs-2015-3-release-templates-to-vsts/</link>
      <pubDate>Tue, 20 Dec 2016 21:58:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/transform-tool-for-transferring-tfs-2015-3-release-templates-to-vsts/</guid>
      <description>&lt;p&gt;If you are moving from on-premises TFS to VSTS you might hit the same problem I have just have. The structure of a VSTS releases is changing, there is now the concept of multiple ‘Deployment Steps’ in an environment. This means you can use a number of different agents for a single environment – a good thing.&lt;/p&gt;
&lt;p&gt;The downside is that if you export a TFS 2015.3 release process and try to import it into VSTS, it will fail saying the JSON format is incorrect.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are moving from on-premises TFS to VSTS you might hit the same problem I have just have. The structure of a VSTS releases is changing, there is now the concept of multiple ‘Deployment Steps’ in an environment. This means you can use a number of different agents for a single environment – a good thing.</p>
<p>The downside is that if you export a TFS 2015.3 release process and try to import it into VSTS, it will fail saying the JSON format is incorrect.</p>
<p>Of course you can get around this with some copy typing, but I am lazy, so….</p>
<p>I have written a quick transform tool that converts the basic structure of the JSON to the new format. You can see the code as a <a href="https://gist.github.com/rfennell/01ca5d09f87a02d980b7fac1054da546">GitHub Gist</a>.</p>
<p>It is a command line tool; usage is as follows:</p>
<ol>
<li>
<p>In VSTS create a new empty release, and save it</p>
</li>
<li>
<p>Use the drop down menu on the newly saved release in the release explorer and export the file. This is the template for the new format e.g. template.json</p>
</li>
<li>
<p>On your old TFS system export the release process in the same way to get your source file e.g. source.json</p>
</li>
<li>
<p>Run the command line tool providing the name of the template, source and output file</p>
<p><em>RMTransform template.json source.json output.json</em></p>
</li>
<li>
<p>On VSTS, import the newly created release JSON file.</p>
</li>
<li>
<p>A release process should be created, but it won&rsquo;t be possible to save it until you have fixed a few things that are not transferred:</p>
<ol>
<li>Associate each Deployment Step with an Agent Pool</li>
<li>Set the user accounts who will do the pre- and post-approvals</li>
<li>Any secret variables will need to be re-entered<br>
IMPORTANT – make sure you save the imported process as soon as you can (i.e. straight after fixing anything that is stopping it being saved). If you don&rsquo;t save and start clicking into artifacts or global variables, it seems to lose everything and you need to re-import</li>
</ol>
</li>
</ol>
<p><a href="/blogs/rfennell/image.axd?picture=image_334.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_329.png" title="image"></a></p>
<p>It is not perfect, and you might find other issues that need fixing, but it saves a load of copy typing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Deleting unwanted orphan XAML Build Controllers on a migrated VSTS instance</title>
      <link>https://blog.richardfennell.net/posts/deleting-unwanted-orphan-xaml-build-controllers-on-a-migrated-vsts-instance/</link>
      <pubDate>Tue, 20 Dec 2016 21:55:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/deleting-unwanted-orphan-xaml-build-controllers-on-a-migrated-vsts-instance/</guid>
      <description>&lt;p&gt;Whilst working with the &lt;a href=&#34;https://blogs.msdn.microsoft.com/visualstudioalm/2016/11/16/import-your-tfs-database-into-visual-studio-team-services/&#34;&gt;VSTS Data Import Service&lt;/a&gt; I ended up migrating a TFS TPC up to VSTS that had an old XAML Build Controller defined. I did not need this XAML build controller, in fact I needed to remove it because it was using my free private build controller slot. Problem was I could not find a way to remove it via the VSTS (or Visual Studio Team Explorer) UI, and the VM that had been running the build controller was long gone.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst working with the <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2016/11/16/import-your-tfs-database-into-visual-studio-team-services/">VSTS Data Import Service</a> I ended up migrating a TFS TPC up to VSTS that had an old XAML Build Controller defined. I did not need this XAML build controller, in fact I needed to remove it because it was using my free private build controller slot. Problem was I could not find a way to remove it via the VSTS (or Visual Studio Team Explorer) UI, and the VM that had been running the build controller was long gone.</p>
<p>The way I got rid of it in the end was the TFS C# API and a quick command line tool as shown below.</p>
<p>Note that you will need to delete any queued builds on the controller before you can delete it. You can do this via the VSTS browser interface.</p>
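<p>The core of that tool, sketched here as PowerShell against the same TFS client API, looked roughly like this. This is an illustration, not the original code: the DLL paths, collection URL and controller name are made up, and the exact delete call on the build service host may vary between API versions.</p>
<pre tabindex="0"><code># Remove an orphaned XAML build controller via the TFS 2015 client API.
# DLL paths, collection URL and controller name are illustrative.
Add-Type -Path 'C:\Tfs\Microsoft.TeamFoundation.Client.dll'
Add-Type -Path 'C:\Tfs\Microsoft.TeamFoundation.Build.Client.dll'

$tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection(
    (New-Object Uri 'https://myinstance.visualstudio.com/DefaultCollection'))
$buildServer = $tpc.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

# Find the orphan controller, then delete its build service host,
# which takes the controller (and any agents it owned) with it
$controller = $buildServer.QueryBuildControllers() |
    Where-Object { $_.Name -eq 'OLD-XAML-BUILD-VM - Controller' }
$buildServer.DeleteBuildServiceHost($controller.ServiceHost.Uri)
</code></pre>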
]]></content:encoded>
    </item>
    <item>
      <title>Problem adding external AAD user to a directory backed VSTS instance</title>
      <link>https://blog.richardfennell.net/posts/problem-adding-external-aad-user-to-a-directory-backed-vsts-instance/</link>
      <pubDate>Wed, 30 Nov 2016 21:31:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problem-adding-external-aad-user-to-a-directory-backed-vsts-instance/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;I recently decided to change one of my VSTS instances to be directory backed. What this means is that in the past users logged in using LiveIDs (MSAs by their new name); once the VSTS instance was linked to an Azure Active Directory (AAD), via the Azure portal, they could log in only if&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;they were using an account in the AAD&lt;/li&gt;
&lt;li&gt;their MSA was listed as a guest in the AAD&lt;/li&gt;
&lt;li&gt;they used a work ID in another AAD that is listed as a guest in my AAD&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Thus giving me centralised user management.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>I recently decided to change one of my VSTS instances to be directory backed. What this means is that in the past users logged in using LiveIDs (MSAs by their new name); once the VSTS instance was linked to an Azure Active Directory (AAD), via the Azure portal, they could log in only if</p>
<ul>
<li>they were using an account in the AAD</li>
<li>their MSA was listed as a guest in the AAD</li>
<li>they used a work ID in another AAD that is listed as a guest in my AAD</li>
</ul>
<p>Thus giving me centralised user management.</p>
<p>So I made the changes required, and the first two types of user were fine, but I had a problem with the third case. When I did the following</p>
<ul>
<li>Added an external Work ID to my AAD directory (via the old management portal <a href="https://manage.windowsazure.com" title="https://manage.windowsazure.com">https://manage.windowsazure.com</a>)</li>
<li>Added the user in my VSTS instance as a user</li>
<li>Granted the new user rights to access team projects.</li>
</ul>
<p>All seemed to go OK, but when I tried to log in as the user I got the error</p>
<p><em>TF400813: The user &lsquo;db0990ce-80ce-44fc-bac9-ff2cce4720affez_blackmarble.com#EXT#@richardblackmarbleco.onmicrosoft.com&rsquo; is not authorized to access this resource.</em></p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002_5.jpg"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002_thumb_5.jpg" title="clip_image002"></a></p>
<h3 id="solution">Solution</h3>
<p>With some help from Microsoft I got this fixed; it seems to have been an issue with Azure AD. The fix was to do the following:</p>
<ol>
<li>Remove the user from VSTS account</li>
<li>Go to the new Azure Portal (<a href="https://portal.azure.com/" title="https://portal.azure.com/">https://portal.azure.com/</a>) and remove this user from the AAD</li>
<li>Then re-add them as an external user back into the AAD (an invite email is sent)</li>
<li>Add the user again to VSTS (another invite email is sent)</li>
<li>Grant the user rights to the required team projects</li>
</ol>
<p>and this fixed the access problems for me. The key step, I think, was to use the new Azure portal.</p>
<p>Hope it saves you some time</p>
]]></content:encoded>
    </item>
    <item>
      <title>You can now manage complex ARM based environments in Azure DevTest Labs</title>
      <link>https://blog.richardfennell.net/posts/you-can-now-manage-complex-arm-based-environments-in-azure-devtest-labs/</link>
      <pubDate>Tue, 29 Nov 2016 13:42:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-can-now-manage-complex-arm-based-environments-in-azure-devtest-labs/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devtest-lab/&#34;&gt;Azure DevTest Labs&lt;/a&gt; has been available for a while and I &lt;a href=&#34;https://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/05/27/Easier-management-of-DevTest-VMs-with-Azure-DevTest-Labs&#34;&gt;have found it a good way&lt;/a&gt; to make sure I control the costs of VMs, i.e. making sure they are shut down outside office hours. However, in the past it had a major limitation for me: templates could only contain a single VM. You could group these together but it was a bit awkward.&lt;/p&gt;
&lt;p&gt;Well at &lt;a href=&#34;https://blogs.msdn.microsoft.com/devtestlab/2016/11/16/connect-2016-news-for-azure-devtest-labs-azure-resource-manager-template-based-environments-vm-auto-shutdown-and-more/&#34;&gt;Connect() it was announced&lt;/a&gt; that there is now complex ARM template support. So you can now build multi-VM environments that simulate the environments you need to work on as a single unit.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://azure.microsoft.com/en-gb/services/devtest-lab/">Azure DevTest Labs</a> has been available for a while and I <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/05/27/Easier-management-of-DevTest-VMs-with-Azure-DevTest-Labs">have found it a good way</a> to make sure I control the costs of VMs, i.e. making sure they are shut down outside office hours. However, in the past it had a major limitation for me: templates could only contain a single VM. You could group these together but it was a bit awkward.</p>
<p>Well at <a href="https://blogs.msdn.microsoft.com/devtestlab/2016/11/16/connect-2016-news-for-azure-devtest-labs-azure-resource-manager-template-based-environments-vm-auto-shutdown-and-more/">Connect() it was announced</a> that there is now complex ARM template support. So you can now build multi-VM environments that simulate the environments you need to work on as a single unit.</p>
<p>Why not give it a try?</p>
]]></content:encoded>
    </item>
    <item>
      <title>There are now no excuses for not using Continuous Delivery from VSTS for Azure Web Apps</title>
      <link>https://blog.richardfennell.net/posts/there-are-now-no-excuses-for-not-using-continuous-delivery-from-vsts-for-azure-web-apps/</link>
      <pubDate>Wed, 23 Nov 2016 23:02:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/there-are-now-no-excuses-for-not-using-continuous-delivery-from-vsts-for-azure-web-apps/</guid>
      <description>&lt;p&gt;One type of feature I hate people demoing in any IDE, especially Visual Studio, is the ‘just click here to publish to live from the developer’s PC’. This is just not good practice; we want to encourage a good DevOps process with&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Source Control&lt;/li&gt;
&lt;li&gt;Automated build&lt;/li&gt;
&lt;li&gt;Automated release with approvals&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The problem is, this can all be a bit much for people, it takes work and knowledge, and that right click is just too tempting.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One type of feature I hate people demoing in any IDE, especially Visual Studio, is the ‘just click here to publish to live from the developer’s PC’. This is just not good practice; we want to encourage a good DevOps process with</p>
<ul>
<li>Source Control</li>
<li>Automated build</li>
<li>Automated release with approvals</li>
</ul>
<p>The problem is, this can all be a bit much for people, it takes work and knowledge, and that right click is just too tempting.</p>
<p>So I was really pleased to see the new ‘Continuous Delivery (Preview)’ feature on Azure Web Apps announced at <a href="https://channel9.msdn.com/Events/Connect/2016">Connect().</a></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_332.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_327.png" title="image"></a></p>
<p>This provides that one-click simplicity, but creates a reasonably good DevOps pipeline using the features of VSTS, with either VSTS itself or GitHub as the source repository.</p>
<p>For details of the exact features and how to use it see the <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2016/11/17/azure-app-services-continuous-delivery/">ALM Blog post</a>. I am sure it will provide you with a good starting point for your ongoing development if you don’t want to build it from scratch; but remember this will not be your end game: you are probably still going to need to think about how you will manage the further config settings, tests and approvals a full process will require. It is just a much better place to start than a right click in Visual Studio.</p>
]]></content:encoded>
    </item>
    <item>
      <title>As announced at Connect() there is now a tool to fully migrate an on-premises TFS to VSTS</title>
      <link>https://blog.richardfennell.net/posts/as-announced-at-connect-there-is-now-a-tool-to-fully-migrate-an-on-premises-tfs-to-vsts/</link>
      <pubDate>Wed, 16 Nov 2016 16:47:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/as-announced-at-connect-there-is-now-a-tool-to-fully-migrate-an-on-premises-tfs-to-vsts/</guid>
      <description>&lt;p&gt;I am often asked ‘How can I move my TFS installation to VSTS?’&lt;/p&gt;
&lt;p&gt;In the past the only real answer I had was the consultant’s answer ‘it depends’. There were options, but they all ended up losing fidelity, i.e. the history of past changes got removed or altered in some manner. For many companies the implication of such changes meant they stayed on-premises; with all the regular backups, updates and patching the use of any on-premises service entails.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am often asked ‘How can I move my TFS installation to VSTS?’</p>
<p>In the past the only real answer I had was the consultant’s answer ‘it depends’. There were options, but they all ended up losing fidelity, i.e. the history of past changes got removed or altered in some manner. For many companies the implication of such changes meant they stayed on-premises; with all the regular backups, updates and patching the use of any on-premises service entails.</p>
<p>This has all changed with the <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2016/11/16/import-your-tfs-database-into-visual-studio-team-services/">announcement of the public preview of the TFS to VSTS Migrator</a> from Microsoft at the <a href="https://connectevent.microsoft.com/">Connect() conference</a>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_331.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_326.png" title="image"></a></p>
<p>In essence this allows a TFS Team Project Collection to be imported into VSTS as a new VSTS instance. This makes it sound simple, but it can be a complex process, depending upon your adoption of Azure Active Directory and the levels of customisation that have been made to your on-premises TFS instance, and it may require upgrading your TFS server to the current version. Hence, the process is <a href="http://www.devopsms.com/">Microsoft ALM/DevOps partner</a> led, and I am pleased to say that <a href="http://www.devopsms.com/">Black Marble is one of those Gold Partners</a>.</p>
<p>So if you have an on-premises TFS and…</p>
<ul>
<li>your company strategy is cloud first and you want to migrate, with full history</li>
<li>or you don’t want to patch your TFS server any more (or you stopped doing it a while ago)</li>
<li>or you just want to move to VSTS because it is where all the cool new bits are</li>
</ul>
<p>why not get in touch with us at <a href="http://www.blackmarble.co.uk/">Black Marble</a> or <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/contact">with me</a> to help you investigate the options?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Changes to LiveID/MSA and what I have done about it to get around the new domain limitations</title>
      <link>https://blog.richardfennell.net/posts/changes-to-liveidmsa-and-what-i-have-done-about-it-to-get-around-the-new-domain-limitations/</link>
      <pubDate>Wed, 09 Nov 2016 16:44:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/changes-to-liveidmsa-and-what-i-have-done-about-it-to-get-around-the-new-domain-limitations/</guid>
      <description>&lt;h3 id=&#34;what-are-the-changes-in-allowed-email-addresses-in-msas&#34;&gt;What are the changes in allowed email addresses in MSAs?&lt;/h3&gt;
&lt;p&gt;You may or may not have noticed that there has been a recent change in LiveID (or Microsoft Account, MSA, as they are now called). In the past you could create an MSA using an existing email address e.g. &lt;a href=&#34;mailto:richard@mydomain.co.uk&#34;&gt;richard@mydomain.co.uk&lt;/a&gt;. This is no longer an option. If you try to create a new MSA with a non-Microsoft (outlook.com/hotmail.com) email it is blocked with the message &lt;em&gt;‘This email is part of a reserved domain. Please enter a different email address’&lt;/em&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="what-are-the-changes-in-allowed-email-addresses-in-msas">What are the changes in allowed email addresses in MSAs?</h3>
<p>You may or may not have noticed that there has been a recent change in LiveID (or Microsoft Account, MSA, as they are now called). In the past you could create an MSA using an existing email address e.g. <a href="mailto:richard@mydomain.co.uk">richard@mydomain.co.uk</a>. This is no longer an option. If you try to create a new MSA with a non-Microsoft (outlook.com/hotmail.com) email it is blocked with the message <em>‘This email is part of a reserved domain. Please enter a different email address’</em>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_330.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_325.png" title="image"></a></p>
<p>This limitation is actually a bit more complex than you might initially think, as it is not just your primary corporate work email it checks; it also checks any aliases you have. So in my case it would give the same error for <a href="mailto:richard@mydomain.com">richard@mydomain.com</a> as well as <a href="mailto:richard@mydomain.co.uk">richard@mydomain.co.uk</a>, because they are both valid non-Microsoft domains, even though one is really only used as an email alias for the other.</p>
<p>So if creating a new MSA you will need to create a <a href="mailto:user@outlook.com">user@outlook.com</a> style address. This is something all our new staff need to do as at present you need an MSA to associate with your MSDN subscription.</p>
<p>In the past we asked them to create this MSA with their <a href="mailto:user@mydomain.co.uk">user@mydomain.co.uk</a> email alias. This email address is an alias for their primary work email account, not their primary work account address <a href="mailto:user@mydomain.com">user@mydomain.com</a> itself. We encouraged them not to use their primary email address, as it gets confusing as to which type of account (MSA or work account) is in use at any given login screen if their login name is the same for both (their primary email address). We now have to ask them to create one in the form <a href="mailto:bm-user@outlook.com">bm-user@outlook.com</a> to associate their MSDN subscription with.</p>
<h3 id="so-that-is-all-good-but-that-about-any-existing-accounts">So that is all good, but what about any existing accounts?</h3>
<p>I think the best option is to update any existing accounts to use new <a href="mailto:user@outlook.com">user@outlook.com</a> addresses. I have found that if you don’t do this you get into a place where the Azure/VSTS/O365 etc. logins get confused as to whether your account is an MSA or a Work Account. I actually managed to get to the point where I suddenly could not log in to an Azure Active Directory (AAD) backed VSTS instance due to this confusion (the fix was to remove my ‘confused’ MSA and re-add my actual corporate AAD work account).</p>
<h3 id="how-do-i-fix-that-then">How do I fix that then?</h3>
<p>To try to forestall this problem on other services I decided to update my old MSA email address by doing the following:</p>
<ol>
<li>Login as my old MSA</li>
<li>Go to <a href="https://account.microsoft.com">https://account.microsoft.com</a></li>
<li>Select ‘Info’</li>
<li>Select ‘Manage how you sign in to Microsoft’</li>
<li>Select ‘Add a new email address’</li>
<li>Create a new <strong>@outlook.com</strong> email address (this will create a new email/Outlook inbox, but note that this seems to take a few minutes, or it did for me)</li>
<li>Once the new email alias is created you can choose to make it your primary login address</li>
<li>Finally you can delete your old address from the MSA</li>
</ol>
<p>And you are done; you can now log in with your new <a href="mailto:user@outlook.com">user@outlook.com</a> address, with your existing password and any 2FA settings you have, to any services you previously logged in to, e.g. the MSDN web site, VSTS etc.</p>
<p>The one extra step I did was to go into <a href="https://outlook.live.com">https://outlook.live.com</a>, once the new email inbox was created, to access the new Inbox and set up an email forward to my old <a href="mailto:richard@mydomain.co.uk">richard@mydomain.co.uk</a> email address. This was just to make sure any email notifications sent to the MSA’s new Inbox end up somewhere I will actually see them; the last thing I wanted was a new Inbox to monitor.</p>
<h3 id="summary">Summary</h3>
<p>So I have migrated the primary email address for my MSA and all is good. You might not need this today, but I suspect it is something most people with MSAs using a work email as their primary login address are going to have to address at some point in the future.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can’t login to OneDrive desktop application on Windows 10</title>
      <link>https://blog.richardfennell.net/posts/cant-login-to-onedrive-desktop-application-on-windows-10/</link>
      <pubDate>Tue, 25 Oct 2016 09:47:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cant-login-to-onedrive-desktop-application-on-windows-10/</guid>
      <description>&lt;p&gt;Whilst I have been on holiday my PC has been switched off and in a laptop bag. This did not seem to stop me getting problems when I tried to use it again…&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Outlook could not sync to O365 – turns out there had been some changes in our hybrid Exchange infrastructure, I just need to restart/patch the PC on our company LAN to pick up all the new group policy settings etc.&lt;/li&gt;
&lt;li&gt;Could not login to OneDrive getting a script error &lt;strong&gt;&lt;a href=&#34;https://auth.gfx.ms/16.000.26657.00/DefaultLogin_Core.js&#34;&gt;https://auth.gfx.ms/16.000.26657.00/DefaultLogin_Core.js&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_329.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_324.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst I have been on holiday my PC has been switched off and in a laptop bag. This did not seem to stop me getting problems when I tried to use it again…</p>
<ul>
<li>Outlook could not sync to O365 – turns out there had been some changes in our hybrid Exchange infrastructure; I just needed to restart/patch the PC on our company LAN to pick up all the new group policy settings etc.</li>
<li>Could not login to OneDrive getting a script error <strong><a href="https://auth.gfx.ms/16.000.26657.00/DefaultLogin_Core.js">https://auth.gfx.ms/16.000.26657.00/DefaultLogin_Core.js</a></strong></li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_329.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_324.png" title="image"></a></p>
<p>This second problem was a bit more complex to fix.</p>
<ol>
<li>Load Internet Explorer (not Edge)</li>
<li>Go to Settings &gt; Internet Options &gt; Security</li>
<li>Pick Trusted Sites and manually add the URL <strong><a href="https://auth.gfx.ms">https://auth.gfx.ms</a></strong> as a trusted site.</li>
<li>Unload the OneDrive desktop client</li>
<li>Reload the OneDrive desktop client and you should get the usual LiveID login and all is good</li>
</ol>
<p>Interestingly, if you remove the trusted site setting the login still appears to work, but for how long I don’t know. I assume something is being cached.</p>
<p>So it appears there have been a few security changes whilst I have been away.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My video about DevOps with VSTS has been published on Channel9</title>
      <link>https://blog.richardfennell.net/posts/my-video-about-devops-with-vsts-has-been-published-on-channel9/</link>
      <pubDate>Mon, 24 Oct 2016 16:44:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-video-about-devops-with-vsts-has-been-published-on-channel9/</guid>
      <description>&lt;p&gt;I have been on holiday in Nepal for a couple of weeks; trekking to Annapurna Base Camp&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/blogs/rfennell/image.axd?picture=IMG_0020_1.jpg&#34;&gt;&lt;img alt=&#34;IMG_0020&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/blogs/rfennell/image.axd?picture=IMG_0020_thumb_1.jpg&#34; title=&#34;IMG_0020&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Whilst I was ‘off the grid’ a video I made on using the tools within &lt;a href=&#34;https://channel9.msdn.com/events/TechDaysOnline/UK-TechDays-Online-2016/A-whistle-stop-tour-of-Visual-Studio-Team-Services-and-DevOps&#34;&gt;VSTS to build a DevOps pipeline got published on Channel9&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Hope you find it useful&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note to myself&lt;/strong&gt;: In the future I must make sure I have my eyes open for the landing screen. - Now fixed thanks to the CH9 curators&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been on holiday in Nepal for a couple of weeks; trekking to Annapurna Base Camp</p>
<p><a href="/blogs/rfennell/image.axd?picture=IMG_0020_1.jpg"><img alt="IMG_0020" loading="lazy" src="/blogs/rfennell/image.axd?picture=IMG_0020_thumb_1.jpg" title="IMG_0020"></a></p>
<p>Whilst I was ‘off the grid’ a video I made on using the tools within <a href="https://channel9.msdn.com/events/TechDaysOnline/UK-TechDays-Online-2016/A-whistle-stop-tour-of-Visual-Studio-Team-Services-and-DevOps">VSTS to build a DevOps pipeline got published on Channel9</a></p>
<p>Hope you find it useful</p>
<p><strong>Note to myself</strong>: In the future I must make sure I have my eyes open for the landing screen. - Now fixed thanks to the CH9 curators</p>
]]></content:encoded>
    </item>
    <item>
      <title>Devils with Arms and Cats - the new name for DevOps?</title>
      <link>https://blog.richardfennell.net/posts/devils-with-arms-and-cats-the-new-name-for-devops/</link>
      <pubDate>Sun, 02 Oct 2016 10:10:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/devils-with-arms-and-cats-the-new-name-for-devops/</guid>
      <description>&lt;p&gt;Great day at &lt;a href=&#34;http://www.dddnorth.co.uk/&#34;&gt;DDDNorth&lt;/a&gt; yesterday, hope everyone enjoyed it. Thanks to all the team who helped during the preparation and on the day.&lt;/p&gt;
&lt;p&gt;The slides from &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth&#34;&gt;Rik Hepworth&lt;/a&gt; and my presentation on ‘Living the dream - Real world DevOps with Azure and VSTS’ are up on &lt;a href=&#34;http://bit.ly/2cIAiz7&#34; title=&#34;http://bit.ly/2cIAiz7&#34;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We were a late stand-in session to cover for a presenter who could not attend on the day. So I hope it was not too much of a let down that we were not the speakers on the agenda, covered a different subject, and did not match the title the spell checker converted our session title to. Though ‘Devils with Arms and Cats’ is maybe a good term for DevOps?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Great day at <a href="http://www.dddnorth.co.uk/">DDDNorth</a> yesterday, hope everyone enjoyed it. Thanks to all the team who helped during the preparation and on the day.</p>
<p>The slides from <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth">Rik Hepworth</a> and my presentation on ‘Living the dream - Real world DevOps with Azure and VSTS’ are up on <a href="http://bit.ly/2cIAiz7" title="http://bit.ly/2cIAiz7">GitHub</a>.</p>
<p>We were a late stand-in session to cover for a presenter who could not attend on the day. So I hope it was not too much of a let down that we were not the speakers on the agenda, covered a different subject, and did not match the title the spell checker converted our session title to. Though ‘Devils with Arms and Cats’ is maybe a good term for DevOps?</p>
<p>As to the grok talk I did at lunchtime on developing VSTS extensions with VS Code, there are no slides; but look at these past posts, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/21/Using-Visual-Studio-Code-to-develop-VSTS-Build-Tasks-with-PowerShell-and-Pester-tests">building VSTS tasks with PowerShell</a> and <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/05/06/Putting-a-release-process-around-my-VSTS-extension-development">putting a release process around VSTS extension development</a>, which are on similar subjects.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My DevTest Labs video is up on Channel9</title>
      <link>https://blog.richardfennell.net/posts/my-devtest-labs-video-is-up-on-channel9/</link>
      <pubDate>Thu, 29 Sep 2016 18:54:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-devtest-labs-video-is-up-on-channel9/</guid>
      <description>&lt;p&gt;The video of my session at TechDays online is now &lt;a href=&#34;https://channel9.msdn.com/events/TechDaysOnline/UK-TechDays-Online-September-2016/DevTest-in-the-Cloud-with-Azure-and-Friends&#34;&gt;available on Channel9&lt;/a&gt;, along with all the &lt;a href=&#34;https://channel9.msdn.com/Events/TechDaysOnline/UK-TechDays-Online-September-2016&#34;&gt;TechDay Online UK content&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Enjoy…&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The video of my session at TechDays online is now <a href="https://channel9.msdn.com/events/TechDaysOnline/UK-TechDays-Online-September-2016/DevTest-in-the-Cloud-with-Azure-and-Friends">available on Channel9</a>, along with all the <a href="https://channel9.msdn.com/Events/TechDaysOnline/UK-TechDays-Online-September-2016">TechDay Online UK content</a></p>
<p>Enjoy…</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running Test Suites within a network Isolated Lab Management environment when using TFS vNext build and release tooling</title>
      <link>https://blog.richardfennell.net/posts/running-test-suites-within-a-network-isolated-lab-management-environment-when-using-tfs-vnext-build-and-release-tooling/</link>
      <pubDate>Tue, 27 Sep 2016 19:17:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-test-suites-within-a-network-isolated-lab-management-environment-when-using-tfs-vnext-build-and-release-tooling/</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Updated 27 Sep 2016&lt;/strong&gt;: Added solutions to known issues&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;As I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/?tag=Lab&amp;#43;Management&#34;&gt;posted many times&lt;/a&gt; we make use of TFS Lab Management to provide network isolated dev/test environments. Going forward I see us moving to &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devtest-lab/&#34;&gt;Azure Dev Labs&lt;/a&gt; and/or &lt;a href=&#34;https://azure.microsoft.com/en-gb/overview/azure-stack/&#34;&gt;Azure Stack&lt;/a&gt; with &lt;a href=&#34;https://azure.microsoft.com/en-gb/documentation/templates/&#34;&gt;ARM templates&lt;/a&gt;, but that isn’t going to help me today, especially when I have already made the investment in setting up Lab Management environments and they are ready to use.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em><strong>Updated 27 Sep 2016</strong>: Added solutions to known issues</em></p>
<h3 id="background">Background</h3>
<p>As I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/?tag=Lab&#43;Management">posted many times</a> we make use of TFS Lab Management to provide network isolated dev/test environments. Going forward I see us moving to <a href="https://azure.microsoft.com/en-gb/services/devtest-lab/">Azure Dev Labs</a> and/or <a href="https://azure.microsoft.com/en-gb/overview/azure-stack/">Azure Stack</a> with <a href="https://azure.microsoft.com/en-gb/documentation/templates/">ARM templates</a>, but that isn’t going to help me today, especially when I have already made the investment in setting up Lab Management environments and they are ready to use.</p>
<p>One change we are making now is a move from the old TFS Release Management (2013 generation) to the <a href="https://www.visualstudio.com/en-us/features/release-management-vs.aspx">new VSTS and TFS 2015.2 vNext Release tools</a>. This means I need to be able to trigger automated tests on VMs within Lab Management network isolated environments with a command inside my new build/release process. I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/04/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline">posted on how to do this with the older generation Release Management tools</a>; it turns out it is in some ways a little simpler with the newer tooling, with no need to fiddle with shadow accounts et al.</p>
<h3 id="my-setup">My Setup</h3>
<p><a href="/wp-content/uploads/sites/2/historic/image_312.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_308.png" title="image"></a></p>
<h3 id="constraints">Constraints</h3>
<p>The constraints are these</p>
<ul>
<li>I need to be able to trigger tests on the Client VM in the network isolated lab environment. These tests are all defined in automated test suites within Microsoft Test Manager.</li>
<li>The network isolated lab already has a TFS Test Agent deployed on all the VMs in the environment linked back to the TFS Test Controller on my corporate domain, these agents are automatically installed and managed, and are <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management">handling the ‘magic’ for the network isolation</a> – we can’t fiddle with these without breaking the Labs </li>
<li>The new build/release tools assume that you will auto deploy a 2015 generation Test Agent via a build task as part of the build/release process. This is a new test agent install, so it removes any already installed Test Agent – we don’t want this, as it breaks the existing agent/network isolation.</li>
<li>So my only option is to trigger the tests using TCM (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/04/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline">as we did in the past</a>) from some machine in the system. In the past (with the old tools) this had to be within the isolated network environment due to the limitations put in place by the use of shadow accounts.</li>
<li>However, TCM (as shipped with VS 2015) does not ‘understand’ vNext builds, so it cannot find them by definition name/number – we have to find builds by their drop location, and I think this needs to be a UNC share, not a drop back onto the TFS server. So using TCM.EXE (and any wrapper scripts) probably is not going to deliver what I want, i.e. a test run associated with a vNext build and/or release.</li>
</ul>
<h3 id="my-solution">My Solution</h3>
<p>The solution I adopted was to write a PowerShell script that performs the same function as the TCMEXEC.PS1 script that used to be run within the network isolated Lab Environment by the older Release Management products.</p>
<p>The difference is that where the old script shelled out to run TCM.EXE, my new version makes calls to the new TFS REST API (and unfortunately also to the older C# API, as some features, notably those for Lab Management services, are not exposed via REST). This script can be run from anywhere; I chose to run it on the TFS vNext build agent, as this is easiest and this machine already had Visual Studio installed, so the TFS C# API was available.</p>
<p><a href="https://github.com/rfennell/VSTSPowershell/blob/master/REST/TCMReplacement.ps1">You can find this script on my VSTSPowerShell GitHub Repo</a>.</p>
<p>The usage of the script is</p>
<pre tabindex="0"><code>TCMReplacement.ps1
      -Collectionuri http://tfsserver.domain.com:8080/tfs/defaultcollection/
      -Teamproject &#34;My Project&#34;
      -testplanname &#34;My test plan&#34;
      -testsuitename &#34;Automated tests&#34;
      -configurationname &#34;Windows 8&#34;
      -buildid 12345
      -environmentName &#34;Lab V.2.0&#34;
      -testsettingsname &#34;Test Setting&#34;
      -testrunname &#34;Smoke Tests&#34;
      -testcontroller &#34;mytestcontroller.domain.com&#34;
      -releaseUri &#34;vstfs:///ReleaseManagement/Release/167&#34;
      -releaseenvironmenturi &#34;vstfs:///ReleaseManagement/Environment/247&#34;
</code></pre><p>Note</p>
<ul>
<li>The last two parameters are optional; all the others are required. If the last two are not used the test results will not be associated with a release.</li>
<li>There is also a <strong>pollinginterval</strong> parameter, which defaults to 10 seconds. The script starts a test run then polls on this interval to see if it has completed.</li>
<li>If there are any failed tests then the script calls <strong>write-error</strong>, so the TFS build process sees this as a failed step.</li>
</ul>
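<p>The start-then-poll pattern described above can be sketched as follows. This is a minimal illustration only, not the script’s actual code; the helper name and the terminal run-state strings are my assumptions for the sketch.</p>

```python
import time

def wait_for_run(get_state, polling_interval=10, timeout=3600, sleep=time.sleep):
    """Poll get_state() until the test run reaches a terminal state.

    get_state is any callable returning the current run state string,
    e.g. read from a GET on the test run's REST endpoint.
    The terminal state names here are assumptions for illustration.
    """
    waited = 0
    while waited <= timeout:
        state = get_state()
        if state in ("Completed", "Aborted", "NeedsInvestigation"):
            return state
        sleep(polling_interval)
        waited += polling_interval
    raise TimeoutError("test run did not complete within the timeout")
```

Injecting the sleep function makes the loop easy to test without real delays.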
<p>In some ways I think this script is an improvement over the TCMEXEC script: the old one needed you to know the IDs for many of the settings (requiring loads of poking around in Microsoft Test Manager to find them), whereas I allow the common names of settings to be passed in, which I then use to look up the required values via the APIs (this is where I needed to use the older C# API, as I could not find a way to get the Configuration ID, Environment ID or Test Settings ID via REST).</p>
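<p>The name-to-ID lookup amounts to listing the items via the API and matching on the name client side. A hedged Python sketch (the <code>count</code>/<code>value</code> wrapper shown is the usual shape of TFS REST list responses; the helper name is mine, not from the script):</p>

```python
def find_id_by_name(list_response, name):
    """Pick the id of the item whose name matches, case-insensitively.

    list_response is the parsed JSON of a list call such as
    GET {collection}/{project}/_apis/test/plans?api-version=1.0,
    i.e. a dict like {"count": 2, "value": [{"id": 1, "name": "..."}, ...]}.
    """
    for item in list_response.get("value", []):
        if item["name"].lower() == name.lower():
            return item["id"]
    raise KeyError("no item named '%s' found" % name)
```

The same matching idea applies whether the list is of test plans, suites or configurations.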
<p>There is nothing stopping you running this script from the command line, but I think it is more likely to become part of a release pipeline, using the PowerShell on local machine task in the build system. When used this way you can get many of the parameters from environment variables. So the command arguments become something like the following (and of course you can make all the string values build variables too if you want)</p>
<pre tabindex="0"><code>   -Collectionuri $(SYSTEM.TEAMFOUNDATIONCOLLECTIONURI)
   -Teamproject $(SYSTEM.TEAMPROJECT)
   -testplanname &#34;My test plan&#34;
   -testsuitename &#34;Automated tests&#34;
   -configurationname &#34;Windows 8&#34;
   -buildid $(BUILD.BUILDID)
   -environmentName &#34;Lab V.2.0&#34;
   -testsettingsname &#34;Test Settings&#34;
   -testrunname &#34;Smoke Tests&#34;
   -testcontroller &#34;mytestcontroller.domain.com&#34;
   -releaseUri $(RELEASE.RELEASEURI)
   -releaseenvironmenturi $(RELEASE.ENVIRONMENTURI)
</code></pre><p>Obviously this script is potentially a good candidate for a TFS build/release task, but as per my usual practice I will make sure I am happy with its operation before wrapping it up into an extension.</p>
<h3 id="known-issues">Known Issues</h3>
<ul>
<li>
<p>If you run the script from the command line targeting a completed build and release the tests run and are shown in the release report as well as on the test tab as we would expect.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_313.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_309.png" title="image"></a></p>
<p>However, if you trigger the test run from within a release pipeline, the test runs OK and you can see the results in the test tab (and MTM), but they are not associated with the release. My guess is that this is because the release has not completed when the data update is made. I am investigating this to try to address the issue.</p>
</li>
</ul>
<p>Previously I reported there was a known issue that the test results were associated with the build, but not the release. It turns out this was because the AD account the build/release agent was running as was missing rights on the TFS server. To fix the problem I made sure the account was configured as follows:</p>
<ul>
<li>As a basic (or advanced) user who can access the test hub (<a href="http://%3cyourserver%3e:8080/tfs/_admin/_licenses">http://&lt;yourserver&gt;:8080/tfs/_admin/_licenses</a>) i.e. not a stakeholder</li>
<li>Also made sure the user was a ‘release administrator’ in the team project (<a href="http://%3cyourserver%3e:8080/tfs/%3CTPC%3E/%3CTP%3E/_admin/_security/?_a=members">http://&lt;yourserver&gt;:8080/tfs/&lt;TPC&gt;/&lt;TP&gt;/_admin/_security/?_a=members</a>)</li>
</ul>
<p>Once this was done, all the test results appeared where they should.</p>
<p>So hopefully you will find this a useful tool if you are using network isolated environments and TFS builds.</p>
]]></content:encoded>
    </item>
    <item>
<title>If I add a custom field to a VSTS work item type what is its name?</title>
      <link>https://blog.richardfennell.net/posts/if-i-add-a-custom-field-to-a-vsts-work-item-type-what-is-its-name/</link>
      <pubDate>Fri, 23 Sep 2016 16:08:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/if-i-add-a-custom-field-to-a-vsts-work-item-type-what-is-its-name/</guid>
<description>&lt;p&gt;The process customisation options in VSTS are now fairly extensive. &lt;a href=&#34;https://www.visualstudio.com/en-us/docs/work/process/customize-process&#34;&gt;You can add fields, states and custom items&lt;/a&gt;, making VSTS a ‘very possible’ option for many more people.&lt;/p&gt;
&lt;p&gt;As well as the obvious uses of this customisation such as storing more data or matching your required process, customisation can also aid in migrating work items into VSTS from other VSTS instances, or on-premises TFS.&lt;/p&gt;
&lt;p&gt;Whether using &lt;a href=&#34;https://tfsintegration.codeplex.com/&#34;&gt;TFS Integration&lt;/a&gt; (now with no support – beware) or &lt;a href=&#34;https://github.com/nkdAgility/vsts-data-bulk-editor&#34;&gt;Martin Hinshelwood’s vsts-data-bulk-editor&lt;/a&gt; (an active open source solution so probably a much better choice for most people) as &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/20/Migrating-a-TFS-TFVC-team-project-to-a-Git-team-project&#34;&gt;mentioned in my past post&lt;/a&gt;, you need to add a custom field on the target VSTS server to contain the original work item ID, commonly called &lt;strong&gt;ReflectedWorkItemId&lt;/strong&gt;.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>The process customisation options in VSTS are now fairly extensive. <a href="https://www.visualstudio.com/en-us/docs/work/process/customize-process">You can add fields, states and custom items</a>, making VSTS a ‘very possible’ option for many more people.</p>
<p>As well as the obvious uses of this customisation such as storing more data or matching your required process, customisation can also aid in migrating work items into VSTS from other VSTS instances, or on-premises TFS.</p>
<p>Whether using <a href="https://tfsintegration.codeplex.com/">TFS Integration</a> (now with no support – beware) or <a href="https://github.com/nkdAgility/vsts-data-bulk-editor">Martin Hinshelwood’s vsts-data-bulk-editor</a> (an active open source solution so probably a much better choice for most people) as <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/20/Migrating-a-TFS-TFVC-team-project-to-a-Git-team-project">mentioned in my past post</a>, you need to add a custom field on the target VSTS server to contain the original work item ID, commonly called <strong>ReflectedWorkItemId</strong>.</p>
<p>This can be added in VSTS <a href="https://www.visualstudio.com/en-us/docs/work/process/customize-process">as detailed in MSDN</a>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_327.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_322.png" title="image"></a></p>
<p><em><strong>Note</strong>: In the case of Martin’s tool the field needs to be a string, as it is going to contain a URL, not the simple integer you might expect.</em></p>
<p>The small issue you have when you add a custom field is that this UI does not make it clear what the full name of the field is. You need to remember that it is in the form <strong>&lt;name of custom process&gt;.&lt;field name&gt;</strong> e.g. <strong>MigrateScrum.ReflectedWorkItemId</strong>.</p>
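<p>If you are scripting against such fields, the reference name can simply be composed from the process and field names. A trivial sketch (the helper is mine, shown only to make the naming rule concrete):</p>

```python
def custom_field_refname(process_name, field_name):
    # Custom fields are referenced as <name of custom process>.<field name>,
    # e.g. MigrateScrum.ReflectedWorkItemId for a process called MigrateScrum.
    return "{0}.{1}".format(process_name, field_name)
```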
<p>If you forget this you can always download the work item definition using the <a href="https://visualstudiogallery.msdn.microsoft.com/898a828a-af00-42c6-bbb2-530dc7b8f2e1">TFS Power Tools</a> to have a look (yes this even works on VSTS).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_328.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_323.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Typemock have released official VSTS build extension</title>
      <link>https://blog.richardfennell.net/posts/typemock-have-released-official-vsts-build-extension/</link>
      <pubDate>Mon, 12 Sep 2016 21:17:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/typemock-have-released-official-vsts-build-extension/</guid>
<description>&lt;p&gt;Typemock have just released an &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=Typemock.Typemock-Tasks&#34;&gt;official VSTS build extension to run Typemock Isolator based tests&lt;/a&gt;. Given there is now an official extension I have decided to deprecate mine; &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-TypeMockRunner-Task&#34;&gt;it is still available in the Marketplace&lt;/a&gt;, but I would recommend using the official one.&lt;/p&gt;
&lt;p&gt;The new Typemock extension includes two tasks&lt;/p&gt;
&lt;h3 id=&#34;smartrunner-task&#34;&gt;SmartRunner Task&lt;/h3&gt;
&lt;p&gt;The SmartRunner is a unit test runner that can run NUnit and MSTest based tests. It handles the deployment of Typemock Isolator. SmartRunner can run on both Shared and On Premises Agents.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Typemock have just released an <a href="https://marketplace.visualstudio.com/items?itemName=Typemock.Typemock-Tasks">official VSTS build extension to run Typemock Isolator based tests</a>. Given there is now an official extension I have decided to deprecate mine; <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-TypeMockRunner-Task">it is still available in the Marketplace</a>, but I would recommend using the official one.</p>
<p>The new Typemock extension includes two tasks</p>
<h3 id="smartrunner-task">SmartRunner Task</h3>
<p>The SmartRunner is a unit test runner that can run NUnit and MSTest based tests. It handles the deployment of Typemock Isolator. SmartRunner can run on both Shared and On Premises Agents.</p>
<h3 id="typemock-with-vstests">Typemock with VSTests</h3>
<p>This task acts as a wrapper to enable Typemock Isolator and then run your tests via VSTest. This task can only be used with On Premises Agents as the build agent needs to be running with admin privileges.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for my Docker image create dates being 8 hours in the past</title>
      <link>https://blog.richardfennell.net/posts/fix-for-my-docker-image-create-dates-being-8-hours-in-the-past/</link>
      <pubDate>Wed, 24 Aug 2016 20:18:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-my-docker-image-create-dates-being-8-hours-in-the-past/</guid>
      <description>&lt;p&gt;I have been having a look at &lt;a href=&#34;https://docs.docker.com/docker-for-windows/&#34;&gt;Docker for Windows&lt;/a&gt; recently. I have been experiencing a problem that when I create a new image the created date/time (as shown with &lt;strong&gt;docker images&lt;/strong&gt;) is 8 hours in the past.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_326.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_321.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Turns out the problem seems to be due to putting my Windows 10 laptop into sleep mode. So the process to see the problem is&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Create a new Docker image – the create date is correct, the current time&lt;/li&gt;
&lt;li&gt;Sleep the PC&lt;/li&gt;
&lt;li&gt;Wake up the PC&lt;/li&gt;
&lt;li&gt;Check the create date, it is now 8 hours off in the past&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now the create date is not an issue in itself, but the fact that the time within the Docker images is also off by 8 hours can be, especially when trying to connect to cloud based services. I needed to sort it out&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been having a look at <a href="https://docs.docker.com/docker-for-windows/">Docker for Windows</a> recently. I have been experiencing a problem that when I create a new image the created date/time (as shown with <strong>docker images</strong>) is 8 hours in the past.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_326.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_321.png" title="image"></a></p>
<p>Turns out the problem seems to be due to putting my Windows 10 laptop into sleep mode. So the process to see the problem is</p>
<ol>
<li>Create a new Docker image – the create date is correct, the current time</li>
<li>Sleep the PC</li>
<li>Wake up the PC</li>
<li>Check the create date, it is now 8 hours off in the past</li>
</ol>
<p>Now the create date is not an issue in itself, but the fact that the time within the Docker images is also off by 8 hours can be, especially when trying to connect to cloud based services. I needed to sort it out.</p>
<p>Turns out the fix is simple, you need to stop and restart the Docker process (or restarting the PC has the same effect as this restarts the Docker process). Why the Docker process ends up 8 hours off, irrespective of the time the PC is slept, I don’t know. Just happy to have a quick fix.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I am speaking at Microsoft UK TechDays Online event on Azure DevTest Labs</title>
      <link>https://blog.richardfennell.net/posts/i-am-speaking-at-microsoft-uk-techdays-online-event-on-azure-devtest-labs/</link>
      <pubDate>Wed, 24 Aug 2016 09:06:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-am-speaking-at-microsoft-uk-techdays-online-event-on-azure-devtest-labs/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;https://www.eventbrite.co.uk/e/techdays-online-september-2016-led-by-mvps-tickets-27251539087?ref=estw&#34;&gt;registration link&lt;/a&gt; for Microsoft UK TechDays Online is now live. This is a 5 day event live broadcast from the Microsoft Campus in Reading. You will be able to view the sessions live at &lt;a href=&#34;https://channel9.msdn.com/&#34;&gt;https://channel9.msdn.com/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The themes for each day are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Monday, 12 September: Explore the world of Data Platform and BOTs&lt;/li&gt;
&lt;li&gt;Tuesday, 13 September: DevOps in practice&lt;/li&gt;
&lt;li&gt;Wednesday, 14 September: A day at the Office!&lt;/li&gt;
&lt;li&gt;Thursday, 15 September: The inside track on Azure and UK Datacenter&lt;/li&gt;
&lt;li&gt;Friday, 16 September: Find out more about Artificial Intelligence&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I am doing a session on the Thursday on &lt;a href=&#34;https://azure.microsoft.com/en-gb/services/devtest-lab/&#34;&gt;Azure DevTest Labs&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="https://www.eventbrite.co.uk/e/techdays-online-september-2016-led-by-mvps-tickets-27251539087?ref=estw">registration link</a> for Microsoft UK TechDays Online is now live. This is a 5 day event live broadcast from the Microsoft Campus in Reading. You will be able to view the sessions live at <a href="https://channel9.msdn.com/">https://channel9.msdn.com/</a></p>
<p>The themes for each day are:</p>
<ul>
<li>Monday, 12 September: Explore the world of Data Platform and BOTs</li>
<li>Tuesday, 13 September: DevOps in practice</li>
<li>Wednesday, 14 September: A day at the Office!</li>
<li>Thursday, 15 September: The inside track on Azure and UK Datacenter</li>
<li>Friday, 16 September: Find out more about Artificial Intelligence</li>
</ul>
<p>I am doing a session on the Thursday on <a href="https://azure.microsoft.com/en-gb/services/devtest-lab/">Azure DevTest Labs</a>.</p>
<p>Hope you find time to watch some or all of the events. For more details see the <a href="https://www.eventbrite.co.uk/e/techdays-online-september-2016-led-by-mvps-tickets-27251539087?ref=estw">registration link</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Why have I got a ‘.NETCore50’ and a ‘netcore50’ folder in my nuget package?</title>
      <link>https://blog.richardfennell.net/posts/why-have-i-got-a-netcore50-and-a-netcore50-folder-in-my-nuget-package/</link>
      <pubDate>Tue, 23 Aug 2016 14:23:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-have-i-got-a-netcore50-and-a-netcore50-folder-in-my-nuget-package/</guid>
<description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/08/16/Experiences-versioning-related-sets-of-NuGet-packages-within-a-VSTS-build&#34;&gt;recently posted on how we were versioning our Nuget packages as part of a release pipeline&lt;/a&gt;. In test we noticed that the packages being produced by this process had an extra folder inside them.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_325.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_320.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;We expected there to be a &lt;strong&gt;netcore50&lt;/strong&gt; folder, but not a &lt;strong&gt;.NETCore50&lt;/strong&gt; folder. Strangely, if we built the package locally we only saw the expected &lt;strong&gt;netcore50&lt;/strong&gt; folder. The addition of this folder did not appear to be causing any problem, but I did want to find out why it had appeared and remove it as it was not needed.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/08/16/Experiences-versioning-related-sets-of-NuGet-packages-within-a-VSTS-build">recently posted on how we were versioning our Nuget packages as part of a release pipeline</a>. In test we noticed that the packages being produced by this process had an extra folder inside them.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_325.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_320.png" title="image"></a> </p>
<p>We expected there to be a <strong>netcore50</strong> folder, but not a <strong>.NETCore50</strong> folder. Strangely, if we built the package locally we only saw the expected <strong>netcore50</strong> folder. The addition of this folder did not appear to be causing any problem, but I did want to find out why it had appeared and remove it as it was not needed.</p>
<p>Turns out the issue was the version of <strong>Nuget.exe</strong>: the automatically installed version on the on-prem TFS build agent was 3.2, my local copy 3.4. As soon as I upgraded the build box’s <strong>nuget.exe</strong> version to 3.4 the problem went away.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences versioning related sets of NuGet packages within a VSTS build</title>
      <link>https://blog.richardfennell.net/posts/experiences-versioning-related-sets-of-nuget-packages-within-a-vsts-build/</link>
      <pubDate>Tue, 16 Aug 2016 11:39:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-versioning-related-sets-of-nuget-packages-within-a-vsts-build/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;We are currently packaging up a set of UX libraries as NuGet packages to go on our internal NuGet server. The assemblies that make up the core of this framework are all in a single Visual Studio solution, however it makes sense to distribute them as a set of NuGet packages as you might not need all the parts in a given project. Hence we have a package structure as follows…&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>We are currently packaging up a set of UX libraries as NuGet packages to go on our internal NuGet server. The assemblies that make up the core of this framework are all in a single Visual Studio solution, however it makes sense to distribute them as a set of NuGet packages as you might not need all the parts in a given project. Hence we have a package structure as follows…</p>
<ul>
<li>BM.UX.Common</li>
<li>BM.UX.Controls</li>
<li>BM.UX.Behaviours</li>
<li>etc…</li>
</ul>
<p>There has been much thought on the versioning strategy for these packages. We did consider versioning each of these fundamental packages independently, but decided keeping their versions in sync was worth the effort and reasonable, i.e. the packages have the same version number and are released as a set.</p>
<p>Now this might not be the case for future ‘extension’ packages, but it is an OK assumption for now, especially as it makes the development cycle quicker/easier. This framework is young and rapidly changing; there are often changes in a control that need associated changes in the common assembly. It is hence good that a developer does not have to check in a change to the common package before they can make an associated change to the control package whilst debugging a control prior to it being released.</p>
<p>However, this all meant it was important to make sure the package dependencies and versions are set correctly.</p>
<h3 id="builds">Builds</h3>
<p>We are using Git for this project (though this process is just as relevant for TFVC) with a development branch and a master branch. Each branch has its own CI triggered build</p>
<ul>
<li>
<p>Development branch build …</p>
<ul>
<li>Builds the solution</li>
<li>Runs Unit tests</li>
<li>Does SonarQube analysis</li>
<li><strong>DOES NOT</strong> store any built artifacts</li>
<li>[Is used to validate Pull requests]</li>
</ul>
</li>
<li>
<p>Master branch build …</p>
<ul>
<li>Versions the code</li>
<li>Builds the solution</li>
<li>Runs Unit tests</li>
<li>Creates the NuGet Packages</li>
<li>Stores the created packages (to be picked up by a Release pipeline for publishing to our internal NuGet server)</li>
</ul>
</li>
</ul>
<h3 id="versioning">Versioning</h3>
<p>So within the Master build we need to do some versioning, this needs to be done to different files to make sure the assemblies and the NuGet packages are ‘stamped’ with the build version.</p>
<p>We get this version from the build number <a href="https://www.visualstudio.com/en-us/docs/build/define/variables">variable</a>, <strong>$(Build.BuildNumber)</strong>; we use the format <strong>$(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r)</strong> e.g. 1.2.16123.3</p>
<p>Where</p>
<ul>
<li><strong>$(Major)</strong> and <strong>$(Minor)</strong> build variables we manage (actually our release pipeline updates the <strong>$(Minor)</strong> on every successful release to production using a <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-BuildUpdating-Tasks">VSTS task</a>)</li>
<li><strong>$(Year:yy)$(DayOfYear)</strong> gives a date in the form 16123</li>
<li>and <strong>$(rev:r)</strong> is a count of builds on a given day</li>
</ul>
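<p>To make the format concrete, the date portion is just the two-digit year concatenated with the day of the year. A small Python sketch of composing such a build number (illustration only – VSTS expands these tokens itself at queue time):</p>

```python
from datetime import date

def build_number(major, minor, build_date, rev):
    """Compose a version in the $(Major).$(Minor).$(Year:yy)$(DayOfYear).$(rev:r) shape."""
    yy = build_date.year % 100                     # $(Year:yy)
    day_of_year = build_date.timetuple().tm_yday   # $(DayOfYear)
    return "{0}.{1}.{2:02d}{3}.{4}".format(major, minor, yy, day_of_year, rev)
```

For example, the third build on 2 May 2016 (day 123 of a leap year) gives the 1.2.16123.3 number quoted above.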
<p>We have chosen to use this number format to version both the assemblies and NuGet packages; if you have different plans, such as <a href="https://docs.nuget.org/create/versioning">semantic versioning</a>, you will need to modify this process a bit.</p>
<h4 id="assemblies">Assemblies</h4>
<p>The assemblies themselves are easy to version, we just need to set the correct value in their <strong>assemblyinfo.cs</strong> or <strong>assemblyinfo.vb</strong> files. I used my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">Assembly versioning VSTS task</a> to do this</p>
<h4 id="nuget-packages">NuGet Packages</h4>
<p>The packages turn out to be a bit more complex. Using the <a href="https://www.visualstudio.com/docs/build/steps/package/nuget-packager">standard NuGet Packager task</a> there is a checkbox to say to use the build number as the version. This works just fine for versioning the actual package, adding the <strong>–Version</strong> flag to the package command to override the value in the project <strong>.nuspec</strong> file. However, it does not help with managing the versions of any dependent packages in the solution, and here is why. In our build …</p>
<ol>
<li><strong>AssemblyInfo</strong> files updated</li>
<li>The solution is built, so we have version stamped DLLs</li>
<li>We package the first ‘common’ NuGet package (which has no dependencies on other projects in the solution) and it is versioned using the <strong>–version</strong> setting, not the value in its <strong>nuspec</strong> file.</li>
<li>We package the ‘next’ NuGet package; the package picks up the version from the <strong>–version</strong> flag (as needed), but it also needs to add a dependency on a specific version of the ‘common’ package. We pass the <strong>–IncludeReferencedProjects</strong> argument to make sure this occurs. However, Nuget.exe gets this version number from the ‘common’ package’s .<strong>nuspec</strong> file, <strong>NOT</strong> from the package actually built in the previous step. So we end up with a mismatch.</li>
</ol>
<p>The bottom line is we need to manage the version number in the .<strong>nuspec</strong> file of each package. So more custom VSTS extensions are needed.</p>
<p>Initially I reused my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-FileCopier-Tasks">Update XML file task</a>, passing in some XPath to select the node to update, and this is a very valid approach if using <a href="https://docs.nuget.org/create/versioning">semantic versioning</a>, as it is a very flexible way to build the version number. However, in the end I added an extra task to my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-Versioning-Task">versioning VSTS extension for NuGet</a> to make my build neater and consistent with my other versioning steps.</p>
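<p>For anyone wanting to script the same .nuspec update themselves, the edit is a one-element XML change. A hedged Python sketch (deliberately namespace-agnostic, since nuspec files carry varying schema namespaces; this is an illustration, not the task’s actual implementation):</p>

```python
import xml.etree.ElementTree as ET

def set_nuspec_version(nuspec_xml, version):
    """Return the nuspec XML with the package <version> element set to 'version'.

    Matches on local name so it works whatever nuspec schema namespace is used.
    Dependency versions are attributes, not elements, so they are left untouched.
    """
    root = ET.fromstring(nuspec_xml)
    for elem in root.iter():
        if elem.tag.rsplit("}", 1)[-1] == "version":
            elem.text = version
            break  # only the package's own <version>, which appears first
    return ET.tostring(root, encoding="unicode")
```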
<p>Once all the versioning was done I could create the packages. I ended up with a build process as shown below</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_324.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_319.png" title="image"></a></p>
<p>A few notes about the NuGet packaging</p>
<ul>
<li>Each project I wish to create a Nuget package for has a <strong>nuspec</strong> file of the same ‘root’ name in the same folder as the <strong>csproj</strong> eg. <strong>mypackage.csproj</strong> and <strong>mypackage.nuspec.</strong> This file contains all descriptions, copyright details etc.</li>
<li>I am building each package explicitly. I could use wildcards in the ‘Path/Pattern to nuspec files’ property, but I chose not to at this time. This is down to the fact I don’t want to build all the solution’s packages at this point in time.</li>
<li><strong>IMPORTANT</strong> I am passing in the <strong>.csproj</strong> file names, not the <strong>.nuspec</strong> file names, to the ‘Path/Pattern to nuspec files’ property. I found I had to do this else the <strong>–IncludeReferencedProjects</strong> argument was ignored. The NuGet documentation seems to suggest that as long as the .<strong>csproj</strong> and .<strong>nuspec</strong> files have the same ‘root’ name you can reference the .<strong>nuspec</strong> file, but this was not my experience.</li>
<li>I still set the flag to use the build version to version the package – this is not actually needed as the .<strong>nuspec</strong> file has already been updated.</li>
<li>I pass in the <strong>–IncludeReferencedProjects</strong> argument via the advanced parameters, to pick up the project dependencies.</li>
</ul>
<h3 id="summary">Summary</h3>
<p>So now I have a reliable way to make sure my NuGet packages have consistent version numbers.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Tidy up those VSTS release pipelines with meta-tasks</title>
      <link>https://blog.richardfennell.net/posts/tidy-up-those-vsts-release-pipelines-with-meta-tasks/</link>
      <pubDate>Fri, 12 Aug 2016 13:53:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tidy-up-those-vsts-release-pipelines-with-meta-tasks/</guid>
      <description>&lt;p&gt;Do you have repeating blocks in your VSTS release pipelines?&lt;/p&gt;
&lt;p&gt;I certainly do. A common one is to run a set of functional tests, so I need to repeatedly …&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Deploy some test files to a VM&lt;/li&gt;
&lt;li&gt;Deploy a test agent to the VM – &lt;strong&gt;IMPORTANT&lt;/strong&gt; I had not realised you can only run one test run against this deployed agent. You need to redeploy it for the next run&lt;/li&gt;
&lt;li&gt;Run my tests&lt;/li&gt;
&lt;li&gt;… and repeat for next test type/configuration/test plan/DLL etc.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In the past this led to a lot of repeated tasks in my release pipeline, all very messy.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Do you have repeating blocks in your VSTS release pipelines?</p>
<p>I certainly do. A common one is to run a set of functional tests, so I need to repeatedly …</p>
<ol>
<li>Deploy some test files to a VM</li>
<li>Deploy a test agent to the VM – <strong>IMPORTANT</strong> I had not realised you can only run one test run against this deployed agent. You need to redeploy it for the next run</li>
<li>Run my tests</li>
<li>… and repeat for next test type/configuration/test plan/DLL etc.</li>
</ol>
<p>In the past this led to a lot of repeated tasks in my release pipeline, all very messy.</p>
<p>Now in VSTS we have the option of <a href="https://www.visualstudio.com/en-us/docs/release/author-release-definition/understanding-tasks#metatasks">meta-tasks</a>; these allow tasks to be grouped into, in effect, functions with their own properties.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_322.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_317.png" title="image"></a></p>
<p>In the screenshot above you can see I use a meta-task ‘Run Tests’ that wraps the four tasks shown below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_323.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_318.png" title="image"></a></p>
<p>Much neater, but as you might expect with something new, I have come across a few minor gotchas:</p>
<ul>
<li>You cannot order the list of properties for the meta-task</li>
<li>This is a problem as the first property is used to generate the instance name in the pipeline. Not a major problem, as you can always edit the generated name</li>
<li>Meta-task properties are auto-detected from any variables used within the meta-task’s tasks. The auto-detection mechanism is case sensitive, unlike the rest of VSTS variable handling, so be careful not to end up with duplicates</li>
</ul>
<p>That all said, I think this is a big step forward in readability and reuse for release management.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New version of my generate release notes task–now with authentication options</title>
      <link>https://blog.richardfennell.net/posts/new-version-of-my-generate-release-notes-task-now-with-authentication-options/</link>
      <pubDate>Thu, 11 Aug 2016 13:18:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-version-of-my-generate-release-notes-task-now-with-authentication-options/</guid>
<description>&lt;p&gt;I have just released &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task&#34;&gt;1.4.7 of the release notes VSTS extension&lt;/a&gt;. This provides a new advanced option that allows you to switch the authentication model.&lt;/p&gt;
&lt;p&gt;The default remains the same i.e. use a personal access token provided by the server, but you have the option to enable use of the &amp;lsquo;defaultcredentials&amp;rsquo; (via the advanced properties). If this is done the account the build agent is running as is used. Hopefully this should fix the 401 issues some people have been seeing when using the task with on-prem TFS.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have just released <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">1.4.7 of the release notes VSTS extension</a>. This provides a new advanced option that allows you to switch the authentication model.</p>
<p>The default remains the same i.e. use a personal access token provided by the server, but you have the option to enable use of the &lsquo;defaultcredentials&rsquo; (via the advanced properties). If this is done the account the build agent is running as is used. Hopefully this should fix the 401 issues some people have been seeing when using the task with on-prem TFS.</p>
<p>For most people the default PAT model should be fine.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows 10 Anniversary (Build 1607) messed up my virtual NAT Switch – a fix</title>
      <link>https://blog.richardfennell.net/posts/windows-10-anniversary-build-1607-messed-up-my-virtual-nat-switch-a-fix/</link>
      <pubDate>Wed, 10 Aug 2016 11:43:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-10-anniversary-build-1607-messed-up-my-virtual-nat-switch-a-fix/</guid>
<description>&lt;p&gt;I use a virtual NAT Switch to allow my VMs to talk to the outside world. The way I do this is &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/12/01/An-out-the-box-way-to-let-local-Hyper-V-VMs-see-the-Internet-without-using-a-DD-WRT-router&#34;&gt;documented in this post&lt;/a&gt;, based on the &lt;a href=&#34;http://www.thomasmaurer.ch/2016/05/set-up-a-hyper-v-virtual-switch-using-a-nat-network/&#34;&gt;work of Thomas Maurer&lt;/a&gt;. The upgrade to Windows 10 Anniversary messed this up: it seemed to lose the virtual network completely, and VMs reported invalid configurations and would not start.&lt;/p&gt;
&lt;p&gt;I had to recreate my NATSwitch &lt;a href=&#34;http://www.thomasmaurer.ch/2016/05/set-up-a-hyper-v-virtual-switch-using-a-nat-network/&#34;&gt;using Thomas’s revised instructions&lt;/a&gt;, but I did have a problem. The final ‘New-NetNat’ command failed with a ‘The parameter is incorrect.’ error. I think the issue was that there was debris left from the old setup (it seems Microsoft removed the NatSwitch interface type). I could find no way to remove the old NATSwitch as it did not appear in the list in PowerShell and there is no UI remove option. So I just ended up disabling it via the UI and this seemed to do the trick.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I use a virtual NAT Switch to allow my VMs to talk to the outside world. The way I do this is <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/12/01/An-out-the-box-way-to-let-local-Hyper-V-VMs-see-the-Internet-without-using-a-DD-WRT-router">documented in this post</a>, based on the <a href="http://www.thomasmaurer.ch/2016/05/set-up-a-hyper-v-virtual-switch-using-a-nat-network/">work of Thomas Maurer</a>. The upgrade to Windows 10 Anniversary messed this up: it seemed to lose the virtual network completely, and VMs reported invalid configurations and would not start.</p>
<p>I had to recreate my NATSwitch <a href="http://www.thomasmaurer.ch/2016/05/set-up-a-hyper-v-virtual-switch-using-a-nat-network/">using Thomas’s revised instructions</a>, but I did have a problem. The final ‘New-NetNat’ command failed with a ‘The parameter is incorrect.’ error. I think the issue was that there was debris left from the old setup (it seems Microsoft removed the NatSwitch interface type). I could find no way to remove the old NATSwitch as it did not appear in the list in PowerShell and there is no UI remove option. So I just ended up disabling it via the UI and this seemed to do the trick.</p>
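<p>For reference, the recreate steps follow roughly this shape (a sketch based on Thomas’s instructions, not my exact commands; the switch name and the 192.168.200.0/24 subnet are illustrative choices):</p>

```powershell
# Create a new internal switch for the VMs to attach to
New-VMSwitch -Name "NATSwitch" -SwitchType Internal

# Give the host an address on that switch to act as the VMs' gateway
New-NetIPAddress -IPAddress 192.168.200.1 -PrefixLength 24 `
    -InterfaceAlias "vEthernet (NATSwitch)"

# Create the NAT mapping for the subnet - this is the 'New-NetNat' step
# that failed for me until the old switch was disabled
New-NetNat -Name "VMNATNetwork" -InternalIPInterfaceAddressPrefix 192.168.200.0/24
```

<p>All three commands need an elevated PowerShell session on a Hyper-V host.</p>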
<p><a href="/wp-content/uploads/sites/2/historic/image_321.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_316.png" title="image"></a></p>
<p>My VMs seem happy again talking to the outside world.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Out with the Band in with the Garmin</title>
      <link>https://blog.richardfennell.net/posts/out-with-the-band-in-with-the-garmin/</link>
      <pubDate>Fri, 05 Aug 2016 10:02:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/out-with-the-band-in-with-the-garmin/</guid>
<description>&lt;p&gt;I have been using the Microsoft Band (both versions, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/04/05/After-a-few-days-living-with-a-Microsoft-Band&#34;&gt;Band1&lt;/a&gt; and &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/11/30/first-experience-of-a-band-2&#34;&gt;Band2&lt;/a&gt;) since they came out, and have been reasonably happy. However, a year or so on, my issues with it have remained the same&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Poor battery life, I can live with charging it each day, but even with GPS Power-saver mode on I can’t go for any exercise over about 4 hours (bit of an issue for longer bike rides)&lt;/li&gt;
&lt;li&gt;It is not waterproof, so no swimming (and I worry when doing the washing up)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also there seem to be some build issues with the robustness of the Band2. I had to get mine replaced due to it not accepting recharging, and the forums seem to report people suffering problems with the wrist strap splitting. That said, the warranty service seems excellent, no complaints there; mine was swapped without any issue in a couple of days.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have been using the Microsoft Band (both versions, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/04/05/After-a-few-days-living-with-a-Microsoft-Band">Band1</a> and <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/11/30/first-experience-of-a-band-2">Band2</a>) since they came out, and have been reasonably happy. However, a year or so on, my issues with it have remained the same</p>
<ul>
<li>Poor battery life, I can live with charging it each day, but even with GPS Power-saver mode on I can’t go for any exercise over about 4 hours (bit of an issue for longer bike rides)</li>
<li>It is not waterproof, so no swimming (and I worry when doing the washing up)</li>
</ul>
<p>Also there seem to be some build issues with the robustness of the Band2. I had to get mine replaced due to it not accepting recharging, and the forums seem to report people suffering problems with the wrist strap splitting. That said, the warranty service seems excellent, no complaints there; mine was swapped without any issue in a couple of days.</p>
<p>In the end, however, I decided it was time to check out alternatives and picked the <a href="https://www.amazon.co.uk/Garmin-vivoactive-Smart-Watch-Wrist/dp/B01BKUB6BA/ref=as_li_ss_tl?ie=UTF8&amp;qid=1470386677&amp;sr=8-3&amp;keywords=garmin&#43;vivoactive&#43;hr&amp;linkCode=ll1&amp;tag=buitwoonmypc-21&amp;linkId=7b06688c28db2183db054dfe5a4fe3dd">Garmin Vivoactive HR</a>; basically the Garmin equivalent to the Band in feature set and price (it is a little more expensive in the UK).</p>
<p><a href="https://www.amazon.co.uk/Garmin-vivoactive-Smart-Watch-Wrist/dp/B01BKUB6BA/ref=as_li_ss_tl?ie=UTF8&amp;qid=1470386677&amp;sr=8-3&amp;keywords=garmin&#43;vivoactive&#43;hr&amp;linkCode=ll1&amp;tag=buitwoonmypc-21&amp;linkId=7b06688c28db2183db054dfe5a4fe3dd"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_320.png" title="image"></a></p>
<p>I have to say, a couple of weeks in, I am very pleased. It fixes those two major issues for me. Most importantly, I seem to need to charge it only about every 5 days or so, and that is with an hour or two of full activity tracking each day. The specs claim 10+ hours of full activity tracking on a charge. Also it is waterproof and allows activity tracking for pool-based swimming (swim mode is lap based and has GPS disabled, so it is of less use in open water).</p>
<p>That all said, there are still issues:</p>
<ul>
<li>The Bluetooth link to my Windows Phone 10 is a little temperamental for things like notifications and sync – a restart usually fixes everything (but hey, it fully supports Windows Phone 10, not just Android and iPhone!)</li>
<li>Shame they disable the heart rate monitor for swimming (the signal is not reliable enough, unless you pair with a chest strap it seems)</li>
<li>Lack of open water swimming tracking (see above – but if you want full multisport tracking <a href="https://www.amazon.co.uk/GARMIN-Forerunner-Sports-Bundle-Silver/dp/B0133OL78S/ref=as_li_ss_tl?ie=UTF8&amp;qid=1470391239&amp;sr=8-10&amp;keywords=garmin&#43;920xt&amp;linkCode=ll1&amp;tag=buitwoonmypc-21&amp;linkId=9156b61bb344ac6a5fb503e8e17a763a">look at the Garmin 920XT</a>, their top-of-the-range watch, which does it all)</li>
</ul>
<p>But I think these are all minor issues for me, and the <a href="https://apps.garmin.com/en-GB/">third-party app store</a> for the device helps; for example there is an app <a href="https://apps.garmin.com/en-GB/apps/f520733f-5dc4-4d9f-8c7b-3941a2ff074b">adding triathlon support</a> which attempts HR monitoring for swimming, without needing to upgrade to the <a href="https://www.amazon.co.uk/GARMIN-Forerunner-Sports-Bundle-Silver/dp/B0133OL78S/ref=as_li_ss_tl?ie=UTF8&amp;qid=1470391239&amp;sr=8-10&amp;keywords=garmin&#43;920xt&amp;linkCode=ll1&amp;tag=buitwoonmypc-21&amp;linkId=9156b61bb344ac6a5fb503e8e17a763a">920XT</a>.</p>
<p>So, a good alternative to the Band2?</p>
<p>For me yes, it addresses my key issues. The Band2 is a good fitness tracker with unique styling, but if swimming or longer activities are your thing I think the <a href="https://www.amazon.co.uk/Garmin-vivoactive-Smart-Watch-Wrist/dp/B01BKUB6BA/ref=as_li_ss_tl?ie=UTF8&amp;qid=1470386677&amp;sr=8-3&amp;keywords=garmin&#43;vivoactive&#43;hr&amp;linkCode=ll1&amp;tag=buitwoonmypc-21&amp;linkId=7b06688c28db2183db054dfe5a4fe3dd">Garmin Vivoactive HR</a> has it covered.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I’m on RadioTFS</title>
      <link>https://blog.richardfennell.net/posts/im-on-radiotfs/</link>
      <pubDate>Fri, 15 Jul 2016 13:54:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/im-on-radiotfs/</guid>
      <description>&lt;p&gt;The RadioTFS show that I was the guest on has just been published at &lt;a href=&#34;http://radiotfs.com/Show/117&#34; title=&#34;http://radiotfs.com/Show/117&#34;&gt;http://radiotfs.com/Show/117&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you don’t listen to RadioTFS, why not? It is a regular podcast (over 100 episodes, as you can see) on all things TFS and VSTS. A great way to keep up with what is new in this technology space.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/page/Videos-of-my-Presentations&#34;&gt;Links to this and all my other recorded sessions can be found here&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The RadioTFS show that I was the guest on has just been published at <a href="http://radiotfs.com/Show/117" title="http://radiotfs.com/Show/117">http://radiotfs.com/Show/117</a>.</p>
<p>If you don’t listen to RadioTFS, why not? It is a regular podcast (over 100 episodes, as you can see) on all things TFS and VSTS. A great way to keep up with what is new in this technology space.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/page/Videos-of-my-Presentations">Links to this and all my other recorded sessions can be found here</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>New Build Management VSTS tasks</title>
      <link>https://blog.richardfennell.net/posts/new-build-management-vsts-tasks/</link>
      <pubDate>Wed, 13 Jul 2016 11:19:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-build-management-vsts-tasks/</guid>
<description>&lt;p&gt;Just published a &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-BuildUpdating-Tasks&#34;&gt;new VSTS extension&lt;/a&gt; with a couple of tasks in it. The aim is to help formalise the end of a release process. The tasks:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Allow you to set the retention ‘keep forever’ flag on a build (or all builds linked to a release)&lt;/li&gt;
&lt;li&gt;Update/increment a build variable, e.g. all or part of a version number, in a build (or all builds linked to a release)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first just replicates functionality I used to have &lt;a href=&#34;https://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/28/An-alternative-to-setting-a-build-quality-on-a-TFS-vNext-build&#34;&gt;in house for builds&lt;/a&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Just published a <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-BuildUpdating-Tasks">new VSTS extension</a> with a couple of tasks in it. The aim is to help formalise the end of a release process. The tasks:</p>
<ul>
<li>Allow you to set the retention ‘keep forever’ flag on a build (or all builds linked to a release)</li>
<li>Update/increment a build variable, e.g. all or part of a version number, in a build (or all builds linked to a release)</li>
</ul>
<p>The first just replicates functionality I used to have <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/28/An-alternative-to-setting-a-build-quality-on-a-TFS-vNext-build">in house for builds</a>.</p>
<p>The second one is important to me as once I have released to production a version of a product I never want to generate another build with the same base version number. For example we version stamp all our DLLs/builds with a version number in form</p>
<blockquote>
<p>$(Major).$(Minor).$(year:yy)$(dayofyear).$(rev:r)  e.g. 1.2.16170.2</p></blockquote>
<p>Where the $(Major) and $(Minor) are build variables we set manually (we decide when we increment a major or minor release) and the second two blocks guarantee a unique build number every time. It is too easy to forget to manually increment the Major or Minor build variable during a release. This task means I don’t need to remember to set the value of one or both of these. I can either set an explicit value or just get it to auto-increment. I usually auto increment the Minor value as a default, doing a manual reset of both the Major and Minor if it is a major release.</p>
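<p>To illustrate how that format expands (purely a sketch; the build system does the expansion itself, and the date and revision here are hard-coded for the example):</p>

```powershell
# Hypothetical expansion of $(Major).$(Minor).$(year:yy)$(dayofyear).$(rev:r)
$major = 1
$minor = 2
$date  = [datetime]'2016-06-18'   # the build date
$yy    = $date.ToString('yy')     # two-digit year: '16'
$doy   = $date.DayOfYear          # day of year: 170
$rev   = 2                        # per-day revision counter kept by the build system

$buildNumber = "$major.$minor.$yy$doy.$rev"
$buildNumber                      # 1.2.16170.2
```

<p>The year/day-of-year pair plus the daily revision counter is what guarantees uniqueness even when Major and Minor are unchanged.</p>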
<p><strong>NOTE</strong>: You do have to add some permissions to the build service account else this second task fails with a 403 permission error – <a href="https://github.com/rfennell/vNextBuild/wiki/BuildTasks-Task">so read the WIKI</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Life gets better in Visual Studio Code for PowerShell</title>
      <link>https://blog.richardfennell.net/posts/life-gets-better-in-visual-studio-code-for-powershell/</link>
      <pubDate>Fri, 08 Jul 2016 09:55:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/life-gets-better-in-visual-studio-code-for-powershell/</guid>
<description>&lt;p&gt;I have been using &lt;a href=&#34;https://code.visualstudio.com&#34;&gt;Visual Studio Code&lt;/a&gt; for &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/21/Using-Visual-Studio-Code-to-develop-VSTS-Build-Tasks-with-PowerShell-and-Pester-tests&#34;&gt;PowerShell development&lt;/a&gt;, but got a bit behind on &lt;a href=&#34;https://code.visualstudio.com/Updates&#34;&gt;reading release notes&lt;/a&gt;. Today I just realised I can make the Integrated Terminal in Code a PowerShell instance.&lt;/p&gt;
&lt;p&gt;In File &amp;gt; Preferences &amp;gt; user Settings (settings.json) enter the following&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;// Place your settings in this file to overwrite the default settings  
{  
     // The path of the shell that the terminal uses on Windows.  
    &amp;#34;terminal.integrated.shell.windows&amp;#34;: &amp;#34;C:\\windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe&amp;#34;  
} 
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Now my terminal is a PowerShell instance, and you can see it has loaded my profile so &lt;a href=&#34;https://chocolatey.org/packages/poshgit&#34;&gt;POSH Git&lt;/a&gt; is working as well.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have been using <a href="https://code.visualstudio.com">Visual Studio Code</a> for <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/21/Using-Visual-Studio-Code-to-develop-VSTS-Build-Tasks-with-PowerShell-and-Pester-tests">PowerShell development</a>, but got a bit behind on <a href="https://code.visualstudio.com/Updates">reading release notes</a>. Today I just realised I can make the Integrated Terminal in Code a PowerShell instance.</p>
<p>In File &gt; Preferences &gt; User Settings (settings.json) enter the following</p>
<pre tabindex="0"><code>// Place your settings in this file to overwrite the default settings  
{  
     // The path of the shell that the terminal uses on Windows.  
    &#34;terminal.integrated.shell.windows&#34;: &#34;C:\\windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe&#34;  
} 
</code></pre><p>Now my terminal is a PowerShell instance, and you can see it has loaded my profile so <a href="https://chocolatey.org/packages/poshgit">POSH Git</a> is working as well.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_319.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_315.png" title="image"></a></p>
<p>So I think we have reached the ‘goodbye PowerShell ISE’ point.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Gotcha’s when developing VSTS Build Extension</title>
      <link>https://blog.richardfennell.net/posts/gotchas-when-developing-vsts-build-extension/</link>
      <pubDate>Tue, 05 Jul 2016 17:53:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/gotchas-when-developing-vsts-build-extension/</guid>
<description>&lt;p&gt;I recently posted on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/21/Using-Visual-Studio-Code-to-develop-VSTS-Build-Tasks-with-PowerShell-and-Pester-tests&#34;&gt;my development process for VSTS Extensions&lt;/a&gt;; it is specifically PowerShell-based build tasks I have been working on. During this development I have come across a few more gotchas that I think are worth mentioning&lt;/p&gt;
&lt;h3 id=&#34;3264-bit&#34;&gt;32/64 bit&lt;/h3&gt;
&lt;p&gt;The VSTS build agent launches PowerShell 64bit (as does the PowerShell command line on a dev PC), but VSCode launches it 32bit. Whilst working on &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-StyleCop-Task&#34;&gt;my StyleCop extension&lt;/a&gt; this caused me a problem, as it seems StyleCop can only load the dictionaries for its spell-checking based rules when in a 32bit shell. So my Pester tests for the extension worked in VSCode but failed at the command line and within a VSTS build&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I recently posted on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/21/Using-Visual-Studio-Code-to-develop-VSTS-Build-Tasks-with-PowerShell-and-Pester-tests">my development process for VSTS Extensions</a>; it is specifically PowerShell-based build tasks I have been working on. During this development I have come across a few more gotchas that I think are worth mentioning</p>
<h3 id="3264-bit">32/64 bit</h3>
<p>The VSTS build agent launches PowerShell 64bit (as does the PowerShell command line on a dev PC), but VSCode launches it 32bit. Whilst working on <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-StyleCop-Task">my StyleCop extension</a> this caused me a problem, as it seems StyleCop can only load the dictionaries for its spell-checking based rules when in a 32bit shell. So my Pester tests for the extension worked in VSCode but failed at the command line and within a VSTS build</p>
<p>After many hours my eventual solution was to put some guard code in my scripts to force a reload in 32bit mode</p>
<pre tabindex="0"><code>param  
(  
    \[string\]$treatStyleCopViolationsErrorsAsWarnings,  
    \[string\]$maximumViolationCount,  
    … other params  
) 

if ($env:Processor\_Architecture -ne &#34;x86&#34;)     
{   
    # Get the command parameters  
    $args = $myinvocation.BoundParameters.GetEnumerator() | ForEach-Object {$($\_.Value)}  
    write-warning &#39;Launching x86 PowerShell&#39;  
    &amp;&#34;$env:windirsyswow64windowspowershellv1.0powershell.exe&#34; -noprofile -executionpolicy bypass -file $myinvocation.Mycommand.path $args  
    exit  
}  
write-verbose &#34;Running in $($env:Processor\_Architecture) PowerShell&#34;  

... rest of my code
</code></pre><p>The downside of this trick is that you can’t pass return values back as you swapped execution process. For the type of things I am doing with VSTS tasks this not an issue as the important data has usually be dropped to a file which is accessible by everything, such as test results.</p>
<p>For a worked sample of production code and Pester tests see my <a href="https://github.com/rfennell/vNextBuild/tree/master/Extensions/StyleCop/StyleCopTask">GitHub repo</a>.</p>
<h3 id="using-modules">Using Modules</h3>
<p>In the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/06/21/Using-Visual-Studio-Code-to-develop-VSTS-Build-Tasks-with-PowerShell-and-Pester-tests">last post</a> I mentioned the problem that when trying to run Pester tests against scripts, the script content is executed. I stupidly did not mention the obvious solution of moving all the code into functions in a PowerShell module. This makes it easier to write tests for all bar the outer wrapper .PS1 script that is called by the VSTS agent.</p>
<p>Again, see my <a href="https://github.com/rfennell/vNextBuild/tree/master/Extensions/StyleCop/StyleCopTask">GitHub repo</a> for a good sample. Note how I have split out the files so that I have:</p>
<ul>
<li>A module that contains the functions I can test via Pester</li>
<li>A .PS1 script called by VSTS (this will run 64bit) where I deal with interaction with VSTS/TFS</li>
<li>An inner .PS1 script that we force into 32bit mode as needed (see above)</li>
</ul>
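<p>That split can be sketched like this (the function and file names here are hypothetical, not the extension’s actual code):</p>

```powershell
# StyleCopHelpers.psm1 - in the real split this function lives in the module,
# where Pester can exercise it without touching any VSTS agent interaction
function Get-ViolationOutcome {
    param([int]$violationCount, [bool]$treatAsWarnings)
    if ($violationCount -eq 0) { return 'Clean' }
    if ($treatAsWarnings)      { return 'Warnings' }
    return 'Errors'
}

<# StyleCopHelpers.Tests.ps1 - run with Invoke-Pester (Pester v3-era syntax)

Import-Module "$PSScriptRoot\StyleCopHelpers.psm1" -Force
Describe 'Get-ViolationOutcome' {
    It 'reports errors by default when there are violations' {
        Get-ViolationOutcome -violationCount 3 -treatAsWarnings $false | Should Be 'Errors'
    }
}
#>
```

<p>The outer wrapper .PS1 then just parses the VSTS inputs and calls the module function, so it stays thin enough to not need much testing.</p>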
<h3 id="hacking-around-on-your-code">Hacking around on your code</h3>
<p>You always get to the point, I find, when developing things like VSTS build tasks where you want to make some quick change to try something without the full development/build/release cycle. This is in effect the local development stage; it is just that build task development makes this awkward. It is hard to fully test a task locally; it needs to be deployed within a build.</p>
<p>I have found that a way to help here is to use a local build agent; you can then get at the deployed task and edit the .PS1 code. The important bit to note is that the task will not be redeployed, so your local ‘hack’ can be tested within a real TFS build without having to increment the task’s version and redeploy.</p>
<p>Hacky but handy to know.</p>
<p>You of course do need to make sure your hacked code is eventually put through your formal release process.</p>
<h3 id="and-maybe-something-or-nothings">And maybe something or nothings…</h3>
<p>I may have seen these issues, but have not got to the bottom of them, so they may not be real issues</p>
<ul>
<li>The order in which parameters are declared in a task.json file seems to need to match the order they are declared in the .PS1 file. I had thought they were associated by name not order, but in one task they all got transposed until I fixed the order.</li>
<li>The F5 dev/debug cycle is still a little awkward with VSCode; sometimes it seems to leave stuff running and you get high CPU utilisation – just restart VSCode, the old fix!</li>
<li>If using the 32bit relaunch discussed above, write-verbose messages don’t always seem to show up in the VSTS log. I assume a –verbose parameter is being lost somewhere, or it is the spawning of another PowerShell instance that causes the problem.</li>
</ul>
<p>So again, I hope these tips help with your VSTS extension development.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running TSLint within SonarQube on a TFS build</title>
      <link>https://blog.richardfennell.net/posts/running-tslint-within-sonarqube-on-a-tfs-build/</link>
      <pubDate>Tue, 05 Jul 2016 12:47:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-tslint-within-sonarqube-on-a-tfs-build/</guid>
      <description>&lt;p&gt;I wanted to add some level of static analysis to our Typescript projects, &lt;a href=&#34;https://www.npmjs.com/package/tslint&#34;&gt;TSLint&lt;/a&gt; being the obvious choice. To make sure it got run as part of our build release process I wanted to wire it into our SonarQube system, this meant using the community &lt;a href=&#34;https://github.com/Pablissimo/SonarTsPlugin&#34;&gt;TSLintPlugin&lt;/a&gt;, which is still pre-release (0.6 preview at the time of writing).&lt;/p&gt;
&lt;p&gt;I followed the &lt;a href=&#34;https://github.com/Pablissimo/SonarTsPlugin&#34;&gt;installation&lt;/a&gt; process for plugin without any problems setting the TSLint path to match our build boxes&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I wanted to add some level of static analysis to our Typescript projects, <a href="https://www.npmjs.com/package/tslint">TSLint</a> being the obvious choice. To make sure it got run as part of our build release process I wanted to wire it into our SonarQube system, this meant using the community <a href="https://github.com/Pablissimo/SonarTsPlugin">TSLintPlugin</a>, which is still pre-release (0.6 preview at the time of writing).</p>
<p>I followed the <a href="https://github.com/Pablissimo/SonarTsPlugin">installation</a> process for plugin without any problems setting the TSLint path to match our build boxes</p>
<blockquote>
<p>C:\Users\Tfsbuild\AppData\Roaming\npm\node_modules\tslint\bin\tslint</p></blockquote>
<p>Within my TFS/VSTS build I added three extra tasks</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_317.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_313.png" title="image"></a></p>
<ul>
<li>An NPM install to make sure that TSLint was installed in the right folder, by running the command ‘<em>install -g tslint typescript</em>’</li>
<li>A <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2015/08/24/build-tasks-for-sonarqube-analysis/">pre-build SonarQube MSBuild task</a> to link to our SonarQube instance</li>
<li>A post-build SonarQube MSBuild task to complete the analysis</li>
</ul>
<p>Once this build was run with a simple Hello World TypeScript project, I could see SonarQube attempting to do TSLint analysis but failing with the error</p>
<blockquote>
<p>2016-07-05T11:36:02.6425918Z INFO: Sensor com.pablissimo.sonar.TsLintSensor</p>
<p>2016-07-05T11:36:07.1425492Z ##[error]ERROR: TsLint Err: Invalid option for configuration: tslint.json</p>
<p>2016-07-05T11:36:07.3612994Z INFO: Sensor com.pablissimo.sonar.TsLintSensor (done) | time=4765ms</p></blockquote>
<p>The problem was that the build task-generated <strong>sonar-project.properties</strong> file did not contain the path to the <strong>TSLint.json</strong> file. In the current version of the TSLint plugin this file needs to be <a href="https://github.com/Pablissimo/SonarTsPlugin/issues/17">managed manually</a>; it is not generated from the SonarQube ruleset. Hence it is a file in the source code folder on the build box, a path that the SonarQube server cannot know.</p>
<p>The Begin Analysis SonarQube for MSBuild task generates the <strong>sonar-project.properties</strong> file, but only adds the entries for MSBuild (as its name suggests). It does nothing related to the TsLint plugin or any other plugins.</p>
<p>The solution was to add the required setting via the advanced properties of the Begin Analysis task i.e. point to the <strong>tslint.json</strong> file under source control, using a build variable to set the base folder.</p>
<blockquote>
<p>/d:sonar.ts.tslintconfigpath=$(build.sourcesdirectory)\tslint.json</p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_318.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_314.png" title="image"></a></p>
<p>Once this setting was added I could see the TSLint rules being evaluated and the results showing up in the SonarQube analysis.</p>
<p>Another step to improving our overall code quality through consistent analysis of technical debt.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Scroll bars in MTM Lab Center had me foxed – User too stupid error</title>
      <link>https://blog.richardfennell.net/posts/scroll-bars-in-mtm-lab-center-had-me-foxed-user-too-stupid-error/</link>
      <pubDate>Fri, 24 Jun 2016 10:09:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/scroll-bars-in-mtm-lab-center-had-me-foxed-user-too-stupid-error/</guid>
<description>&lt;p&gt;I thought I had a problem with our TFS Lab Manager setup: 80% of our environments had disappeared. I wondered if it was rights – was it just showing environments I owned? No, it was not that.&lt;/p&gt;
&lt;p&gt;Turns out the issue was a UX/scrollbar one.&lt;/p&gt;
&lt;p&gt;I had MTM full screen in ‘Test Center’ mode, with a long list of test suites, so long that a scroll bar was needed, and I had scrolled to the bottom of the list&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I thought I had a problem with our TFS Lab Manager setup, 80% of our environments had disappeared. I wondered if it was rights, was it just showing environments I owned? No it was not that.</p>
<p>Turns our the issue was a UX/Scrollbar issue.</p>
<p>I had MTM full screen in ‘Test Center’ mode, with a long list of test suites, so long a  scroll bar was needed and I had scrolled to the bottom of the list</p>
<p>I then switched to ‘Lab Center’ mode. This list was shorter, not needing a scrollbar, but the pane listing the environments (that had been showing the test suites) was still scrolled to the bottom. The need for the scrollbar was unexpected and I just missed it visually (in my defence it is light grey on white). Exiting and reloading MTM had no effect; the scroll did not reset on a reload or a change of Test Plan/Team Project.</p>
<p>In fact I only realised the solution to the problem when it was pointed out by another member of our team after I asked if they were experiencing issues with Labs; the same had happened to them. Between us we wasted a fair bit of time on this issue!</p>
<p>Just goes to show how you can miss standard UX signals when you are not expecting them.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Visual Studio Code to develop VSTS Build Tasks with PowerShell and Pester tests</title>
      <link>https://blog.richardfennell.net/posts/using-visual-studio-code-to-develop-vsts-build-tasks-with-powershell-and-pester-tests/</link>
      <pubDate>Tue, 21 Jun 2016 21:15:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-visual-studio-code-to-develop-vsts-build-tasks-with-powershell-and-pester-tests/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;I am finding  myself writing a lot of PowerShell at present, mostly for VSTS build extensions. Here I hit a problem (or is it an opportunity for choice?) as to what development environment to use?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PowerShell ISE is the ‘best’ experience for debugging a script, but has no source control integration – and it is on all PCs&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://code.visualstudio.com&#34;&gt;Visual Studio Code&lt;/a&gt; has good Git support, but you need to jump through some hoops to get debugging working.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://visualstudiogallery.msdn.microsoft.com/c9eb3ba8-0c59-4944-9a62-6eee37294597&#34;&gt;Visual Studio PowerShell tools&lt;/a&gt;, are just too heavy weight, it is not even in the frame for me for this job.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So I have found myself getting the basic scripts working in the PowerShell ISE, then moving to VS Code to package up the task/extensions, as this also means writing .JSON files – an awkward split of tools&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>I am finding  myself writing a lot of PowerShell at present, mostly for VSTS build extensions. Here I hit a problem (or is it an opportunity for choice?) as to what development environment to use?</p>
<ul>
<li>PowerShell ISE is the ‘best’ experience for debugging a script, but has no source control integration – and it is on all PCs</li>
<li><a href="https://code.visualstudio.com">Visual Studio Code</a> has good Git support, but you need to jump through some hoops to get debugging working.</li>
<li><a href="https://visualstudiogallery.msdn.microsoft.com/c9eb3ba8-0c59-4944-9a62-6eee37294597">Visual Studio PowerShell tools</a>, are just too heavy weight, it is not even in the frame for me for this job.</li>
</ul>
<p>So I have found myself getting the basic scripts working in the PowerShell ISE, then moving to VS Code to package up the task/extensions, as this also means writing .JSON files – an awkward split of tools.</p>
<p>This gets worse when I want to add <a href="https://github.com/pester/Pester/wiki">Pester</a> based unit tests. I needed a better way of working, and I chose to focus on VS Code.</p>
<h3 id="the-powershell-extension-for-vs-code">The PowerShell Extension for VS Code</h3>
<p><a href="https://blogs.msdn.microsoft.com/powershell/2015/11/16/announcing-powershell-language-support-for-visual-studio-code-and-more/">Visual Studio Code now supports PowerShell</a>. Once you have installed VS Code you can install the extension as follows</p>
<ol>
<li>Open the command pallet (Ctrl+Shift+P)</li>
<li>Type “Extension”</li>
<li>Select “Install Extensions”. </li>
<li>Once the extensions list loads, type PowerShell and press Enter.</li>
</ol>
<p>Once this extension is installed you get Intellisense etc. as you would expect. So you have a good editor experience, but we still need an F5 debugging experience.</p>
<h3 id="setting-up-the-f5-debugging-experience">Setting up the F5 Debugging experience</h3>
<p>Visual Studio Code can launch any tool to provide a debugging experience. The PowerShell extension provides the tools to get this running for PowerShell.</p>
<p>I found <a href="https://rkeithhill.wordpress.com/2015/12/27/debugging-powershell-script-with-visual-studio-code/">Keith Hill provided a nice walkthrough with screenshots of the setup</a>, but here is my quick summary</p>
<ol>
<li>Open VS Code and load a folder structure; for me this will usually be a Git repo</li>
<li>Assuming the PowerShell extension is installed, go to the debug page in VS Code</li>
<li>Press the cog at the top of the page and a <strong>.vscode/launch.json</strong> file will be added to the root of the folder structure currently loaded i.e. the root of your Git repo</li>
<li>As Keith points out, the important line – <strong>program</strong>, the file/task to run when you press F5 – is empty, a strange default.</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_314.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_310.png" title="image"></a></p>
<p>We need to edit this file to tell it what to run when we press F5. I decided I have two options; which one I use depends on what I am putting in my Git repo</p>
<ul>
<li>If we want to run the PowerShell file we have in focus in VS Code (at the moment we press F5) then we need the line</li>
</ul>
<blockquote>
<p>&quot;program&quot;: &quot;${file}&quot;</p></blockquote>
<ul>
<li>However, I soon realised this was not that useful as I wanted to run Pester based tests. I was usually editing a script file but wanted to run a test script, so this meant changing the file in focus prior to pressing F5. In this case I decided it was easier to hard code the program setting to a script that ran all the Pester tests in my folder structure</li>
</ul>
<blockquote>
<p>&quot;program&quot;: &quot;${workspaceRoot}/Extensions/Tests/runtests.ps1&quot;</p>
<p>Where my script contained the single line to run the tests in the script&rsquo;s folder and below</p>
<p>Invoke-Pester $PSScriptRoot -Verbose</p></blockquote>
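For reference, a complete launch.json along these lines looks roughly like the following. This is a sketch only – the "name", "args" and "cwd" values are my own choices, not anything the extension mandates:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Run Pester tests",
            "type": "PowerShell",
            "request": "launch",
            "program": "${workspaceRoot}/Extensions/Tests/runtests.ps1",
            "args": [],
            "cwd": "${workspaceRoot}"
        }
    ]
}
```

Swapping the "program" value between ${file} and a hard coded script path is the only difference between the two options discussed above.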
<p><strong>Note:</strong> I have seen some comments that if you edit the <strong>launch.json</strong> file you need to reload VS Code for the new value to be read, but this has not been my experience</p>
<p>So now when I press F5 my Pester tests run and I can debug into them as I want, but that raises some new issues due to the requirements of VSTS build tasks.</p>
<h3 id="changes-to-my-build-task-to-enable-testing">Changes to my build task to enable testing</h3>
<p>A VSTS build task is basically a PowerShell script that has some parameters. The problem is I needed to load (dot-source) the .PS1 script so that the Pester tests can execute the functions in the script file. This is done using the form:</p>
<blockquote>
<p># Load the script under test<br>
. &quot;$PSScriptRoot&hellip;&hellip;versioning\versiondacpactask\Update-DacPacVersionNumber.ps1&quot;</p></blockquote>
<p><strong>Problem 1:</strong> If any of the parameters for the script are mandatory, this dot-sourcing fails with errors over missing values. The fix is to make sure that any mandatory parameters are passed, or to make them not mandatory – I chose the latter, as I can still mark any task parameter as &lsquo;required&rsquo; in the task.json file.</p>
<p><strong>Problem 2:</strong> When you include the script it is executed – not what I wanted at all. I had to put a guard &lsquo;if&rsquo; test at the top of the script to exit if the required parameters were not at least reasonable – I can&rsquo;t think of a neater solution.</p>
<blockquote>
<p># check if we are in test mode i.e.<br>
If ($VersionNumber -eq &quot;&quot; -and $path -eq &quot;&quot;) {Exit}<br>
# the rest of my code …..</p></blockquote>
<p>Once these changes were made I was able to run the Pester tests with an F5 as I wanted, using mocks to help test program flow logic:</p>
<blockquote>
<p># Load the script under test<br>
. &quot;$PSScriptRoot&hellip;&hellip;versioning\versiondacpactask\Update-DacPacVersionNumber.ps1&quot;</p>
<p>Describe &quot;Use SQL2012 ToolPath settings&quot; {<br>
    Mock Test-Path  {return $false} -ParameterFilter {<br>
            $Path -eq &quot;C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120\Microsoft.SqlServer.Dac.Extensions.dll&quot;<br>
        }<br>
    Mock Test-Path  {return $true} -ParameterFilter {<br>
            $Path -eq &quot;C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120\Microsoft.SqlServer.Dac.Extensions.dll&quot;<br>
        }<br>
<br>
    It &quot;Find DLLs&quot; {<br>
        $path = Get-Toolpath -ToolPath &quot;&quot;<br>
        $path | Should be &quot;C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120&quot;<br>
    }<br>
}</p></blockquote>
<h3 id="summary">Summary</h3>
<p>So I think I now have a workable solution: a good IDE with a reasonable F5 debug experience. OK, the PowerShell console in VS Code is not as rich as the one in the PowerShell ISE, but I think I can live with that given the quality of the rest of the debug tools.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updated Reprint - Migrating a TFS TFVC team project to a Git team project</title>
      <link>https://blog.richardfennell.net/posts/updated-reprint-migrating-a-tfs-tfvc-team-project-to-a-git-team-project/</link>
      <pubDate>Mon, 20 Jun 2016 08:05:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updated-reprint-migrating-a-tfs-tfvc-team-project-to-a-git-team-project/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project/&#34;&gt;This is a copy of the guest post done on the Microsoft UK web site&lt;/a&gt; published on the 7th June 2016&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This is a revised version of a post originally published in August 2014. In this revision I have updated version numbers and links for tools used and added a discussion of adapting the process to support VSTS.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;a href=&#34;https://github.com/rfennell/VSTSPowershell/tree/master/DOTNET&#34;&gt;The code for this post can be found in my GitHub Repo&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;In the past I&amp;rsquo;ve written on the theory behind &lt;a href=&#34;http://www.microsoft.com/en-gb/developers/articles/week02mar2014/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible&#34;&gt;migrating TFVC to Git with history&lt;/a&gt;. I&amp;rsquo;ve since used this process for real, as opposed to as a proof of concept, and this post documents my experiences. The requirement was to move an on-premises TFS 2013.2 Scrum Team Project using TFVC to another on premises TFS 2013.2 Scrum Team Project, but this time using Git.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="https://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project/">This is a copy of the guest post done on the Microsoft UK web site</a> published on the 7th June 2016</p>
<p><em>This is a revised version of a post originally published in August 2014. In this revision I have updated version numbers and links for tools used and added a discussion of adapting the process to support VSTS.</em></p>
<p><em><a href="https://github.com/rfennell/VSTSPowershell/tree/master/DOTNET">The code for this post can be found in my GitHub Repo</a></em></p>
<hr>
<p>In the past I&rsquo;ve written on the theory behind <a href="http://www.microsoft.com/en-gb/developers/articles/week02mar2014/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible">migrating TFVC to Git with history</a>. I&rsquo;ve since used this process for real, as opposed to as a proof of concept, and this post documents my experiences. The requirement was to move an on-premises TFS 2013.2 Scrum Team Project using TFVC to another on premises TFS 2013.2 Scrum Team Project, but this time using Git.</p>
<p>This process is equally applicable to any version of TFS that supports Git, and to <a href="https://www.visualstudio.com/">VSTS</a>.</p>
<h3 id="create-new-team-project">Create new team project</h3>
<p>On the target server create a new team project using the same (or as close as possible) process template as was used on the source TFS server. As we were using the same non-customised process template for both the source and the target we did not have to worry over any work item customisation. However, if you were changing the process template, this is where you would do any customisation required.</p>
<p>Remember that if you are targeting VSTS your customisation options are limited. You can <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2015/12/10/adding-a-custom-field-to-a-work-item/">add custom fields to VSTS</a> as of the time of writing (May 2016), but that is all.</p>
<h3 id="adding-a-field-to-all-work-item-types">Adding a field to all Work Item Types</h3>
<p>We need to be able to associate the old work item ID with the new migrated one. For on-premises TFS servers, the TFS Integration Platform has a feature to do this automatically, but it suffers from a <a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/07/05/tfs-integration-tools-why-is-reflectedworkitemid-not-working-bug-690647.aspx">bug</a>. It is meant to add a field for this purpose automatically, but the field actually needs to be added manually prior to the migration.</p>
<p>To do this edit we need to either:</p>
<ol>
<li>Edit the process templates in place using the <a href="http://visualstudiogallery.msdn.microsoft.com/f017b10c-02b4-4d6d-9845-58a06545627f">Process Template Editor Power Tool</a></li>
<li>Export the WIT with <a href="http://msdn.microsoft.com/en-us/library/dd236914.aspx">WITADMIN.exe</a> and edit them in Notepad and re-import them</li>
</ol>
<p>In either case the field to add to ALL WORK ITEM TYPES is as follows:</p>
<pre tabindex="0"><code>&lt;FIELD refname=&#34;TfsMigrationTool.ReflectedWorkItemId&#34; name=&#34;ReflectedWorkItemId&#34; type=&#34;String&#34;&gt;
</code></pre><p>Once the edit is made the revised work item types need to be re-imported back into the new Team project.</p>
<p>If you are using VSTS this way of adding the field is not an option, but we can <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2015/12/10/adding-a-custom-field-to-a-work-item/">add custom fields to a work item type in VSTS</a>. If we do this you will need to use the <a href="http://tfsintegrationmapper.codeplex.com/releases/view/110128">TFS Integration Mapper tool</a> (mentioned below) to make sure the required old work item ID ends up in your custom location. TFS Integration Platform will not do this by default, but <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2016/05/20/Migrating-work-items-to-VSTS-with-custom-fields-using-TFS-Integration-Platform">I have documented this process in an associated post</a>.</p>
<h3 id="the-work-item-migration">The Work Item Migration</h3>
<p>The actual work item migration is done using the <a href="http://visualstudiogallery.msdn.microsoft.com/eb77e739-c98c-4e36-9ead-fa115b27fefe">TFS Integration Platform</a>. This tool says it only supports TFS 2012, but it will function with newer versions of TFS as well as VSTS. This will move over all work item types from the source team project to the target team project. The process is as follows:</p>
<ol>
<li>Install TFS Integration Platform.</li>
<li>Load TFS Integration Platform, as it seems it must be loaded after the team project is created, else it gets confused!</li>
<li>Select &lsquo;Create New&rsquo;.</li>
<li>Pick the &lsquo;Team Foundation Server\WorkItemTracking&rsquo; template. As we are migrating with the same process template this is OK. If you need to change field mappings, use the template for field matching and look at the <a href="http://tfsintegrationmapper.codeplex.com/releases/view/110128">TFS Integration Mapper tool</a>.</li>
<li>Provide a sensible name for the migration. Not really needed for a one-off migration, but if testing, it&rsquo;s easy to end up with many test runs all of the same name, which is confusing in the logs.</li>
<li>Pick the source server and team project as the left server.</li>
<li>Pick the target server and team project as the right server.</li>
<li>Accept the defaults and save to database.</li>
<li>On the left menu select Start. The UI of this tool is not great. Avoid looking at the output tab as this seems to slow the process. Also, altering the refresh time in the options to once a minute seems to help performance. All details of actions are placed in log files so nothing is lost by these changes.</li>
<li>The migration should complete without any issues, assuming there are no outstanding template issues that need to be resolved.</li>
</ol>
<p><img alt="Article image" loading="lazy" src="https://www.microsoft.com/en-gb/developers/images/articles/content/471/01-01.jpg?v=1.0"></p>
<h3 id="add-the-new-id-to-the-changesets-on-the-source-server">Add the New ID to the Changesets on the source server</h3>
<p>The key to this migration process is to retain the links between work items and source code check-ins. This is done using the technique I outlined in the <a href="http://www.microsoft.com/en-gb/developers/articles/week02mar2014/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible">previous post</a>, i.e. editing the comments field of the changesets in the source team project prior to migrating the source, adding #123 style references that point to the new work items on the target server.</p>
<p>To do this I used some PowerShell. This PowerShell was written before the new <a href="https://www.visualstudio.com/en-us/integrate/api/overview">TFS REST API</a> was available, hence uses the older C# API. If I was writing it now I would have used the REST API.</p>
<pre tabindex="0"><code>function Update-TfsCommentWithMigratedId
{
&lt;#
.SYNOPSIS
This function is used as part of the migration from TFVC to Git to help retain checkin associations to work items

.DESCRIPTION
This function takes two team project references and looks up changeset associations in the source team project; it then looks for
the revised work item ID in the new team project and updates the source changeset

.PARAMETER SourceCollectionUri
Source TFS Collection URI

.PARAMETER TargetCollectionUri
Target TFS Collection URI

.PARAMETER SourceTeamProject
Source Team Project Name

.EXAMPLE

Update-TfsCommentWithMigratedId -SourceCollectionUri &#34;http://server1:8080/tfs/defaultcollection&#34; -TargetCollectionUri &#34;http://server2:8080/tfs/defaultcollection&#34; -SourceTeamProject &#34;Scrumproject&#34;

#&gt;
    Param
    (
    [Parameter(Mandatory=$true)]
    [uri] $SourceCollectionUri,

    [Parameter(Mandatory=$true)]
    [uri] $TargetCollectionUri,

    [Parameter(Mandatory=$true)]
    [string] $SourceTeamProject
    )

    # get the source team project collection
    $sourceTeamProjectCollection = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection($SourceCollectionUri)
    # get the TFVC repository
    $vcService = $sourceTeamProjectCollection.GetService([Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer])
    # get the target team project collection
    $targetTeamProjectCollection = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection($TargetCollectionUri)
    # get the work item store
    $wiService = $targetTeamProjectCollection.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])

    # find all the changesets for the selected team project on the source server
    foreach ($cs in $vcService.QueryHistory(&#34;$/$SourceTeamProject&#34;, [Microsoft.TeamFoundation.VersionControl.Client.RecursionType]::Full, [Int32]::MaxValue))
    {
        if ($cs.WorkItems.Count -gt 0)
        {
            foreach ($wi in $cs.WorkItems)
            {
                &#34;Changeset {0} linked to workitem {1}&#34; -f $cs.ChangesetId, $wi.Id
                # find the new ID for each work item on the target server
                foreach ($newwi in $wiService.Query(&#34;select id FROM WorkItems WHERE [TfsMigrationTool.ReflectedWorkItemId] = &#39;&#34; + $wi.Id + &#34;&#39;&#34;))
                {
                    # if an ID is found update the source server, unless the tag has already been added
                    # we have to escape the [ in the match as it gets treated as a regular expression
                    # we need the white space between the [ ] else the TFS job agent does not find the tags
                    if ($cs.Comment -match (&#34;\[ Migrated ID #{0} \]&#34; -f $newwi.Id))
                    {
                        Write-Output (&#34;New Id {0} already associated with changeset {1}&#34; -f $newwi.Id, $cs.ChangesetId)
                    } else {
                        Write-Output (&#34;New Id {0} being associated with changeset {1}&#34; -f $newwi.Id, $cs.ChangesetId)
                        $cs.Comment += &#34;[ Migrated ID #{0} ]&#34; -f $newwi.Id
                    }
                }
            }
            $cs.Update()
        }
    }
}
</code></pre><p>With the usage:</p>
<pre tabindex="0"><code>Update-TfsCommentWithMigratedId -SourceCollectionUri &#34;http://localhost:8080/tfs/defaultcollection&#34; -TargetCollectionUri &#34;http://localhost:8080/tfs/defaultcollection&#34; -SourceTeamProject &#34;Old team project&#34;  
</code></pre><p>NOTE: This script is written so that it can be run multiple times, but only adds the migration entries once for any given changeset. This means both it and TFS Integration Platform can be run repeatedly on the same migration to do a staged migration e.g. get the bulk of the content over first whilst the team is using the old team project, then do a smaller migration of the later changes when the actual swap over happens.</p>
<p>When this script is run expect to see output similar to:</p>
<p><img alt="Article image" loading="lazy" src="https://www.microsoft.com/en-gb/developers/images/articles/content/471/01-02.jpg?v=1.0"></p>
<p>You can see the impact of the script in Visual Studio Team Explorer or the TFS web client when looking at changesets in the old team project. Expect to see a changeset comment in the form shown below with new [ Migrated ID #123 ] blocks in the comment field, with 123 being the work item ID on the new team project. Also note the changeset is still associated with the old work item ID on the source server.</p>
<p><img alt="Article image" loading="lazy" src="https://www.microsoft.com/en-gb/developers/images/articles/content/471/01-03.jpg?v=1.0"></p>
<p><strong>NOTE</strong>: The space after the #123 is vital. If it is not there, then the TFS job agent cannot find the tag to associate the commit to a work item after the migration.</p>
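The guard-and-append rule the script follows – add the marker block only if it is not already in the comment, so the migration can be re-run safely – can be sketched outside PowerShell. A minimal illustration in Python (the function name is mine, not part of any tool):

```python
import re

def add_migration_tag(comment, new_id):
    """Append a '[ Migrated ID #n ]' block to a changeset comment,
    but only if it is not already present, so the routine stays
    safe to run repeatedly during a staged migration."""
    tag = "[ Migrated ID #{0} ]".format(new_id)
    # the [ and ] must be escaped when searching, as they are
    # regular-expression metacharacters
    if re.search(re.escape(tag), comment):
        return comment
    return comment + tag

comment = add_migration_tag("Fixed the login bug", 123)
assert comment == "Fixed the login bug[ Migrated ID #123 ]"
# running it a second time must not duplicate the tag
assert add_migration_tag(comment, 123) == comment
```

The same idempotency is what lets both this step and TFS Integration Platform be re-run when doing a staged migration.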
<h3 id="source-code-migration">Source code migration</h3>
<p>The source code can now be migrated. This is done by cloning the TFVC code to a local Git repo and then pushing it up to the new TFS Git repo using <a href="https://gittf.codeplex.com/">Git TF</a>. We clone the source to a local repo in the folder localrepo, with the --deep option used to retain history.</p>
<pre tabindex="0"><code>git tf clone http://typhoontfs:8080/tfs/defaultcollection &#39;$/Scrum TFVC Source/Main&#39; localrepo --deep
</code></pre><p>NOTE: I have seen problems with this command. On larger code bases we saw the error &lsquo;TF 400732 server cancelled error&rsquo;, claiming files were missing or we had no permission - neither of which was true. This problem was repeated on a number of machines, including one that had in the past managed to do the clone. It was thought the issue was with server connectivity, but no errors were logged.</p>
<p>As a workaround the <a href="http://git-tfs.com/">Git-TFS tool</a> was used. This community tool uses the .NET TFS API, unlike the Microsoft one, which uses the Java TFS API. Unfortunately, it also gave TF400732 errors, but did provide a suggested command line to continue, which picked up from where it had errored.</p>
<p>The command to do the clone was:</p>
<pre tabindex="0"><code>Git tfs clone http://typhoontfs:8080/tfs/defaultcollection &#39;$/Scrum TFVC Source/Main&#39; localrepo
</code></pre><p>The command to continue after an error was (from within the repo folder):</p>
<pre tabindex="0"><code>Git tfs fetch  
</code></pre><p>It should be noted that Git-TFS seems a good deal faster than Git TF, presumably due to being a native .NET client as opposed to using the Java VM. Also, Git-TFS has support for converting TFVC branches to Git branches, something Git TF is not able to do. So for some people, Git-TFS will be a better tool to use.</p>
<p>Once the clone is complete, we need to add the TFS Git repo as a remote target and then push the changes up to the new team project. The exact commands for this stage are shown on the target TFS server. Load the web client, go to the code section and you should see the commands needed:</p>
<pre tabindex="0"><code>git remote add origin http://typhoontfs:8080/tfs/DefaultCollection/_git/newproject
git push -u origin --all
</code></pre><p>Once this stage is complete the new TFS Git repo can be used. The Git commits should have the correct historic date and work item associations as shown below. Note now that the migration ID comments match the work item associations.</p>
<p><img alt="Article image" loading="lazy" src="https://www.microsoft.com/en-gb/developers/images/articles/content/471/01-04.jpg?v=1.0"></p>
<p>NOTE: The work item associations may not be shown immediately after the git push. This is because the associations are made by a background TFS job process, which may take a while to catch up when there are a lot of commits. On one system I worked on this took days, not hours! Be patient.</p>
<h3 id="shared-test-steps">Shared Test Steps</h3>
<p>At this point all work items have been moved over and their various associations with source commits are retained e.g. PBIs link to test cases and tasks. However, there is a problem: any test cases that have shared steps will be pointing to the <a href="http://blogs.msdn.com/b/broken_shared_steps_link_after_migration_from_tfs_integration_platform/archive/2012/11/05/broken-shared-steps-link-after-migration-from-tfs-integration-platform.aspx">old shared step work items</a>. As there is already an open source tool to do this update, there was no need to rewrite it in PowerShell. To use the open source tool, run the command line:</p>
<pre tabindex="0"><code>UpdateSharedStep.exe http://localhost:8080/tfs/defaultcollection myproject
</code></pre><h3 id="test-plans-and-suites">Test Plans and Suites</h3>
<p>Historically in TFS, test plans and suites were not work items; <a href="http://blogs.msdn.com/b/bharry/archive/2014/05/30/visual-studio-team-foundation-server-2013-update-3-ctp1-vs-2013-3-1-if-you-wish.aspx">they became work items in TFS 2013.3</a>. This means that if you need these moved over too, you have to use the TFS API.</p>
<p>Though these scripts were written for TFS 2013.2, there is no reason for these same API calls not to work with newer versions of TFS or VSTS. Just remember to exclude the Test Plan and Test Suite work items from the migration performed by TFS Integration Platform so you don&rsquo;t move them twice.</p>
<p>This script moves the three test suite types as follows:</p>
<ol>
<li>Static - Creates a new suite, finds the migrated IDs of the test cases on the source suite and adds them to the new suite.</li>
<li>Dynamic - Creates a new suite using the existing work item query. IMPORTANT - The query is NOT edited, so it may or may not work depending on what it actually contained. These suites will need to be checked manually by a tester in all cases and their queries &lsquo;tweaked&rsquo;.</li>
<li>Requirements - Creates a new suite based on the migrated IDs of the requirement work items. This is the only test suite type where we edit the name to make it consistent with the new requirement ID rather than the old.</li>
</ol>
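The per-suite handling above amounts to a dispatch on the three suite types plus an ID re-mapping. A language-neutral sketch (Python here purely for illustration; the data shapes and names are mine, not the TFS API):

```python
def migrate_suite(suite, id_map):
    """Dispatch on the three TFS test suite types described above.
    id_map maps old work item IDs to their migrated IDs."""
    kind = suite["type"]
    if kind == "static":
        # re-point each test case at its migrated ID
        return {"type": "static", "name": suite["name"],
                "testcases": [id_map[i] for i in suite["testcases"]]}
    if kind == "dynamic":
        # copy the query verbatim - it must be checked and tweaked manually
        return {"type": "dynamic", "name": suite["name"],
                "query": suite["query"]}
    if kind == "requirement":
        new_id = id_map[suite["requirement"]]
        # the only suite type where the name is rewritten to the new ID
        return {"type": "requirement",
                "name": "Requirement {0}".format(new_id),
                "requirement": new_id}
    raise ValueError("unknown suite type: " + kind)
```

The real work, of course, is done against the TFS API in the PowerShell below.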
<p>The script is as follows: </p>
<pre tabindex="0"><code>function Update-TestPlanAfterMigration
{
&lt;#
.SYNOPSIS
This function migrates a test plan and all its child test suites to a different team project

.DESCRIPTION
This function migrates a test plan and all its child test suites to a different team project, reassigning work item IDs as required

.PARAMETER SourceCollectionUri
Source TFS Collection URI

.PARAMETER SourceTeamProjectName
Source Team Project Name

.PARAMETER TargetCollectionUri
Target TFS Collection URI

.PARAMETER TargetTeamProjectName
Target Team Project Name

.EXAMPLE

Update-TestPlanAfterMigration -SourceCollectionUri &#34;http://server1:8080/tfs/defaultcollection&#34; -TargetCollectionUri &#34;http://server2:8080/tfs/defaultcollection&#34; -SourceTeamProjectName &#34;Old project&#34; -TargetTeamProjectName &#34;New project&#34;

#&gt;
    param(
    [Parameter(Mandatory=$true)]
    [uri] $SourceCollectionUri,

    [Parameter(Mandatory=$true)]
    [string] $SourceTeamProjectName,

    [Parameter(Mandatory=$true)]
    [uri] $TargetCollectionUri,

    [Parameter(Mandatory=$true)]
    [string] $TargetTeamProjectName
    )
    
    # Get TFS connections
    $sourcetfs = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($SourceCollectionUri)
    try
    {
        $Sourcetfs.EnsureAuthenticated()
    }
    catch
    {
        Write-Error &#34;Error occurred trying to connect to project collection: $_&#34;
        exit 1
    }
    $targettfs = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($TargetCollectionUri)
    try
    {
        $Targettfs.EnsureAuthenticated()
    }
    catch
    {
        Write-Error &#34;Error occurred trying to connect to project collection: $_&#34;
        exit 1
    }

    # get the actual services
    $sourcetestService = $sourcetfs.GetService(&#34;Microsoft.TeamFoundation.TestManagement.Client.ITestManagementService&#34;)
    $targettestService = $targettfs.GetService(&#34;Microsoft.TeamFoundation.TestManagement.Client.ITestManagementService&#34;)
    $sourceteamproject = $sourcetestService.GetTeamProject($sourceteamprojectname)
    $targetteamproject = $targettestService.GetTeamProject($targetteamprojectname)
    # Get the work item store
    $wiService = $targettfs.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])

    # find all the plans in the source
    foreach ($plan in $sourceteamproject.TestPlans.Query(&#34;Select * From TestPlan&#34;))
    {
        if ($plan.RootSuite -ne $null -and $plan.RootSuite.Entries.Count -gt 0)
        {
            # copy the plan to the new team project
            Write-Host(&#34;Migrating Test Plan - {0}&#34; -f $plan.Name)
            $newplan = $targetteamproject.TestPlans.Create();
            $newplan.Name = $plan.Name
            $newplan.AreaPath = $plan.AreaPath
            $newplan.Description = $plan.Description
            $newplan.EndDate = $plan.EndDate
            $newplan.StartDate = $plan.StartDate
            $newplan.State = $plan.State
            $newplan.Save();
            # we use a function as it can be recursive
            MoveTestSuite -sourceSuite $plan.RootSuite -targetSuite $newplan.RootSuite -targetProject $targetteamproject -targetPlan $newplan -wiService $wiService
            # and have to save the test plan again to persist the suites
            $newplan.Save();
        }
    }
}
    
# the - is missing in the name so this method is not exposed when the module is loaded
function MoveTestSuite
{
&lt;#
.SYNOPSIS
This function migrates a test suite and all its child test suites to a different team project

.DESCRIPTION
This function migrates a test suite and all its child test suites to a different team project. It is a helper function for Update-TestPlanAfterMigration and will probably not be called directly from the command line

.PARAMETER SourceSuite
Source TFS test suite

.PARAMETER TargetSuite
Target TFS test suite

.PARAMETER TargetPlan
The new test plan the test suites are being created in

.PARAMETER TargetProject
The new team project the test suites are being created in

.PARAMETER WiService
Work item service instance used for lookup

.EXAMPLE

MoveTestSuite -sourceSuite $plan.RootSuite -targetSuite $newplan.RootSuite -targetProject $targetteamproject -targetPlan $newplan -wiService $wiService

#&gt;
    param
    (
        [Parameter(Mandatory=$true)]
        $sourceSuite,

        [Parameter(Mandatory=$true)]
        $targetSuite,

        [Parameter(Mandatory=$true)]
        $targetProject,

        [Parameter(Mandatory=$true)]
        $targetplan,

        [Parameter(Mandatory=$true)]
        $wiService
    )

    foreach ($suite_entry in $sourceSuite.Entries)
    {
       # get the suite to a local variable to make it easier to pass around
       $suite = $suite_entry.TestSuite
       if ($suite -ne $null)
       {
           # we have to build a suite of the correct type
           if ($suite.IsStaticTestSuite -eq $true)
           {
                Write-Host(&#34;    Migrating static test suite - {0}&#34; -f $suite.Title)
                $newsuite = $targetProject.TestSuites.CreateStatic()
                $newsuite.Title = $suite.Title
                $newsuite.Description = $suite.Description
                $newsuite.State = $suite.State
                # need to add the suite to the plan else you cannot add test cases
                $targetSuite.Entries.Add($newsuite) &gt; $null # sent to null as we get output
                foreach ($test in $suite.TestCases)
                {
                    $migratedTestCaseIds = $targetProject.TestCases.Query(&#34;Select * from [WorkItems] where [TfsMigrationTool.ReflectedWorkItemId] = &#39;{0}&#39;&#34; -f $test.Id)
                    # we assume we only get one match
                    if ($migratedTestCaseIds[0] -ne $null)
                    {
                        Write-Host (&#34;        Test {0} has been migrated to {1} and added to suite {2}&#34; -f $test.Id, $migratedTestCaseIds[0].Id, $newsuite.Title)
                        $newsuite.Entries.Add($targetProject.TestCases.Find($migratedTestCaseIds[0].Id)) &gt; $null # sent to null as we get output
                    }
                }
           }

           if ($suite.IsDynamicTestSuite -eq $true)
           {
               Write-Host(&#34;    Migrating query based test suite - {0} (Note - query may need editing)&#34; -f $suite.Title)
               $newsuite = $targetProject.TestSuites.CreateDynamic()
               $newsuite.Title = $suite.Title
               $newsuite.Description = $suite.Description
               $newsuite.State = $suite.State
               $newsuite.Query = $suite.Query

               $targetSuite.Entries.Add($newsuite) &gt; $null # sent to null as we get output
               # we don&#39;t need to add tests as this is done dynamically
           }

           if ($suite.IsRequirementTestSuite -eq $true)
           {
               $newwis = $wiService.Query(&#34;Select * From WorkItems Where [TfsMigrationTool.ReflectedWorkItemId] = &#39;{0}&#39;&#34; -f $suite.RequirementId)
               if ($newwis[0] -ne $null)
               {
                    Write-Host(&#34;    Migrating requirement based test suite - {0} to new requirement ID {1}&#34; -f $suite.Title, $newwis[0].Id)

                    $newsuite = $targetProject.TestSuites.CreateRequirement($newwis[0])
                    $newsuite.Title = $suite.Title -replace $suite.RequirementId, $newwis[0].Id
                    $newsuite.Description = $suite.Description
                    $newsuite.State = $suite.State
                    $targetSuite.Entries.Add($newsuite) &gt; $null # sent to null as we get output
                    # we don&#39;t need to add tests as this is done dynamically
               }
           }

           # recurse into any child test suites, passing this function&#39;s own parameters down
           if ($suite.Entries.Count -gt 0)
           {
                 MoveTestSuite -sourceSuite $suite -targetSuite $newsuite -targetProject $targetProject -targetPlan $targetplan -wiService $wiService
           }
        }
    }
 }
      
</code></pre><p>NOTE: This script needs PowerShell 3.0 installed. This appears to be because some of the TFS assemblies are .NET 4.5, which is not supported by earlier PowerShell versions. If the version is wrong the test suite migration will fail, as the TestPlan (ITestPlanHelper) object will be null.</p>
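<p>Before running the migration it is worth confirming this prerequisite. A minimal guard (a sketch, not part of the original script) using the built-in <code>$PSVersionTable</code> variable looks like this:</p>
<pre tabindex="0"><code># fail fast if running under a PowerShell version older than 3.0
if ($PSVersionTable.PSVersion.Major -lt 3)
{
    Write-Error &#34;PowerShell 3.0 or later is required to load the .NET 4.5 TFS assemblies&#34;
    exit 1
}
</code></pre>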
<p>The command to run the migration of test plans is:</p>
<pre tabindex="0"><code>Update-TestPlanAfterMigration -SourceCollectionUri &#34;http://typhoontfs:8080/tfs/defaultcollection&#34; -TargetCollectionUri &#34;http://typhoontfs:8080/tfs/defaultcollection&#34; -SourceTeamProjectName &#34;Scrum TFVC Source&#34; -TargetTeamProjectName &#34;NewProject&#34;  
</code></pre><p>This will create the new set of test plans and suites in addition to any already in place on the target server. It should give an output similar to:</p>
<p><img alt="Article image" loading="lazy" src="https://www.microsoft.com/en-gb/developers/images/articles/content/471/01-05.jpg?v=1.0"></p>
<h3 id="summary">Summary</h3>
<p>Once all this is done you should have migrated a TFVC team project to a new team project based on Git on either on-premises TFS or VSTS, retaining as much history as is possible. I hope you find this of use!</p>
<p><em>This article was first published on the Microsoft’s UK Developers site</em> <a href="http://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project"><em>Migrating a TFS TFVC based team project to a Git team project - a practical example</em></a> <em>originally published August the 15th 2014 updated 7 June 2016</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Update guest post on migrating TFVC projects to Git</title>
      <link>https://blog.richardfennell.net/posts/update-guest-post-on-migrating-tfvc-projects-to-git/</link>
      <pubDate>Tue, 07 Jun 2016 08:47:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-guest-post-on-migrating-tfvc-projects-to-git/</guid>
      <description>&lt;p&gt;Microsoft asked my to revise my August 2014 article on ‘Migrating a TFS TFVC team project to a Git team project’, this revised version is now live on &lt;a href=&#34;https://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project/&#34;&gt;their web site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The changes in the revision are updated links for tools, and information on how to use the technique with VSTS now some work item customisation is available.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Microsoft asked me to revise my August 2014 article on ‘Migrating a TFS TFVC team project to a Git team project’; the revised version is now live on <a href="https://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project/">their web site</a>.</p>
<p>The changes in the revision are updated links for tools, and information on how to use the technique with VSTS now that some work item customisation is available.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running WebTests as part of a VSTS VNext Release pipeline</title>
      <link>https://blog.richardfennell.net/posts/running-webtests-as-part-of-a-vsts-vnext-release-pipeline/</link>
      <pubDate>Mon, 30 May 2016 21:08:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-webtests-as-part-of-a-vsts-vnext-release-pipeline/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;Most projects will have a range of tests&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Unit tests (maybe using a mocking framework) running inside the build process&lt;/li&gt;
&lt;li&gt;Integration/UX and load tests run as part of a release pipeline&lt;/li&gt;
&lt;li&gt;and finally manual tests&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In a recent project we were using &lt;a href=&#34;https://msdn.microsoft.com/en-us/library/ms182539%28v=vs.110%29.aspx&#34;&gt;WebTests&lt;/a&gt; to provide some integration tests (in addition to integration tests written using unit testing frameworks) as a means to test a REST/ODATA API, injecting data via the API, pausing while a backend &lt;a href=&#34;http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx&#34;&gt;Azure WebJob&lt;/a&gt; processed the injected data, then checking a second API to make sure the processed data was correctly presented. Basically mimicking user operations.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>Most projects will have a range of tests</p>
<ul>
<li>Unit tests (maybe using a mocking framework) running inside the build process</li>
<li>Integration/UX and load tests run as part of a release pipeline</li>
<li>and finally manual tests</li>
</ul>
<p>In a recent project we were using <a href="https://msdn.microsoft.com/en-us/library/ms182539%28v=vs.110%29.aspx">WebTests</a> to provide some integration tests (in addition to integration tests written using unit testing frameworks) as a means to test a REST/ODATA API, injecting data via the API, pausing while a backend <a href="http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx">Azure WebJob</a> processed the injected data, then checking a second API to make sure the processed data was correctly presented. Basically mimicking user operations.</p>
<p>In past iterations we ran these tests via <a href="https://www.visualstudio.com/en-us/features/lab-management-vs.aspx">TFS Lab Management’s</a> tooling, using the Test Agent that is deployed when an environment is created.</p>
<p>The problem is that we are migrating to <a href="https://www.visualstudio.com/en-us/features/release-management-vs.aspx">VSTS/TFS 2015.2 Release Management</a>. This uses the new <a href="https://www.visualstudio.com/en-us/docs/build/steps/test/run-functional-tests">Functional Testing Task</a>, which uses the newer Test Agent that is <a href="https://github.com/Microsoft/vsts-tasks/blob/master/Tasks/DeployVisualStudioTestAgent/README.md">deployed on demand as part of the release pipeline</a> (not pre-installed), and this agent does not support running WebTests at present.</p>
<p>This means my only option was to use MSTest if I wanted to continue using this form of webtest. However, there is no out-of-the-box MSTest task for VSTS, so I needed to write a script to do the job that I could deploy as part of my build artifacts.</p>
<p>Now I could write a build/release task to make this nice and easy to use, but that is more work, and I suspect I am not going to need this script too often in the future (I might be wrong here, only time will tell). I also hope that Microsoft will at some point provide an out-of-the-box task to do the job, either by providing an MSTest task or by adding webtest support to the functional test task.</p>
<p>This actually reflects my usual working practice for build tasks: get the script working locally first, use it as a PowerShell script in the build, and if I see enough reuse make it a task/extension.</p>
<p>So what did I actually need to do?</p>
<h3 id="preparation">Preparation</h3>
<ol>
<li>
<p>Install Visual Studio on the VM where the tests will be run from. I need to do this because, though MSTest was already present, it fails to run .webtest tests <a href="http://stackoverflow.com/questions/3406636/running-webtests-without-microsoft-visual-studio-testing-edition-team-suite">unless a suitable SKU of Visual Studio is installed</a></p>
</li>
<li>
<p>Set the solution configuration so that the projects containing the webtests are not built; we only need the <strong>.webtest</strong> files copied to the drops location. If you build the project the files get duplicated into the <strong>bin</strong> folder, which we don’t need as we then have to work out which copy to use.</p>
</li>
<li>
<p>Make sure the solution contains a <strong>.TestSettings</strong> file that switches on ‘Think Times’, and that this file is copied as a build artifact. This stalled me for ages; I could not work out why tests worked in Visual Studio but failed from the command line. Without this file there is no think time at all, so my background process never had time to run.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_307.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_303.png" title="image"></a></p>
</li>
<li>
<p>Write a script that finds all my <strong>.webtest</strong> files, and place it in source control so that it is copied to the build’s drop location.</p>
</li>
</ol>
<pre tabindex="0"><code>param 

(

    $tool = &#34;C:Program Files (x86)Microsoft Visual Studio 14.0Common7IDEMSTest.exe&#34;,  
    $path ,  
    $include = &#34;\*.webtest&#34;,  
    $results ,  
    $testsettings

)

 

$web\_tests = get-ChildItem -Path $paths -Recurse -Include $include

foreach ($item in $web\_tests) {  
    $args += &#34;/TestContainer:$item &#34;

}

  
&amp; $tool $args /resultsfile:$Results /testsettings:$testsettings
</code></pre><h3 id="build">Build</h3>
<p>Once the script and other settings are made I altered the build so that the <strong>.webtests</strong> (including their associated JSON test data sub folders), the script and the <strong>.testsettings</strong> files are all copied to the drops location</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_308.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_304.png" title="image"></a></p>
<h3 id="release">Release</h3>
<p>In the release pipeline I need to call my script with suitable parameters so it finds the tests, uses the <strong>.testsettings</strong> and creates a <strong>.TRX</strong> results file. I then need to use the ‘Publish Test Results’ task to upload these MSTest-format results</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_309.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_305.png" title="image"></a></p>
<p>So for the PowerShell MSTest task I set the following</p>
<ul>
<li>Script name is <strong>$(System.DefaultWorkingDirectory)\MyBuild\drop\Scripts\RunMSTest.ps1</strong> </li>
<li>The argument is <strong>-path $(System.DefaultWorkingDirectory)\MyBuild\drop\Src\WebtestsProject -results $(System.DefaultWorkingDirectory)\webtests.trx -testsettings $(System.DefaultWorkingDirectory)\MyBuild\drop\src\webtest.testsettings</strong></li>
</ul>
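<p>With those arguments, the script above effectively expands to an MSTest call along these lines (the .webtest file names here are illustrative):</p>
<pre tabindex="0"><code>MSTest.exe /TestContainer:ApiTests.webtest /TestContainer:MoreTests.webtest /resultsfile:webtests.trx /testsettings:webtest.testsettings
</code></pre>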
<p>And for the publish test results task.</p>
<ul>
<li>Format – <strong>VSTest</strong></li>
<li>Arguments - <strong>$(System.DefaultWorkingDirectory)\webtests.trx</strong></li>
<li>I also set this task to always run, to make sure I got test results even if some tests failed</li>
</ul>
<p>Once all this was done and the build/release run, I got the test results I needed</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_310.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_306.png" title="image"></a></p>
<p>I can drill into my detailed test reports as needed</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_311.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_307.png" title="image"></a></p>
<p>So I have a functioning release pipeline that can run all the various types of automated tests within my solution.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Easier management of DevTest VMs with Azure DevTest Labs</title>
      <link>https://blog.richardfennell.net/posts/easier-management-of-devtest-vms-with-azure-devtest-labs/</link>
      <pubDate>Fri, 27 May 2016 11:48:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/easier-management-of-devtest-vms-with-azure-devtest-labs/</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Struggling to manage those DevTest VMs in Azure?&lt;/li&gt;
&lt;li&gt;Finding it hard to standardise your VMs and get the right things installed?&lt;/li&gt;
&lt;li&gt;Burning through your Azure credit too fast because you forgot to switch things off?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Well &lt;a href=&#34;https://blogs.msdn.microsoft.com/devtestlab/2016/05/25/announcing-general-availability-of-azure-devtest-labs/&#34;&gt;Azure DevTest Labs has just been released for general availability&lt;/a&gt;; it might just be the thing to help, have a look.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<ul>
<li>Struggling to manage those DevTest VMs in Azure?</li>
<li>Finding it hard to standardise your VMs and get the right things installed?</li>
<li>Burning through your Azure credit too fast because you forgot to switch things off?</li>
</ul>
<p>Well <a href="https://blogs.msdn.microsoft.com/devtestlab/2016/05/25/announcing-general-availability-of-azure-devtest-labs/">Azure DevTest Labs has just been released for general availability</a>; it might just be the thing to help, have a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Building bridges - getting DevOps working through Devs and IT Pros talking and learning from each other</title>
      <link>https://blog.richardfennell.net/posts/building-bridges-getting-devops-working-through-devs-and-it-pros-talking-and-learning-from-each-other/</link>
      <pubDate>Wed, 25 May 2016 20:58:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/building-bridges-getting-devops-working-through-devs-and-it-pros-talking-and-learning-from-each-other/</guid>
      <description>&lt;p&gt;I was lucky enough to attend, and be on a panel at, yesterday’s &lt;a href=&#34;http://www.winops.org/&#34;&gt;WinOps London conference&lt;/a&gt;; it was a different and very interesting view on DevOps for me. I spend most of my time consulting with test and development teams; with these teams it is very rare to come across a team not using source control, and they commonly have some form of automated build too. This means any DevOps discussion usually comes from the side of ‘how can I extend my build into deployment…’.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was lucky enough to attend, and be on a panel at, yesterday’s <a href="http://www.winops.org/">WinOps London conference</a>; it was a different and very interesting view on DevOps for me. I spend most of my time consulting with test and development teams; with these teams it is very rare to come across a team not using source control, and they commonly have some form of automated build too. This means any DevOps discussion usually comes from the side of ‘how can I extend my build into deployment…’.</p>
<p>At the conference yesterday, where there seemed to be more IT Pro attendees than developers, this ‘post build’ view was not the norm. Much of the conference content was focused around the provisioning and configuration of infrastructure, getting the environment ‘ready for deployment of a build’. What surprised me most was how repeatedly speakers stressed the importance of using source control to manage scripts and hence control the version of the environments being provisioned.</p>
<p>So what does this tell us?</p>
<p>The obvious fact to me is that the bifurcation of our industry between Devs and IT Pros means there is huge scope for swapping each group’s best practices. What seems ingrained best practice for one role is new and interesting for the other. We can all learn from each other – assuming we communicate.</p>
<p>This goes to the core of DevOps, that it is not a tool but a process based around collaboration.</p>
<p>If you want to find out more about how we see DevOps at Black Marble we are running events and are out and about at user groups. Keep an eye on the <a href="http://www.blackmarble.co.uk/events">Black Marble events site</a> or drop me an email.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Migrating work items to VSTS with custom fields using TFS Integration Platform</title>
      <link>https://blog.richardfennell.net/posts/migrating-work-items-to-vsts-with-custom-fields-using-tfs-integration-platform/</link>
      <pubDate>Fri, 20 May 2016 13:41:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/migrating-work-items-to-vsts-with-custom-fields-using-tfs-integration-platform/</guid>
      <description>&lt;p&gt;If you wish to migrate work items from TFS to VSTS your options are limited. You can of course just pull over work items, without history, using Excel. If you have no work item customisation then &lt;a href=&#34;https://visualstudiogallery.msdn.microsoft.com/28a90a17-d00c-4660-b7ae-42d58315ccf2&#34;&gt;OpsHub&lt;/a&gt; is an option, but if you have work item customisation then you are going to have to use &lt;a href=&#34;https://visualstudiogallery.msdn.microsoft.com/eb77e739-c98c-4e36-9ead-fa115b27fefe&#34;&gt;TFS Integration Platform&lt;/a&gt;. And we all know what a lovely experience that is!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: TFS Integration Platform will cease to be supported by Microsoft at the end of May 2016, this does not mean the tool is going away, just that there will be no support via forums.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you wish to migrate work items from TFS to VSTS your options are limited. You can of course just pull over work items, without history, using Excel. If you have no work item customisation then <a href="https://visualstudiogallery.msdn.microsoft.com/28a90a17-d00c-4660-b7ae-42d58315ccf2">OpsHub</a> is an option, but if you have work item customisation then you are going to have to use <a href="https://visualstudiogallery.msdn.microsoft.com/eb77e739-c98c-4e36-9ead-fa115b27fefe">TFS Integration Platform</a>. And we all know what a lovely experience that is!</p>
<p><strong>Note</strong>: TFS Integration Platform will cease to be supported by Microsoft at the end of May 2016, this does not mean the tool is going away, just that there will be no support via forums.</p>
<p>In this post I will show how you can use TFS Integration Platform to move over custom fields to VSTS, including the original TFS work item ID, thus enabling migrations with history as detailed in my <a href="https://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project/">MSDN article</a>.</p>
<h3 id="tfs-integration-platform-setup">TFS Integration Platform Setup</h3>
<h4 id="reference-assemblies">Reference Assemblies</h4>
<p>TFS Integration Platform, being a somewhat old tool designed for TFS 2010, does not directly support TFS 2015 or VSTS. You have to select the Dev11 connection options (TFS 2012, by its internal code name). However, this will still cause problems as it fails to find all the assemblies it expects</p>
<p>The solution to this problem is provided in <a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/07/04/tfs-integration-tools-issue-this-tool-requires-the-tfs-client-object-model.aspx">this post</a>, the key being to add dummy registry entries</p>
<ol>
<li>Install either
<ul>
<li>Visual Studio Ultimate 2012</li>
<li>Team Explorer 2012</li>
<li>Or the <a href="https://visualstudiogallery.msdn.microsoft.com/f30e5cc7-036e-449c-a541-d522299445aa">Visual Studio Object Model</a></li>
</ul>
</li>
<li>Add the following registry key after you have installed Team Explorer or equiv.
<pre tabindex="0"><code>Windows Registry Editor Version 5.00 

\[HKEY\_LOCAL\_MACHINESOFTWAREWow6432NodeMicrosoftVisualStudio11.0InstalledProductsTeam System Tools for Developers\] 

@=&#34;#101&#34; 

&#34;LogoID&#34;=&#34;#100&#34; 

&#34;Package&#34;=&#34;{97d9322b-672f-42ab-b3cb-ca27aaedf09d}&#34; 

&#34;ProductDetails&#34;=&#34;#102&#34; 

&#34;UseVsProductID&#34;=dword:00000001
</code></pre></li>
</ol>
<h4 id="msi">MSI</h4>
<p>Once this is done the TFS Integration Tools installation should work.</p>
<p>Accept the default options; you will need to select a SQL Server instance for the tool to use as a database to store its progress. The installer will create a DB called tfs_integrationplatform on the SQL instance</p>
<h3 id="creating-a-mappings-file">Creating a Mappings File</h3>
<p>TFS Integration platform needs a mapping file to work out which fields go where.</p>
<ol>
<li>
<p>We assume there is a local TFS server with the source to migrate from and a VSTS instance containing a team project using a reasonably compatible uncustomised process template</p>
</li>
<li>
<p>Download the <a href="https://tfsintegrationmapper.codeplex.com/">TFS Process Mapper</a> and run it.</p>
</li>
<li>
<p>You need to load into the process mapper the current work item configuration; the tool provides buttons to do this from XML files (exported with WITADMIN) or directly from the TFS/VSTS server.</p>
</li>
<li>
<p>You should see a list of fields in both the source and target server definitions of the given work item type.</p>
</li>
<li>
<p>Use the automap button to match the fields</p>
</li>
<li>
<p>Any unmatched fields will be left in the left-hand column</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_306.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_302.png" title="image"></a></p>
</li>
<li>
<p>Some fields you may need to match manually, e.g. handling name changes from ‘Area ID’ to ‘AreaID’</p>
</li>
<li>
<p>If you have local custom fields you can add matching fields on the VSTS instance, this is done using the <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2015/12/10/adding-a-custom-field-to-a-work-item/">process on MSDN</a>.</p>
</li>
<li>
<p>Once you have added your custom fields I have found it best to clear the mapping tool and re-import the VSTS work item definitions. The new fields appear in the list and can be mapped manually to their old equivalents.</p>
</li>
<li>
<p>I now exported my mappings file.</p>
</li>
<li>
<p>The process described above is the same as manually editing the mapping file to add lines of the form<br>
<code>&lt;MappedField MapFromSide=&quot;Left&quot; LeftName=&quot;BM.Custom1&quot; RightName=&quot;BMCustom1&quot; /&gt;</code></p>
<p>There is a good chance one of the fields you want is the old TFS server’s work item ID. If you add a mapping as above for System.Id you would expect it to work. However, it does not; the field is left empty on the target system. I don’t think this is a bug, just an unexpected behaviour in the way the unique work item IDs are handled by the tool. As a workaround I found I had to use an aggregated field to force the System.Id to be transferred. In my process customisation on VSTS I created an integer <strong>OldID</strong> custom field. I then added the following to my mapping; note that I don’t use a line in the <strong>MappedFields</strong> block, I used an <strong>AggregatedFields</strong> block.</p>
<pre tabindex="0"><code>&lt;MappedFields&gt;
   &lt;!-- all the auto-generated mapping entries go here.
        This is where you would expect a line like the one below, but it does not work
        &lt;MappedField MapFromSide=&quot;Left&quot; LeftName=&quot;System.Id&quot; RightName=&quot;OldID&quot; /&gt; --&gt;
&lt;/MappedFields&gt;
&lt;AggregatedFields&gt;
   &lt;FieldsAggregationGroup MapFromSide=&quot;Left&quot; TargetFieldName=&quot;OldID&quot; Format=&quot;{0}&quot;&gt;
      &lt;SourceField Index=&quot;0&quot; SourceFieldName=&quot;System.Id&quot; valueMap=&quot;&quot;/&gt;
   &lt;/FieldsAggregationGroup&gt;
&lt;/AggregatedFields&gt;
</code></pre>
</li>
<li>
<p>I could now use my edited mappings file</p>
</li>
</ol>
<h3 id="running-tfs-integration-platform">Running TFS Integration Platform</h3>
<p>I could now run the TFS Integration tools using the mappings file</p>
<ol>
<li>Load TFS Integration Platform</li>
<li>Create a new configuration</li>
<li>Select the option for work items with explicit mappings</li>
<li>Select your source TFS server</li>
<li>Select your target VSTS server</li>
<li>Select the work item query that returns the items we wish to move</li>
<li>Edit the mapping XML and paste in the edited block from the previous section. Note that if you are moving multiple work item types you will be combining a number of these mapping sections</li>
<li>Save the mapping file, you are now ready to use it in TFS Integration Platform</li>
</ol>
<p>And hopefully the work item migration will progress as you hope. It might take some trial and error but you should get there in the end.</p>
<h3 id="but-really">But really……</h3>
<p>This all said, I would still recommend just bringing over the active work item backlog and current source when moving to VSTS. It is easier and faster, and gives you a chance to sort out structures without bringing in all the poor choices of the past.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New version of my VSTS Generate Release Notes extension  - now supports Builds and Release</title>
      <link>https://blog.richardfennell.net/posts/new-version-of-my-vsts-generate-release-notes-extension-now-supports-builds-and-release/</link>
      <pubDate>Thu, 19 May 2016 22:21:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-version-of-my-vsts-generate-release-notes-extension-now-supports-builds-and-release/</guid>
      <description>&lt;p&gt;I am pleased to announce that I have just made public on the VSTS marketplace a new version of my &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task-DEV&#34;&gt;VSTS Generate Release Notes extension&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This new version now supports both VSTS/TFS vNext Builds and vNext Releases. The previous versions only supported the generation of release notes as part of a build.&lt;/p&gt;
&lt;p&gt;Adding support for releases has meant I have had to rethink the internals of how the templates are processed, as well as the way templates are passed into the task and where the results are stored&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am pleased to announce that I have just made public on the VSTS marketplace a new version of my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task-DEV">VSTS Generate Release Notes extension</a>.</p>
<p>This new version now supports both VSTS/TFS vNext Builds and vNext Releases. The previous versions only supported the generation of release notes as part of a build.</p>
<p>Adding support for releases has meant I have had to rethink the internals of how the templates are processed, as well as the way templates are passed into the task and where the results are stored.</p>
<ul>
<li>You can now provide a template as a file (usually from source control) as before, but also as an inline property. The latter is really designed for Releases where there is usually no access to source control, only to build artifact drops (though you could put the template in one of these if you wanted)</li>
<li>With a build the obvious place to put the release notes file is in the drops location. For a release there is no such artifact drop location, so I just leave the releases notes on the release agent, it is up to the user to get this file copied to a sensible location for their release process.</li>
</ul>
<p>To find out more check out the <a href="https://github.com/rfennell/vNextBuild/wiki/GenerateReleaseNotes%20-Tasks">documentation on my GitHub repo</a> and have a look at my <a href="https://github.com/rfennell/vNextBuild/tree/master/SampleTemplates">sample templates</a> to get you started generating release notes</p>
]]></content:encoded>
    </item>
    <item>
      <title>Putting a release process around my VSTS extension development</title>
      <link>https://blog.richardfennell.net/posts/putting-a-release-process-around-my-vsts-extension-development/</link>
      <pubDate>Fri, 06 May 2016 12:17:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/putting-a-release-process-around-my-vsts-extension-development/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated: 5th Aug 2016 added notes in PublisherID&lt;/strong&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;  I have been developing a few VSTS/TFS build related extensions and have published a few in the &lt;a href=&#34;https://marketplace.visualstudio.com/search?term=fennell&amp;amp;target=VSTS&amp;amp;sortBy=Relevance&#34;&gt;VSTS marketplace&lt;/a&gt;. This has all been a somewhat manual process, a mixture of Gulp and PowerShell has helped a bit, but I decided it was time to try to do a more formal approach. To do this I have used &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=jessehouwing.jessehouwing-vsts-extension-tasks&#34;&gt;Jesse Houwing’s VSTS Extension Tasks&lt;/a&gt;. Even with this set of tasks I am not sure what I have is ‘best practice’, but it does work. The doubt is due to the way the marketplace handles revisions and preview flags. What I have works for me, but ‘your mileage may differ’&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated: 5th Aug 2016 added notes in PublisherID</strong></p>
<hr>
<p>I have been developing a few VSTS/TFS build related extensions and have published a few in the <a href="https://marketplace.visualstudio.com/search?term=fennell&amp;target=VSTS&amp;sortBy=Relevance">VSTS marketplace</a>. This has all been a somewhat manual process; a mixture of Gulp and PowerShell has helped a bit, but I decided it was time to try a more formal approach. To do this I have used <a href="https://marketplace.visualstudio.com/items?itemName=jessehouwing.jessehouwing-vsts-extension-tasks">Jesse Houwing’s VSTS Extension Tasks</a>. Even with this set of tasks I am not sure what I have is ‘best practice’, but it does work. The doubt is due to the way the marketplace handles revisions and preview flags. What I have works for me, but ‘your mileage may differ’.</p>
<h3 id="my-workflow">My Workflow</h3>
<p>The core of my workflow is that I am building the VSIX package twice, once as a private package and once as a public one. They both contain the same code and have the same version number; they differ only in their visibility flags. I am not using the preview flag options at all, as I have found they do not really help me. My workflow is to build the private package, upload it and test it by sharing it with a test VSTS instance. If all is good, I publish the matched public package on the marketplace. In this model there is no need to use a preview; it just adds complexity I don’t need. This may not be true for everyone.</p>
<h3 id="build">Build</h3>
<p>The build’s job is to take the code, set the version number and package it into multiple VSIX packages.</p>
<ol>
<li>First I have the vNext build get my source from my GitHub repo.</li>
<li>I add two build variables <strong>$(Major)</strong> and <strong>$(Minor)</strong> that I use to manually manage my version number</li>
<li>I set my build number format to <strong>$(Major).$(Minor).$(rev:r)</strong>, so the final part of the number is incremented until I choose to increment the major or minor version.</li>
<li>I then use one of Jesse’s tasks to package the extension multiple times using the <strong>extension tag</strong> model parameter. Each different package step uses different <strong>Visibility</strong> settings (circled in red). I also set the version, using the override options, to the <strong>$(Build.BuildNumber)</strong> (circled in green)<a href="/wp-content/uploads/sites/2/historic/image_304.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_300.png" title="image"></a></li>
<li><strong>[Updated Aug 2016]</strong> Set the PublisherID and ExtensionID on the tasks; using a pair of build variables is a good idea here to avoid entering the strings twice. It is important that the PublisherID is entered with the correct case - it is case sensitive within the marketplace. Strange things happen if the PublisherID in a VSIX package differs from the one registered on the marketplace</li>
<li>As I am using the VSTS hosted build agent I also need to make sure I check the <strong>install Tfx-cli</strong> option in the global settings section</li>
<li>I then add a second, identical packaging task, but this time there is no tag set and the visibility is set to public.</li>
<li>Finally I use a ‘publish build artifacts’ task to copy the VSIX packages to a drop location</li>
</ol>
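<p>The private/public pairing above comes down to the <strong>galleryFlags</strong> entry in the extension manifest (vss-extension.json). As a rough sketch (the id and version below are made up, and a real manifest has many more fields), the public package’s manifest would include something like:</p>

```json
{
  "manifestVersion": 1,
  "id": "sample-extension",
  "publisher": "richardfennellBM",
  "version": "1.2.3",
  "galleryFlags": [ "Public" ]
}
```

<p>The private build simply omits the <strong>Public</strong> flag (an empty <strong>galleryFlags</strong> array keeps the package private), which is presumably what the task’s <strong>Visibility</strong> setting is overriding under the covers.</p>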
<h3 id="release">Release</h3>
<p>So now I have multiple VSIX packages, I can use the same family of tasks to create a release pipeline. I create a new release linked as a Continuous Deployment of the previously created build and set its release name format to <strong>Release-$(Build.BuildNumber)</strong>. My first environment uses three tasks, all using the option to work from a VSIX package. <strong>Note</strong>: In all cases I am using the VSIX path in the format <strong>$(System.DefaultWorkingDirectory)/GenerateReleaseNotes.Master/vsix/&lt;package name&gt;-&lt;tag&gt;-$(Build.BuildNumber).vsix</strong>. I am including the build number variable in the path as I chose to put all the packages in a single folder, so path wildcards are not an option as the task would not know which package to use, unless I alter my build to put one VSIX package per folder. My tasks for the first environment are</p>
<ol>
<li>Publish VSTS Extension – using my private package so it is added as a private package to the marketplace</li>
<li>Share VSTS Extension – to my test VSTS account</li>
<li>Install VSTS Extension – to my test VSTS account</li>
</ol>
<p><em>For details of the usage of these tasks and setting up the link to the VSTS Marketplace</em> <a href="https://github.com/jessehouwing/vsts-extension-tasks/wiki"><em>see Jesse’s wiki</em></a>. If I only intend an extension to ever be private this is enough. However, I want to make mine public, so I add a second environment that has manual pre-approval (so I have to confirm the public release). This environment only needs a single task</p>
<ol>
<li>Publish VSTS Extension – using my public package so it is added as a public package to the marketplace</li>
</ol>
<p>I can of course add other tasks to this environment, maybe sending a Tweet or email to publicise the new version’s release</p>
<h3 id="summary">Summary</h3>
<p>So now I have a formal way to release my extensions. The dual packaging model means I can publish two different versions at the same time, one private and the other public <a href="/wp-content/uploads/sites/2/historic/image_305.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_301.png" title="image"></a> It is now just a case of moving all my extensions over to the new model. Though I am still interested to hear what other people’s views are. Does this seem a reasonable process flow?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading BlogEngine to 3.3</title>
      <link>https://blog.richardfennell.net/posts/upgrading-blogengine-to-3-3/</link>
      <pubDate>Sun, 01 May 2016 12:16:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-blogengine-to-3-3/</guid>
      <description>&lt;p&gt;I have just completed the upgrade of our Blog Server to &lt;a href=&#34;http://dotnetblogengine.net/&#34;&gt;BlogEngine 3.3&lt;/a&gt;. This upgrade is a &lt;a href=&#34;http://dotnetblogengine.net/post/blogengine-net-3-3-goes-live.aspx&#34;&gt;bit more complex than the usual upgrade&lt;/a&gt; as between 3.2 to 3.3 there is a change to Razor views for all the widgets. This means you need to remove all the old widgets you have and re-add them using the new razor equivalents.&lt;/p&gt;
&lt;p&gt;As our blog is backed by SQL, this mean a SQL script to clear down the old widgets, then a manual add of the new versions on each blog we have on our server. One point to note, if using SQL you do need to get BlogEngine 3.3 from its &lt;a href=&#34;https://github.com/rxtur/BlogEngine.NET&#34;&gt;GitHub repo&lt;/a&gt; (at the time of writing, I am sure this will change) as after the formal 3.3 release on CodePlex there is a fix for an issue that stopped the editing of widget properties.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just completed the upgrade of our Blog Server to <a href="http://dotnetblogengine.net/">BlogEngine 3.3</a>. This upgrade is a <a href="http://dotnetblogengine.net/post/blogengine-net-3-3-goes-live.aspx">bit more complex than the usual upgrade</a> as between 3.2 to 3.3 there is a change to Razor views for all the widgets. This means you need to remove all the old widgets you have and re-add them using the new razor equivalents.</p>
<p>As our blog is backed by SQL, this meant a SQL script to clear down the old widgets, then a manual add of the new versions on each blog we have on our server. One point to note, if using SQL you do need to get BlogEngine 3.3 from its <a href="https://github.com/rxtur/BlogEngine.NET">GitHub repo</a> (at the time of writing, I am sure this will change) as after the formal 3.3 release on CodePlex there is a fix for an issue that stopped the editing of widget properties.</p>
<p>So first experiences with 3.3?</p>
<p>Seems much more responsive, so all is looking good</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updates to my StyleCop task for VSTS/TFS 2015.2</title>
      <link>https://blog.richardfennell.net/posts/updates-to-my-stylecop-task-for-vststfs-2015-2/</link>
      <pubDate>Tue, 26 Apr 2016 20:46:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updates-to-my-stylecop-task-for-vststfs-2015-2/</guid>
      <description>&lt;p&gt;Tracking the current version of StyleCop is a bit awkward. Last week I got an automated email from &lt;a href=&#34;https://stylecop.codeplex.com/releases/view/620994&#34;&gt;CodePlex saying 4.7.52.0 had been released&lt;/a&gt; . I thought this was the most up to date version, so upgraded my StyleCop command line wrapper and my VSTS StyleCop task from 4.7.47.0 to 4.7.52.0.&lt;/p&gt;
&lt;p&gt;However, I was wrong about the current version. I had not realised that the &lt;a href=&#34;https://github.com/Visual-Stylecop/Visual-StyleCop&#34;&gt;StyleCop team  had forked the code onto GitHub&lt;/a&gt;. GitHub is now the home of the Visual Studio 2015 and C# 6 development of StyleCop, while Codeplex remains the home of the legacy Visual Studio versions. I had only upgraded to a legacy patch version, not the current version.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Tracking the current version of StyleCop is a bit awkward. Last week I got an automated email from <a href="https://stylecop.codeplex.com/releases/view/620994">CodePlex saying 4.7.52.0 had been released</a> . I thought this was the most up to date version, so upgraded my StyleCop command line wrapper and my VSTS StyleCop task from 4.7.47.0 to 4.7.52.0.</p>
<p>However, I was wrong about the current version. I had not realised that the <a href="https://github.com/Visual-Stylecop/Visual-StyleCop">StyleCop team  had forked the code onto GitHub</a>. GitHub is now the home of the Visual Studio 2015 and C# 6 development of StyleCop, while Codeplex remains the home of the legacy Visual Studio versions. I had only upgraded to a legacy patch version, not the current version.</p>
<p>So I upgraded my <a href="https://github.com/rfennell/StyleCopCmdLine/releases/tag/v.1.2.0.0">StyleCop Command Line tool</a> and my <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-StyleCop-Task">VSTS StyleCop task</a> to wrapper 4.7.59.0, thus I think bringing me up to date.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to build a connection string from other parameters within MSDeploy packages to avoid repeating yourself in Release Management variables</title>
      <link>https://blog.richardfennell.net/posts/how-to-build-a-connection-string-from-other-parameters-within-msdeploy-packages-to-avoid-repeating-yourself-in-release-management-variables/</link>
      <pubDate>Mon, 18 Apr 2016 21:04:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-build-a-connection-string-from-other-parameters-within-msdeploy-packages-to-avoid-repeating-yourself-in-release-management-variables/</guid>
      <description>&lt;p&gt;Whilst working with the new &lt;a href=&#34;https://www.visualstudio.com/en-us/features/release-management-vs.aspx&#34;&gt;Release Management features in VSTS/TFS 2015.2&lt;/a&gt; I found I needed to pass in configuration variables i.e. server name, Db name, UID and Password to create a SQL server via an &lt;a href=&#34;https://azure.microsoft.com/en-gb/documentation/articles/resource-group-overview/&#34;&gt;Azure Resource Management Template&lt;/a&gt; release step and a connection string to the same SQL instance for a web site’s web.config, set using an MSDeploy release step using token replacement (&lt;a href=&#34;https://www.microsoft.com/en-gb/developers/articles/week01feb16/how-to-extend-a-VSTS-release-process-to-on-premises/&#34;&gt;as discussed in this post&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;Now I could just create RM configuration variables for both the connection string and ARM settings,&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst working with the new <a href="https://www.visualstudio.com/en-us/features/release-management-vs.aspx">Release Management features in VSTS/TFS 2015.2</a> I found I needed to pass in configuration variables i.e. server name, Db name, UID and Password to create a SQL server via an <a href="https://azure.microsoft.com/en-gb/documentation/articles/resource-group-overview/">Azure Resource Management Template</a> release step and a connection string to the same SQL instance for a web site’s web.config, set using an MSDeploy release step using token replacement (<a href="https://www.microsoft.com/en-gb/developers/articles/week01feb16/how-to-extend-a-VSTS-release-process-to-on-premises/">as discussed in this post</a>)</p>
<p>Now I could just create RM configuration variables for both the connection string and ARM settings,</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_301.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_297.png" title="image"></a></p>
<p>However, this seems wrong for a couple of reasons</p>
<ol>
<li>You should not repeat yourself; it is too easy to get the two values out of step</li>
<li>I don’t really want to obfuscate the whole of a connection string in RM, when only a password really needs to be hidden (note the connection string variable is not set as secure in the above screenshot)</li>
</ol>
<h3 id="what-did-not-work">What did not work</h3>
<p>I first considered nesting the RM variables, e.g. setting the connection string variable to be equal to ‘<strong>Server=tcp:$(DatabaseServer).database.windows.net,1433;Database=$(DatabaseName)….</strong>’, but this does not give the desired results; the <strong>$(DatabaseServer)</strong> and <strong>$(DatabaseName)</strong> variables are not expanded at runtime, you just get a string with the variable names in it.</p>
<h3 id="how-i-got-want-i-was-after">How I got want I was after….</h3>
<p><em>(In this post as a sample I am using the</em> <a href="https://fabrikam.codeplex.com/"><em>Fabrikam Fiber solution</em></a><em>. This means I need to provide a value for the <strong>FabrikamFiber-Express</strong> connection string)</em></p>
<p>I wanted to build the connection string from the other variables in the MSDeploy package. So to get the behaviour I want…</p>
<ol>
<li>
<p>In Visual Studio load the Fabrikam web site solution.</p>
</li>
<li>
<p>In the web project, use the publish option to create a publish profile use the ‘WebDeploy package’ option.</p>
</li>
<li>
<p>If you publish this package you end up with a <strong>setparameters.xml</strong> file containing the default connection string</p>
<pre tabindex="0"><code>&lt;setParameter name=&#34;FabrikamFiber-Express-Web.config Connection String&#34; value=&#34;Your value”/&gt;
```Where ‘your value’ is the value you set in the Publish wizard. So to use this I would need to pass in a whole connection string, where I only want to pass parts of this string
</code></pre></li>
<li>
<p>To add bespoke parameters to an MSDeploy package you add a <strong>parameters.xml</strong> file to the project in Visual Studio (<a href="https://visualstudiogallery.msdn.microsoft.com/cbf2764d-d205-49d6-810f-25324402c3a9">I wrote a Visual Studio Extension that helps add this file</a>, but you can create it by hand). My tool will create the <strong>parameters.xml</strong> file based on the <strong>AppSettings</strong> block of the project’s <strong>Web.config</strong>. So if you have a <strong>web.config</strong> containing the following</p>
<pre tabindex="0"><code>&lt;appSettings&gt;  
    &lt;add key=&#34;Location&#34; value=&#34;DEVPC&#34; /&gt;  
  &lt;/appSettings&gt;
```It will create a **parameters.xml** file as follows  
</code></pre><?xml version="1.0" encoding="utf-8"?>  
<parameters>  
  <parameter defaultValue="\_\_LOCATION\_\_" description="Description for Location" name="Location" tags="">  
    <parameterentry kind="XmlFile" match="/configuration/appSettings/add\[@key='Location'\]/@value" scope="\\web.config$" />  
  </parameter>   
</parameters>
```
</li>
<li>
<p>If we publish at this point we will get a <strong>setparameters.xml</strong> file containing</p>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;  
&lt;parameters&gt;  
  &lt;setParameter name=&#34;IIS Web Application Name&#34; value=&#34;\_\_Sitename\_\_&#34; /&gt;  
  &lt;setParameter name=&#34;Location&#34; value=&#34;\_\_LOCATION\_\_&#34; /&gt;  
  &lt;setParameter name=&#34;FabrikamFiber-Express-Web.config Connection String&#34; value=&#34;\_\_FabrikamFiberWebContext\_\_&#34; /&gt;  
&lt;/parameters&gt;
```This is assuming I used the publish wizard to set the site name to **\_\_SiteName\_\_** and the DB connection string to **\_\_FabrikamFiberWebContext\_\_**
</code></pre></li>
<li>
<p>Next step is to add my DB related parameters to the <strong>parameters.xml</strong> file; this I do by hand, as my tool does not help here</p>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;  
&lt;parameters&gt;  
  &lt;parameter defaultValue=&#34;\_\_LOCATION\_\_&#34; description=&#34;Description for Location&#34; name=&#34;Location&#34; tags=&#34;&#34;&gt;  
    &lt;parameterentry kind=&#34;XmlFile&#34; match=&#34;/configuration/appSettings/add\[@key=&#39;Location&#39;\]/@value&#34; scope=&#34;\\web.config$&#34; /&gt;  
  &lt;/parameter&gt;  

  &lt;parameter name=&#34;Database Server&#34; defaultValue=&#34;\_\_sqlservername\_\_&#34;&gt;&lt;/parameter&gt;  
  &lt;parameter name=&#34;Database Name&#34; defaultValue=&#34;\_\_databasename\_\_&#34;&gt;&lt;/parameter&gt;  
  &lt;parameter name=&#34;Database User&#34; defaultValue=&#34;\_\_SQLUser\_\_&#34;&gt;&lt;/parameter&gt;  
  &lt;parameter name=&#34;Database Password&#34; defaultValue=&#34;\_\_SQLPassword\_\_&#34;&gt;&lt;/parameter&gt;  
 &lt;/parameters&gt;
</code></pre></li>
<li>
<p>If I publish again, this time the new variables also appear in the <strong>setparameters.xml</strong> file</p>
</li>
<li>
<p>Now I need to suppress the auto-generated connection string parameter, and replace it with a parameter that uses the other parameters to generate the connection string. You would think this was a case of adding more text to the <strong>parameters.xml</strong> file, but that does not work. If you add the block you would expect (making sure the name matches the auto-generated connection string name) as below</p>
<pre tabindex="0"><code>&lt;parameter   
  defaultValue=&#34;Server=tcp:{Database Server}.database.windows.net,1433;Database={Database Name};User ID={Database User}@{Database Server};Password={Database Password};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;&#34;   
  description=&#34;Enter the value for FabrikamFiber-Express connection string&#34;   
  name=&#34;FabrikamFiber-Express-Web.config Connection String&#34;   
  tags=&#34;&#34;&gt;  
  &lt;parameterentry   
    kind=&#34;XmlFile&#34;   
    match=&#34;/configuration/connectionStrings/add[@name=&#39;FabrikamFiber-Express&#39;]/@connectionString&#34;   
    scope=&#34;\\web.config$&#34; /&gt;  
&lt;/parameter&gt;
</code></pre><p>It does add the entry to <strong>setparameters.xml</strong>, but this blocks the successful operations at deployment. <a href="http://stackoverflow.com/questions/17880558/remove-parameters-from-the-generated-setparameters-xml">It seems that</a> if a value needs to be generated from other variables there can be no entry for it in the <strong>setparameters.xml</strong>. Documentation hints you can set the <strong>Tag</strong> to ‘Hidden’ but this does not appear to work.</p>
<p>One option would be to let the <strong>setparameters.xml</strong> file be generated and then remove the offending line prior to deployment but this feels wrong and prone to human error</p>
</li>
<li>
<p>To get around this you need to add a file named <strong>&lt;projectname&gt;.wpp.targets</strong> to the same folder as the project (and add it to the project). In this file place the following</p>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;  
&lt;Project ToolsVersion=&#34;4.0&#34; xmlns=&#34;http://schemas.microsoft.com/developer/msbuild/2003&#34;&gt;  
&lt;Target Name=&#34;DeclareCustomParameters&#34;  
          BeforeTargets=&#34;Package&#34;&gt;  
    &lt;ItemGroup&gt;  
      &lt;MsDeployDeclareParameters Include=&#34;FabrikamFiber-Express&#34;&gt;  
        &lt;Kind&gt;XmlFile&lt;/Kind&gt;  
        &lt;Scope&gt;Web.config&lt;/Scope&gt;  
        &lt;Match&gt;/configuration/connectionStrings/add[@name=&#39;FabrikamFiber-Express&#39;]/@connectionString&lt;/Match&gt;  
        &lt;Description&gt;Enter the value for FabrikamFiber-Express connection string&lt;/Description&gt;  
        &lt;DefaultValue&gt;Server=tcp:{Database Server}.database.windows.net,1433;Database={Database Name};User ID={Database User}@{Database Server};Password={Database Password};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;&lt;/DefaultValue&gt;  
        &lt;Tags&gt;&lt;/Tags&gt;  
        &lt;ExcludeFromSetParameter&gt;True&lt;/ExcludeFromSetParameter&gt;  
      &lt;/MsDeployDeclareParameters&gt;  
    &lt;/ItemGroup&gt;  
  &lt;/Target&gt;  
  &lt;PropertyGroup&gt;  
    &lt;AutoParameterizationWebConfigConnectionStrings&gt;false&lt;/AutoParameterizationWebConfigConnectionStrings&gt;  
  &lt;/PropertyGroup&gt;  
&lt;/Project&gt;
</code></pre><p>The first block declares the parameter I wish to use to build the connection string. Note the ‘ExcludeFromSetParameter’ setting, which keeps this parameter out of the <strong>setparameters.xml</strong> file; this is the setting you cannot specify in the <strong>parameters.xml</strong> file</p>
<p>The second block stops the auto generation of the connection string. (Thanks to <a href="http://stackoverflow.com/users/105999/sayed-ibrahim-hashimi">Sayed Ibrahim Hashimi</a> for various posts on getting this working)</p>
</li>
<li>
<p>Once the edits are made, unload and reload the project, as the <strong>&lt;project&gt;.wpp.targets</strong> file is cached on loading by Visual Studio.</p>
</li>
<li>
<p>Make sure the publish profile is not set to generate a connection string</p>
</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_302.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_298.png" title="image"></a></p>
<ol start="12">
<li>Now when you publish the project, you should get a <strong>setparameters.xml</strong> file with only the four  SQL variables, the AppSettings variables and the site name.<br>
(Note I have set the values for all of these to the format  __NAME__, this is so I can use token replacement in  my release pipeline)<br>
<code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt;   &lt;parameters&gt;     &lt;setParameter name=&quot;IIS Web Application Name&quot; value=&quot;\_\_Sitename\_\_&quot; /&gt;     &lt;setParameter name=&quot;Location&quot; value=&quot;\_\_LOCATION\_\_&quot; /&gt;     &lt;setParameter name=&quot;Database Server&quot; value=&quot;\_\_sqlservername\_\_&quot; /&gt;     &lt;setParameter name=&quot;Database Name&quot; value=&quot;\_\_databasename\_\_&quot; /&gt;     &lt;setParameter name=&quot;Database User&quot; value=&quot;\_\_SQLUser\_\_&quot; /&gt;     &lt;setParameter name=&quot;Database Password&quot; value=&quot;\_\_SQLPassword\_\_&quot; /&gt;   &lt;/parameters&gt;</code></li>
<li>If you deploy the web site, the <strong>web.config</strong> should have your values from the <strong>setparameters.xml</strong> file in it<br>
<code>&lt;appSettings&gt;      &lt;add key=&quot;Location&quot; value=&quot;\_\_LOCATION\_\_&quot; /&gt;   &lt;/appSettings&gt;   &lt;connectionStrings&gt;        &lt;add name=&quot;FabrikamFiber-Express&quot; connectionString=&quot;Server=tcp:\_\_sqlservername\_\_.database.windows.net,1433;Database=\_\_databasename\_\_;User ID=\_\_SQLUser\_\_@\_\_sqlservername\_\_;Password=\_\_SQLPassword\_\_;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;&quot; providerName=&quot;System.Data.SqlClient&quot; /&gt;   &lt;/connectionStrings&gt;</code></li>
</ol>
<p>You are now in a position to manage the values of the <strong>setparameters.xml</strong> file however you wish. My choice is to use the ‘Replace Tokens’ build/release tasks from <a href="https://marketplace.visualstudio.com/items?itemName=colinsalmcorner.colinsalmcorner-buildtasks">Colin’s ALM Corner Build &amp; Release Tools Extension</a>, as this task correctly handles secure/encrypted RM variables as long as you use the ‘Secret Tokens’ option on the advanced menu.</p>
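<p>For the curious, the core of the token replacement technique can be sketched in a few lines of Python (an illustration only, not the actual implementation of the ‘Replace Tokens’ task): every <strong>__NAME__</strong> token in the <strong>setparameters.xml</strong> text is swapped for the matching release variable.</p>

```python
import re

def replace_tokens(text, variables):
    """Swap __NAME__ style tokens for values from a dictionary of
    release variables. Unknown tokens are left untouched, so a missed
    variable is easy to spot in the deployed file."""
    def lookup(match):
        # group(1) is the bare token name between the double underscores
        return variables.get(match.group(1), match.group(0))
    return re.sub(r"__([A-Za-z0-9]+)__", lookup, text)

line = '<setParameter name="Database Server" value="__sqlservername__" />'
print(replace_tokens(line, {"sqlservername": "fabrikamsql"}))
```

<p>Running the sketch rewrites the value attribute to <strong>fabrikamsql</strong>, while any token with no matching variable is left in place.</p>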
<p><a href="/wp-content/uploads/sites/2/historic/image_303.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_299.png" title="image"></a></p>
<h3 id="summary">Summary</h3>
<p>So yes, it all seems a bit too complex, but it does work, and I think it makes for a cleaner deployment solution, less prone to human error. Which is what any DevOps solution must always strive for.</p>
<p>Depending on the values you put in the <strong>&lt;project&gt;.wpp.targets</strong> file you can parameterise the connection string however you need.</p>
]]></content:encoded>
    </item>
    <item>
      <title>In place upgrade times from TFS 2013 to 2015</title>
      <link>https://blog.richardfennell.net/posts/in-place-upgrade-times-from-tfs-2013-to-2015/</link>
      <pubDate>Tue, 29 Mar 2016 17:58:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/in-place-upgrade-times-from-tfs-2013-to-2015/</guid>
      <description>&lt;p&gt;There is no easy way to work out how long a TFS in place upgrade will take, there are just too many factors to make any calculation reasonable&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Start and end TFS version&lt;/li&gt;
&lt;li&gt;Quality/Speed of hardware&lt;/li&gt;
&lt;li&gt;Volume of source code&lt;/li&gt;
&lt;li&gt;Volume of work items&lt;/li&gt;
&lt;li&gt;Volume of work item attachments&lt;/li&gt;
&lt;li&gt;The list goes on….&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The best option I have found is to graph various upgrades I have done and try to make an estimate based on the shape of the curve. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx&#34;&gt;I did this for 2010 &amp;gt; 2013 upgrades&lt;/a&gt;, and now I think I have enough data from upgrades of sizable TFS instances to do the same for 2013 to 2015.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is no easy way to work out how long a TFS in place upgrade will take, there are just too many factors to make any calculation reasonable</p>
<ul>
<li>Start and end TFS version</li>
<li>Quality/Speed of hardware</li>
<li>Volume of source code</li>
<li>Volume of work items</li>
<li>Volume of work item attachments</li>
<li>The list goes on….</li>
</ul>
<p>The best option I have found is to graph various upgrades I have done and try to make an estimate based on the shape of the curve. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx">I did this for 2010 &gt; 2013 upgrades</a>, and now I think I have enough data from upgrades of sizable TFS instances to do the same for 2013 to 2015.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_300.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_296.png" title="image"></a></p>
<p><strong>Note</strong>: I extracted this data from the TFS logs using the script in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/09/17/Powershell-to-help-plot-graphs-of-how-long-TFS-upgrades-take.aspx">this blog post</a>; it is also in my <a href="https://github.com/rfennell/VSTSPowershell/">git repo</a></p>
<p>So as a rule of thumb, the upgrade process will pause around step 100 (the exact number varies depending on your starting 2013.x release), time this pause, and expect the upgrade to complete in about 10x this period.</p>
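<p>That rule of thumb is simple arithmetic; as a sketch (an illustrative helper, not part of any tooling):</p>

```python
def estimate_upgrade_minutes(pause_minutes, multiplier=10):
    """Rough estimate of total TFS 2013 -> 2015 upgrade time:
    about 10x the pause observed around step 100 of the upgrade."""
    return pause_minutes * multiplier

# A 45 minute pause around step 100 suggests about 450 minutes (7.5 hours)
print(estimate_upgrade_minutes(45))
```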
<p>It is not 100% accurate, but close enough so you know how long to go for a coffee/meal/pub or bed for the night</p>
]]></content:encoded>
    </item>
    <item>
      <title>Announcing release of my vNext build tasks as extensions in the VSTS/TFS Marketplace</title>
      <link>https://blog.richardfennell.net/posts/announcing-release-of-my-vnext-build-tasks-as-extensions-in-the-vststfs-marketplace/</link>
      <pubDate>Tue, 22 Mar 2016 18:40:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcing-release-of-my-vnext-build-tasks-as-extensions-in-the-vststfs-marketplace/</guid>
      <description>&lt;p&gt;In the past I have posted about the vNext TFS build tasks I have made available via my &lt;a href=&#34;https://github.com/rfennell/vnextbuild&#34;&gt;GitHub repo&lt;/a&gt;. Over the past few weeks I have been making an effort to repackage these as extensions in the new &lt;a href=&#34;https://marketplace.visualstudio.com/VSTS&#34;&gt;VSTS/TFS Marketplace&lt;/a&gt;, thus making them easier to consume in VSTS or using the &lt;a href=&#34;https://www.visualstudio.com/news/tfs2015-update2-vs#suppext&#34;&gt;new extensions support in TFS 2015.2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is an ongoing effort, but I am pleased to announce the release of the first set of extensions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In the past I have posted about the vNext TFS build tasks I have made available via my <a href="https://github.com/rfennell/vnextbuild">GitHub repo</a>. Over the past few weeks I have been making an effort to repackage these as extensions in the new <a href="https://marketplace.visualstudio.com/VSTS">VSTS/TFS Marketplace</a>, thus making them easier to consume in VSTS or using the <a href="https://www.visualstudio.com/news/tfs2015-update2-vs#suppext">new extensions support in TFS 2015.2</a></p>
<p>This is an ongoing effort, but I am pleased to announce the release of the first set of extensions.</p>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">Generate Release Notes</a> – generates a markdown release notes file based on work items associated with a build</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-PesterRunner-Task">Pester Test Runner</a> – allows Pester based tests to be run in a build</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-StyleCop-Task">StyleCop Runner</a> – allows a StyleCop analysis to be made of files in a build</li>
<li><a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-TypeMockRunner-Task">Typemock TMockRunner</a> – uses TMockRunner to wrap MSTest, allowing Typemock tests to be run on a private build agent</li>
</ul>
<p>To avoid people going down the wrong path I intend to go back through my older blog posts on these tasks and update them to point at the new resources.</p>
<p>Hope you find these tasks useful. If you find any problems, please <a href="https://github.com/rfennell/vNextBuild/issues">log an issue on GitHub</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Happy 10th Birthday TFS</title>
      <link>https://blog.richardfennell.net/posts/happy-10th-birthday-tfs/</link>
      <pubDate>Fri, 18 Mar 2016 09:01:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/happy-10th-birthday-tfs/</guid>
      <description>&lt;p&gt;Did you know TFS was 10 years old this week? I have been working with TFS all that time, doesn’t time fly, and wow has the product changed from TFS 2005 to 2015/VSTS or what.&lt;/p&gt;
&lt;p&gt;If you want to find out a bit more about the past 10 years try listening to the &lt;a href=&#34;http://radiotfs.com/Show/109/HappyBirthdayTFS&#34;&gt;latest Radio TFS podcast with Brian Harry&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Did you know TFS was 10 years old this week? I have been working with TFS all that time (doesn’t time fly?), and wow, how the product has changed from TFS 2005 to 2015/VSTS.</p>
<p>If you want to find out a bit more about the past 10 years try listening to the <a href="http://radiotfs.com/Show/109/HappyBirthdayTFS">latest Radio TFS podcast with Brian Harry</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>New books on VSTS/TFS ALM DevOps</title>
      <link>https://blog.richardfennell.net/posts/new-books-on-vststfs-alm-devops/</link>
      <pubDate>Thu, 03 Mar 2016 19:13:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-books-on-vststfs-alm-devops/</guid>
      <description>&lt;p&gt;It has been a while since I have mentioned any had new books on TFS/VSTS, and just like buses a couple come along together.&lt;/p&gt;
&lt;p&gt;These two, one from Tarun Arora and the other from Mathias Olausson and Jakob Ehn are both nicely on trend for the big area of interest for many of the companies I am working with at present; best practice ‘cook book’ style guidance on how to best use the tools in an ALM process.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It has been a while since I have mentioned any new books on TFS/VSTS, and just like buses a couple have come along together.</p>
<p>These two, one from Tarun Arora and the other from Mathias Olausson and Jakob Ehn are both nicely on trend for the big area of interest for many of the companies I am working with at present; best practice ‘cook book’ style guidance on how to best use the tools in an ALM process.</p>
<p><a href="http://amzn.to/1TbAy8f"><img loading="lazy" src="http://ecx.images-amazon.com/images/I/51H4uFlvjAL._SX403_BO1,204,203,200_.jpg"></a>           <a href="http://amzn.to/1TbAiWR"><img loading="lazy" src="http://ecx.images-amazon.com/images/I/41Ta9qDtqCL._SX328_BO1,204,203,200_.jpg"></a></p>
<p>If you are working with TFS/VSTS they are worth a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A vNext build task and PowerShell script to generate release notes as part of TFS vNext build.</title>
      <link>https://blog.richardfennell.net/posts/a-vnext-build-task-and-powershell-script-to-generate-release-notes-as-part-of-tfs-vnext-build/</link>
      <pubDate>Tue, 01 Mar 2016 12:03:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-vnext-build-task-and-powershell-script-to-generate-release-notes-as-part-of-tfs-vnext-build/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 22 Mar 2016:&lt;/strong&gt; This task is now available as &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task&#34;&gt;an extension in the VSTS marketplace&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A common request I get from clients is how can I create a custom set of release notes for a build? The standard TFS build report often includes the information required (work items and changesets/commits associated with the build) but not in a format that is easy to redistribute. So I decided to create a set of tools to try to help.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 22 Mar 2016:</strong> This task is now available as <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-GenerateReleaseNotes-Task">an extension in the VSTS marketplace</a></p>
<p>A common request I get from clients is how can I create a custom set of release notes for a build? The standard TFS build report often includes the information required (work items and changesets/commits associated with the build) but not in a format that is easy to redistribute. So I decided to create a set of tools to try to help.</p>
<p>The tools are available on <a href="https://github.com/rfennell">my github account</a> in two forms:</p>
<ul>
<li>a <a href="https://github.com/rfennell/VSTSPowershell">PowerShell script</a> that can be run upon demand from the command line</li>
<li>a <a href="https://github.com/rfennell/vNextBuild">vNext build task</a> that can be added to the TFS/VSTS vNext build process.</li>
</ul>
<p>Both generate a markdown release notes file based on a template passed into the tool. The output report being something like the following:</p>
<blockquote>
<h3 id="release-notes-for-build-samplesolutionmaster">Release notes for build SampleSolution.Master</h3>
<p><strong>Build Number</strong>: 20160229.3<br>
<strong>Build started:</strong> 29/02/16 15:47:58<br>
<strong>Source Branch:</strong> refs/heads/master</p>
<h5 id="associated-work-items">Associated work items</h5>
<ul>
<li><strong>Task 60</strong> [Assigned by: Bill &lt;TYPHOONTFSBill&gt;] Design WP8 client</li>
</ul>
<h5 id="associated-change-setscommits">Associated change sets/commits</h5>
<ul>
<li><strong>ID bf9be94e61f71f87cb068353f58e860b982a2b4b</strong> Added a template</li>
<li><strong>ID 8c3f8f9817606e48f37f8e6d25b5a212230d7a86</strong> Start of the project</li>
</ul></blockquote>
<h2 id="the-template">The Template</h2>
<p>The use of a template allows the user to define the layout and fields shown in the release notes document. It is basically a markdown file with tags to denote the fields (the properties on the JSON response objects returned from the <a href="https://www.visualstudio.com/en-us/integrate/api/overview">VSTS REST API</a>) to be replaced when the tool generates the report file.</p>
<p>The only real change from standard markdown is the use of the <em>@@TAG@@</em> blocks to denote areas that should be looped over i.e: the points where we get the details of all the work items and commits associated with the build.</p>
<pre tabindex="0"><code>#Release notes for build $defname    
**Build Number**  : $($build.buildnumber)      
**Build started** : $(&#34;{0:dd/MM/yy HH:mm:ss}&#34; -f [datetime]$build.startTime)      
**Source Branch** : $($build.sourceBranch)    
###Associated work items    
@@WILOOP@@    
* **$($widetail.fields.&#39;System.WorkItemType&#39;) $($widetail.id)** [Assigned by: $($widetail.fields.&#39;System.AssignedTo&#39;)] $($widetail.fields.&#39;System.Title&#39;)    
@@WILOOP@@    
###Associated change sets/commits    
@@CSLOOP@@    
* **ID $($csdetail.changesetid)$($csdetail.commitid)** $($csdetail.comment)      
@@CSLOOP@@   
</code></pre><p><strong>Note 1</strong>: We can return the build’s <strong>startTime</strong> and/or <strong>finishTime</strong>; remember that if you are running the template within an automated build, the build by definition has not finished, so the <strong>finishTime</strong> property is empty and can’t be parsed. This does not stop the generation of the release notes, but an error is logged in the build logs.</p>
<p><strong>Note 2</strong>: We have some special handling in the <em>@@CSLOOP@@</em> section: we include both the <em>changesetid</em> and the <em>commitid</em> values, but only one of these will contain a value; the other is blank. This allows the template to work for both Git and TFVC builds.</p>
<p>Behind the scenes, each line of the template is evaluated as a line of PowerShell in turn, with the in-memory versions of the objects providing the runtime values. The objects available at runtime are:</p>
<ul>
<li><strong>$build</strong> – the build details returned by the <a href="https://www.visualstudio.com/integrate/api/build/builds#Getbuilddetails">REST call Get Build Details</a></li>
<li><strong>$workItems</strong> – the list of work items associated with the build returned by the <a href="https://www.visualstudio.com/integrate/api/build/builds#GetbuilddetailsWorkitems">REST call Build Work Items</a></li>
<li><strong>$widetail</strong> – the details of a given work item inside the loop returned by the <a href="https://www.visualstudio.com/integrate/api/wit/work-items#Getaworkitem">REST call Get Work Item</a></li>
<li><strong>$changesets</strong> – the list of changesets/commits associated with the build, returned by the <a href="https://www.visualstudio.com/integrate/api/build/builds#GetbuilddetailsChanges">REST call Build Changes</a></li>
<li><strong>$csdetail</strong> – the details of a given changeset/commit inside the loop, returned by the REST call to <a href="https://www.visualstudio.com/integrate/api/tfvc/changesets">Changes</a> or <a href="https://www.visualstudio.com/integrate/api/git/commits#Getacommit">Commit</a>, depending on whether it is a Git or TFVC based build</li>
</ul>
<p><a href="https://github.com/rfennell/VSTSPowershell/blob/master/REST/templatedump.md">There is a templatedump.md file in the PowerShell repo that just dumps out all the available fields</a>, to help you find the available options.</p>
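<p>To illustrate the evaluation model described above, the core of the expansion can be sketched in a few lines of PowerShell (a simplified illustration, not the actual task code):</p>
<pre tabindex="0"><code># Simplified sketch of the template expansion - not the actual task code
# $build would really come from the Get Build Details REST call
$build = [pscustomobject]@{ buildnumber = &#34;20160229.3&#34;; sourceBranch = &#34;refs/heads/master&#34; }

foreach ($line in Get-Content &#34;template.md&#34;) {
    # Each line is treated as a double-quoted PowerShell string, so $(...) blocks are evaluated
    $ExecutionContext.InvokeCommand.ExpandString($line) | Out-File &#34;releasenotes.md&#34; -Append
}
</code></pre>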
<h2 id="differences-between-the-script-and-the-task">Differences between the script and the task</h2>
<p>The main difference between the PowerShell script and the build task is the way the connection is made to the REST API. Within the build task we pickup the access token from the build agent’s context. For the PowerShell script we need to pass credentials in some form or the other, either via parameters or using the default Windows credentials.</p>
<h2 id="usage">Usage</h2>
<h3 id="powershell">PowerShell</h3>
<p>The script can be used in a number of ways</p>
<p>To generate a report for a specific build on VSTS</p>
<pre tabindex="0"><code> .\Create-ReleaseNotes.ps1 -collectionUrl https://yoursite.visualstudio.com/defaultcollection -teamproject &#34;Scrum Project&#34; -defname &#34;BuildTest&#34; -outputfile &#34;releasenotes.md&#34; -templatefile &#34;template.md&#34; -buildnumber &#34;yourbuildnum&#34; -password yourpersonalaccesstoken
</code></pre><p>Or for the last successful build just leave out the buildnumber</p>
<pre tabindex="0"><code> .\Create-ReleaseNotes.ps1 -collectionUrl https://yoursite.visualstudio.com/defaultcollection -teamproject &#34;Scrum Project&#34; -defname &#34;BuildTest&#34; -outputfile &#34;releasenotes.md&#34; -templatefile &#34;template.md&#34; -password yourpersonalaccesstoken
</code></pre><p>Authentication options</p>
<ol>
<li>VSTS with a <a href="https://www.visualstudio.com/en-us/get-started/setup/use-personal-access-tokens-to-authenticate">personal access token</a> – just provide the token using the <strong>password</strong> parameter</li>
<li>If you are using VSTS and want to use <a href="https://www.visualstudio.com/en-us/integrate/get-started/auth/overview">alternate credentials</a> just pass a <strong>username</strong> and <strong>password</strong></li>
<li>If you are using the script with an on-premises TFS just leave off both the <strong>username</strong> and <strong>password</strong> and the Windows default credentials will be used.</li>
</ol>
<p>In all cases the debug output is something like the following</p>
<pre tabindex="0"><code>  
VERBOSE: Getting details of build [BuildTest] from server [https://yoursite.visualstudio.com/defaultcollection/Scrum Project]  
VERBOSE: Getting build number [20160228.2]  
VERBOSE:    Get details of workitem 504  
VERBOSE:    Get details of changeset/commit ba7e613388c06b8440c9e7601a8d6fa29d588051  
VERBOSE:    Get details of changeset/commit 52570b2abb80b61a4a629dfd31c0ce071c487709  
VERBOSE: Writing output file for build [BuildTest] [20160228.2].
</code></pre><p>You should expect to get a report like the example shown at the start of this post.</p>
<h3 id="build-task">Build Task</h3>
<p>The build task needs to be built and uploaded as per the <a href="https://github.com/rfennell/vNextBuild/wiki/Build-Tasks">standard process detailed on my vNext Build’s Wiki</a> (I am considering creating a build extensions package to make this easier, so keep an eye on this blog).</p>
<p>Once the tool is uploaded to your TFS or VSTS server it can be added to a build process</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_298.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_294.png" title="image"></a></p>
<p>The task takes two parameters</p>
<ul>
<li>The output file name, which defaults to <strong>$(Build.ArtifactStagingDirectory)\releasenotes.md</strong></li>
<li>The template file name, which should point to a file in source control.</li>
</ul>
<p>There is no need to pass credentials; this is done automatically.</p>
<p>When run you should expect to see build logs as below and a release notes file in your drops location.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_299.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_295.png" title="image"></a></p>
<h2 id="summary">Summary</h2>
<p>So I hope some people find these tools useful in generating release notes. Let me know if they help and how they could be improved.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using MSDeploy to deploy to nested virtual applications in Azure Web Apps</title>
      <link>https://blog.richardfennell.net/posts/using-msdeploy-to-deploy-to-nested-virtual-applications-in-azure-web-apps/</link>
      <pubDate>Thu, 25 Feb 2016 20:52:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-msdeploy-to-deploy-to-nested-virtual-applications-in-azure-web-apps/</guid>
      <description>&lt;p&gt;Azure provides many ways to scale and structure web site and virtual applications. I recently needed to deploy the following structure where each service endpoint was its own Visual Studio Web Application Project built as a MSDeploy Package&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://demo.azurewebsites.net/api/service1&#34;&gt;http://demo.azurewebsites.net/api/service1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://demo.azurewebsites.net/api/service2&#34;&gt;http://demo.azurewebsites.net/api/service2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://demo.azurewebsites.net/api/service3&#34;&gt;http://demo.azurewebsites.net/api/service3&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To do this in the Azure Portal I …&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Created a Web App for the site &lt;strong&gt;&lt;a href=&#34;http://demo.azurewebsites.net&#34;&gt;http://demo.azurewebsites.net&lt;/a&gt;&lt;/strong&gt;. This pointed to the disk location &lt;strong&gt;site\wwwroot&lt;/strong&gt;; I disabled the folder as an application as there is no application running at this level&lt;/li&gt;
&lt;li&gt;Created a virtual directory &lt;strong&gt;api&lt;/strong&gt; pointing to &lt;strong&gt;site\wwwroot\api&lt;/strong&gt;, again disabling this folder as an application&lt;/li&gt;
&lt;li&gt;Created a virtual application for each of my services, each with their own folder&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_297.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_293.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Azure provides many ways to scale and structure web sites and virtual applications. I recently needed to deploy the following structure where each service endpoint was its own Visual Studio Web Application Project built as an MSDeploy package</p>
<ul>
<li><a href="http://demo.azurewebsites.net/api/service1">http://demo.azurewebsites.net/api/service1</a></li>
<li><a href="http://demo.azurewebsites.net/api/service2">http://demo.azurewebsites.net/api/service2</a></li>
<li><a href="http://demo.azurewebsites.net/api/service3">http://demo.azurewebsites.net/api/service3</a></li>
</ul>
<p>To do this in the Azure Portal I …</p>
<ol>
<li>Created a Web App for the site <strong><a href="http://demo.azurewebsites.net">http://demo.azurewebsites.net</a></strong>. This pointed to the disk location <strong>site\wwwroot</strong>; I disabled the folder as an application as there is no application running at this level</li>
<li>Created a virtual directory <strong>api</strong> pointing to <strong>site\wwwroot\api</strong>, again disabling this folder as an application</li>
<li>Created a virtual application for each of my services, each with their own folder</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_297.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_293.png" title="image"></a></p>
<p>I knew from past experience I could use MSDeploy to deploy to the root site or the <strong>api</strong> virtual directory. However, when I tried to deploy to any of the service virtual applications I got an error that the web site could not be created. Now I would not expect MSDeploy to create a directory, so I knew something was wrong at the Azure end.</p>
<p>The fix in the end was simple: it seems the service folders, e.g. <strong>site\wwwroot\api\service1</strong>, had not been created by the Azure Portal when I created the virtual applications. I FTP’d onto the web application and created the folder <strong>site\wwwroot\api\service1</strong>; once this was done MSDeploy worked perfectly, and I could build the structure I wanted.</p>
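<p>For reference, deploying one of the packages directly to a nested virtual application with MSDeploy looks something like the following. This is a sketch: the package name, site name and credentials are placeholders, and the values for your site will come from its publish profile.</p>
<pre tabindex="0"><code>msdeploy.exe -verb:sync ^
 -source:package=&#34;Service1.zip&#34; ^
 -dest:auto,computerName=&#34;https://demo.scm.azurewebsites.net:443/msdeploy.axd?site=demo&#34;,userName=&#34;$demo&#34;,password=&#34;yourpublishpassword&#34;,authType=&#34;Basic&#34; ^
 -setParam:name=&#34;IIS Web Application Name&#34;,value=&#34;demo/api/service1&#34;
</code></pre>
<p>The <strong>IIS Web Application Name</strong> parameter is what points the package at the nested virtual application rather than the root site.</p>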
]]></content:encoded>
    </item>
    <item>
      <title>Running Pester PowerShell tests in the VSTS hosted build service</title>
      <link>https://blog.richardfennell.net/posts/running-pester-powershell-tests-in-the-vsts-hosted-build-service/</link>
      <pubDate>Sun, 21 Feb 2016 23:43:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-pester-powershell-tests-in-the-vsts-hosted-build-service/</guid>
      <description>&lt;p&gt;**Updated 22 Mar 2016 **This task is available in the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-PesterRunner-Task&#34;&gt;VSTS Marketplace&lt;/a&gt; If you are using &lt;a href=&#34;https://github.com/pester/Pester/wiki&#34;&gt;Pester&lt;/a&gt; to unit test your PowerShell code then there is a good chance you will want to include it in your automated build process. To do this, you need to get Pester installed on your build machine. The usual options would be&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Manual install from &lt;a href=&#34;https://github.com/pester/Pester&#34;&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Install via &lt;a href=&#34;https://chocolatey.org/packages/pester&#34;&gt;Chocolatey&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Install via &lt;a href=&#34;https://www.nuget.org/packages/Pester/&#34;&gt;Nuget&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you own the build agent VM then any of these options are good; you can even write the NuGet restore into your build process itself. However there is a problem: the first two options need administrative access as they put the Pester module in the &lt;strong&gt;$PSModules&lt;/strong&gt; folder (under ‘Program Files’), so they can’t be used on VSTS’s hosted build system, where you are not an administrator. This means you are left with copying the module (and associated functions folder) to some local working folder and running it manually; but do you really want to have to store the Pester module in your source repo? My solution was to write a vNext build task to deploy the Pester files and run the Pester tests. &lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb[12].png&#34;&gt;&lt;img alt=&#34;image_thumb[12]&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb[12]_thumb.png&#34; title=&#34;image_thumb[12]&#34;&gt;&lt;/a&gt; The task takes two parameters&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 22 Mar 2016:</strong> This task is available in the <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-PesterRunner-Task">VSTS Marketplace</a>. If you are using <a href="https://github.com/pester/Pester/wiki">Pester</a> to unit test your PowerShell code then there is a good chance you will want to include it in your automated build process. To do this, you need to get Pester installed on your build machine. The usual options would be</p>
<ul>
<li>Manual install from <a href="https://github.com/pester/Pester">GitHub</a></li>
<li>Install via <a href="https://chocolatey.org/packages/pester">Chocolatey</a></li>
<li>Install via <a href="https://www.nuget.org/packages/Pester/">Nuget</a></li>
</ul>
<p>If you own the build agent VM then any of these options are good; you can even write the NuGet restore into your build process itself. However there is a problem: the first two options need administrative access as they put the Pester module in the <strong>$PSModules</strong> folder (under ‘Program Files’), so they can’t be used on VSTS’s hosted build system, where you are not an administrator. This means you are left with copying the module (and associated functions folder) to some local working folder and running it manually; but do you really want to have to store the Pester module in your source repo? My solution was to write a vNext build task to deploy the Pester files and run the Pester tests. <a href="/wp-content/uploads/sites/2/historic/image_thumb[12].png"><img alt="image_thumb[12]" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb[12]_thumb.png" title="image_thumb[12]"></a> The task takes two parameters</p>
<ul>
<li>The root folder to look for test scripts with the naming convention <strong>*.tests.ps1</strong>. Defaults to <strong>$(Build.SourcesDirectory)\*</strong></li>
<li>The results file name, defaults to <strong>$(Build.SourcesDirectory)\Test-Pester.XML</strong></li>
</ul>
<p>The Pester task does not itself upload the test results; it just throws an error if tests fail. It relies on the standard test results upload task. Add this task and set</p>
<ul>
<li>it to look for nUnit format files</li>
<li>it already defaults to the correct file name pattern.</li>
<li>IMPORTANT: as the Pester task will stop the build on an error, you need to set the ‘Always run’ option to make sure the results are published.</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_thumb[11].png"><img alt="image_thumb[11]" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb[11]_thumb.png" title="image_thumb[11]"></a> Once all this is added to your build you can see your Pester test results in the build summary <a href="/wp-content/uploads/sites/2/historic/image_thumb[10].png"><img alt="image_thumb[10]" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb[10]_thumb.png" title="image_thumb[10]"></a> <a href="/wp-content/uploads/sites/2/historic/image_thumb[14].png"><img alt="image_thumb[14]" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb[14]_thumb.png" title="image_thumb[14]"></a> You can find the task in my <a href="https://github.com/rfennell/vNextBuild">vNextBuild repo</a></p>
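<p>If you have not used Pester before, a minimal test file matching the <strong>*.tests.ps1</strong> naming convention looks something like this (a hypothetical example; the Add function is assumed to live in a script next to the tests):</p>
<pre tabindex="0"><code># Add.tests.ps1 - a minimal, hypothetical Pester test file
# Dot-source the script containing the function under test
. &#34;$PSScriptRoot\Add.ps1&#34;

Describe &#34;Add&#34; {
    It &#34;adds two numbers&#34; {
        Add 1 2 | Should Be 3
    }
}
</code></pre>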
]]></content:encoded>
    </item>
    <item>
      <title>A vNext build task to get artifacts from a different TFS server</title>
      <link>https://blog.richardfennell.net/posts/a-vnext-build-task-to-get-artifacts-from-a-different-tfs-server/</link>
      <pubDate>Thu, 18 Feb 2016 12:53:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-vnext-build-task-to-get-artifacts-from-a-different-tfs-server/</guid>
      <description>&lt;p&gt;With the advent of &lt;a href=&#34;https://blogs.msdn.microsoft.com/bharry/2016/02/10/team-foundation-server-2015-update-2-rc-1-is-available/&#34;&gt;TFS 2015.2 RC&lt;/a&gt; (and the associated VSTS release) we have seen &lt;a href=&#34;https://blogs.msdn.microsoft.com/visualstudioalm/2015/11/28/deploy-artifacts-from-onprem-tfs-server-with-release-management-service/&#34;&gt;the short term removal of the ‘External TFS Build’ option for the Release Management artifacts source&lt;/a&gt;. This causes me a bit of a problem as I wanted to try out the new on premises vNext based Release Management features on 2015.2, but don’t want to place the RC on my production server (though there is go live support). Also the &lt;a href=&#34;https://blogs.msdn.microsoft.com/visualstudioalm/2015/11/28/deploy-artifacts-from-onprem-tfs-server-with-release-management-service/&#34;&gt;ability to get artifacts from an on premises TFS instance when using VSTS&lt;/a&gt; open up a number of scenarios, something I know some of my clients had been investigating.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With the advent of <a href="https://blogs.msdn.microsoft.com/bharry/2016/02/10/team-foundation-server-2015-update-2-rc-1-is-available/">TFS 2015.2 RC</a> (and the associated VSTS release) we have seen <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2015/11/28/deploy-artifacts-from-onprem-tfs-server-with-release-management-service/">the short term removal of the ‘External TFS Build’ option for the Release Management artifacts source</a>. This causes me a bit of a problem as I wanted to try out the new on premises vNext based Release Management features on 2015.2, but don’t want to place the RC on my production server (though there is go live support). Also the <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2015/11/28/deploy-artifacts-from-onprem-tfs-server-with-release-management-service/">ability to get artifacts from an on premises TFS instance when using VSTS</a> opens up a number of scenarios, something I know some of my clients had been investigating.</p>
<p>To get around this blocker I have written a <a href="https://github.com/rfennell/vNextBuild/blob/master/Tasks/GetArtifactFromUncShare/GetArtifactFromUncShare.ps1">vNext build task</a> that gets a build artifact from the UNC drop. It supports both XAML and vNext builds, thus replacing the built-in artifact linking features.</p>
<h3 id="usage">Usage</h3>
<p>To use the new task</p>
<ul>
<li>Get the task from my <a href="https://github.com/rfennell/vNextBuild">vNextBuild repo</a> (build using the instructions on the repo’s wiki) and install it on your TFS 2015.2 instance (also use the notes on the repo’s wiki).</li>
<li>In your build, disable the automatic getting of the artifacts for the environment (though in some scenarios you might choose to use both the built-in linking and my custom task)</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_291.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_287.png" title="image"></a></p>
<ul>
<li>Add the new task to your environment’s release process, the parameters are
<ul>
<li>TFS Uri – the Uri of the TFS server inc. The TPC name</li>
<li>Team Project – the project containing the source build</li>
<li>Build Definition name – name of the build (can be XAML or vNext)</li>
<li>Artifact name – the name of the build artifact (seems to be ‘drop’ if a XAML build)</li>
<li>Build Number – default is to get the latest successful completed build, but you can pass a specific build number</li>
<li>Username/Password – if you don’t want to use default credentials (the user the build agent is running as), provide these instead. They are passed as ‘basic auth’ so can be used against an on-premises TFS (if basic auth is enabled in IIS) or VSTS (with alternate credentials enabled).</li>
</ul>
</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_292.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_288.png" title="image"></a></p>
<p>When the task runs it should drop artifacts in the same location as the standard mechanism, so they can be picked up by any other tasks on the release pipeline using a path similar to <strong>$(System.DefaultWorkingDirectory)\SABS.Master.CI\drop</strong></p>
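<p>Under the covers the task uses the TFS REST API to find the build to copy. As a rough sketch (assuming the 2.0 API version and default credentials; the server and definition names are placeholders), finding the latest successful build for a definition looks like this:</p>
<pre tabindex="0"><code># Sketch: find the latest successful build for a definition (2.0 REST API assumed)
$tfsUri = &#34;https://yourserver:8080/tfs/DefaultCollection&#34;
$teamProject = &#34;YourTeamProject&#34;

# Resolve the build definition name to its id
$defs = Invoke-RestMethod -UseDefaultCredentials -Uri &#34;$tfsUri/$teamProject/_apis/build/definitions?name=BuildTest&amp;api-version=2.0&#34;
$defId = $defs.value[0].id

# Get the most recent completed, successful build for that definition
$builds = Invoke-RestMethod -UseDefaultCredentials -Uri &#34;$tfsUri/$teamProject/_apis/build/builds?definitions=$defId&amp;statusFilter=completed&amp;resultFilter=succeeded&amp;`$top=1&amp;api-version=2.0&#34;
$builds.value[0].buildNumber
</code></pre>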
<h3 id="limitations">Limitations</h3>
<p>The task in its current form does not provide any linking of artifacts to the build reports, or allow the selection of build versions when the release is created. This removes some audit trail features.</p>
<p>However, it does provide a means to get a pair of TFS servers working together, so it can certainly enable some R&amp;D scenarios while we await the 2015.2 RTM and/or the ‘official’ linking of external TFS builds as artifacts.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running a SaaS service at scale</title>
      <link>https://blog.richardfennell.net/posts/running-a-saas-service-at-scale/</link>
      <pubDate>Fri, 12 Feb 2016 17:16:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-a-saas-service-at-scale/</guid>
      <description>&lt;p&gt;Brian Harry has done a couple of very interesting posts (&lt;a href=&#34;https://blogs.msdn.microsoft.com/bharry/2016/02/05/vs-team-services-incidents-on-feb-3-4/&#34;&gt;post 1&lt;/a&gt; and &lt;a href=&#34;https://blogs.msdn.microsoft.com/bharry/2016/02/06/a-bit-more-on-the-feb-3-and-4-incidents/&#34;&gt;post 2&lt;/a&gt;) on the recent outages of the VSTS service. Whether you use VSTS or not they make interesting reading for anyone who is involved in running SaaS based systems, or anything at scale.&lt;/p&gt;
&lt;p&gt;From the posts the obvious lesson is that you cannot overstate the importance of:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;in-production monitoring&lt;/li&gt;
&lt;li&gt;having a response plan&lt;/li&gt;
&lt;li&gt;doing a proper root cause analysis&lt;/li&gt;
&lt;li&gt;and putting steps in place to stop the problem happening again&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Well worth a read&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Brian Harry has done a couple of very interesting posts (<a href="https://blogs.msdn.microsoft.com/bharry/2016/02/05/vs-team-services-incidents-on-feb-3-4/">post 1</a> and <a href="https://blogs.msdn.microsoft.com/bharry/2016/02/06/a-bit-more-on-the-feb-3-and-4-incidents/">post 2</a>) on the recent outages of the VSTS service. Whether you use VSTS or not they make interesting reading for anyone who is involved in running SaaS based systems, or anything at scale.</p>
<p>From the posts the obvious lesson is that you cannot overstate the importance of:</p>
<ul>
<li>in-production monitoring</li>
<li>having a response plan</li>
<li>doing a proper root cause analysis</li>
<li>and putting steps in place to stop the problem happening again</li>
</ul>
<p>Well worth a read</p>
]]></content:encoded>
    </item>
    <item>
      <title>Repost: What I learnt extending my VSTS Release Process to on-premises Lab Management Network Isolated Environments</title>
      <link>https://blog.richardfennell.net/posts/repost-what-i-learnt-extending-my-vsts-release-process-to-on-premises-lab-management-network-isolated-environments/</link>
      <pubDate>Fri, 12 Feb 2016 17:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/repost-what-i-learnt-extending-my-vsts-release-process-to-on-premises-lab-management-network-isolated-environments/</guid>
      <description>&lt;p&gt;This a a repost of a guest article first posted on the Microsoft UK Developers Blog: &lt;a href=&#34;https://www.microsoft.com/en-gb/developers/articles/week01feb16/how-to-extend-a-VSTS-release-process-to-on-premises/&#34;&gt;How to extend a VSTS release process to on-premises&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Note that since I wrote the original post there have been some changes on &lt;a href=&#34;https://blogs.msdn.microsoft.com/visualstudioalm/2016/02/12/impact-of-new-release-management-orchestration-features/&#34;&gt;VSTS&lt;/a&gt; and the release of &lt;a href=&#34;https://blogs.msdn.microsoft.com/bharry/2016/02/10/team-foundation-server-2015-update-2-rc-1-is-available/&#34;&gt;TFS 2015.2 RC1&lt;/a&gt;. These mean there is no longer an option to pull build artifacts from an external TFS server as part of a release, invalidating some of the options this post discusses. I have struck out the outdated sections. The rest of the post is still valid, especially the section on where to update configuration settings. The release of &lt;a href=&#34;https://blogs.msdn.microsoft.com/bharry/2016/02/10/team-foundation-server-2015-update-2-rc-1-is-available/&#34;&gt;TFS 2015.2 RC1&lt;/a&gt; actually makes many of the options easier as you don’t have to bridge between on-premises TFS and VSTS, as both build and release features are on the same server.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>This is a repost of a guest article first posted on the Microsoft UK Developers Blog: <a href="https://www.microsoft.com/en-gb/developers/articles/week01feb16/how-to-extend-a-VSTS-release-process-to-on-premises/">How to extend a VSTS release process to on-premises</a></p>
<p>Note that since I wrote the original post there have been some changes on <a href="https://blogs.msdn.microsoft.com/visualstudioalm/2016/02/12/impact-of-new-release-management-orchestration-features/">VSTS</a> and the release of <a href="https://blogs.msdn.microsoft.com/bharry/2016/02/10/team-foundation-server-2015-update-2-rc-1-is-available/">TFS 2015.2 RC1</a>. These mean there is no longer an option to pull build artifacts from an external TFS server as part of a release, invalidating some of the options this post discusses. I have struck out the outdated sections. The rest of the post is still valid, especially the section on where to update configuration settings. The release of <a href="https://blogs.msdn.microsoft.com/bharry/2016/02/10/team-foundation-server-2015-update-2-rc-1-is-available/">TFS 2015.2 RC1</a> actually makes many of the options easier as you don’t have to bridge between on-premises TFS and VSTS, as both build and release features are on the same server.</p>
<hr>
<h3 id="background">Background</h3>
<p>Visual Studio Team Services (VSTS) provides a completely new version of <a href="https://msdn.microsoft.com/library/vs/alm/release/overview-rmpreview">Release Management</a>, replacing the <a href="https://msdn.microsoft.com/library/vs/alm/release/overview-rm2015">version shipped with TFS 2013/2015</a>. This new system is based on the same cross platform agent model as the new <a href="https://msdn.microsoft.com/Library/vs/alm/Build/overview">vNext build</a> system shipped with TFS 2015 (and also available on VSTS). At present this new Release Management system is only available on VSTS, but the <a href="https://www.visualstudio.com/en-us/news/release-archive-vso.aspx">features timeline</a> suggests we should see it on-premises in the upcoming update 2015.2.</p>
<p>You might immediately think that as this feature is only available in VSTS at present, that you cannot use this new release management system with on-premises services, but this would not be true. The Release Management team have provided an excellent <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/28/deploy-artifacts-from-onprem-tfs-server-with-release-management-service.aspx">blog post</a> on running an agent connected to your VSTS instance inside your on-premises network to enable hybrid scenarios.</p>
<p>This works well for deploying to domain connected targets, especially if you are using <a href="https://msdn.microsoft.com/en-us/library/azure/dn790204.aspx">Azure Active Directory Sync</a> to sync your corporate domain and AAD to provide a <a href="https://www.visualstudio.com/en-us/get-started/setup/manage-organization-access-for-your-account-vs">directory backed VSTS instance</a>. In this case you can use a single corporate domain account to connect to VSTS and to the domain services you wish to deploy to from the on-premises agent.</p>
<p>However, I make <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">extensive use of TFS Lab Management</a> to provide isolated dev/test environments (linked to an on-premises TFS 2015.1 instance). If I want to deploy to these VMs it adds complexity in how I need to manage authentication; as I don’t want to have to place a VSTS build agent in each transiently created dev/test lab. One because it is complex and two because there is a <a href="https://www.visualstudio.com/products/visual-studio-team-services-pricing-vs">cost to having more than one self provisioned vNext build agent</a>.</p>
<p>It is fair to say that deploying to an on-premises Lab Management environment from a VSTS instance is an edge case, but the same basic process will be needed when the new Release Management features become available on-premises.</p>
<p>Now, I would be the first to say that there is a good case to look at a move away from Lab Management to <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/18/getting-started-with-devtest-lab-for-azure.aspx">using Azure Dev Labs</a> which are currently in preview, but Dev Labs needs fuller <a href="https://azure.microsoft.com/en-gb/documentation/articles/resource-group-overview/">Azure Resource Manager</a> support before we can replicate the network isolated Lab Management environments I need.</p>
<h3 id="the-example">The Example</h3>
<p>So at this time, I still need to be able to use the new Release Management with my current Lab Management network isolated labs, but this raises some issues of authentication and just what is running where. So let us work through an example; say I want to deploy a SQL DB via a DACPAC and a web site via MSDeploy on the infrastructure shown below.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_285.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_281.png" title="image"></a></p>
<p>Both the target SQL and Web servers live inside the Lab Management isolated network on the <strong>proj.local</strong> domain, but have DHCP assigned addresses on the corporate LAN in the form <strong>vslm-[guid].corp.com</strong> (managed by Lab Management), so I can access them from the build agent with appropriate credentials (a login for the <strong>proj.local</strong> domain within the network isolated lab).</p>
<p>The first step is to <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/28/deploy-artifacts-from-onprem-tfs-server-with-release-management-service.aspx">install a VSTS build agent linked to my VSTS instance</a>; once this is done we can start to create our release pipeline. The first stage is to get the artifacts we need to deploy i.e. the output of builds. These could be XAML or vNext builds on the VSTS instance, builds from the on-premises TFS instance, or Jenkins builds. Remember a single release can deploy any number of artifacts (builds). It is this fact that makes this setup not as strange as it initially appears; we are just using VSTS Release Management to orchestrate a deployment to on-premises systems.</p>
<p>The problem we have is that though our release now has artifacts, we now need to run some commands on the VM running the vNext Build Agent to do the actual deployment. VSTS provides a number of deployment tasks to help in this area. Unfortunately, at the time of writing, the list of deployment tasks in VSTS is somewhat Azure focused, so not that much use to me.</p>
<p><a href="/blogs/rfennell/image.axd?picture=clip_image004.jpg"><img alt="clip_image004" loading="lazy" src="/blogs/rfennell/image.axd?picture=clip_image004_thumb.jpg" title="clip_image004"></a></p>
<p>This will change over time as more tasks get released, you can see what is being developed on the <a href="https://github.com/Microsoft/vso-agent-tasks">VSO Agent Task GitHub Repo</a> (and of course you could install versions from this repo if you wish).</p>
<p>So for now I need to use my own scripts; as we are on a Windows based system (not Linux or Mac) this means some PowerShell scripts.</p>
<p>The next choice becomes ‘do I run the script on the Build Agent VM or remotely on the target VM’ (within the network isolated environment). The answer is the age-old consultant’s answer ‘it depends’. In the case of both DACPAC and MSDeploy deployments, there is the option to do remote deployment i.e. run the deployment command on the Build Agent VM and it remotely connects to the target VMs in the network isolated environment. The problem with this way of working is that I would need to open more ports on the SQL and Web VMs to allow the remote connections; I did not want to do this.</p>
<p>The alternative is to use PowerShell remoting, in this model I trigger the script on the Build Agent VM, but it uses PowerShell remoting to run the command on the target VM. For this I only need to enable remote PowerShell on the target VMs, this is done by running the following command and follow prompts on each target VM to set up the required services and open the correct ports on the target VMs firewall.</p>
<pre tabindex="0"><code>winrm quickconfig
</code></pre><p>This is something we are starting to do as standard to allow remote management via PowerShell on all our VMs.</p>
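<p>As a sketch, once remoting is enabled on a target VM you can confirm connectivity from the Build Agent VM with something of this form; the machine name and account below are placeholders for the <strong>corp.com</strong> alias and <strong>proj.local</strong> login discussed above:</p>
<pre tabindex="0"><code># Run on the Build Agent VM to confirm PowerShell remoting to a lab VM
$cred = Get-Credential &#34;proj\someuser&#34;   # placeholder proj.local account
Invoke-Command -ComputerName &#34;vslm-12345678.corp.com&#34; -Credential $cred -ScriptBlock {
    # This block runs on the target VM inside the isolated network
    Write-Output &#34;Connected to $env:COMPUTERNAME&#34;
}
</code></pre>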
<p>So at this point it all seems fairly straightforward: run a couple of remote PowerShell scripts and all is good. But no, there is a problem.</p>
<p>A key feature of Release Management is that you can provide different configurations for different environments e.g. the DB connection string is different for the QA lab as opposed to production. These values are stored securely in Release Management and applied as needed.</p>
<p><a href="/blogs/rfennell/image.axd?picture=clip_image006.jpg"><img alt="clip_image006" loading="lazy" src="/blogs/rfennell/image.axd?picture=clip_image006_thumb.jpg" title="clip_image006"></a></p>
<p>The way these variables are presented is as environment variables on the Build Agent VM, hence they can be accessed from PowerShell in the form <strong>$env:__DOMAIN__</strong>. IT IS IMPORTANT TO REMEMBER that they are not presented on any target VMs in the isolated lab network environment, or to these VMs via PowerShell remoting.</p>
<p>So if we are intending to use remote PowerShell execution for our deployments we can’t just access the settings as environment variables within the scripts being run remotely; we have to pass them in as PowerShell command line arguments.</p>
<p>This works OK for the DACPAC deployment as we only need to pass in a few fixed arguments. For example, when passing the package name, target server and DB name using the Release Management variables in their <em>$(variable)</em> form, the PowerShell script arguments become:</p>
<pre tabindex="0"><code>-DBPackage $(DBPACKAGE) -TargetDBName $(TARGETDBNAME) -TargetServer $(TARGETSERVERNAME)
</code></pre><p>However, for the MSDeploy deploy there is no simple fixed list of parameters. This is because as well as parameters like package names, we need to <a href="/blogs/rfennell/post/2014/05/01/Changing-WCF-bindings-for-MSDeploy-packages-when-using-Release-Management.aspx">modify the <strong>setparameters.xml</strong> file at deployment time</a> to inject values for our <strong>web.config</strong> from the release management system.</p>
<p>The solution I have adopted is not to try to pass this potentially long list of arguments into a script to be run remotely; the command line just becomes hard to edit without making errors, and needs to be updated each time we add an extra variable.</p>
<p>The alternative is to update the <strong>setparameters.xml</strong> file on the Build Agent VM before we attempt to run it remotely. To this end I have written a custom build task to handle the process which can be found on <a href="https://github.com/rfennell/vNextBuild/tree/master/Tasks/UpdateWebDeployParameters">my GitHub repo</a>. This updates a named <strong>setparameters.xml</strong> file using token replacement based on environment variables set by Release Management. If you would rather automatically find a number of <strong>setparameters.xml</strong> files using wildcards (because you are deploying many sites/services) and update them all with a single set of tokens, have a look at <a href="http://www.colinsalmcorner.com/post/config-per-environment-vs-tokenization-in-release-management">Colin Dembovsky’s build task</a> which does just that.</p>
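<p>The idea behind this sort of token replacement is simple enough; a minimal sketch (illustrative only, not the actual task source, and the file path is just an example) is to swap <strong>__NAME__</strong> style markers for the matching environment variables set by Release Management:</p>
<pre tabindex="0"><code># Minimal sketch of setparameters.xml token replacement (illustrative only)
$file = &#34;$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\WebSite.SetParameters.xml&#34;
$content = Get-Content $file -Raw
foreach ($match in [regex]::Matches($content, &#34;__(\w+)__&#34;))
{
    # Look up a Release Management variable exposed as an environment variable
    $value = [Environment]::GetEnvironmentVariable($match.Groups[1].Value)
    if ($value) { $content = $content.Replace($match.Value, $value) }
}
Set-Content -Path $file -Value $content
</code></pre>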
<p>So given this technique my release steps become:</p>
<p>1. Get the artifacts from the builds to the Build Agent VM.</p>
<p>2. Update the <strong>setparameters.xml</strong> file using environment variables on the Build Agent VM.</p>
<p>3. Copy the downloaded (and modified) artifacts to all the target machines in the environment.</p>
<p>4. On the SQL VM run the <strong>sqlpackage.exe</strong> command to deploy the DACPAC using remote PowerShell execution.</p>
<p>5. On the Web VM run the MSDeploy command using remote PowerShell execution.</p>
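<p>The wrapper scripts for steps 4 and 5 are little more than calls to the underlying tools; a sketch of the DACPAC one might look like the following, though the sqlpackage.exe path and the parameter names are placeholders rather than my exact script:</p>
<pre tabindex="0"><code># Illustrative wrapper for step 4: publish a DACPAC (runs on the SQL VM via remote execution)
param(
    [string]$DBPackage,    # path to the .dacpac copied to the target in step 3
    [string]$TargetDBName,
    [string]$TargetServer
)
# The sqlpackage.exe location is an assumption; adjust for the SQL tooling in your lab
&amp; &#34;C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe&#34; `
    /Action:Publish `
    /SourceFile:$DBPackage `
    /TargetServerName:$TargetServer `
    /TargetDatabaseName:$TargetDBName
</code></pre>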
<p><a href="/blogs/rfennell/image.axd?picture=clip_image008.jpg"><img alt="clip_image008" loading="lazy" src="/blogs/rfennell/image.axd?picture=clip_image008_thumb.jpg" title="clip_image008"></a></p>
<p>The PowerShell scripts I run in the final two tasks are just simple wrappers around the underlying commands. The key fact is that, because they are scripts, remote execution is possible. The targeting of the execution is done by associating each task with a target machine group, and filtering either by name or, in my case, role to target specific VMs.</p>
<p><a href="/blogs/rfennell/image.axd?picture=clip_image010.jpg"><img alt="clip_image010" loading="lazy" src="/blogs/rfennell/image.axd?picture=clip_image010_thumb.jpg" title="clip_image010"></a></p>
<p>In my machine group I have defined both my SQL and Web VMs using their names on the corporate LAN, assigning a role to each to make targeting easier. Note that it is here, in the machine group definition, that I provide the credentials required to access the VMs in my network isolated environment i.e. a <strong>proj.local</strong> set of credentials.</p>
<p><a href="/blogs/rfennell/image.axd?picture=clip_image012.jpg"><img alt="clip_image012" loading="lazy" src="/blogs/rfennell/image.axd?picture=clip_image012_thumb.jpg" title="clip_image012"></a></p>
<p>Once I get all these settings in place I am able to build a product on my VSTS build system (or my on-premises TFS instance) and, using this VSTS-connected but on-premises Build Agent, deploy my DB and web site to a Lab Management network isolated test environment.</p>
<p>There is no reason why I cannot add more tasks to this release pipeline to perform more actions such as run tests (remember the network isolated environment already has TFS Test Agents installed, but they are pointing to the on-premises TFS instance) or to deploy to other environments.</p>
<h3 id="summary">Summary</h3>
<p>As I said before, this is an edge case, but I hope it shows how flexible the new build and release systems can be for both TFS and VSTS.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Release Manager 2015 stalls at the ‘uploading components’ step and error log shows XML load errors</title>
      <link>https://blog.richardfennell.net/posts/release-manager-2015-stalls-at-the-uploading-components-step-and-error-log-shows-xml-load-errors/</link>
      <pubDate>Thu, 11 Feb 2016 09:22:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/release-manager-2015-stalls-at-the-uploading-components-step-and-error-log-shows-xml-load-errors/</guid>
      <description>&lt;p&gt;Whilst seting up a Release Management 2015.1 server we came across a strange problem. The installation appears to go OK. We were able to install the server and from the client created a simple vNext release pipeline and run it. However, the release stalled on the ‘Upload Components’ step.&lt;/p&gt;
&lt;p&gt;Looking in the event log of the VM running the Release Management server we could see many, many errors, all complaining about invalid XML, all in the general form&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Whilst setting up a Release Management 2015.1 server we came across a strange problem. The installation appeared to go OK: we were able to install the server and, from the client, create a simple vNext release pipeline and run it. However, the release stalled on the ‘Upload Components’ step.</p>
<p>Looking in the event log of the VM running the Release Management server we could see many, many errors, all complaining about invalid XML, all in the general form</p>
<pre tabindex="0"><code>Message: Object reference not set to an instance of an object.:

   at Microsoft.TeamFoundation.Release.Data.Model.SystemSettings.LoadXml(Int32 id)
</code></pre><p><strong>Note</strong>: The assembly it was complaining about varied, but all were Release Management Deployer related.</p>
<p>We tried a reinstall on a new server VM, but got the same results.</p>
<p>It turns out the issue was due to the service account that the Release Management server was running as; this was the only thing common between the two server VM instances. We swapped to use ‘Network Service’ and everything leapt into life. All we could assume was that some group policy or similar setting on the service account was placing a restriction on assembly or assembly config file loading.</p>
]]></content:encoded>
    </item>
    <item>
      <title>vNext Build editor filePath control always returns a path even if you did not set a value</title>
      <link>https://blog.richardfennell.net/posts/vnext-build-editor-filepath-control-always-returns-a-path-even-if-you-did-not-set-a-value/</link>
      <pubDate>Mon, 08 Feb 2016 12:27:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vnext-build-editor-filepath-control-always-returns-a-path-even-if-you-did-not-set-a-value/</guid>
      <description>&lt;p&gt;You can use the &lt;strong&gt;filePath&lt;/strong&gt; type in a vNext VSTS/TFS task as shown below &lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;{  
     &amp;#34;name&amp;#34;: &amp;#34;settingsFile&amp;#34;,  
     &amp;#34;type&amp;#34;: &amp;#34;filePath&amp;#34;,  
     &amp;#34;label&amp;#34;: &amp;#34;Settings File&amp;#34;,  
     &amp;#34;defaultValue&amp;#34;: &amp;#34;&amp;#34;,  
     &amp;#34;required&amp;#34;: false,  
     &amp;#34;helpMarkDown&amp;#34;: &amp;#34;Path to single settings files to use (as opposed to files in project folders)&amp;#34;,  
     &amp;#34;groupName&amp;#34;:&amp;#34;advanced&amp;#34;  
   }  
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;to present a file picker dialog in the build editor that allows the user to pick a file or folder in the build’s source repository&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_290.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_286.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;While doing some task development recently I found that this control did not behave as I had expected&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can use the <strong>filePath</strong> type in a vNext VSTS/TFS task as shown below </p>
<pre tabindex="0"><code>{  
     &#34;name&#34;: &#34;settingsFile&#34;,  
     &#34;type&#34;: &#34;filePath&#34;,  
     &#34;label&#34;: &#34;Settings File&#34;,  
     &#34;defaultValue&#34;: &#34;&#34;,  
     &#34;required&#34;: false,  
     &#34;helpMarkDown&#34;: &#34;Path to single settings files to use (as opposed to files in project folders)&#34;,  
     &#34;groupName&#34;:&#34;advanced&#34;  
   }  
</code></pre><p>to present a file picker dialog in the build editor that allows the user to pick a file or folder in the build’s source repository</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_290.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_286.png" title="image"></a></p>
<p>While doing some task development recently I found that this control did not behave as I had expected</p>
<ul>
<li>If a value is explicitly set then the full local path to the selected file or folder (on the build agent) is returned e.g. <em>c:\agent\_work\3\s\yourfolder\yourfile.txt</em> – just as expected</li>
<li>If you do not set a value, or set a value then remove your setting when you edit a build, then you don’t get an empty string, as I had expected. You get the path of the <strong>BUILD_SOURCESDIRECTORY</strong> e.g. <em>c:\agent\_work\3\s</em> – makes sense when you think about it.</li>
</ul>
<p>So, if as in my case you want specific behaviour only when this value is set to something other than the repo root, you need to add some guard code</p>
<pre tabindex="0"><code>if ($settingsFile -eq $Env:BUILD_SOURCESDIRECTORY)
{
    $settingsFile = &#34;&#34;
}
</code></pre><p>Once I did this my task behaved as I needed, only running the code when the user had set an explicit value for the settings file.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A VSTS vNext build task to run StyleCop</title>
      <link>https://blog.richardfennell.net/posts/a-vsts-vnext-build-task-to-run-stylecop/</link>
      <pubDate>Sat, 06 Feb 2016 18:51:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-vsts-vnext-build-task-to-run-stylecop/</guid>
<description>&lt;p&gt;&lt;strong&gt;Updated 22 Mar 2016&lt;/strong&gt; This task is available in the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-StyleCop-Task&#34;&gt;VSTS Marketplace&lt;/a&gt;. I have &lt;a href=&#34;https://blog.richardfennell.net/blogs/rfennell/post/2015/04/03/Running-StyleCop-from-the-command-line-and-in-a-TFS-2015-vNext-build.aspx&#34;&gt;previously posted&lt;/a&gt; on how a PowerShell script can be used to run StyleCop as part of a vNext VSTS/TFS build. Now that I have more experience with vNext tasks it seemed a good time to convert this PowerShell script into a true task that deploys StyleCop and makes it far easier to expose the various parameters StyleCop allows. To this end I have written a new StyleCop task that can be found in my &lt;a href=&#34;https://github.com/rfennell/vNextBuild&#34;&gt;vNext Build Repo&lt;/a&gt;; this has been built to use the &lt;a href=&#34;https://stylecop.codeplex.com/&#34;&gt;4.7.49.0 release of StyleCop&lt;/a&gt; (so you don’t need to install StyleCop on the build machine, and it works well on VSTS). To use this task:&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><strong>Updated 22 Mar 2016</strong> This task is available in the <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-StyleCop-Task">VSTS Marketplace</a>. I have <a href="/blogs/rfennell/post/2015/04/03/Running-StyleCop-from-the-command-line-and-in-a-TFS-2015-vNext-build.aspx">previously posted</a> on how a PowerShell script can be used to run StyleCop as part of a vNext VSTS/TFS build. Now that I have more experience with vNext tasks it seemed a good time to convert this PowerShell script into a true task that deploys StyleCop and makes it far easier to expose the various parameters StyleCop allows. To this end I have written a new StyleCop task that can be found in my <a href="https://github.com/rfennell/vNextBuild">vNext Build Repo</a>; this has been built to use the <a href="https://stylecop.codeplex.com/">4.7.49.0 release of StyleCop</a> (so you don’t need to install StyleCop on the build machine, and it works well on VSTS). To use this task:</p>
<ol>
<li>Clone the repo</li>
<li><a href="https://github.com/rfennell/vNextBuild/wiki/Build-Tasks">Build the tasks using Gulp</a></li>
<li><a href="https://github.com/rfennell/vNextBuild/wiki/Build-Tasks">Upload the task</a> you require to your VSTS or TFS instance</li>
</ol>
<p>Once this is done you can add the task to your build. You probably won’t need to set any parameters as long as you have <strong>settings.stylecop</strong> files to define your StyleCop ruleset in the same folders as your <strong>.CSPROJ</strong> files (or are happy with the default rulesets). If you do want to set parameters your options are:</p>
<ul>
<li>TreatStyleCopViolationsErrorsAsWarnings - Treat StyleCop violations errors as warnings, if set to False any StyleCop violations will cause the build to fail (default false).</li>
</ul>
<p>And on the advanced panel</p>
<ul>
<li>MaximumViolationCount - Maximum violations before analysis stops (default 1000)</li>
<li>ShowOutput - Sets the flag so StyleCop scanner outputs progress to the console (default false)</li>
<li>CacheResults - Cache analysis results for reuse (default false)</li>
<li>ForceFullAnalysis - Force complete re-analysis (default true)</li>
<li>AdditionalAddInPath - Path to any custom rule sets folder; the directory cannot be a sub-directory of the current directory at runtime as this is automatically scanned. This folder must contain your custom DLL as well as StyleCop.dll and StyleCop.CSharp.dll else you will get load errors</li>
<li>SettingsFile - Path to single settings files to use for all analysis (as opposed to <strong>settings.stylecop</strong> files in project folders)</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_288.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_284.png" title="image"></a> When you run the build with the new task you should expect to see a summary of the StyleCop run on the right <a href="/wp-content/uploads/sites/2/historic/image_289.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_285.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>A new vNext task to run StyleCop</title>
      <link>https://blog.richardfennell.net/posts/a-new-vnext-task-to-run-stylecop/</link>
      <pubDate>Thu, 04 Feb 2016 23:21:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-new-vnext-task-to-run-stylecop/</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Update 6 Feb 2016&lt;/strong&gt; - I have made some major changes to this task to expose more parameters, &lt;a href=&#34;https://blog.richardfennell.net/blogs/rfennell/post/2016/02/06/A-VSTS-vNext-build-task-to-run-StyleCop.aspx&#34;&gt;have a look at this post&lt;/a&gt; that details the newer version&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Today a good way to pull together all your measures of code quality is to run &lt;a href=&#34;http://www.sonarqube.org/&#34;&gt;SonarQube&lt;/a&gt; within your automated build; in a .NET world this can show changes in quality over time for tools such as &lt;a href=&#34;https://msdn.microsoft.com/en-us/library/3z0aeatx.aspx&#34;&gt;FxCop (Code Analysis)&lt;/a&gt; and &lt;a href=&#34;https://stylecop.codeplex.com/&#34;&gt;StyleCop.&lt;/a&gt; However sometimes you might just want to run one of these tools alone as part of your automated build. For Code Analysis this is easy: it is built into Visual Studio, just set it as a property on the project. For StyleCop it is a bit more awkward as StyleCop was not designed to be run from the command line.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em><strong>Update 6 Feb 2016</strong> - I have made some major changes to this task to expose more parameters, <a href="/blogs/rfennell/post/2016/02/06/A-VSTS-vNext-build-task-to-run-StyleCop.aspx">have a look at this post</a> that details the newer version</em></p>
<p>Today a good way to pull together all your measures of code quality is to run <a href="http://www.sonarqube.org/">SonarQube</a> within your automated build; in a .NET world this can show changes in quality over time for tools such as <a href="https://msdn.microsoft.com/en-us/library/3z0aeatx.aspx">FxCop (Code Analysis)</a> and <a href="https://stylecop.codeplex.com/">StyleCop.</a> However sometimes you might just want to run one of these tools alone as part of your automated build. For Code Analysis this is easy: it is built into Visual Studio, just set it as a property on the project. For StyleCop it is a bit more awkward as StyleCop was not designed to be run from the command line.</p>
<p>To get around this limitation I wrote a <a href="https://github.com/rfennell/StyleCopCmdLine">command line wrapper</a> that could be used within a build process, see my <a href="/blogs/rfennell/post/2015/04/03/Running-StyleCop-from-the-command-line-and-in-a-TFS-2015-vNext-build.aspx">blog post for details of how this could be used with vNext build</a>.</p>
<p>Well, that was all the best part of a year ago. Now I have more experience with vNext build it seems wrong to use just a PowerShell script when I could create a build task that also deploys StyleCop. I have eventually got around to writing the task, which you can find in <a href="https://github.com/rfennell/vNextBuild">my vNextBuild repo</a>.</p>
<p>Once the task is uploaded to your TFS or VSTS instance, the StyleCop task can be added into any build process. The task picks up the file locations from the build environment variables and then hunts for StyleCop settings files (<a href="/blogs/rfennell/post/2015/04/03/Running-StyleCop-from-the-command-line-and-in-a-TFS-2015-vNext-build.aspx">as detailed in my previous post</a>). The only argument that needs to be set is whether the build should fail if there are violations.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_286.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_282.png" title="image"></a></p>
<p>Once this is all set up, the build can be run and the violations will be shown in the build report; whether the build fails or passes is down to how you set the flag for the handling of violations.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_287.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_283.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Follow up from my session at the Black Marble Tech Update 2016</title>
      <link>https://blog.richardfennell.net/posts/follow-up-from-my-session-at-the-black-marble-tech-update-2016/</link>
      <pubDate>Tue, 02 Feb 2016 17:11:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-up-from-my-session-at-the-black-marble-tech-update-2016/</guid>
      <description>&lt;p&gt;There have been some requests for more information about the areas I convered in my presentation at the &lt;a href=&#34;http://www.blackmarble.co.uk/events&#34;&gt;Black Marble Tech Update 2016&lt;/a&gt; that we held last week.&lt;/p&gt;
&lt;p&gt;I could send out slides, but I think it is far more useful to point you at the ‘live’ resource on the Internet. The key reason for this is that the whole of the Visual Studio family is now being released at a ‘cloud cadence’ i.e. new features are appearing rapidly, so anything I write will soon be out of date. Better to look at the live sources where possible.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There have been some requests for more information about the areas I covered in my presentation at the <a href="http://www.blackmarble.co.uk/events">Black Marble Tech Update 2016</a> that we held last week.</p>
<p>I could send out slides, but I think it is far more useful to point you at the ‘live’ resources on the Internet. The key reason for this is that the whole of the Visual Studio family is now being released at a ‘cloud cadence’ i.e. new features are appearing rapidly, so anything I write will soon be out of date. Better to look at the live sources where possible.</p>
<ul>
<li>The <a href="https://www.visualstudio.com/en-us/news/release-archive-vso.aspx">Visual Studio Team Services Feature Timeline</a> explains what is available for TFS and VSTS; it includes links to UserVoice and/or background blog posts about the new features</li>
<li>You can find more posts on all the features of TFS and VSTS at the <a href="https://blogs.msdn.microsoft.com/visualstudioalm/">ALM Team Blog</a> or <a href="https://blogs.msdn.microsoft.com/bharry/">Brian Harry’s Blog</a></li>
<li><a href="https://code.visualstudio.com/">Visual Studio Code can be downloaded from here</a>. Once installed it should keep itself up to date</li>
<li>The <a href="https://channel9.msdn.com/Events/Visual-Studio/Connect-event-2015/">videos from the Connect() conference</a> from late last year give the best overview of the Visual Studio family. The keynotes are a great place to start</li>
<li>A great discussion on <a href="https://dotnet.github.io/">.NET Core and ASP.NET Core</a> can be found in Scott Hanselman’s <a href="http://www.hanselman.com/blog/ASPNET5IsDeadIntroducingASPNETCore10AndNETCore10.aspx">blog</a> and <a href="http://www.hanselminutes.com/511/inside-aspnet-core-10-with-damian-edwards">podcast</a>.</li>
<li>Look out for all Microsoft’s open source projects at the <a href="http://www.dotnetfoundation.org/">.NET Foundation</a> and <a href="https://github.com/Microsoft">GitHub</a></li>
</ul>
<p>Hope you find these pointers useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fixing cannot load dashboard issues on BlogEngine.NET using sub blog aggregation</title>
      <link>https://blog.richardfennell.net/posts/fixing-cannot-load-dashboard-issues-on-blogengine-net-using-sub-blog-aggregation/</link>
      <pubDate>Mon, 04 Jan 2016 17:59:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-cannot-load-dashboard-issues-on-blogengine-net-using-sub-blog-aggregation/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/12/31/Upgraded-to-BlogEngineNET-32.aspx&#34;&gt;As I discovered during my BlogEngine upgrade&lt;/a&gt;, there is an effort within the project team to focus the codebase on three possible usage models on any given BlogEngine server instance:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Single blog with a user – a personal blog (default)&lt;/li&gt;
&lt;li&gt;Single blog with many users – a team/company blog&lt;/li&gt;
&lt;li&gt;Many blogs each with a single user – a set of related blogs that can be aggregated together&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I needed the third option; the problem was that, over its history, our blog has been both of the other two types, so I have multiple user accounts for each blog, and login usernames are repeated between individual blogs on the server.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/12/31/Upgraded-to-BlogEngineNET-32.aspx">As I discovered during my BlogEngine upgrade</a>, there is an effort within the project team to focus the codebase on three possible usage models on any given BlogEngine server instance:</p>
<ul>
<li>Single blog with a user – a personal blog (default)</li>
<li>Single blog with many users – a team/company blog</li>
<li>Many blogs each with a single user – a set of related blogs that can be aggregated together</li>
</ul>
<p>I needed the third option; the problem was that, over its history, our blog has been both of the other two types, so I have multiple user accounts for each blog, and login usernames are repeated between individual blogs on the server.</p>
<p>This is not fundamentally an issue for a server running in the third mode, except on the primary blog that is set up to provide aggregation of all the other blogs. Even here, on a day-to-day basis, it is not an issue either; basic post RSS aggregation is fine. However, when you log in as an administrative user and try to access the dashboard you get the error:</p>
<pre tabindex="0"><code>Item has already been added. Key in dictionary: &#39;displayname&#39; Key being added: &#39;displayname&#39;
</code></pre><p>The workaround I have used in the past was to temporarily switch off blog aggregation whenever I needed to access the primary blog dashboard – not the best solution.</p>
<p>After a bit of investigation of the codebase I found that this issue is due to the fact we had users called ‘admin’ on the primary and all the child blogs. The fix I used was a bit of SQL to do some user renaming from ‘admin’ to ‘adminblogname’. I needed to rename the username in a few tables.</p>
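<p>Before running any renames, a read-only query along the following lines (assuming the standard <strong>be_Users</strong> table) will list any usernames that appear on more than one blog, and so would trigger the clash:</p>
<pre tabindex="0"><code>select UserName, count(*) as Blogs
from be_Users
group by UserName
having count(*) &gt; 1;
</code></pre>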
<p><strong>AS USUAL BEWARE THIS SQL, MAKE SURE YOU HAVE A BACKUP BEFORE YOU USE IT, IT WORKS FOR ME BUT I MIGHT HAVE MISSED SOMETHING YOU NEED</strong></p>
<pre tabindex="0"><code>update p
set p.SettingValue = concat (p.SettingValue , &#39; &#39;, b.BlogName)
from be_Profiles p
    inner join be_Blogs b on
        b.BlogID = p.BlogId
where
SettingName = &#39;displayname&#39; and
SettingValue = &#39;admin&#39;;

update p
set p.UserName = concat (p.UserName , b.BlogName)
from be_Profiles p
    inner join be_Blogs b on
        b.BlogID = p.BlogId
where
UserName = &#39;admin&#39;;

update u
set u.UserName = concat (u.UserName , b.BlogName)
from be_Users u
    inner join be_Blogs b on
        b.BlogID = u.BlogId
where
UserName = &#39;admin&#39;;

update r
set r.UserName = concat (r.UserName , b.BlogName)
from be_UserRoles r
    inner join be_Blogs b on
        b.BlogID = r.BlogId
where
UserName = &#39;admin&#39;;
</code></pre><p>This is not a problem specific to admin users; any username duplication will cause the same error. This basic SQL script can be modified to fix any other user accounts you might have username clashes on.</p>
<p>Once this SQL was run I was able to login to the dashboard on the primary blog as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgraded to BlogEngine.NET 3.2</title>
      <link>https://blog.richardfennell.net/posts/upgraded-to-blogengine-net-3-2/</link>
      <pubDate>Thu, 31 Dec 2015 13:39:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgraded-to-blogengine-net-3-2/</guid>
      <description>&lt;p&gt;I have just completed the upgrade of this blog server to the new release 3.2 of &lt;a href=&#34;http://dotnetblogengine.net/&#34;&gt;BlogEngine.NET&lt;/a&gt;. I did a manual upgrade (as opposed to the automated built in upgrade) as I needed to make a few changes from the default settings. The process I used followed the &lt;a href=&#34;http://dnbe.net/docs/post/upgrading-blogengine-net-manually&#34;&gt;upgrade process document&lt;/a&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&#34;https://blogengine.codeplex.com/releases/view/619153&#34;&gt;Downloaded&lt;/a&gt; the latest release and unzipped the folder&lt;/li&gt;
&lt;li&gt;Ran the SQL upgrade script (in the &lt;strong&gt;/setup/sqlserver&lt;/strong&gt; folder); this adds some new DB constraints&lt;/li&gt;
&lt;li&gt;Created an IIS web site using the new release&lt;/li&gt;
&lt;li&gt;Copied in the sample &lt;strong&gt;web.config&lt;/strong&gt; from the &lt;strong&gt;/setup/sqlserver&lt;/strong&gt; folder.
&lt;ul&gt;
&lt;li&gt;Edited the SQL connection string to point to my DB&lt;/li&gt;
&lt;li&gt;IMPORTANT I MISSED THIS AT FIRST - Added the setting to &lt;a href=&#34;https://github.com/rxtur/BlogEngine.NET/wiki/Configuration&#34;&gt;change from the default single blog mode to multi blog mode&lt;/a&gt; (note this is a wiki on GitHub, not the old CodePlex site)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Copied in my App_DATA folder&lt;/li&gt;
&lt;li&gt;Accessed my site&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As I had not copied anything from the old &lt;strong&gt;custom&lt;/strong&gt; folder, I had theme issues at this point. However, I decided to move all the blogs to the newest generation of theme templates, so did a quick fix-up by hand on each one, picking the required theme and making sure any settings, like Twitter accounts, were set (note these are set on a per blog/per theme basis, so changing a theme means you need to re-enter any custom values). I also needed to copy in a few missing logos and any extra widgets the blogs were using from my old &lt;strong&gt;custom&lt;/strong&gt; folder.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just completed the upgrade of this blog server to the new release 3.2 of <a href="http://dotnetblogengine.net/">BlogEngine.NET</a>. I did a manual upgrade (as opposed to the automated built-in upgrade) as I needed to make a few changes from the default settings. The process I used followed the <a href="http://dnbe.net/docs/post/upgrading-blogengine-net-manually">upgrade process document</a>.</p>
<ol>
<li><a href="https://blogengine.codeplex.com/releases/view/619153">Downloaded</a> the latest release and unzipped the folder</li>
<li>Ran the SQL upgrade script (in the <strong>/setup/sqlserver</strong> folder); this adds some new DB constraints</li>
<li>Created an IIS web site using the new release</li>
<li>Copied in the sample <strong>web.config</strong> from the <strong>/setup/sqlserver</strong> folder.
<ul>
<li>Edited the SQL connection string to point to my DB</li>
<li>IMPORTANT I MISSED THIS AT FIRST - Added the setting to <a href="https://github.com/rxtur/BlogEngine.NET/wiki/Configuration">change from the default single blog mode to multi blog mode</a> (note this is a wiki on GitHub, not the old CodePlex site)</li>
</ul>
</li>
<li>Copied in my App_DATA folder</li>
<li>Accessed my site</li>
</ol>
<p>As I had not copied anything from the old <strong>custom</strong> folder, I had theme issues at this point. However, I decided to move all the blogs to the newest generation of theme templates, so did a quick fix-up by hand on each one, picking the required theme and making sure any settings, like Twitter accounts, were set (note these are set on a per blog/per theme basis, so changing a theme means you need to re-enter any custom values). I also needed to copy in a few missing logos and any extra widgets the blogs were using from my old <strong>custom</strong> folder.</p>
<p>Once this was all done I had an upgraded blog server.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running CodeUI tests on a VM with on remote desktop session open as part of a vNext build</title>
      <link>https://blog.richardfennell.net/posts/running-codeui-tests-on-a-vm-with-on-remote-desktop-session-open-as-part-of-a-vnext-build/</link>
      <pubDate>Wed, 23 Dec 2015 12:21:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-codeui-tests-on-a-vm-with-on-remote-desktop-session-open-as-part-of-a-vnext-build/</guid>
      <description>&lt;p&gt;If you want to run CodeUI tests as part of a build you need to make sure the device running the test has access to the UI, for remote VMs this means having a logged in session open and the build/test agent running interactivally. Problem is what happens when you disconnect the session. UNless you manage it you will get the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Automation engine is unable to playback the test because it is not able to interact with the desktop. This could happen if the computer running the test is locked or it’s remote session window is minimized&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you want to run CodeUI tests as part of a build you need to make sure the device running the test has access to the UI; for remote VMs this means having a logged-in session open and the build/test agent running interactively. The problem is what happens when you disconnect the session. Unless you manage it you will get the error</p>
<blockquote>
<p><em>Automation engine is unable to playback the test because it is not able to interact with the desktop. This could happen if the computer running the test is locked or it’s remote session window is minimized</em></p></blockquote>
<p>In the past I would use a standard TFS Lab Management Environment to manage this; you just check a box to say the VM/PC is running coded UI tests and it sorts out the rest. However, with the advent of vNext build and the move away from Lab Manager this seems overly complex.</p>
<p>It is not a perfect solution but this works</p>
<ol>
<li>Make sure the VM logs in automatically and starts your build/test agents in interactive mode (I used <a href="https://technet.microsoft.com/en-us/sysinternals/autologon.aspx">Sysinternals Autologon</a> to set this up)</li>
<li>I connect to the session and make sure all is OK, but I then disconnect, redirecting the session
<ul>
<li>To get my session ID, at the command prompt, I use the command <strong>query user</strong></li>
<li>I then redirect the session with <strong>tscon.exe RDP-Tcp#99 /dest:console</strong>, where RDP-Tcp#99 is my session ID</li>
</ul>
</li>
<li>Once I was disconnected, my CodeUI tests still ran</li>
</ol>
<p>I am sure there is a slicker way to do this, but it does fix the immediate issue.</p>
<p><strong>Updated</strong>:</p>
<p>This bit of PowerShell could be put in a shortcut on the desktop to do the job; you will want to run the script as administrator:</p>
<pre tabindex="0"><code>$OutputVariable = (query user) | Out-String
$session = $OutputVariable.Substring($OutputVariable.IndexOf(&quot;rdp-tcp#&quot;)).Split(&quot; &quot;)[0]
&amp; tscon.exe $session /dest:console
</code></pre>
]]></content:encoded>
    </item>
    <item>
      <title>Live Writer becomes Open Live Writer</title>
      <link>https://blog.richardfennell.net/posts/live-writer-becomes-open-live-writer/</link>
      <pubDate>Mon, 14 Dec 2015 17:53:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/live-writer-becomes-open-live-writer/</guid>
      <description>&lt;p&gt;My primary blog editor has been Microsoft Live Writer for years, but it has always been a pain to install via &lt;a href=&#34;http://windows.microsoft.com/en-gb/windows/essentials&#34;&gt;Windows Essentials&lt;/a&gt; (as I don’t want the rest of the product), also I was never able to find the right version when I rebuilt a PC. This was not helped by the fact there has been no development of the product for years, so I struggled to remember what year version I really needed (last one was 2012 by the way).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My primary blog editor has been Microsoft Live Writer for years, but it has always been a pain to install via <a href="http://windows.microsoft.com/en-gb/windows/essentials">Windows Essentials</a> (as I don’t want the rest of the product), and I was never able to find the right version when I rebuilt a PC. This was not helped by the fact there has been no development of the product for years, so I struggled to remember which year’s version I really needed (the last one was 2012, by the way).</p>
<p>So it is great news that the code base has gone open source at <a href="http://openlivewriter.org/" title="http://openlivewriter.org/">http://openlivewriter.org/</a>, and this is my first post using the new editor. Seems to work great.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Nuget restore fails to restore all the files on VSTS build if using project.json files</title>
      <link>https://blog.richardfennell.net/posts/nuget-restore-fails-to-restore-all-the-files-on-vsts-build-if-using-project-json-files/</link>
      <pubDate>Wed, 09 Dec 2015 17:28:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/nuget-restore-fails-to-restore-all-the-files-on-vsts-build-if-using-project-json-files/</guid>
      <description>&lt;p&gt;We are currently working on updating a Windows 8 application to be a Windows 10 Universal application. This has caused a few problem on a TFS vNext automated build box. The revised solution builds fine of the developers box and fine on the build VM if opened in Visual Studio, but fails if built via the VSTS vNext build CI MSBuild process showing loads of references missing.&lt;/p&gt;
&lt;p&gt;Turns out the issue was due to Nuget versions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We are currently working on updating a Windows 8 application to be a Windows 10 Universal application. This has caused a few problems on a TFS vNext automated build box. The revised solution builds fine on the developer’s box and fine on the build VM if opened in Visual Studio, but fails if built via the VSTS vNext build CI MSBuild process, showing loads of references missing.</p>
<p>Turns out the issue was due to Nuget versions.</p>
<p>The problem was that as part of the upgrade the solution had gained some new projects. These used the new <strong>project.json</strong> file to manage their Nuget references, as opposed to the old <strong>packages.config</strong> file. Visual Studio 2015 handles these OK, hence the build always working in the IDE, but you need Nuget.exe 3.0 or later for it to handle the new format. The version of Nuget installed as part of my vNext build agent was 2.8. So no wonder it had a problem.</p>
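<p>For reference, a minimal <strong>project.json</strong> for a Windows 10 Universal project looks something like the following (the package version shown is just illustrative); it is this format that Nuget.exe 3.0 understands and older versions do not:</p>
<pre tabindex="0"><code>{
  &quot;dependencies&quot;: {
    &quot;Microsoft.NETCore.UniversalWindowsPlatform&quot;: &quot;5.0.0&quot;
  },
  &quot;frameworks&quot;: {
    &quot;uap10.0&quot;: {}
  },
  &quot;runtimes&quot;: {
    &quot;win10-arm&quot;: {},
    &quot;win10-x86&quot;: {},
    &quot;win10-x64&quot;: {}
  }
}
</code></pre>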
<p>To test my assumptions I added a Nuget Installer task to my build and set an explicit path to the newest version of Nuget.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_284.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_280.png" title="image"></a></p>
<p>Once this was done my build was fine.</p>
<p>So my solution options are</p>
<ol>
<li>Don’t use the agents shipped with TFS 2015; get newer ones from VSTS, as these have a current version of Nuget (just make sure your agent/server combination is supported – I have had issues with the latest VSTS agent and a TFS 2015 RTM instance)</li>
<li>Manually replace the version of Nuget.exe in my build agent tools folder – easy to forget you did but works</li>
<li>Place a copy of the version of Nuget.exe I want on each build VM and reference its path explicitly (as I did to diagnose the problem)</li>
</ol>
<p>The first option is the best choice, as it is always a good plan to keep build agents up to date.</p>
]]></content:encoded>
    </item>
    <item>
      <title>An out-the-box way to let local Hyper-V VMs see the Internet without using a DD-WRT router</title>
      <link>https://blog.richardfennell.net/posts/an-out-the-box-way-to-let-local-hyper-v-vms-see-the-internet-without-using-a-dd-wrt-router/</link>
      <pubDate>Tue, 01 Dec 2015 20:38:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/an-out-the-box-way-to-let-local-hyper-v-vms-see-the-internet-without-using-a-dd-wrt-router/</guid>
      <description>&lt;p&gt;&lt;em&gt;Updated 10 Aug 2016 - Revised for Win10 anniversary build 1607&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I have &lt;a href=&#34;https://blog.richardfennell.net/blogs/rfennell/post/2015/01/31/Living-with-a-DD-WRT-virtual-router-three-months-and-one-day-on-%28static-DHCP-leases%29.aspx&#34;&gt;posted in the past&lt;/a&gt; about using a DD-WRT virtual router to bridge between local VMs on my development PC and the outside world&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/blogs/rfennell/image.axd?picture=image_thumb_211.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;With the recent changes in HyperV on Windows 10 and Server 2016 we have an alternative as discussed by &lt;a href=&#34;http://www.thomasmaurer.ch/2015/11/hyper-v-virtual-switch-using-nat-configuration/?utm_content=bufferbc1e1&amp;amp;utm_medium=social&amp;amp;utm_source=twitter.com&amp;amp;utm_campaign=buffer&#34;&gt;Thomas Maurer in his post&lt;/a&gt; (pre Win10 anniversary build 1607) or this &lt;a href=&#34;http://www.thomasmaurer.ch/2016/05/set-up-a-hyper-v-virtual-switch-using-a-nat-network/&#34;&gt;post&lt;/a&gt; (post Win10 anniversary build 1607). You can use a NATSwitch, thus removing the need for a router VM. This does, however, raise a different issue, that of address assignment, as the router was also a DHCP server. However, it does mean I don’t have to mess around with manually setting external IP addresses for the router each time I join a different WIFI network. So on the whole I think it is a better solution.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>Updated 10 Aug 2016 - Revised for Win10 anniversary build 1607</em></p>
<p>I have <a href="/blogs/rfennell/post/2015/01/31/Living-with-a-DD-WRT-virtual-router-three-months-and-one-day-on-%28static-DHCP-leases%29.aspx">posted in the past</a> about using a DD-WRT virtual router to bridge between local VMs on my development PC and the outside world</p>
<p><img loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_211.png"></p>
<p>With the recent changes in HyperV on Windows 10 and Server 2016 we have an alternative as discussed by <a href="http://www.thomasmaurer.ch/2015/11/hyper-v-virtual-switch-using-nat-configuration/?utm_content=bufferbc1e1&amp;utm_medium=social&amp;utm_source=twitter.com&amp;utm_campaign=buffer">Thomas Maurer in his post</a> (pre Win10 anniversary build 1607) or this <a href="http://www.thomasmaurer.ch/2016/05/set-up-a-hyper-v-virtual-switch-using-a-nat-network/">post</a> (post Win10 anniversary build 1607). You can use a NATSwitch, thus removing the need for a router VM. This does, however, raise a different issue, that of address assignment, as the router was also a DHCP server. However, it does mean I don’t have to mess around with manually setting external IP addresses for the router each time I join a different WIFI network. So on the whole I think it is a better solution.</p>
<p>My new setup is as follows.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_283.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_279.png" title="image"></a></p>
<ul>
<li>Using Powershell, create the NATSwitch as per <a href="http://www.thomasmaurer.ch/2015/11/hyper-v-virtual-switch-using-nat-configuration/?utm_content=bufferbc1e1&amp;utm_medium=social&amp;utm_source=twitter.com&amp;utm_campaign=buffer">Thomas Maurer’s post</a>.</li>
<li>On each guest VM set a fixed IP address e.g. 172.92.91.2, with a default route of the NATSwitch’s port on the host OS i.e. 172.92.91.1, and a publicly accessible DNS server such as Google’s 8.8.8.8. Once set up, it should be possible to access the internet from the guest VM</li>
<li>To access each guest VM from the host OS you can just use its IP address e.g. 172.92.91.2; this works because the host OS has a connection on the same NATSwitch network, 172.92.91.1. It makes sense to add a hosts file entry on the Windows 10 PC so that the user has a friendly name to access each guest VM e.g.</li>
</ul>
<blockquote>
<p>172.92.91.2        typhoontfs</p></blockquote>
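<p>For reference, the NATSwitch creation from the first bullet boils down to something like the following PowerShell, here using the 172.92.91.0/24 range from my setup (run as administrator; this is the post-1607 approach using the New-NetNat cmdlet):</p>
<pre tabindex="0"><code># Create an internal switch, give the host an IP on it, then add the NAT
New-VMSwitch -Name &quot;NATSwitch&quot; -SwitchType Internal
New-NetIPAddress -IPAddress 172.92.91.1 -PrefixLength 24 -InterfaceAlias &quot;vEthernet (NATSwitch)&quot;
New-NetNat -Name &quot;NATNetwork&quot; -InternalIPInterfaceAddressPrefix 172.92.91.0/24
</code></pre>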
<p>Once this is all done, you seem to have a workable system; only time will tell how it works in practice.</p>
]]></content:encoded>
    </item>
    <item>
      <title>First experience of a Band 2</title>
      <link>https://blog.richardfennell.net/posts/first-experience-of-a-band-2/</link>
      <pubDate>Mon, 30 Nov 2015 16:40:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-experience-of-a-band-2/</guid>
      <description>&lt;p&gt;I have been using a Band 2 for a couple of weeks now as opposed to my original Band. The major thing I have noticed is I don&amp;rsquo;t notice it on my wrist. It feels just like a watch.&lt;/p&gt;
&lt;p&gt;The old one, though not too bad, did feel a bit lumpy, banging on the wrist. So that is an improvement; it also looks less like I am a prisoner with a tracker on day release. The Band 2 looks like a designer was more involved, as opposed to just engineers.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been using a Band 2 for a couple of weeks now as opposed to my original Band. The major thing I have noticed is I don&rsquo;t notice it on my wrist. It feels just like a watch.</p>
<p>The old one, though not too bad, did feel a bit lumpy, banging on the wrist. So that is an improvement; it also looks less like I am a prisoner with a tracker on day release. The Band 2 looks like a designer was more involved, as opposed to just engineers.</p>
<p>But how does it compare to my issues with the original Band?</p>
<ul>
<li>Is it now waterproof? - No, still can&rsquo;t swim with it.</li>
<li>How about battery life? - Seems a bit better; in day-to-day use I am charging it roughly every couple of days as opposed to each day. I have not tried a long cycle ride yet, so it remains to be seen if I get more than about 5 hours of full data capture. I would expect a bit better, but not a huge gain</li>
<li>Does the touch screen work when it is raining or my fingers wet? - Does seem better</li>
</ul>
<p>So all positive thus far.</p>
]]></content:encoded>
    </item>
    <item>
      <title>ALM Rangers guidance on migrating from RM Agent based releases to the new VSTS release system</title>
      <link>https://blog.richardfennell.net/posts/alm-rangers-guidance-on-migrating-from-rm-agent-based-releases-to-the-new-vsts-release-system/</link>
      <pubDate>Tue, 24 Nov 2015 19:31:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-rangers-guidance-on-migrating-from-rm-agent-based-releases-to-the-new-vsts-release-system/</guid>
      <description>&lt;p&gt;Vijay in the Microsoft Release Management product group has provided a nice post on the various methods you can use to migrate from &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/19/moving-from-the-earlier-version-of-release-management-service-to-the-new-one-in-visual-studio-team-services.aspx&#34;&gt;the earlier versions of Release management to the new one in Visual Studio Team Services&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;An area where you will find the biggest change in technology is when moving from on premises agent based releases to the new VSTS PowerShell based system. To help in this process the ALM Rangers have produced a command line tool to &lt;a href=&#34;https://github.com/ALM-Rangers/Migrate-assets-from-RM-server-to-VSTS&#34; title=&#34;Migrate-assets-from-RM-server-to-VSTS&#34;&gt;migrate assets from RM server to VSTS&lt;/a&gt;, it exports all your activities as PowerShell scripts that are easy to re-use in a vNext Release Management process, or in Visual Studio Team Services’ Release tooling.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Vijay in the Microsoft Release Management product group has provided a nice post on the various methods you can use to migrate from <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/19/moving-from-the-earlier-version-of-release-management-service-to-the-new-one-in-visual-studio-team-services.aspx">the earlier versions of Release management to the new one in Visual Studio Team Services</a>.</p>
<p>An area where you will find the biggest change in technology is when moving from on premises agent based releases to the new VSTS PowerShell based system. To help in this process the ALM Rangers have produced a command line tool to <a href="https://github.com/ALM-Rangers/Migrate-assets-from-RM-server-to-VSTS" title="Migrate-assets-from-RM-server-to-VSTS">migrate assets from RM server to VSTS</a>, it exports all your activities as PowerShell scripts that are easy to re-use in a vNext Release Management process, or in Visual Studio Team Services’ Release tooling.</p>
<p>So if you are using agent-based releases, why not have a look at the tools and see how easy they make it to migrate your process to the newer tooling.</p>
]]></content:encoded>
    </item>
    <item>
      <title>ALM Ranger provided VSTS extensions</title>
      <link>https://blog.richardfennell.net/posts/alm-ranger-provided-vsts-extensions/</link>
      <pubDate>Mon, 23 Nov 2015 20:10:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-ranger-provided-vsts-extensions/</guid>
      <description>&lt;p&gt;Many of the major new features of VSTS are delivered by the new &lt;a href=&#34;https://marketplace.visualstudio.com/#VSTS&#34;&gt;marketplace and extension model&lt;/a&gt;, such as &lt;a href=&#34;https://marketplace.visualstudio.com/items/ms.vss-code-search&#34;&gt;code search&lt;/a&gt; and &lt;a href=&#34;https://marketplace.visualstudio.com/items/ms.feed&#34;&gt;package management&lt;/a&gt;. However, did you realise that this new way of adding functionality to VSTS is open to you too, not just to Microsoft?&lt;/p&gt;
&lt;p&gt;To see what can be done why not have a look at the &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalmrangers/archive/2015/11/18/visual-studio-extensions-from-the-rangers.aspx&#34;&gt;Visual Studio Team Services Extensions from the ALM Rangers&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Many of the major new features of VSTS are delivered by the new <a href="https://marketplace.visualstudio.com/#VSTS">marketplace and extension model</a>, such as <a href="https://marketplace.visualstudio.com/items/ms.vss-code-search">code search</a> and <a href="https://marketplace.visualstudio.com/items/ms.feed">package management</a>. However, did you realise that this new way of adding functionality to VSTS is open to you too, not just to Microsoft?</p>
<p>To see what can be done, why not have a look at the <a href="http://blogs.msdn.com/b/visualstudioalmrangers/archive/2015/11/18/visual-studio-extensions-from-the-rangers.aspx">Visual Studio Team Services Extensions from the ALM Rangers</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio Dev Essentials announced at Connect() with free Azure time each month</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-dev-essentials-announced-at-connect-with-free-azure-time-each-month/</link>
      <pubDate>Mon, 23 Nov 2015 13:27:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-dev-essentials-announced-at-connect-with-free-azure-time-each-month/</guid>
<description>&lt;p&gt;One announcement I missed at &lt;a href=&#34;https://www.google.co.uk/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=1&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw&#34;&gt;Connect()&lt;/a&gt; last week was that of &lt;a href=&#34;https://www.visualstudio.com/en-us/products/visual-studio-dev-essentials-vs.aspx&#34;&gt;Visual Studio Dev Essentials&lt;/a&gt;. I only heard about this one whilst listening to &lt;a href=&#34;http://www.radiotfs.com/Show/101/ConnectingonConnect&#34;&gt;RadioTFS’s news from Connect() programme&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://www.visualstudio.com/en-us/products/visual-studio-dev-essentials-vs.aspx&#34;&gt;Visual Studio Dev Essentials&lt;/a&gt; is mostly a re-packaging of all the tools that were already freely available from Microsoft e.g. Visual Studio Community Edition, Team Foundation Server Express etc.; but there are some notable additions* (some coming soon)&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Pluralsight  (6-month subscription)—limited time only&lt;/li&gt;
&lt;li&gt;Xamarin University mobile training— coming soon&lt;/li&gt;
&lt;li&gt;WintellectNOW  (3-month subscription)&lt;/li&gt;
&lt;li&gt;Microsoft Virtual Academy&lt;/li&gt;
&lt;li&gt;HackHands Live Programming Help  ($25 credit)&lt;/li&gt;
&lt;li&gt;Priority Forum Support&lt;/li&gt;
&lt;li&gt;Azure credit  ($25/month for 12 months)—coming soon&lt;/li&gt;
&lt;li&gt;Visual Studio Team Services account with five users&lt;/li&gt;
&lt;li&gt;App Service free tier&lt;/li&gt;
&lt;li&gt;PowerBI free tier&lt;/li&gt;
&lt;li&gt;HockeyApp free tier&lt;/li&gt;
&lt;li&gt;Application Insights free tier&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;*Check the &lt;a href=&#34;https://www.visualstudio.com/en-us/products/visual-studio-dev-essentials-vs.aspx&#34;&gt;Visual Studio Dev Essentials site&lt;/a&gt; for the detailed T&amp;amp;C&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>One announcement I missed at <a href="https://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw">Connect()</a> last week was that of <a href="https://www.visualstudio.com/en-us/products/visual-studio-dev-essentials-vs.aspx">Visual Studio Dev Essentials</a>. I only heard about this one whilst listening to <a href="http://www.radiotfs.com/Show/101/ConnectingonConnect">RadioTFS’s news from Connect() programme</a>.</p>
<p><a href="https://www.visualstudio.com/en-us/products/visual-studio-dev-essentials-vs.aspx">Visual Studio Dev Essentials</a> is mostly a re-packaging of all the tools that were already freely available from Microsoft e.g. Visual Studio Community Edition, Team Foundation Server Express etc.; but there are some notable additions* (some coming soon)</p>
<ul>
<li>Pluralsight  (6-month subscription)—limited time only</li>
<li>Xamarin University mobile training— coming soon</li>
<li>WintellectNOW  (3-month subscription)</li>
<li>Microsoft Virtual Academy</li>
<li>HackHands Live Programming Help  ($25 credit)</li>
<li>Priority Forum Support</li>
<li>Azure credit  ($25/month for 12 months)—coming soon</li>
<li>Visual Studio Team Services account with five users</li>
<li>App Service free tier</li>
<li>PowerBI free tier</li>
<li>HockeyApp free tier</li>
<li>Application Insights free tier</li>
</ul>
<p>*Check the <a href="https://www.visualstudio.com/en-us/products/visual-studio-dev-essentials-vs.aspx">Visual Studio Dev Essentials site</a> for the detailed T&amp;C</p>
<p>So if you, or a student/hobbyist you know, need great development tools, sign up at <a href="https://www.visualstudio.com/en-us/products/visual-studio-dev-essentials-vs.aspx">Visual Studio Dev Essentials</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading to SonarQube 5.2 in the land of Windows, MSBuild and TFS</title>
      <link>https://blog.richardfennell.net/posts/upgrading-to-sonarqube-5-2-in-the-land-of-windows-msbuild-and-tfs/</link>
      <pubDate>Thu, 19 Nov 2015 18:31:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-to-sonarqube-5-2-in-the-land-of-windows-msbuild-and-tfs/</guid>
<description>&lt;p&gt;SonarQube released version 5.2 a couple of weeks ago. This enabled some new features that really help if you are working with MSBuild or just on a Windows platform in general. These are detailed in the posts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/13/support-for-active-directory-and-single-sign-on-sso-in-the-sonarqube-ldap-plugin.aspx&#34;&gt;Support for Active Directory and Single Sign On (SSO) in the SonarQube LDAP Plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/13/support-for-team-foundation-server-2015-in-sonarqube-tfvc-scm-plugin.aspx&#34;&gt;Support for Team Foundation Server 2015 in SonarQube TFVC SCM Plugin&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The new ability to manage users with LDAP is good, but one of the most important for me is the way 5.2 eases the configuration with SQL in integrated security mode. This is &lt;a href=&#34;http://docs.sonarqube.org/display/SONAR/Upgrading&#34;&gt;mentioned in the upgrade notes&lt;/a&gt;; basically it boils down to the fact you get better JDBC drivers with better support for SQL Clustering and security.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>SonarQube released version 5.2 a couple of weeks ago. This enabled some new features that really help if you are working with MSBuild or just on a Windows platform in general. These are detailed in the posts:</p>
<ul>
<li><a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/13/support-for-active-directory-and-single-sign-on-sso-in-the-sonarqube-ldap-plugin.aspx">Support for Active Directory and Single Sign On (SSO) in the SonarQube LDAP Plugin</a></li>
<li><a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/13/support-for-team-foundation-server-2015-in-sonarqube-tfvc-scm-plugin.aspx">Support for Team Foundation Server 2015 in SonarQube TFVC SCM Plugin</a></li>
</ul>
<p>The new ability to manage users with LDAP is good, but one of the most important for me is the way 5.2 eases the configuration with SQL in integrated security mode. This is <a href="http://docs.sonarqube.org/display/SONAR/Upgrading">mentioned in the upgrade notes</a>; basically it boils down to the fact you get better JDBC drivers with better support for SQL Clustering and security.</p>
<p>We found the upgrade process mostly straightforward:</p>
<ol>
<li>Downloaded SonarQube 5.2 and unzipped it</li>
<li>Replaced the files on our SonarQube server</li>
<li>Edited the <em>sonar.properties</em> file with the correct SQL connection details for an integrated security SQL connection. As we wanted to move to integrated security we did not need to set the <em>sonar.jdbc.username</em> setting.<br>
<strong>Important</strong>: One thing was not that clear: if you want to use integrated security you do need the <strong>sqljdbc_auth.dll</strong> file in a folder on the search path (C:\windows\system32 is an obvious place to keep it). You can find this <a href="https://msdn.microsoft.com/en-us/sqlserver/aa937724.aspx">file on MSDN</a></li>
<li>Once the server was restarted we opened <a href="http://localhost:9000/setup">http://localhost:9000/setup</a> and it upgraded our DBs</li>
</ol>
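<p>For reference, the JDBC entries we ended up with in <em>sonar.properties</em> looked something like the fragment below (the server and database names are placeholders for your own values):</p>

```properties
# Integrated security: do NOT set sonar.jdbc.username or sonar.jdbc.password
sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true
```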
<p>And that was it for the upgrade. We could then use the standard SonarQube upgrade features to upgrade our plug-ins and to add new ones like the LDAP plug-in.</p>
<p>Once the LDAP plug-in was in place (and the server restarted) we were automatically logged into SonarQube with our Windows AD accounts, so that was easy.</p>
<p>However we hit a problem with the new SonarQube 5.2 architecture and LDAP. With 5.2 there is no longer any requirement for the upgraded <a href="https://github.com/SonarSource/sonar-msbuild-runner/releases/tag/1.0.2">1.0.2 SonarQube MSBuild runner</a> to talk directly to the SonarQube DB; all communication is via the SonarQube server. Obviously the user account that makes the call to the SonarQube server needs to be granted suitable rights. That is fairly obvious; the point we tripped up on was ‘who is the runner running as?’ I had assumed it ran as the build agent account, but this was not the case. As the connection to SonarQube is a TFS managed service, it has its own security credentials. Prior to 5.2 these credentials (other than the SonarQube server URL) had not mattered, as the SonarQube runner made its own direct connection to the SonarQube DB. Post 5.2, with no DB connection and LDAP in use, these service credentials become important. Once we had set them correctly, to a user with suitable rights, we were able to do new SonarQube analysis runs.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_282.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_278.png" title="image"></a></p>
<p>One other item of note. The change in architecture with 5.2 means that more work is being done on the server as opposed to the client runner. The net effect is a short delay between a run completing and the results appearing on the dashboard. Once you expect it, it is not an issue, but it was a worry the first time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Finding it hard to make use of Azure for DevTest?</title>
      <link>https://blog.richardfennell.net/posts/finding-it-hard-to-make-use-of-azure-for-devtest/</link>
      <pubDate>Wed, 18 Nov 2015 21:54:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/finding-it-hard-to-make-use-of-azure-for-devtest/</guid>
      <description>&lt;p&gt;Announced at &lt;a href=&#34;https://www.google.co.uk/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=1&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw&#34;&gt;Connect()&lt;/a&gt; today were a couple of new tools that could really help a team with their DevOps issues when working with VSTS and Azure (and potentially other scenarios too).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DevTest Lab is a new set of tooling within the Azure portal that allows the easy management of Test VMs, their creation and management as well as providing a means to control how many VMs team members can create, thus controlling cost. Have a look at &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/18/getting-started-with-devtest-lab-for-azure.aspx&#34;&gt;Chuck’s post on getting started with DevTest Labs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;To aid general deployment, have a look at the new Release tooling, now in public preview. This is based on the same agents as the vNext build system and can provide a great way to formalise your deployment process. Have a look at &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/18/announcing-the-new-release-management-service-in-visual-studio-team-services.aspx&#34;&gt;Vijay’s post on getting started with the new Release tools&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>Announced at <a href="https://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw">Connect()</a> today were a couple of new tools that could really help a team with their DevOps issues when working with VSTS and Azure (and potentially other scenarios too).</p>
<ul>
<li>DevTest Lab is a new set of tooling within the Azure portal that allows the easy management of Test VMs, their creation and management as well as providing a means to control how many VMs team members can create, thus controlling cost. Have a look at <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/18/getting-started-with-devtest-lab-for-azure.aspx">Chuck’s post on getting started with DevTest Labs</a></li>
<li>To aid general deployment, have a look at the new Release tooling, now in public preview. This is based on the same agents as the vNext build system and can provide a great way to formalise your deployment process. Have a look at <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2015/11/18/announcing-the-new-release-management-service-in-visual-studio-team-services.aspx">Vijay’s post on getting started with the new Release tools</a></li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Chrome extension to help with exploratory testing</title>
      <link>https://blog.richardfennell.net/posts/chrome-extension-to-help-with-exploratory-testing/</link>
      <pubDate>Wed, 18 Nov 2015 19:52:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/chrome-extension-to-help-with-exploratory-testing/</guid>
      <description>&lt;p&gt;One of the many interesting announcements at &lt;a href=&#34;https://www.google.co.uk/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=1&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw&#34;&gt;Connect()&lt;/a&gt; today was that the new Microsoft Chrome Extension for Exploratory Testing  is  available in the &lt;a href=&#34;https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fmarketplace.visualstudio.com%2fitems%2fms.vss-exploratorytesting-web&amp;amp;data=01%7c01%7cravishan%40064d.mgd.microsoft.com%7c10c32f84487d4936287c08d2ef7ebd8b%7c72f988bf86f141af91ab2d7cd011db47%7c1&amp;amp;sdata=LrsJEhUhWIcopJn%2b8nLpLhW9GVdIHdjlWZ87XiQURSw%3d&#34;&gt;Chrome Store&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is a great tool if you use VSO, sorry VSTS, allowing an easy way to ‘kick the tyres’ on your application, logging any bugs directly back to VSTS as Bug work items.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_280.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_276.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Best of all, it makes it easy to test your application on other platforms with the link to &lt;a href=&#34;http://www.perfectomobile.com&#34;&gt;Perfecto Mobile&lt;/a&gt;. Just press the device button, login and you can launch a session on a real physical mobile device to continue your exploratory testing.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>One of the many interesting announcements at <a href="https://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw">Connect()</a> today was that the new Microsoft Chrome Extension for Exploratory Testing is available in the <a href="https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fmarketplace.visualstudio.com%2fitems%2fms.vss-exploratorytesting-web&amp;data=01%7c01%7cravishan%40064d.mgd.microsoft.com%7c10c32f84487d4936287c08d2ef7ebd8b%7c72f988bf86f141af91ab2d7cd011db47%7c1&amp;sdata=LrsJEhUhWIcopJn%2b8nLpLhW9GVdIHdjlWZ87XiQURSw%3d">Chrome Store</a>.</p>
<p>This is a great tool if you use VSO, sorry VSTS, allowing an easy way to ‘kick the tyres’ on your application, logging any bugs directly back to VSTS as Bug work items.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_280.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_276.png" title="image"></a></p>
<p>Best of all, it makes it easy to test your application on other platforms with the link to <a href="http://www.perfectomobile.com">Perfecto Mobile</a>. Just press the device button, login and you can launch a session on a real physical mobile device to continue your exploratory testing.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_281.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_277.png" title="image"></a></p>
<p>The only downside I can see is that if, like me, you would love this functionality for on-premises TFS, you need to wait a while; this first preview only supports VSTS.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Hello to Visual Studio Team Services</title>
      <link>https://blog.richardfennell.net/posts/hello-to-visual-studio-team-services/</link>
      <pubDate>Wed, 18 Nov 2015 16:59:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/hello-to-visual-studio-team-services/</guid>
<description>&lt;p&gt;After Microsoft’s announcements at today’s &lt;a href=&#34;https://www.google.co.uk/url?sa=t&amp;amp;rct=j&amp;amp;q=&amp;amp;esrc=s&amp;amp;source=web&amp;amp;cd=1&amp;amp;cad=rja&amp;amp;uact=8&amp;amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw&#34;&gt;Connect() event&lt;/a&gt;, Visual Studio Online (VSO) is now Visual Studio Team Services (VSTS). It is a good job I never changed the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/?tag=/VSTS&#34;&gt;tag on this blog from VSTS&lt;/a&gt; when Microsoft dropped the Team System name a few years ago.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2015/11/18/news-from-connect-2015.aspx&#34;&gt;For a run down of all the VSTS announcements have a look at Brian Harry’s blog&lt;/a&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>After Microsoft’s announcements at today’s <a href="https://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;uact=8&amp;ved=0CCMQFjAAahUKEwils4-1uJrJAhVD1BoKHfZlB7w&amp;url=http%3A%2F%2Fconnect2015.visualstudio.com%2F&amp;usg=AFQjCNGWIW03xz_LwALuizArROkRwiSEXw">Connect() event</a>, Visual Studio Online (VSO) is now Visual Studio Team Services (VSTS). It is a good job I never changed the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/?tag=/VSTS">tag on this blog from VSTS</a> when Microsoft dropped the Team System name a few years ago.</p>
<p><a href="http://blogs.msdn.com/b/bharry/archive/2015/11/18/news-from-connect-2015.aspx">For a run down of all the VSTS announcements have a look at Brian Harry’s blog</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Why you need to use vNext build tasks to share scripts between builds</title>
      <link>https://blog.richardfennell.net/posts/why-you-need-to-use-vnext-build-tasks-to-share-scripts-between-builds/</link>
      <pubDate>Tue, 17 Nov 2015 16:51:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-you-need-to-use-vnext-build-tasks-to-share-scripts-between-builds/</guid>
<description>&lt;p&gt;Whilst doing a &lt;a href=&#34;https://msdn.microsoft.com/Library/vs/alm/Build/overview&#34;&gt;vNext build&lt;/a&gt; from a TFVC repository I needed to map both my production code branch and a common folder of scripts that I intended to use in a number of builds, so my build workspace was set to&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Map – &lt;strong&gt;$/BM/mycode/main&lt;/strong&gt;                                     - my production code&lt;/li&gt;
&lt;li&gt;Map – &lt;strong&gt;$/BM/BuildDefinations/vNextScripts&lt;/strong&gt; - my shared PowerShell I wish to run in different builds e.g. assembly versioning.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As I wanted this to be a CI build, I also set the trigger to &lt;strong&gt;$/BM/mycode/main&lt;/strong&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Whilst doing a <a href="https://msdn.microsoft.com/Library/vs/alm/Build/overview">vNext build</a> from a TFVC repository I needed to map both my production code branch and a common folder of scripts that I intended to use in a number of builds, so my build workspace was set to</p>
<ul>
<li>Map – <strong>$/BM/mycode/main</strong>                                     - my production code</li>
<li>Map – <strong>$/BM/BuildDefinations/vNextScripts</strong> - my shared PowerShell I wish to run in different builds e.g. assembly versioning.</li>
</ul>
<p>As I wanted this to be a CI build, I also set the trigger to <strong>$/BM/mycode/main</strong></p>
<p>The problem I found was that with my workspace set as above, the associated changes for the build include anything checked into <strong>$/BM</strong> and below. Also the source branch was set as <strong>$/BM</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_278.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_274.png" title="image"></a></p>
<p>To fix this problem I had to remove the mapping to the scripts folder; once this was done the associated changes shown were only those for my production code area, and the source branch was listed correctly.</p>
<p>But what to do about running my script?</p>
<p>I don’t really want to have to copy a common script to each build; there is huge potential for error there, and versioning issues if I want a common script in all builds. The best solution I found was to take the PowerShell script, in my case the <a href="https://msdn.microsoft.com/Library/vs/alm/Build/scripts/index">sample assembly versioning script provided for VSO</a>, and package it as a vNext build task. It took no modification, just the addition of a manifest file. You can find the task on my <a href="https://github.com/rfennell/vNextBuild">vNextBuild GitHub repo</a></p>
<p>This custom task could then be uploaded to my TFS server and used in all my builds. As it picks up its variables from environment variables it requires no configuration, extracting the version number from the build number format.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_279.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_275.png" title="image"></a></p>
<p>If you wish to use this task, you need to follow the same instructions to set up your <a href="https://github.com/Microsoft/vso-agent-tasks">development environment as for the VSO Tasks</a>, then:</p>
<ol>
<li>Clone the repo <a href="https://github.com/rfennell/vNextBuild.git">https://github.com/rfennell/vNextBuild.git</a></li>
<li>In the root of the repo run gulp to build the task</li>
<li>Use <a href="https://www.npmjs.com/package/tfx-cli">tfx</a> to upload the task to your TFS instance</li>
</ol>
]]></content:encoded>
    </item>
    <item>
      <title>Versioning a VSIX package as part of the TFS vNext build (when the source is on GitHub)</title>
      <link>https://blog.richardfennell.net/posts/versioning-a-vsix-package-as-part-of-the-tfs-vnext-build-when-the-source-is-on-github/</link>
      <pubDate>Tue, 10 Nov 2015 21:56:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/versioning-a-vsix-package-as-part-of-the-tfs-vnext-build-when-the-source-is-on-github/</guid>
<description>&lt;p&gt;I have recently added a CI build to my GitHub stored &lt;a href=&#34;https://github.com/rfennell/ParametersXmlAddin&#34;&gt;ParametersXmlAddin&lt;/a&gt; VSIX project. I did this using Visual Studio Online’s hosted build service, &lt;a href=&#34;https://msdn.microsoft.com/en-us/Library/vs/alm/Build/github/index&#34;&gt;did you know that this could be used to build source from GitHub&lt;/a&gt;?&lt;/p&gt;
&lt;p&gt;As part of this build I wanted to version stamp the assemblies and the resultant VSIX package. To do the former I used the script documented on &lt;a href=&#34;https://msdn.microsoft.com/Library/vs/alm/Build/scripts/index&#34;&gt;MSDN&lt;/a&gt;, for the latter I also used the same basic method of extracting the version from the build number as used in the script for versioning assemblies. You can find my VSIX &lt;a href=&#34;https://github.com/rfennell/vNextBuild/blob/master/PowerShell/ApplyVersionToVSIX.ps1&#34;&gt;script stored in this repo&lt;/a&gt;.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have recently added a CI build to my GitHub stored <a href="https://github.com/rfennell/ParametersXmlAddin">ParametersXmlAddin</a> VSIX project. I did this using Visual Studio Online’s hosted build service, <a href="https://msdn.microsoft.com/en-us/Library/vs/alm/Build/github/index">did you know that this could be used to build source from GitHub</a>?</p>
<p>As part of this build I wanted to version stamp the assemblies and the resultant VSIX package. To do the former I used the script documented on <a href="https://msdn.microsoft.com/Library/vs/alm/Build/scripts/index">MSDN</a>, for the latter I also used the same basic method of extracting the version from the build number as used in the script for versioning assemblies. You can find my VSIX <a href="https://github.com/rfennell/vNextBuild/blob/master/PowerShell/ApplyVersionToVSIX.ps1">script stored in this repo</a>.</p>
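<p>At its core the VSIX script just rewrites the version in the extension’s <em>.vsixmanifest</em>. As a rough sketch of the idea (Python for illustration, the real script is PowerShell; this assumes a VSIX 2.0 style manifest where the version lives on the <code>Identity</code> element, and the sample manifest values are made up):</p>

```python
import re

def stamp_vsix_version(manifest_xml: str, version: str) -> str:
    """Replace the Version attribute on the <Identity> element of a
    VSIX 2.0 manifest with the version derived from the build number."""
    return re.sub(
        r'(<Identity\b[^>]*\bVersion=")[^"]*(")',
        lambda m: m.group(1) + version + m.group(2),
        manifest_xml,
    )

manifest = '<PackageManifest><Metadata><Identity Id="MyAddin" Version="1.0.0.0" /></Metadata></PackageManifest>'
print(stamp_vsix_version(manifest, "2015.11.10.1"))
```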
<p>I added both of these scripts to my <a href="https://github.com/rfennell/ParametersXmlAddin">ParametersXmlAddin</a> project repo’s <strong>Script</strong> folder and just call them at the start of my build with a pair of PowerShell tasks. As they both get the build number from the environment variables there is no need to pass any arguments.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_275.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_271.png" title="image"></a></p>
<p>I only wanted to publish the VSIX package. This was done by setting the contents filter on the <strong>Publish Build Artifacts</strong> task to <strong>**\*.vsix</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_276.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_272.png" title="image"></a></p>
<p>The final step was to enable the badge for the build, this is done on the <strong>General</strong> tab. Once enabled, I copied the provided URL for the badge graphics that shows the build status and added this as an image to the <a href="https://github.com/rfennell/ParametersXmlAddin/blob/master/README.md">Readme.MD file on my repo’s wiki</a></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_277.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_273.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Why can’t I assign a VSO user as having ‘eligible MSDN’ using an AAD work account?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-assign-a-vso-user-as-having-eligible-msdn-using-an-aad-work-account/</link>
      <pubDate>Wed, 04 Nov 2015 21:35:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-assign-a-vso-user-as-having-eligible-msdn-using-an-aad-work-account/</guid>
<description>&lt;p&gt;When accessing &lt;a href=&#34;http://tfs.visualstudio.com&#34;&gt;VSO&lt;/a&gt; you have two authentication options; either a LiveID (or an MSA using its newest name) or a Work Account ID (a domain account). The latter is used to provide extra security, so a domain admin can easily control who has access to a whole set of systems. It does assume you have used Azure Active Directory (AAD) that is sync’d with your on-premises AD, and that this AAD is used to back your VSO instance. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/20/Linking-VSO-to-your-Azure-Subscription-and-Azure-Active-Directory.aspx&#34;&gt;See my previous post on this subject.&lt;/a&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>When accessing <a href="http://tfs.visualstudio.com">VSO</a> you have two authentication options; either a LiveID (or an MSA using its newest name) or a Work Account ID (a domain account). The latter is used to provide extra security, so a domain admin can easily control who has access to a whole set of systems. It does assume you have used Azure Active Directory (AAD) that is sync’d with your on-premises AD, and that this AAD is used to back your VSO instance. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/20/Linking-VSO-to-your-Azure-Subscription-and-Azure-Active-Directory.aspx">See my previous post on this subject.</a></p>
<p>If you are doing this, the issue you often see is that VSO does not pick up your MSDN subscription because it is linked to an MSA, not a work account. This is all solvable, but there are hoops to jump through, more than there should be sometimes.</p>
<h3 id="basic-process">Basic Process</h3>
<p>First you need to link your MSDN account to a work account:</p>
<ul>
<li>Login to <a href="https://msdn.microsoft.com">https://msdn.microsoft.com</a> with the MSA that is associated with your MSDN account.</li>
<li>Click on the MSDN subscriptions menu option.</li>
<li>Click on the Link to work account and enter your work ID. Note that it will also set your Microsoft Azure linked work account.</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_272.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_268.png" title="image"></a></p>
<p>Assuming your work account is listed in your AD/AAD, over in VSO you should now be able to …</p>
<ul>
<li>
<p>Login as the VSO administrator</p>
</li>
<li>
<p>Invite any user in the AAD to your VSO instance via the link <code>https://[theaccount].visualstudio.com/_user</code>. A user can be invited as:</p>
</li>
<li>
<p>Basic – you get 5 for free</p>
</li>
<li>
<p>Stakeholder – what we fall back to if there is an issue</p>
</li>
<li>
<p>MSDN Subscription – the one we want (in screenshot below the green box shows a user where MSDN has been validated, the red box is a user who has not logged in yet with an account associated with a valid MSDN subscription)</p>
</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_273.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_269.png" title="image"></a></p>
<ul>
<li>Once invited a user gets an email so they can login as shown below. Make sure you pick the work account login link (lower left). Note that this is mocked up in the screenshot below, as which login options are shown appears in a context-sensitive way, only being shown the first time a user connects and if the VSO is AAD backed. If you pick the main login fields (the wrong ones) it will try to login assuming the ID is an MSA, which will not work. This is a particularly confusing issue if you used the same email address for your MSA as your Work Account; more on this in the troubleshooting section.</li>
</ul>
<p> <a href="/wp-content/uploads/sites/2/historic/image_274.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_270.png" title="image"></a></p>
<ul>
<li>On later connections only the work ID login will be shown</li>
<li>Once a user has logged in for the first time with the correct ID, the VSO admin should be able to see the MSDN subscription is validated</li>
</ul>
<h3 id="troubleshooting">Troubleshooting</h3>
<p>We have seen a problem where, though the user is in the domain and correctly added to VSO, it will not register that the MSDN subscription is active. These steps can help.</p>
<ul>
<li>
<p>Make sure in the <a href="https://msdn.microsoft.com">https://msdn.microsoft.com</a> portal you have actually linked your work ID. You still need to do this explicitly even if your MSA and work ID use the same email address e.g. <a href="mailto:user@domain.com">user@domain.com</a>. Using the same email address for both IDs can get confusing, so I would recommend setting up your MSA email address so it does not clash with your work ID.</p>
</li>
<li>
<p>When you login to VSO <strong>MAKE SURE YOU USE THE WORK ID LOGIN LINK (LHS OF DIALOG UNDER VSO LOGO) TO LOGIN WITH A WORK ID AND NOT THE MAIN LIVEID FIELDS</strong>. I can’t stress this enough, especially if you use the same email address for both the MSA and work account.</p>
</li>
<li>
<p>If you still get issues with picking up the MSDN subscription</p>
<ul>
<li>
<p>In VSO the admin should set the user to be a basic user</p>
</li>
<li>
<p>In <a href="https://msdn.microsoft.com">https://msdn.microsoft.com</a> the user should make sure they did not make any typos when linking the work account ID</p>
</li>
<li>
<p>The user should sign out of VSO and back in using their work ID, <strong>MAKING SURE THEY USE THE CORRECT WORK ID LOGIN DIALOG</strong>. They should see the features available to a basic user</p>
</li>
<li>
<p>The VSO admin should change the role assignment in VSO to be MSDN eligible and it should flip over without a problem. There seems to be no need to logout and back in again.</p>
</li>
</ul>
</li>
</ul>
<p>Note that if you assign a new MSA to an MSDN subscription it can take a little while to propagate; if activation emails don’t arrive, pause a while and try again later. You can’t do any of this until you can login to MSDN with your MSA.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SonarQube 5.2 released</title>
      <link>https://blog.richardfennell.net/posts/sonarqube-5-2-released/</link>
      <pubDate>Tue, 03 Nov 2015 17:56:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sonarqube-5-2-released/</guid>
      <description>&lt;p&gt;At my session at DDDNorth I mentioned that some of the settings you needed to configure in SonarQube 5.1, such as DB connection strings for SonarRunner, would not need to be made once 5.2 was released. &lt;a href=&#34;http://www.sonarsource.com/2015/11/02/sonarqube-5-2-released/&#34;&gt;Well it was released today&lt;/a&gt;. The most important changes for us are&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Server handles all DB connections&lt;/li&gt;
&lt;li&gt;LDAP support for user authentication&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This should make the install process easier.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At my session at DDDNorth I mentioned that some of the settings you needed to configure in SonarQube 5.1, such as DB connection strings for SonarRunner, would not need to be made once 5.2 was released. <a href="http://www.sonarsource.com/2015/11/02/sonarqube-5-2-released/">Well it was released today</a>. The most important changes for us are</p>
<ul>
<li>Server handles all DB connections</li>
<li>LDAP support for user authentication</li>
</ul>
<p>This should make the install process easier.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My DDDNorth session on Technical Debt and SonarQube</title>
      <link>https://blog.richardfennell.net/posts/my-dddnorth-session-on-technical-debt-and-sonarqube/</link>
      <pubDate>Sun, 25 Oct 2015 11:27:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-dddnorth-session-on-technical-debt-and-sonarqube/</guid>
      <description>&lt;p&gt;Thanks to everyone who came to my session at &lt;a href=&#34;http://www.dddnorth.co.uk&#34;&gt;DDDNorth&lt;/a&gt; on SonarQube, hope you found it useful. The links to resources for my session are&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sonarqube.org/&#34;&gt;SonarQube documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/tags/managing&amp;#43;technical&amp;#43;debt/&#34;&gt;Microsoft Product Team posts on Technical Debt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;https://github.com/SonarSource/sonar-.net-documentation&#34;&gt;ALM Rangers Guide on SonarQube (source)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalmrangers/archive/tags/vsartechnicaldebt/&#34;&gt;ALM Rangers Guide on SonarQube (explanatory post)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://vs.sonarlint.org/&#34;&gt;SonarLint&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And you can find my slides on my GitHub repo &lt;a href=&#34;https://github.com/rfennell/Presentations&#34;&gt;https://github.com/rfennell/Presentations&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who came to my session at <a href="http://www.dddnorth.co.uk">DDDNorth</a> on SonarQube, hope you found it useful. The links to resources for my session are</p>
<ul>
<li><a href="http://www.sonarqube.org/">SonarQube documentation</a></li>
<li><a href="http://blogs.msdn.com/b/visualstudioalm/archive/tags/managing&#43;technical&#43;debt/">Microsoft Product Team posts on Technical Debt</a></li>
<li><a href="https://github.com/SonarSource/sonar-.net-documentation">ALM Rangers Guide on SonarQube (source)</a></li>
<li><a href="http://blogs.msdn.com/b/visualstudioalmrangers/archive/tags/vsartechnicaldebt/">ALM Rangers Guide on SonarQube (explanatory post)</a></li>
<li><a href="http://vs.sonarlint.org/">SonarLint</a></li>
</ul>
<p>And you can find my slides on my GitHub repo <a href="https://github.com/rfennell/Presentations">https://github.com/rfennell/Presentations</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Patterns &amp;amp; Practices Architecture Track at Future Decoded</title>
      <link>https://blog.richardfennell.net/posts/patterns-practices-architecture-track-at-future-decoded/</link>
      <pubDate>Mon, 12 Oct 2015 15:33:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/patterns-practices-architecture-track-at-future-decoded/</guid>
      <description>&lt;p&gt;In case you had not noticed, our MD and Connected Systems MVP Robert Hogg posted about the new &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/boss/post/2015/10/07/wraps-are-off-patterns-and-practices-architecture-track-a-future-decoded.aspx&#34;&gt;Patterns &amp;amp; Practices Architecture Track he is hosting on Day 1 of the Microsoft Future Decoded event&lt;/a&gt;  next month in London.&lt;/p&gt;
&lt;p&gt;This track is an additional track to the now full Future Decoded. If you are interested in attending then get in touch with &lt;a href=&#34;mailto:enquiries@blackmarble.com&#34;&gt;enquiries@blackmarble.com&lt;/a&gt;, for the attention of Linda, and she can help you out with a special code (if there are still any left). This code will not only give you access to the excellent p&amp;amp;p track on Day One, but also the Keynotes, so please select Day One when you register!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In case you had not noticed, our MD and Connected Systems MVP Robert Hogg posted about the new <a href="http://blogs.blackmarble.co.uk/blogs/boss/post/2015/10/07/wraps-are-off-patterns-and-practices-architecture-track-a-future-decoded.aspx">Patterns &amp; Practices Architecture Track he is hosting on Day 1 of the Microsoft Future Decoded event</a>  next month in London.</p>
<p>This track is an additional track to the now full Future Decoded. If you are interested in attending then get in touch with <a href="mailto:enquiries@blackmarble.com">enquiries@blackmarble.com</a>, for the attention of Linda, and she can help you out with a special code (if there are still any left). This code will not only give you access to the excellent p&amp;p track on Day One, but also the Keynotes, so please select Day One when you register!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Release Manager - New deployment is not allowed as an another deployment is in progress</title>
      <link>https://blog.richardfennell.net/posts/release-manager-new-deployment-is-not-allowed-as-an-another-deployment-is-in-progress/</link>
      <pubDate>Wed, 30 Sep 2015 21:08:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/release-manager-new-deployment-is-not-allowed-as-an-another-deployment-is-in-progress/</guid>
      <description>&lt;p&gt;Whilst working with a vNext Release Management pipeline I started seeing the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Microsoft.TeamFoundation.Release.Common.Helpers.OperationFailedException:&lt;br&gt;
New deployment is not allowed as an another deployment is in progress.&lt;br&gt;
Retry the deployment after sometime.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The problem was I could not see any blocked or paused deployments. All Internet searches mentioned multiple pipelines that share components, but this was not the issue.&lt;/p&gt;
&lt;p&gt;Eventually I found the issue: my release pipeline included a step that ran &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/04/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline.aspx&#34;&gt;CodedUI tests via TCM&lt;/a&gt;; a previous run of this template had triggered the tests via TCM, but they had stalled. I found this by looking in MTM.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst working with a vNext Release Management pipeline I started seeing the error</p>
<blockquote>
<p><em>Microsoft.TeamFoundation.Release.Common.Helpers.OperationFailedException:<br>
New deployment is not allowed as an another deployment is in progress.<br>
Retry the deployment after sometime.</em></p></blockquote>
<p>The problem was I could not see any blocked or paused deployments. All Internet searches mentioned multiple pipelines that share components, but this was not the issue.</p>
<p>Eventually I found the issue: my release pipeline included a step that ran <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/04/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline.aspx">CodedUI tests via TCM</a>; a previous run of this template had triggered the tests via TCM, but they had stalled. I found this by looking in MTM.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_271.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_267.png" title="image"></a></p>
<p>Release Management was just saying the release was rejected with the above error message, no clue about the unfinished test run. Not that helpful.</p>
<p>You might have expected Release Management to return only after the test had timed out, but that is only down to whether you <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/11/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline-Part-2.aspx">set the release pipeline to wait or not</a>; I had set mine not to wait.</p>
<p>Once I stopped this test run via MTM all was OK.</p>
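As an aside, you don’t have to go via the MTM UI to find and stop the stuck run; the TCM.exe command line tool (installed with Visual Studio/MTM) can do the same. The switches below are a sketch based on the TCM documentation of the era, and the plan ID, run ID, collection URL and team project are all placeholders, so check <code>tcm run /?</code> on your own system before relying on them.

```
REM list the test runs for the plan, to spot any still marked as in progress
tcm run /list /planid:42 /collection:http://myserver:8080/tfs/DefaultCollection /teamproject:MyProject

REM abort the stalled run using the ID shown in the list
tcm run /abort /id:123 /collection:http://myserver:8080/tfs/DefaultCollection /teamproject:MyProject
```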
]]></content:encoded>
    </item>
    <item>
      <title>Agenda for Black Marble’s annual Architecture Forum is firming up with a keynote from Martin Woodward</title>
      <link>https://blog.richardfennell.net/posts/agenda-for-black-marbles-annual-architecture-forum-is-firming-up-with-a-keynote-from-martin-woodward/</link>
      <pubDate>Mon, 28 Sep 2015 19:46:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agenda-for-black-marbles-annual-architecture-forum-is-firming-up-with-a-keynote-from-martin-woodward/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://blackmarble.com/news/architecture-forum-2015-agenda/&#34;&gt;agenda&lt;/a&gt; for our 8th annual &lt;a href=&#34;http://blackmarble.com/events/Architecture%20Forum%20in%20the%20North%20-%208&#34;&gt;Black Marble Architecture Forum&lt;/a&gt; is firming up. Just confirmed is our keynote from &lt;a href=&#34;https://twitter.com/martinwoodward&#34;&gt;Martin Woodward&lt;/a&gt; the Executive Director of the &lt;a href=&#34;http://www.dotnetfoundation.org/&#34;&gt;.NET Foundation&lt;/a&gt;, discussing open source adoption within Microsoft&lt;/p&gt;
&lt;p&gt;There are still spaces for this free event, so why not &lt;a href=&#34;http://blackmarble.com/events/Architecture%20Forum%20in%20the%20North%20-%208&#34;&gt;register&lt;/a&gt;?&lt;/p&gt;
&lt;p&gt;The event is on the 15th of December in Leeds.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://blackmarble.com/news/architecture-forum-2015-agenda/">agenda</a> for our 8th annual <a href="http://blackmarble.com/events/Architecture%20Forum%20in%20the%20North%20-%208">Black Marble Architecture Forum</a> is firming up. Just confirmed is our keynote from <a href="https://twitter.com/martinwoodward">Martin Woodward</a> the Executive Director of the <a href="http://www.dotnetfoundation.org/">.NET Foundation</a>, discussing open source adoption within Microsoft</p>
<p>There are still spaces for this free event, so why not <a href="http://blackmarble.com/events/Architecture%20Forum%20in%20the%20North%20-%208">register</a>?</p>
<p>The event is on the 15th of December in Leeds.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running nUnit and Jasmine.JS unit tests in TFS/VSO vNext build</title>
      <link>https://blog.richardfennell.net/posts/running-nunit-and-jasmine-js-unit-tests-in-tfsvso-vnext-build/</link>
      <pubDate>Wed, 23 Sep 2015 13:28:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-nunit-and-jasmine-js-unit-tests-in-tfsvso-vnext-build/</guid>
      <description>&lt;p&gt;&lt;em&gt;This article was first published on the Microsoft’s UK Developers site as&lt;/em&gt; &lt;a href=&#34;http://www.microsoft.com/en-gb/developers/articles/week04aug15/nunit-and-jasmine-js-unit-tests-in-tfs-vso-vnext-build/&#34;&gt;Running nUnit and Jasmine.JS unit tests in TFS/VSO vNext build&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;With the advent of &lt;a href=&#34;https://msdn.microsoft.com/en-us/Library/vs/alm/Build/overview&#34;&gt;vNext build&lt;/a&gt; in &lt;a href=&#34;https://www.visualstudio.com/en-us/downloads/visual-studio-2015-downloads-vs.aspx&#34;&gt;TFS 2015&lt;/a&gt; and &lt;a href=&#34;https://www.visualstudio.com/products/what-is-visual-studio-online-vs&#34;&gt;Visual Studio Online&lt;/a&gt; running unit tests that are not MSTest based within your build process is far more straightforward than it used to be. No longer do you have to use custom XAML build activities or tell all your TFS build controllers where the test runner assemblies are. The ‘out the box’ vNext build Visual Studio Test task will automatically load any test adaptors it finds in the path specified for test runners in its advanced properties, a path that can be populated via NuGet.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This article was first published on the Microsoft’s UK Developers site as</em> <a href="http://www.microsoft.com/en-gb/developers/articles/week04aug15/nunit-and-jasmine-js-unit-tests-in-tfs-vso-vnext-build/">Running nUnit and Jasmine.JS unit tests in TFS/VSO vNext build</a></p>
<p>With the advent of <a href="https://msdn.microsoft.com/en-us/Library/vs/alm/Build/overview">vNext build</a> in <a href="https://www.visualstudio.com/en-us/downloads/visual-studio-2015-downloads-vs.aspx">TFS 2015</a> and <a href="https://www.visualstudio.com/products/what-is-visual-studio-online-vs">Visual Studio Online</a> running unit tests that are not MSTest based within your build process is far more straightforward than it used to be. No longer do you have to use custom XAML build activities or tell all your TFS build controllers where the test runner assemblies are. The ‘out the box’ vNext build Visual Studio Test task will automatically load any test adaptors it finds in the path specified for test runners in its advanced properties, a path that can be populated via NuGet.</p>
<h3 id="running-nunit-tests">Running nUnit tests</h3>
<p>All this means that to find and run MSTest and nUnit tests as part of your build all you have to do is as follows</p>
<ol>
<li>
<p>Create a solution that contains a project with MSTest and nUnit tests; in my sample this is an MVC web application project with its automatically created MSTest unit test project.</p>
</li>
<li>
<p>In the test project add some nUnit tests. Use <a href="https://www.nuget.org/packages/NUnit/">NuGet to add the references to nUnit</a> to the test project so it compiles.</p>
</li>
<li>
<p><strong>Historically</strong> in your local Visual Studio instance you needed to install the <a href="https://visualstudiogallery.msdn.microsoft.com/6ab922d0-21c0-4f06-ab5f-4ecd1fe7175d">nUnit Test Runner VSIX package from Visual Studio Gallery</a> – this allows Visual Studio to discover your nUnit tests, as well as any MSTest ones, and run them via the built in Test Explorer</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_260.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_256.png" title="image"></a></p>
<p><strong>IMPORTANT Change</strong> – However, installing this VSIX package is no longer required. If you use <a href="https://www.nuget.org/packages/NUnitTestAdapter/">NuGet to add the nUnit Test Adapter</a> to the solution, as well as the <a href="https://www.nuget.org/packages/NUnit/2.6.4">nUnit package</a> itself, then Visual Studio can find the nUnit tests without the VSIX package. This is useful but not world changing on your development PC, but on the build box it means the NuGet restore will make sure the nUnit test adapter assemblies are pulled down onto the build box’s file system and used to find tests with no extra work.</p>
<p><strong>Note</strong>: If you still want to install the VSIX package on your local Visual Studio instance you can, it is just that you don’t have to.</p>
</li>
<li>
<p>Check your solution into TFS/VSO source control. It does not matter if it is TFVC or Git based.</p>
</li>
<li>
<p>Create a new vNext build using the Visual Studio template</p>
</li>
<li>
<p>You can leave most of the parameters on their default settings. But you do need to edit the Visual Studio Test task’s advanced settings to point at the NuGet packages folder for your solution (which will be populated via NuGet restore) so the custom nUnit test adaptor can be found, i.e. usually setting it to <strong>$(Build.SourcesDirectory)\packages</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_261.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_257.png" title="image"></a></p>
</li>
<li>
<p>The build should run and find your tests, the MStest ones because they are built in and the nUnit ones because it found the custom test adaptor due to the NuGet restore being done prior to the build. The test results can be found on the build summary page</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_262.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_258.png" title="image"></a></p>
</li>
</ol>
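As a sketch, after step 2 the test project’s packages.config ends up containing both the framework and the adapter. The versions shown here are purely illustrative, so use whatever NuGet resolves for you:

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- the framework the nUnit tests compile against -->
  <package id="NUnit" version="2.6.4" targetFramework="net45" />
  <!-- the adapter that lets vstest.console (and so the vNext build task) discover the tests -->
  <package id="NUnitTestAdapter" version="2.0.0" targetFramework="net45" />
</packages>
```

It is the restore of the test adapter package that puts the adapter assemblies under the packages folder the build task is pointed at.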
<h3 id="but-what-if-you-want-run-jasminejs-test">But what if you want to run Jasmine.JS tests?</h3>
<p>If you want to run <a href="http://jasmine.github.io/">Jasmine JavaScript unit tests</a> the process is basically the same. The only major difference is that you do still need to install the <a href="https://visualstudiogallery.msdn.microsoft.com/f8741f04-bae4-4900-81c7-7c9bfb9ed1fe">Chutzpah Test runner</a> on your local Visual Studio as a VSIX package to run the tests locally. There is a <a href="https://www.nuget.org/packages/Chutzpah/">NuGet package for the Chutzpah test runner</a> so you can avoid having to manually unpack the VSIX and get it into source control to deploy it to the build host (<a href="http://blogs.msdn.com/b/visualstudioalm/archive/2012/07/09/javascript-unit-tests-on-team-foundation-service-with-chutzpah.aspx">unless you really want to follow this process</a>), but this package does not currently enable Visual Studio to find the Jasmine tests without the VSIX extension being installed, or at least it didn’t for me.</p>
<p>Using the solution I used before</p>
<ol>
<li>
<p>Use <a href="https://www.nuget.org/packages/jasmine-js/">NuGet to add Jasmine.JS</a> to the test project</p>
</li>
<li>
<p>Add a test file to the test project e.g. mycode.tests.js (adding any JavaScript references needed to find any script code under test in the main WebApp project)</p>
</li>
<li>
<p>Install the <a href="https://visualstudiogallery.msdn.microsoft.com/f8741f04-bae4-4900-81c7-7c9bfb9ed1fe">Chutzpah Test runner in your local Visual Studio</a> as a VSIX extension, restart Visual Studio</p>
</li>
<li>
<p>You should now be able to see and run the Jasmine test run in the test runner as well as the MSTest and nUnit tests.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_263.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_259.png" title="image"></a></p>
</li>
<li>
<p>Add the <a href="https://www.nuget.org/packages/Chutzpah/">NuGet package for the Chutzpah test runner</a> to your solution, this is a solution level package, so does not need to be associated with any project.</p>
</li>
<li>
<p>Check the revised code into source control</p>
</li>
<li>
<p>In your vNext build add another Visual Studio Test task, set the test assembly filter to match your JavaScript test naming convention e.g. <strong>*.tests.js</strong> and the path to the custom test adaptor to <strong>$(Build.SourcesDirectory)\packages</strong> (as before)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_264.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_260.png" title="image"></a></p>
</li>
<li>
<p>Run the revised build.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_265.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_261.png" title="image"></a></p>
</li>
<li>
<p>You should see the two test tasks run and a pair of test results in the summary for the build.</p>
</li>
</ol>
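Chutzpah can also be driven by an optional chutzpah.json settings file placed alongside the tests. This is a minimal sketch, with property names per the Chutzpah documentation and paths that are illustrative for the solution layout above:

```json
{
  "Framework": "jasmine",
  "References": [
    { "Path": "../WebApp/Scripts/mycode.js" }
  ],
  "Tests": [
    { "Path": "mycode.tests.js" }
  ]
}
```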
<p>So now hopefully you should find this a more straightforward way to add testing to your vNext builds, allowing easy use of both your own build boxes and the hosted build service for VSO with testing frameworks they do not support ‘out the box’.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Powershell to help plot graphs of how long TFS upgrades take</title>
      <link>https://blog.richardfennell.net/posts/powershell-to-help-plot-graphs-of-how-long-tfs-upgrades-take/</link>
      <pubDate>Thu, 17 Sep 2015 16:08:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/powershell-to-help-plot-graphs-of-how-long-tfs-upgrades-take/</guid>
      <description>&lt;p&gt;When doing TFS upgrades it is useful to know roughly how long they will take. The upgrade programs give a number of steps, but not all steps are equal. Some are quick, some are slow. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx&#34;&gt;I have found it useful to graph past updates&lt;/a&gt; so I can get a feel of how long an update will take given it got to ‘step x in y minutes’. You can do this by hand, noting down time as specific steps are reached. However for a long upgrade it usually means pulling data out of the TFS TPC upgrade logs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When doing TFS upgrades it is useful to know roughly how long they will take. The upgrade programs give a number of steps, but not all steps are equal. Some are quick, some are slow. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx">I have found it useful to graph past updates</a> so I can get a feel of how long an update will take given it got to ‘step x in y minutes’. You can do this by hand, noting down time as specific steps are reached. However for a long upgrade it usually means pulling data out of the TFS TPC upgrade logs.</p>
<p>To make this process easier I put together this script to find the step completion rows in the log file and format them out such that they are easy to graph in Excel</p>
<pre tabindex="0"><code>param
(
    $logfile = "TPC_ApplyPatch.log",
    $outfile = "out.csv"
)

# A function to convert the start and end times to a number of minutes
# Can't use a simple timespan as we only have the time portion not the whole datetime
# Hence the hacky addition below if we pass midnight
function CalcDuration
{
    param
    (
        $startTime,
        $endTime
    )

    $diff = [dateTime]$endTime - $startTime
    if ([dateTime]$endTime -lt $startTime)
    {
        $diff += "23:59" # add (nearly) a day as we passed midnight
    }

    [int]$diff.Hours * 60 + $diff.Minutes
}

Write-Host "Importing $logfile for processing"
# pull out the lines we are interested in, using a regular expression to extract the columns
# the (.{8}) groups handle the fixed width columns, exact matches are used for the rest
$lines = Get-Content -Path $logfile | Select-String "  Executing step:" | Where{$_ -match "^(.)(.{8})(.{8})(Executing step:)(.{2})(.*)(')(.*)([(])(.*)([ ])([of])(.*)"} | ForEach{
    [PSCustomObject]@{
        'Step' = $Matches[10]
        'TimeStamp' = $Matches[2]
        'Action' = $Matches[6]
    }
}

# We assume the upgrade started at the timestamp of the 0th step
# Not strictly true but very close
[DateTime]$start = $lines[0].TimeStamp

Write-Host "Writing results to $outfile"
# Work out the elapsed time for each step
$steps = $lines | ForEach{
    [PSCustomObject]@{
        'Step' = $_.Step
        'TimeStamp' = $_.TimeStamp
        'ElapsedTime' = CalcDuration -startTime $start -endTime $_.TimeStamp
        'Action' = $_.Action
    }
}
$steps | Export-Csv $outfile -NoTypeInformation

# and list to screen
$steps
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Session accepted for DDDNorth</title>
      <link>https://blog.richardfennell.net/posts/session-accepted-for-dddnorth/</link>
      <pubDate>Mon, 14 Sep 2015 15:54:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/session-accepted-for-dddnorth/</guid>
      <description>&lt;p&gt;Pleased to say &lt;a href=&#34;http://www.dddnorth.co.uk/Sessions/Details/161&#34;&gt;my session on SonarQube has been accepted for DDDNorth&lt;/a&gt;. And it seems that registration has opened and closed today, &lt;a href=&#34;http://www.dddnorth.co.uk/Home/Register&#34;&gt;there is a wait list up now&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Good to see the DDD events still popular after 10 long years&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Pleased to say <a href="http://www.dddnorth.co.uk/Sessions/Details/161">my session on SonarQube has been accepted for DDDNorth</a>. And it seems that registration has opened and closed today, <a href="http://www.dddnorth.co.uk/Home/Register">there is a wait list up now</a></p>
<p>Good to see the DDD events still popular after 10 long years</p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Is the Microsoft Band any good for Triathlon? Training Yes, racing No</title>
      <link>https://blog.richardfennell.net/posts/is-the-microsoft-band-any-good-for-triathlon-training-yes-racing-no/</link>
      <pubDate>Mon, 14 Sep 2015 15:51:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/is-the-microsoft-band-any-good-for-triathlon-training-yes-racing-no/</guid>
      <description>&lt;p&gt;The title says it all, I have been using a Microsoft Band for a few months now and have found it a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/04/05/After-a-few-days-living-with-a-Microsoft-Band.aspx&#34;&gt;great tool for running and cycling as long as you are going out for less than about 5 hours&lt;/a&gt;. I tried to use it for the first time in a triathlon race at the Leeds Triathlon over the weekend.&lt;/p&gt;
&lt;p&gt;As it is not waterproof it was not an option for the swim (unlike my old Polar HR monitor), so I put it on in T1 (swim to bike); don’t think it wasted too much time! This is where I hit the first issue (or second, if you count that it is not waterproof): my fingers were too wet to operate the touch screen. I have seen this issue on runs on rainy days. So I did not manage to switch it to cycle mode, and did not bother to try again whilst cycling after I had dried out; I had other things on my mind, like holding a good aero position and moving faster.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The title says it all, I have been using a Microsoft Band for a few months now and have found it a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/04/05/After-a-few-days-living-with-a-Microsoft-Band.aspx">great tool for running and cycling as long as you are going out for less than about 5 hours</a>. I tried to use it for the first time in a triathlon race at the Leeds Triathlon over the weekend.</p>
<p>As it is not waterproof it was not an option for the swim (unlike my old Polar HR monitor), so I put it on in T1 (swim to bike); don’t think it wasted too much time! This is where I hit the first issue (or second, if you count that it is not waterproof): my fingers were too wet to operate the touch screen. I have seen this issue on runs on rainy days. So I did not manage to switch it to cycle mode, and did not bother to try again whilst cycling after I had dried out; I had other things on my mind, like holding a good aero position and moving faster.</p>
<p>I did however manage to switch to run mode as I ran out of T2 (bike to run) and it worked OK there.</p>
<p>So my wish list</p>
<ul>
<li>Make it water proof, enough for open water swimming</li>
<li>Add a way to sequence different activities (swim, bike, run) and have a simple button that works with wet fingers to switch between them – maybe a dev project for myself</li>
<li>And of course better battery life</li>
</ul>
<p>So I still think it is a good product, just not 100% perfect for me as yet</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running Typemock Isolator based tests in TFS vNext build</title>
      <link>https://blog.richardfennell.net/posts/running-typemock-isolator-based-tests-in-tfs-vnext-build/</link>
      <pubDate>Tue, 08 Sep 2015 20:02:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-typemock-isolator-based-tests-in-tfs-vnext-build/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 22 Mar 2016&lt;/strong&gt; This task is available in the &lt;a href=&#34;https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-TypeMockRunner-Task&#34;&gt;VSTS Marketplace&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.typemock.com/isolator-product-page&#34;&gt;Typemock Isolator&lt;/a&gt; provides a way to ‘mock the un-mockable’, such as sealed private classes in .NET, so it can be an invaluable tool in unit testing. To allow this mocking Isolator interception has to be started before any unit tests are run and stopped when completed. For a developer this is done automatically within the Visual Studio IDE, but on build systems you have to run something to do this as part of your build process. &lt;a href=&#34;http://www.typemock.com/docs?book=Isolator&amp;amp;page=Documentation%2FHtmlDocs%2Fintegratingwiththeserver.htm&#34;&gt;Typemock provide documentation&lt;/a&gt; and tools for common build systems such as MSBuild, Jenkins, Team City and TFS XAML builds. However, they don’t provide tools or documentation on getting it working with TFS vNext build, so I had to write my own vNext build Task to do the job, wrapping &lt;strong&gt;Tmockrunner.exe&lt;/strong&gt; provided by Typemock which handles the starting and stopping of mocking whilst calling any EXE of your choice.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 22 Mar 2016</strong> This task is available in the <a href="https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-TypeMockRunner-Task">VSTS Marketplace</a></p>
<p><a href="http://www.typemock.com/isolator-product-page">Typemock Isolator</a> provides a way to ‘mock the un-mockable’, such as sealed private classes in .NET, so can be an invaluable tool in unit testing. To allow this mocking, Isolator interception has to be started before any unit tests are run and stopped when completed. For a developer this is done automatically within the Visual Studio IDE, but on build systems you have to run something to do this as part of your build process. <a href="http://www.typemock.com/docs?book=Isolator&amp;page=Documentation%2FHtmlDocs%2Fintegratingwiththeserver.htm">Typemock provide documentation</a> and tools for common build systems such as MSBuild, Jenkins, Team City and TFS XAML builds. However, they don’t provide tools or documentation on getting it working with TFS vNext build, so I had to write my own vNext build Task to do the job, wrapping <strong>Tmockrunner.exe</strong> provided by Typemock which handles the starting and stopping of mocking whilst calling any EXE of your choice.</p>
<pre tabindex="0"><code>tmockrunner &lt;name of the test tool to run&gt; &lt;and parameters for the test tool&gt;
</code></pre><p>Microsoft provide a vNext build task to run <strong>vstest.console.exe</strong>. This task generates all the command line parameters needed depending on the arguments provided for the build task. The source for this can be found on any build VM (in the <strong>[build agent folder]\tasks</strong> folder after a build has run) or on <a href="https://github.com/Microsoft/vso-agent-tasks/blob/master/Tasks/VsTest/VSTest.ps1">Microsoft’s vso agent github repo</a>. I decided to use this as my starting point, swapping the logic to generate the <strong>tmockrunner.exe</strong> command line as opposed to the one for <strong>vstest.console.exe</strong>. You can find my task on my <a href="https://github.com/rfennell/vNextBuild">github</a>. It has been developed in the same manner as the <a href="https://github.com/Microsoft/vso-agent-tasks">Microsoft provided tasks</a>; this means the process to build and use the task is</p>
<ol>
<li>Clone the repo <a href="https://github.com/rfennell/vNextBuild.git" title="https://github.com/rfennell/vNextBuild.git">https://github.com/rfennell/vNextBuild.git</a></li>
<li>In the root of the repo use gulp to build the task</li>
<li>Use <a href="https://www.npmjs.com/package/tfx-cli">tfx</a> to upload the task to your TFS or VSO instance</li>
</ol>
<p>See <a href="http://realalm.com/2015/07/31/uploading-a-custom-build-vnext-task/" title="http://realalm.com/2015/07/31/uploading-a-custom-build-vnext-task/">http://realalm.com/2015/07/31/uploading-a-custom-build-vnext-task/</a> and <a href="http://blog.devmatter.com/custom-build-tasks-in-vso/" title="http://blog.devmatter.com/custom-build-tasks-in-vso/">http://blog.devmatter.com/custom-build-tasks-in-vso/</a> for good walkthroughs of building tasks; the process is the same for mine and Microsoft’s tasks.</p>
<p><strong>IMPORTANT NOTE</strong>: This task is only for on premises TFS vNext build instances connected to either an on premises TFS or VSO. Typemock, at the time of writing this post, does not support VSO’s hosted build agents. This is because the registration of Typemock requires admin rights on the build agent, which you only get if you ‘own’ the build agent VM</p>
<p>Once the task is installed on your TFS/VSO server you can use it in vNext builds. You will note that it takes all the same parameters as the standard VSTest task (it will usually be used as a replacement when there are Typemock Isolator based tests in a solution). The only additions are the three parameters for Typemock licensing and deployment location.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_268.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_264.png" title="image"></a></p>
<p>Using the task allows tests that require Typemock Isolator to pass. So tests that, if run with the standard VSTest task, give</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_269.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_265.png" title="image"></a></p>
<p>With the new task gives</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_270.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_266.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Voting for DDD North Sessions is now open</title>
      <link>https://blog.richardfennell.net/posts/voting-for-ddd-north-sessions-is-now-open/</link>
      <pubDate>Mon, 07 Sep 2015 11:09:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/voting-for-ddd-north-sessions-is-now-open/</guid>
      <description>&lt;p&gt;Voting for DDD North Sessions is now open - &lt;a href=&#34;http://bit.ly/DDDNorth15Sessions&#34;&gt;http://bit.ly/DDDNorth15Sessions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Vote on what you would like to see at this community conference&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Voting for DDD North Sessions is now open - <a href="http://bit.ly/DDDNorth15Sessions">http://bit.ly/DDDNorth15Sessions</a></p>
<p>Vote on what you would like to see at this community conference</p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>WebDeploy, parameters.xml transforms and nLog settings</title>
      <link>https://blog.richardfennell.net/posts/webdeploy-parameters-xml-transforms-and-nlog-settings/</link>
      <pubDate>Tue, 01 Sep 2015 15:12:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/webdeploy-parameters-xml-transforms-and-nlog-settings/</guid>
      <description>&lt;p&gt;I have been trying to parameterise the SQL DB connection string used by nLog when it is defined in a web.config file of a web site being deployed via &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/09/18/Using-MSDEPLOY-from-Release-Management-to-deploy-Azure-web-sites.aspx&#34;&gt;Release Management and  WebDeploy&lt;/a&gt; i.e. I wanted to select and edit the bit highlighted of my web.config file&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;configuration&amp;gt;  
    &amp;lt;nlog xmlns=&amp;#34;http://www.nlog-project.org/schemas/NLog.xsd&amp;#34; xmlns:xsi=&amp;#34;http://www.w3.org/2001/XMLSchema-instance&amp;#34;&amp;gt; 

    &amp;lt;targets async=&amp;#34;true&amp;#34;&amp;gt;  
      &amp;lt;target xsi:type=&amp;#34;Database&amp;#34; name=&amp;#34;SQL&amp;#34; dbProvider=&amp;#34;System.Data.SqlClient&amp;#34; connectionString=&amp;#34;Data Source=myserver;Database=mydb;Persist Security Info=True;Pooling=False&amp;#34; keepConnection=&amp;#34;true&amp;#34; commandText=&amp;#34;INSERT INTO [Logs](ID, TimeStamp, Message, Level, Logger, Details, Application, MachineName, Username) VALUES(newid(), getdate(), @message, @level, @logger, @exception, @application, @machineName, @username)&amp;#34;&amp;gt;  

        &amp;lt;parameter layout=&amp;#34;${message}&amp;#34; name=&amp;#34;@message&amp;#34;&amp;gt;&amp;lt;/parameter&amp;gt;  
        …….  
 
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The problem I had was that the xpath query I was using was not returning the nLog node because the nLog node has a namespace defined. This means we can’t just use a query in the form&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to parameterise the SQL DB connection string used by nLog when it is defined in a web.config file of a web site being deployed via <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/09/18/Using-MSDEPLOY-from-Release-Management-to-deploy-Azure-web-sites.aspx">Release Management and  WebDeploy</a> i.e. I wanted to select and edit the bit highlighted of my web.config file</p>
<pre tabindex="0"><code>&lt;configuration&gt;  
    &lt;nlog xmlns=&#34;http://www.nlog-project.org/schemas/NLog.xsd&#34; xmlns:xsi=&#34;http://www.w3.org/2001/XMLSchema-instance&#34;&gt; 

    &lt;targets async=&#34;true&#34;&gt;  
      &lt;target xsi:type=&#34;Database&#34; name=&#34;SQL&#34; dbProvider=&#34;System.Data.SqlClient&#34; connectionString=&#34;Data Source=myserver;Database=mydb;Persist Security Info=True;Pooling=False&#34; keepConnection=&#34;true&#34; commandText=&#34;INSERT INTO [Logs](ID, TimeStamp, Message, Level, Logger, Details, Application, MachineName, Username) VALUES(newid(), getdate(), @message, @level, @logger, @exception, @application, @machineName, @username)&#34;&gt;  

        &lt;parameter layout=&#34;${message}&#34; name=&#34;@message&#34;&gt;&lt;/parameter&gt;  
        …….  
 
</code></pre><p>The problem I had was that the xpath query I was using was not returning the nLog node because the nLog node has a namespace defined. This means we can’t just use a query in the form</p>
<pre tabindex="0"><code>&lt;parameter name=&#34;NLogConnectionString&#34; description=&#34;Description for NLogConnectionString&#34; defaultvalue=&#34;\_\_NLogConnectionString\_\_&#34; tags=&#34;&#34;&gt;  
  &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/nlog/targets/target\[@name=&#39;SQL&#39;\]/@connectionString&#34; /&gt;  
&lt;/parameter&gt;
</code></pre><p>I needed to use</p>
<pre tabindex="0"><code>&lt;parameter name=&#34;NLogConnectionString&#34; description=&#34;Description for NLogConnectionString&#34; defaultvalue=&#34;\_\_NLogConnectionString\_\_&#34; tags=&#34;&#34;&gt;  
  &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/\*\[local-name() = &#39;nlog&#39;\]/\*\[local-name() = &#39;targets&#39;\]/\*\[local-name() = &#39;target&#39; and @name=&#39;SQL&#39;\]/@connectionString&#34; /&gt;  
&lt;/parameter&gt;
</code></pre><p>So more complex, but it does work. Hopefully this will save others the time I wasted working it out today</p>
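As an aside, the underlying namespace behaviour is easy to reproduce outside WebDeploy. Here is a little Python sketch (illustrative only, using a cut-down version of the config above): a namespace-unaware path query finds nothing, while a query that qualifies each step with the namespace, the moral equivalent of the local-name() trick, matches the node.

```python
import xml.etree.ElementTree as ET

# Cut-down version of the web.config above (illustrative only)
doc = ET.fromstring(
    '<configuration>'
    '  <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd">'
    '    <targets><target name="SQL" connectionString="Data Source=myserver"/></targets>'
    '  </nlog>'
    '</configuration>'
)

# A namespace-unaware path finds nothing, just like my first parameters.xml attempt
assert doc.find('nlog/targets/target') is None

# Qualifying every step with the namespace matches the node
ns = {'n': 'http://www.nlog-project.org/schemas/NLog.xsd'}
target = doc.find('n:nlog/n:targets/n:target', ns)
print(target.get('connectionString'))
```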
]]></content:encoded>
    </item>
    <item>
      <title>An alternative to setting a build quality on a TFS vNext build</title>
      <link>https://blog.richardfennell.net/posts/an-alternative-to-setting-a-build-quality-on-a-tfs-vnext-build/</link>
      <pubDate>Fri, 28 Aug 2015 16:00:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/an-alternative-to-setting-a-build-quality-on-a-tfs-vnext-build/</guid>
      <description>&lt;p&gt;TFS vNext builds do not have a concept of build quality, unlike the old XAML based builds. This is an issue for us as we used the changing of the build quality as a signal to test a build, or to mark it as released to a client (this was all managed with my &lt;a href=&#34;https://tfsalertsdsl.codeplex.com/wikipage?title=Sample%20DSL%20Script&#34;&gt;TFS Alerts DSL&lt;/a&gt; to make sure suitable emails and build retention were used).&lt;/p&gt;
&lt;p&gt;So how to get around this problem with vNext?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>TFS vNext builds do not have a concept of build quality, unlike the old XAML based builds. This is an issue for us as we used the changing of the build quality as a signal to test a build, or to mark it as released to a client (this was all managed with my <a href="https://tfsalertsdsl.codeplex.com/wikipage?title=Sample%20DSL%20Script">TFS Alerts DSL</a> to make sure suitable emails and build retention were used).</p>
<p>So how to get around this problem with vNext?</p>
<p>I have used Tags on builds, set using the same REST API style calls as <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/21/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts-A-better-script.aspx">detailed in my post on Release Management vNext templates</a>. I also use the REST API to set the retention on the build, so I actually now don’t need to manage this via the alerts DSL.</p>
<p>The following script, if used to wrap the calling of integration tests via TCM, should set the tags and retention on a build</p>
<pre tabindex="0"><code>function Get-BuildDetailsByNumber
{
    param
    (
        $tfsUri,
        $buildNumber,
        $username,
        $password
    )

    $uri = &#34;$($tfsUri)/_apis/build/builds?api-version=2.0&amp;buildnumber=$buildNumber&#34;

    $wc = New-Object System.Net.WebClient
    if ($username -eq $null)
    {
        $wc.UseDefaultCredentials = $true
    } else
    {
        $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    }
    write-verbose &#34;Getting ID of $buildNumber from $tfsUri&#34;

    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json
    $jsondata.value[0]
}

function Set-BuildTag
{
    param
    (
        $tfsUri,
        $buildID,
        $tag,
        $username,
        $password
    )

    $wc = New-Object System.Net.WebClient
    $wc.Headers[&#34;Content-Type&#34;] = &#34;application/json&#34;
    if ($username -eq $null)
    {
        $wc.UseDefaultCredentials = $true
    } else
    {
        $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    }

    write-verbose &#34;Setting BuildID $buildID with Tag $tag via $tfsUri&#34;

    $uri = &#34;$($tfsUri)/_apis/build/builds/$($buildID)/tags/$($tag)?api-version=2.0&#34;
    $data = @{value = $tag } | ConvertTo-Json
    $wc.UploadString($uri, &#34;PUT&#34;, $data)
}

function Set-BuildRetention
{
    param
    (
        $tfsUri,
        $buildID,
        $keepForever,
        $username,
        $password
    )

    $wc = New-Object System.Net.WebClient
    $wc.Headers[&#34;Content-Type&#34;] = &#34;application/json&#34;
    if ($username -eq $null)
    {
        $wc.UseDefaultCredentials = $true
    } else
    {
        $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)
    }

    write-verbose &#34;Setting BuildID $buildID with retention set to $keepForever via $tfsUri&#34;

    $uri = &#34;$($tfsUri)/_apis/build/builds/$($buildID)?api-version=2.0&#34;
    $data = @{keepForever = $keepForever} | ConvertTo-Json
    $response = $wc.UploadString($uri, &#34;PATCH&#34;, $data)
}

# Output execution parameters.
$VerbosePreference = &#39;Continue&#39; # equiv to -verbose
$ErrorActionPreference = &#39;Continue&#39; # this controls if any test failure causes the script to stop

$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
write-verbose &#34;Running $folder\TcmExec.ps1&#34;

&amp; &#34;$folder\TcmExec.ps1&#34; -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -SettingsName $SettingsName
write-verbose &#34;TCM exited with code &#39;$LASTEXITCODE&#39;&#34;

$newquality = &#34;Test Passed&#34;
$tag = &#34;Deployed to Lab&#34;
$keep = $true
if ($LASTEXITCODE -gt 0)
{
    $newquality = &#34;Test Failed&#34;
    $tag = &#34;Lab Deployed failed&#34;
    $keep = $false
}
write-verbose &#34;Setting build tag to &#39;$tag&#39; for build $BuildNumber&#34;

$url = &#34;$Collection/$Teamproject&#34;
$jsondata = Get-BuildDetailsByNumber -tfsUri $url -buildNumber $BuildNumber #-username $TestUserUid -password $TestUserPwd
$buildId = $jsondata.id
write-verbose &#34;The build $BuildNumber has ID of $buildId&#34;

write-verbose &#34;The build tag set to &#39;$tag&#39; and retention set to &#39;$keep&#39;&#34;
Set-BuildTag -tfsUri $url -buildID $buildId -tag $tag #-username $TestUserUid -password $TestUserPwd
Set-BuildRetention -tfsUri $url -buildID $buildId -keepForever $keep #-username $TestUserUid -password $TestUserPwd

# now fail the stage after we have sorted the logging
if ($LASTEXITCODE -gt 0)
{
    Write-Error &#34;Tests have failed&#34;
}
</code></pre><p>If all the tests pass we see the Tag being added and the retention being set, if they fail just a tag should be set</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_267.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_263.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot create an MSDeploy package for an Azure Web Job project as part of an automated build/</title>
      <link>https://blog.richardfennell.net/posts/cannot-create-an-msdeploy-package-for-an-azure-web-job-project-as-part-of-an-automated-build/</link>
      <pubDate>Fri, 21 Aug 2015 14:28:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-create-an-msdeploy-package-for-an-azure-web-job-project-as-part-of-an-automated-build/</guid>
      <description>&lt;p&gt;I like web deploy as a means to package up websites for deployment. I like the way I only need to add&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;/p:DeployOnBuild=True;PublishProfile=Release
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;as an MSBuild argument to get the package produced as part of an automated build. This opens up loads of deployment options&lt;/p&gt;
&lt;p&gt;I recently hit an issue packaging up a solution that contained an Azure WebSite and an Azure Web Job (to be hosted in the web site). &lt;a href=&#34;https://azure.microsoft.com/en-gb/documentation/articles/websites-dotnet-deploy-webjobs/&#34;&gt;It is easy to add the web job so that it is included in the Web Deploy package&lt;/a&gt;. Once this was done we could deploy from Visual Studio, or package to the local file system and see the web job EXE in the &lt;strong&gt;app_data\jobs&lt;/strong&gt; folder as expected.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I like web deploy as a means to package up websites for deployment. I like the way I only need to add</p>
<pre tabindex="0"><code>/p:DeployOnBuild=True;PublishProfile=Release
</code></pre><p>as an MSBuild argument to get the package produced as part of an automated build. This opens up loads of deployment options</p>
<p>I recently hit an issue packaging up a solution that contained an Azure WebSite and an Azure Web Job (to be hosted in the web site). <a href="https://azure.microsoft.com/en-gb/documentation/articles/websites-dotnet-deploy-webjobs/">It is easy to add the web job so that it is included in the Web Deploy package</a>. Once this was done we could deploy from Visual Studio, or package to the local file system and see the web job EXE in the <strong>app_data\jobs</strong> folder as expected.</p>
<p>The problems occurred when we tried to get TFS build to create the deployment package using the arguments shown above. I got the error</p>
<pre tabindex="0"><code>The value for PublishProfile is set to &#39;Release&#39;, expected to find the file at &#39;C:\vNextBuild\_work\4253ff91\BM\Src\MyWebJob\Properties\PublishProfiles\Release.pubxml&#39; but it could not be found.
</code></pre><p>The issue is that there is a Publish target for the web jobs project type, but if run from Visual Studio it actually creates a ClickOnce package. This wizard provides no means to create an MSDeploy style package.</p>
<p>MSBuild is getting confused as it expects there to be this MSDeploy style package definition for the web job projects, even though it won’t actually use it as the Web Job EXE will be copied into the web site deployment package.</p>
<p>The solution was to add a dummy <strong>PublishProfiles\Release.pubxml</strong> file under the Properties folder of the web jobs project.</p>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt;  
&lt;Project ToolsVersion=&#34;4.0&#34; xmlns=&#34;http://schemas.microsoft.com/developer/msbuild/2003&#34;&gt;  
  &lt;PropertyGroup&gt;  
    &lt;WebPublishMethod&gt;Package&lt;/WebPublishMethod&gt;  
    &lt;LastUsedBuildConfiguration&gt;Release&lt;/LastUsedBuildConfiguration&gt;  
    &lt;LastUsedPlatform&gt;Any CPU&lt;/LastUsedPlatform&gt;  
    &lt;SiteUrlToLaunchAfterPublish /&gt;  
    &lt;LaunchSiteAfterPublish&gt;True&lt;/LaunchSiteAfterPublish&gt;  
    &lt;ExcludeApp_Data&gt;False&lt;/ExcludeApp_Data&gt;  
    &lt;DesktopBuildPackageLocation /&gt;  
    &lt;PackageAsSingleFile&gt;true&lt;/PackageAsSingleFile&gt;  
    &lt;DeployIisAppPath /&gt;  
    &lt;PublishDatabaseSettings/&gt;  
    &lt;/PropertyGroup&gt;  
&lt;/Project&gt;
</code></pre><p><strong>Note:</strong> I had to add this file to source control via the TFS Source Code Explorer as Visual Studio does not allow you to add folders/files manually under the Properties folder.</p>
<p>Once this file was added my automated build worked OK, and I got my web site package including the web job.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Release Management vNext templates when you don’t want to use DSC scripts – A better script</title>
      <link>https://blog.richardfennell.net/posts/using-release-management-vnext-templates-when-you-dont-want-to-use-dsc-scripts-a-better-script/</link>
      <pubDate>Fri, 21 Aug 2015 10:20:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-release-management-vnext-templates-when-you-dont-want-to-use-dsc-scripts-a-better-script/</guid>
      <description>&lt;p&gt;A couple of months ago I wrote a post &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx&#34; title=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx&#34;&gt;on using PowerShell scripts to deploy web sites in Release Management vNext templates as opposed to DSC&lt;/a&gt;. In that post I provided a script to help with the translation of Release Management configuration variables to entries in the &lt;strong&gt;[MSDEPLOY].setparameters.xml&lt;/strong&gt; file for web sites.&lt;/p&gt;
&lt;p&gt;The code I provided in that post required you to hard code the variables to translate. This quickly became a problem for maintenance. However, there is a simple solution.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A couple of months ago I wrote a post <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx" title="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx">on using PowerShell scripts to deploy web sites in Release Management vNext templates as opposed to DSC</a>. In that post I provided a script to help with the translation of Release Management configuration variables to entries in the <strong>[MSDEPLOY].setparameters.xml</strong> file for web sites.</p>
<p>The code I provided in that post required you to hard code the variables to translate. This quickly became a problem for maintenance. However, there is a simple solution.</p>
<p>If we use a naming convention for our RM configuration variables that map to web.config entries (I chose __NAME__ to be consistent with the old RM Agent based deployment standards) we can let PowerShell do the work.</p>
<p>So the revised script is</p>
<pre tabindex="0"><code>$VerbosePreference = &#39;Continue&#39; # equiv to -verbose

function Update-ParametersFile
{
    param
    (
        $paramFilePath,
        $paramsToReplace
    )

    write-verbose &#34;Updating parameters file &#39;$paramFilePath&#39;&#34; -verbose
    $content = get-content $paramFilePath
    $paramsToReplace.GetEnumerator() | % {
        Write-Verbose &#34;Replacing value for key &#39;$($_.Name)&#39;&#34; -Verbose
        $content = $content.Replace($_.Name, $_.Value)
    }
    set-content -Path $paramFilePath -Value $content
}

# the script folder
$folder = Split-Path -parent $MyInvocation.MyCommand.Definition
write-verbose &#34;Deploying Website &#39;$package&#39; using script in &#39;$folder&#39;&#34;

# work out the variables to replace using a naming convention
# we make sure that the value is stored in an array even if it is a single item
$parameters = @(Get-Variable -include &#34;__*__&#34;)
write-verbose &#34;Discovered replacement parameters that match the convention &#39;__*__&#39;: $($parameters | Out-String)&#34;
Update-ParametersFile -paramFilePath &#34;$ApplicationPath\$packagePath\$package.SetParameters.xml&#34; -paramsToReplace $parameters

write-verbose &#34;Calling &#39;$ApplicationPath\$packagePath\$package.deploy.cmd&#39;&#34;
&amp; &#34;$ApplicationPath\$packagePath\$package.deploy.cmd&#34; /Y /m:&#34;$PublishUrl&#34; -allowUntrusted /u:&#34;$PublishUser&#34; /p:&#34;$PublishPassword&#34; /a:Basic | Write-Verbose
</code></pre><p><strong>Note</strong>: This script allow the deployment to a remote IIS server, so useful for Azure Web Sites. If you are running it locally on an IIS server just trim everything after the /Y on the last line</p>
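The convention-based replacement itself is tiny; here is the same idea as a Python sketch (hypothetical names, for illustration only). Every discovered __NAME__ variable is simply swapped into the SetParameters.xml text:

```python
def update_parameters_text(content, variables):
    # Swap each __NAME__ placeholder for its configured value,
    # mirroring the Update-ParametersFile function above
    for name, value in variables.items():
        content = content.replace(name, value)
    return content

xml_in = '<setParameter name="NLogConnectionString" value="__NLogConnectionString__" />'
out = update_parameters_text(
    xml_in, {'__NLogConnectionString__': 'Data Source=myserver;Database=mydb'})
print(out)
```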
<p>So now I provide</p>
<ul>
<li>
<p>$PackagePath – path to our deployment on the deployment VM(relative to the $ApplicationPath local working folder)</p>
</li>
<li>
<p>$Package – name of the MSdeploy package</p>
</li>
<li>
<p>The publish settings you can get from the Azure Portal</p>
</li>
<li>
<p>$PublishUser – The login name</p>
</li>
<li>
<p>$PublishPassword – The login password</p>
</li>
<li>
<p>$PublishUrl  – The URL, e.g. <code>https://[your.site.azure.com]:443/msdeploy.axd</code></p>
</li>
<li>
<p>$__PARAM1__ –  a value to swap in the web.config</p>
</li>
<li>
<p>$__PARAM2__ –  another value to swap in the web.config</p>
</li>
</ul>
<p>In RM it will look like this.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_266.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_262.png" title="image"></a></p>
<p>So now you can use a single script for all your web deployments.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Guest post at Microsoft - nUnit and Jasmine.JS unit tests in TFS/VSO vNext build</title>
      <link>https://blog.richardfennell.net/posts/guest-post-at-microsoft-nunit-and-jasmine-js-unit-tests-in-tfsvso-vnext-build/</link>
      <pubDate>Wed, 19 Aug 2015 13:26:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/guest-post-at-microsoft-nunit-and-jasmine-js-unit-tests-in-tfsvso-vnext-build/</guid>
      <description>&lt;p&gt;I have just had a guest post published on the Microsoft UK developers site &lt;a href=&#34;http://www.microsoft.com/en-gb/developers/articles/week04aug15/nunit-and-jasmine-js-unit-tests-in-tfs-vso-vnext-build/&#34;&gt;nUnit and Jasmine.JS unit tests in TFS/VSO vNext build&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just had a guest post published on the Microsoft UK developers site <a href="http://www.microsoft.com/en-gb/developers/articles/week04aug15/nunit-and-jasmine-js-unit-tests-in-tfs-vso-vnext-build/">nUnit and Jasmine.JS unit tests in TFS/VSO vNext build</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>DDDNorth 2015 submissions are open</title>
      <link>https://blog.richardfennell.net/posts/dddnorth-2015-submissions-are-open/</link>
      <pubDate>Wed, 19 Aug 2015 08:41:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dddnorth-2015-submissions-are-open/</guid>
      <description>&lt;p&gt;DDDNorth is on again this year, back at its more northerly base of Sunderland University on the 24th of October&lt;/p&gt;
&lt;p&gt;You can submit your &lt;a href=&#34;http://www.dddnorth.co.uk/Sessions&#34;&gt;session proposal in here&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>DDDNorth is on again this year, back at its more northerly base of Sunderland University on the 24th of October</p>
<p>You can submit your <a href="http://www.dddnorth.co.uk/Sessions">session proposal in here</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TF30063 Errors accessing a TFS 2015 server via the C# API after upgrade from 2013</title>
      <link>https://blog.richardfennell.net/posts/tf30063-errors-accessing-a-tfs-2015-server-via-the-c-api-after-upgrade-from-2013/</link>
      <pubDate>Thu, 13 Aug 2015 10:49:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf30063-errors-accessing-a-tfs-2015-server-via-the-c-api-after-upgrade-from-2013/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;We upgraded our production TFS 2013.4 server to TFS 2015 RTM this week. As opposed to an in-place upgrade we chose to make a few changes along the way; so whilst leaving our DBs on our SQL 2012 cluster&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;We moved to a new VM for our AT (to upgrade from Windows 2008R2 to 2012R2)&lt;/li&gt;
&lt;li&gt;Split the SSRS instance off the AT to a separate VM with a new SSAS server (again to move to 2012R2 and to ease management, getting all the reporting bits in one place)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But we do not touch&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>We upgraded our production TFS 2013.4 server to TFS 2015 RTM this week. As opposed to an in-place upgrade we chose to make a few changes along the way; so whilst leaving our DBs on our SQL 2012 cluster</p>
<ul>
<li>We moved to a new VM for our AT (to upgrade from Windows 2008R2 to 2012R2)</li>
<li>Split the SSRS instance off the AT to a separate VM with a new SSAS server (again to move to 2012R2 and to ease management, getting all the reporting bits in one place)</li>
</ul>
<p>But we do not touch</p>
<ul>
<li>Our XAML Build systems leaving them at 2013 as we intend to migrate to <a href="http://go.microsoft.com/fwlink/?LinkId=619385">vNext build</a> ASAP</li>
<li>Our Test Controller/Release Management/Lab Environment leaving it at 2013 for now, as we have other projects on the go to update the hardware/cloud solutions underpinning these.</li>
</ul>
<p>All went well, no surprises, the running of the upgrade tool took about 1 hour.</p>
<h3 id="the-problem">The Problem</h3>
<p>The only problem we have had was to do with my <a href="https://tfsalertsdsl.codeplex.com/">TFS Alerts DSL Processor, which listens for TFS Alerts and runs custom scripts</a>. I host this on the TFS AT, and I would expect it to set build retention and send emails when a TFS XAML Build quality changes. This did not occur; in the Windows error log I was seeing</p>
<pre tabindex="0"><code>2015-08-12 21:04:02.4195 ERROR TFSEventsProcessor.DslScriptService: TF30063: You are not authorized to access [https://tfs.blackmarble.co.uk/tfs/DefaultCollection](https://tfs.blackmarble.co.uk/tfs/DefaultCollection).
</code></pre><p>After much fiddling, including writing a small command line test client, I confirmed that the issue was specific to the production server. The tool ran fine on other PCs, but on the live server a Windows authentication dialog was shown which would not accept any valid credentials</p>
<p>It was not, as I had feared, a change in the TFS API; in fact there is no reason my 2012 or 2013 API targeted version of the TFS Alerts DSL should not be able to talk to a TFS 2015 server as long as the correct version of the TFS API is installed on the machine hosting the DSL.</p>
<h3 id="the-solution">The Solution</h3>
<p>The issue was due to <a href="https://support.microsoft.com/en-us/kb/926642">Windows loopback protection</a>. This had been disabled on our old TFS AT, but not on the new one. As we wanted to avoid changing the global loopback protection setting, we set the following via Regedit to allow it for a single CName</p>
<p>**HKEY_LOCAL_MACHINESYSTEMCurrentControlSetControlLsaMSV1_0<br>
    ValueName - BackConnectionHostNames<br>
    Type - multistring<br>
**    <strong>Data  - tfs.blackmarble.co.uk</strong></p>
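<p>If you prefer to script the change, a minimal PowerShell sketch might look like this (run elevated; the hostname shown is our CName, so substitute your own):</p>

```powershell
# Sketch only: add a single CName to BackConnectionHostNames
# rather than disabling loopback protection globally.
New-ItemProperty `
    -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0' `
    -Name 'BackConnectionHostNames' `
    -PropertyType MultiString `
    -Value @('tfs.blackmarble.co.uk') `
    -Force
```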
<p>Once this was done (and without a reboot) my alerts processing worked without any problems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running Microsoft Test Manager Test Suites as part of a vNext Release pipeline - Part 2</title>
      <link>https://blog.richardfennell.net/posts/running-microsoft-test-manager-test-suites-as-part-of-a-vnext-release-pipeline-part-2/</link>
      <pubDate>Tue, 11 Aug 2015 20:56:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-microsoft-test-manager-test-suites-as-part-of-a-vnext-release-pipeline-part-2/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/04/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline.aspx&#34;&gt;In my last post&lt;/a&gt; I discussed how you could wire TCM tests into a Release Management vNext pipeline. The problem with the script I provided, as I noted, was that the deployment was triggered synchronously by the build i.e. the build/release process was:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;TFS Build
&lt;ol&gt;
&lt;li&gt;Gets the source&lt;/li&gt;
&lt;li&gt;Compiles the code&lt;/li&gt;
&lt;li&gt;Runs the unit tests&lt;/li&gt;
&lt;li&gt;Triggers the RM pipeline&lt;/li&gt;
&lt;li&gt;Waits while the RM pipeline completes&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;RM then
&lt;ol&gt;
&lt;li&gt;Deploys the code&lt;/li&gt;
&lt;li&gt;Runs the integration tests&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;When RM completes, the TFS build completes&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This process raised a couple of problems&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/04/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline.aspx">In my last post</a> I discussed how you could wire TCM tests into a Release Management vNext pipeline. The problem with the script I provided, as I noted, was that the deployment was triggered synchronously by the build i.e. the build/release process was:</p>
<ol>
<li>TFS Build
<ol>
<li>Gets the source</li>
<li>Compiles the code</li>
<li>Runs the unit tests</li>
<li>Triggers the RM pipeline</li>
<li>Waits while the RM pipeline completes</li>
</ol>
</li>
<li>RM then
<ol>
<li>Deploys the code</li>
<li>Runs the integration tests</li>
</ol>
</li>
<li>When RM completes, the TFS build completes</li>
</ol>
<p>This process raised a couple of problems</p>
<ul>
<li>You cannot associate the integration tests with the build, as TCM only allows association with completed successful builds. When TCM finishes in this model the build is still in progress.</li>
<li>You have to target only the first automated stage of the pipeline, else the build will be held as ‘in progress’ until all the release stages have completed, which may be days if there are manual approvals involved.</li>
</ul>
<h2 id="the-script-initiatereleasefrombuild">The script InitiateReleaseFromBuild</h2>
<p>These problems can all be fixed by altering the PowerShell that triggers the RM pipeline so that it does not wait for the deployment to complete, meaning the TFS build finishes as soon as possible.</p>
<p>This is done by passing in an extra parameter which is set in the TFS build</p>
<pre tabindex="0"><code>param(
    [string]$rmserver = $Args[0],
    [string]$port = $Args[1],
    [string]$teamProject = $Args[2],
    [string]$targetStageName = $Args[3],
    [string]$waitForCompletion = $Args[4]
)

cls
$teamFoundationServerUrl = $env:TF_BUILD_COLLECTIONURI
$buildDefinition = $env:TF_BUILD_BUILDDEFINITIONNAME
$buildNumber = $env:TF_BUILD_BUILDNUMBER

&#34;Executing with the following parameters:`n&#34;
&#34;  RMserver Name: $rmserver&#34;
&#34;  Port number: $port&#34;
&#34;  Team Foundation Server URL: $teamFoundationServerUrl&#34;
&#34;  Team Project: $teamProject&#34;
&#34;  Build Definition: $buildDefinition&#34;
&#34;  Build Number: $buildNumber&#34;
&#34;  Target Stage Name: $targetStageName`n&#34;
&#34;  Wait for RM completion: $waitForCompletion`n&#34;

$wait = [System.Convert]::ToBoolean($waitForCompletion)
$exitCode = 0

trap
{
  $e = $error[0].Exception
  $e.Message
  $e.StackTrace
  if ($exitCode -eq 0) { $exitCode = 1 }
}

$scriptName = $MyInvocation.MyCommand.Name
$scriptPath = Split-Path -Parent (Get-Variable MyInvocation -Scope Script).Value.MyCommand.Path

Push-Location $scriptPath

$server = [System.Uri]::EscapeDataString($teamFoundationServerUrl)
$project = [System.Uri]::EscapeDataString($teamProject)
$definition = [System.Uri]::EscapeDataString($buildDefinition)
$build = [System.Uri]::EscapeDataString($buildNumber)
$targetStage = [System.Uri]::EscapeDataString($targetStageName)

$serverName = $rmserver + &#34;:&#34; + $port
$orchestratorService = &#34;http://$serverName/account/releaseManagementService/_apis/releaseManagement/OrchestratorService&#34;

$status = @{
    &#34;2&#34; = &#34;InProgress&#34;;
    &#34;3&#34; = &#34;Released&#34;;
    &#34;4&#34; = &#34;Stopped&#34;;
    &#34;5&#34; = &#34;Rejected&#34;;
    &#34;6&#34; = &#34;Abandoned&#34;;
}

$uri = &#34;$orchestratorService/InitiateReleaseFromBuild?teamFoundationServerUrl=$server&amp;teamProject=$project&amp;buildDefinition=$definition&amp;buildNumber=$build&amp;targetStageName=$targetStage&#34;
&#34;Executing the following API call:`n`n$uri&#34;

$wc = New-Object System.Net.WebClient
$wc.UseDefaultCredentials = $true
# rmuser should be part of the RM users list and should have permission to trigger the release.
#$wc.Credentials = new-object System.Net.NetworkCredential(&#34;rmuser&#34;, &#34;rmuserpassword&#34;, &#34;rmuserdomain&#34;)

try
{
    $releaseId = $wc.DownloadString($uri)

    $url = &#34;$orchestratorService/ReleaseStatus?releaseId=$releaseId&#34;

    $releaseStatus = $wc.DownloadString($url)

    if ($wait -eq $true)
    {
        Write-Host -NoNewline &#34;`nReleasing ...&#34;

        while($status[$releaseStatus] -eq &#34;InProgress&#34;)
        {
            Start-Sleep -s 5
            $releaseStatus = $wc.DownloadString($url)
            Write-Host -NoNewline &#34;.&#34;
        }

        &#34; done.`n`nRelease completed with {0} status.&#34; -f $status[$releaseStatus]
    } else {
        Write-Host -NoNewline &#34;`nTriggering Release and exiting&#34;
    }
}
catch [System.Exception]
{
    if ($exitCode -eq 0) { $exitCode = 1 }
    Write-Host &#34;`n$_`n&#34; -ForegroundColor Red
}

if ($exitCode -eq 0)
{
    if ($wait -eq $true)
    {
        if ($releaseStatus -eq 3)
        {
            &#34;`nThe script completed successfully. Product deployed without error`n&#34;
        } else {
            Write-Host &#34;`nThe script completed successfully. Product failed to deploy`n&#34; -ForegroundColor Red
            $exitCode = -1 # reset the code to show the error
        }
    } else {
        &#34;`nThe script completed successfully. Product deploying`n&#34;
    }
}
else
{
  $err = &#34;Exiting with error: &#34; + $exitCode + &#34;`n&#34;
  Write-Host $err -ForegroundColor Red
}

Pop-Location

exit $exitCode
</code></pre><h2 id="the-script-tcmexecwrapper">The Script TcmExecWrapper</h2>
<p>A change is also required in the wrapper script I use to trigger the TCM test run. We need to check the exit code from the inner TCM PowerShell script and update the TFS build quality appropriately.</p>
<p>To do this I use the new <a href="https://www.visualstudio.com/en-us/integrate/api/overview">REST API in TFS 2015</a>, as this is far easier than using the older .NET client API: there are no DLLs to distribute.</p>
<p>It is worth noting that</p>
<ul>
<li>I pass into the script the credentials from RM that are used to talk to the TFS server. This is because I am running my tests in a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">network isolated TFS Lab Environment</a>, which means I am in the wrong domain to see the TFS server without providing login details. If you are not working cross domain you could just use Default Credentials.</li>
<li><a href="https://msdn.microsoft.com/en-us/library/dn834972%28v=vs.120%29.aspx">RM only passes the BuildNumber</a> into the script, e.g. MyBuild_1.2.3.4, but the REST API needs the build id to set the quality. Hence the need for the function Get-BuildDetailsByNumber to get the id from the name.</li>
</ul>
<pre tabindex="0"><code># Output execution parameters.
$VerbosePreference =&#39;Continue&#39; # equiv to -verbose
function Get-BuildDetailsByNumber
{
    param
    (
        $tfsUri ,
        $buildNumber,
        $username,
        $password
    )
    $uri = &#34;$($tfsUri)/_apis/build/builds?api-version=2.0&amp;buildnumber=$buildNumber&#34;
    $wc = New-Object System.Net.WebClient
    #$wc.UseDefaultCredentials = $true
    $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)

    write-verbose &#34;Getting ID of $buildNumber from $tfsUri &#34;
    $jsondata = $wc.DownloadString($uri) | ConvertFrom-Json
    $jsondata.value[0]
}
function Set-BuildQuality
{
    param
    (
        $tfsUri ,
        $buildID,
        $quality,
        $username,
        $password
    )
    $uri = &#34;$($tfsUri)/_apis/build/builds/$($buildID)?api-version=1.0&#34;
    $data = @{quality = $quality} | ConvertTo-Json
    $wc = New-Object System.Net.WebClient
    $wc.Headers[&#34;Content-Type&#34;] = &#34;application/json&#34;
    #$wc.UseDefaultCredentials = $true
    $wc.Credentials = new-object System.Net.NetworkCredential($username, $password)

    write-verbose &#34;Setting BuildID $buildID to quality $quality via $tfsUri &#34;
    $wc.UploadString($uri,&#34;PATCH&#34;, $data)
}
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition
write-verbose &#34;Running $folder\TcmExecWithLogin.ps1&#34;
&amp; &#34;$folder\TcmExecWithLogin.ps1&#34; -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -LoginCreds &#34;$TestUserUid,$TestUserPwd&#34; -SettingsName $SettingsName -BuildNumber $BuildNumber -BuildDefinition $BuildDefinition
write-verbose &#34;Got the exit code from the TCM run of $LASTEXITCODE&#34;
$url = &#34;$Collection/$Teamproject&#34;
$jsondata = Get-BuildDetailsByNumber -tfsUri $url -buildNumber $BuildNumber -username $TestUserUid -password $TestUserPwd
$buildId = $jsondata.id
write-verbose &#34;The build ID is $buildId&#34;
$newquality = &#34;Test Passed&#34;
if ($LASTEXITCODE -gt 0 )
{
    $newquality = &#34;Test Failed&#34;
}

write-verbose &#34;The build quality is $newquality&#34;
Set-BuildQuality -tfsUri $url -buildID $buildId -quality $newquality -username $TestUserUid -password $TestUserPwd
</code></pre><p><strong>Note</strong>: TcmExecWithLogin.ps1 is the same as in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/04/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline.aspx">my last post</a>.</p>
<h2 id="summary">Summary</h2>
<p>So with these changes the process is now:</p>
<ol>
<li>TFS Build
<ol>
<li>Gets the source</li>
<li>Compiles the code</li>
<li>Runs the unit tests</li>
<li>Triggers the RM pipeline</li>
<li>Build ends</li>
</ol>
</li>
<li>RM then
<ol>
<li>Deploys the code</li>
<li>Runs the integration tests</li>
<li>When the tests complete, we set the TFS build quality</li>
</ol>
</li>
</ol>
<p>This means we can associate both unit and integration tests with a build, and target our release at any stage in the pipeline, with it pausing at the points where manual approval is required, without blocking the initiating build.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running Microsoft Test Manager Test Suites as part of a vNext Release pipeline</title>
      <link>https://blog.richardfennell.net/posts/running-microsoft-test-manager-test-suites-as-part-of-a-vnext-release-pipeline/</link>
      <pubDate>Tue, 04 Aug 2015 20:41:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-microsoft-test-manager-test-suites-as-part-of-a-vnext-release-pipeline/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/11/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline-Part-2.aspx&#34;&gt;Also see Part 2 on how to address gotcha&amp;rsquo;s in this process&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;When using Release Management there is a good chance you will want to run test suites as part of your automated deployment pipeline. If you are using a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx&#34;&gt;vNext PowerShell based pipeline&lt;/a&gt; you need a way to trigger the tests via PowerShell as there is no out the box agent to do the job.&lt;/p&gt;
&lt;h2 id=&#34;step-1---install-a-test-agent&#34;&gt;Step 1 - Install a Test Agent&lt;/h2&gt;
&lt;p&gt;The first step is to make sure that the Visual Studio Test Agent is installed on the box you wish to run the test on. If you don’t already have an MTM environment in place with a test agent then this can be done by creating a &lt;a href=&#34;https://msdn.microsoft.com/en-us/library/ee390842.aspx&#34;&gt;standard environment in Microsoft Test Manager&lt;/a&gt;. Remember you only need this environment to include the VM you want to run the test on, unless you want to also gather logs and events from other machines in the system. The complexity is up to you.&lt;/p&gt;
<content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/11/Running-Microsoft-Test-Manager-Test-Suites-as-part-of-a-vNext-Release-pipeline-Part-2.aspx">Also see Part 2 on how to address gotchas in this process</a></p>
<p>When using Release Management there is a good chance you will want to run test suites as part of your automated deployment pipeline. If you are using a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx">vNext PowerShell based pipeline</a> you need a way to trigger the tests via PowerShell, as there is no out of the box agent to do the job.</p>
<h2 id="step-1---install-a-test-agent">Step 1 - Install a Test Agent</h2>
<p>The first step is to make sure that the Visual Studio Test Agent is installed on the box you wish to run the test on. If you don’t already have an MTM environment in place with a test agent then this can be done by creating a <a href="https://msdn.microsoft.com/en-us/library/ee390842.aspx">standard environment in Microsoft Test Manager</a>. Remember you only need this environment to include the VM you want to run the test on, unless you want to also gather logs and events from other machines in the system. The complexity is up to you.</p>
<p>In my case I was using a network isolated environment so all this was already set up.</p>
<h2 id="step-2---setup-the-test-suite">Step 2 - Setup the Test Suite</h2>
<p>Once you have an environment you can <a href="https://msdn.microsoft.com/en-us/library/dd380741.aspx">set up your test suite and test plan</a> in MTM to include the tests you wish to run. These can be unit-test-style integration tests or Coded UI tests; it is up to you.</p>
<p>If you have a lot of unit tests to <a href="https://msdn.microsoft.com/en-us/library/dd465191%28v=vs.110%29.aspx">associate for automation, remember the TCM.EXE command can make your life a lot easier</a>.</p>
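<p>For example, a bulk import of the automated tests in an assembly into the team project might look like this (the collection URL, project name and assembly name here are illustrative; the switches are those documented for <em>tcm testcase /import</em>):</p>

```
tcm testcase /import /collection:https://tfs.example.com/tfs/DefaultCollection /teamproject:MyProject /storage:MyIntegrationTests.dll
```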
<p>This post does not aim to be a tutorial on setting up test plans; have a look at the <a href="https://vsartesttoolingguide.codeplex.com/">ALM Rangers guides for more details</a>.</p>
<h2 id="step-3---the-release-management-environment">Step 3 - The Release Management environment</h2>
<p>This is where it gets a bit confusing: you have already set up a Lab Management environment, but you still need to set up the Release Management vNext environment. As I was using a network isolated Lab Management environment this gets even more complex, but <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/12/24/vNext-Release-Management-and-Network-Isolation.aspx">RM provides some tools to help</a>.</p>
<p>Again this is not a detailed tutorial. The key steps if you are using network isolation are:</p>
<ol>
<li>Make sure that PowerShell on the VM is set up for remote access by running  <em>winrm quickconfig</em></li>
<li>In RM create a vNext environment</li>
<li>Add each new server using its corporate LAN name from Lab Management, with the PowerShell remote access port, e.g. VSLM-1002-e7858e28-77cf-4163-b6ba-1df2e91bfcab.lab.blackmarble.co.uk:5985</li>
<li>Make sure the server is set to use a shared UNC path for deployment.</li>
<li>Remember you will login to this VM with the credentials for the test domain.</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_256.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_252.png" title="image"></a></p>
<p>By this point you might be a bit confused as to what you have, well here is a diagram</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_257.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_253.png" title="image"></a></p>
<h2 id="step-4---wiring-the-test-into-the-pipeline">Step 4 - Wiring the test into the pipeline</h2>
<p>The final step is to get the release pipeline to trigger the tests. This is done by calling the TCM.EXE command line to instruct the Test Controller to trigger the tests. Now the copy of TCM does not have to be in the Lab Management environment, but it does need to be on a VM known to the RM vNext environment. This will usually mean a VM with Visual Studio Test Manager or Premium (or Enterprise for 2015) installed. In my case this was a dedicated test VM within the environment.</p>
<p>The key to the process is to run a script similar to the one used by the older RM agent-based system to trigger the tests. You can extract this PowerShell script from an old release pipeline, but for ease I show my modified version here. The key changes are that I pass in the login credentials required so the call to the TFS server from TCM.EXE can be made from inside the network isolated environment, and I do a little extra checking of the test results so I can fail the build if the tests fail. These edits might not be required if you trigger TCM from a VM that is in the same domain as your TFS server, or have different success criteria.</p>
<pre tabindex="0"><code>param
(
    [string]$BuildDirectory = $null,
    [string]$BuildDefinition = $null,
    [string]$BuildNumber = $null,
    [string]$TestEnvironment = $null,
    [string]$LoginCreds = $null,
    [string]$Collection = $(throw &#34;The collection URL must be provided.&#34;),
    [string]$TeamProject = $(throw &#34;The team project must be provided.&#34;),
    [Int]$PlanId = $(throw &#34;The test plan ID must be provided.&#34;),
    [Int]$SuiteId = $(throw &#34;The test suite ID must be provided.&#34;),
    [Int]$ConfigId = $(throw &#34;The test configuration ID must be provided.&#34;),
    [string]$Title = &#39;Automated UI Tests&#39;,
    [string]$SettingsName = $null,
    [Switch]$InconclusiveFailsTests = $false,
    [Switch]$RemoveIncludeParameter = $false,
    [Int]$TestRunWaitDelay = 10
)

##################################################################################
# Output the logo.
write-verbose &#34;Based on the Microsoft Release Management TcmExec PowerShell Script v12.0&#34;
write-verbose &#34;Copyright (c) 2013 Microsoft. All rights reserved.`n&#34;

##################################################################################
# Initialize the default script exit code.
$exitCode = 1

##################################################################################
# Output execution parameters.
write-verbose &#34;Executing with the following parameters:&#34;
write-verbose &#34;  Build Directory: $BuildDirectory&#34;
write-verbose &#34;  Build Definition: $BuildDefinition&#34;
write-verbose &#34;  Build Number: $BuildNumber&#34;
write-verbose &#34;  Test Environment: $TestEnvironment&#34;
write-verbose &#34;  Collection: $Collection&#34;
write-verbose &#34;  Team project: $TeamProject&#34;
write-verbose &#34;  Plan ID: $PlanId&#34;
write-verbose &#34;  Suite ID: $SuiteId&#34;
write-verbose &#34;  Configuration ID: $ConfigId&#34;
write-verbose &#34;  Title: $Title&#34;
write-verbose &#34;  Settings Name: $SettingsName&#34;
write-verbose &#34;  Inconclusive result fails tests: $InconclusiveFailsTests&#34;
write-verbose &#34;  Remove /include parameter from /create command: $RemoveIncludeParameter&#34;
write-verbose &#34;  Test run wait delay: $TestRunWaitDelay&#34;

##################################################################################
# Define globally used variables and constants.
# Visual Studio 2013
$vscommtools = [System.Environment]::GetEnvironmentVariable(&#34;VS120COMNTOOLS&#34;)
if ($vscommtools -eq $null)
{
    # Visual Studio 2012
    $vscommtools = [System.Environment]::GetEnvironmentVariable(&#34;VS110COMNTOOLS&#34;)
}
if ($vscommtools -eq $null)
{
    # Visual Studio 2010
    $vscommtools = [System.Environment]::GetEnvironmentVariable(&#34;VS100COMNTOOLS&#34;)
    if ($vscommtools -ne $null)
    {
        if ([string]::IsNullOrEmpty($BuildDirectory))
        {
            $(throw &#34;The build directory must be provided.&#34;)
        }
        if (![string]::IsNullOrEmpty($BuildDefinition) -or ![string]::IsNullOrEmpty($BuildNumber))
        {
            $(throw &#34;The build definition and build number parameters may be used only under Visual Studio 2012/2013.&#34;)
        }
    }
}
else
{
    if ([string]::IsNullOrEmpty($BuildDefinition) -and [string]::IsNullOrEmpty($BuildNumber) -and [string]::IsNullOrEmpty($BuildDirectory))
    {
        $(throw &#34;You must specify the build directory or the build definition and build number.&#34;)
    }
}
$tcmExe = [System.IO.Path]::GetFullPath($vscommtools + &#34;..\IDE\TCM.exe&#34;)

##################################################################################
# Ensure TCM.EXE is available in the assumed path.
if ([System.IO.File]::Exists($tcmExe))
{
    ##################################################################################
    # Prepare optional parameters.
    $testEnvironmentParameter = &#34;/testenvironment:$TestEnvironment&#34;
    if ([string]::IsNullOrEmpty($TestEnvironment))
    {
        $testEnvironmentParameter = [string]::Empty
    }
    if ([string]::IsNullOrEmpty($BuildDirectory))
    {
        $buildDirectoryParameter = [string]::Empty
    } else
    {
        # make sure we remove any trailing slashes as they cause permission issues
        $BuildDirectory = $BuildDirectory.Trim()
        while ($BuildDirectory.EndsWith(&#34;\&#34;))
        {
            $BuildDirectory = $BuildDirectory.Substring(0,$BuildDirectory.Length-1)
        }
        $buildDirectoryParameter = &#34;/builddir:&#34;&#34;$BuildDirectory&#34;&#34;&#34;
    }
    $buildDefinitionParameter = &#34;/builddefinition:&#34;&#34;$BuildDefinition&#34;&#34;&#34;
    if ([string]::IsNullOrEmpty($BuildDefinition))
    {
        $buildDefinitionParameter = [string]::Empty
    }
    $buildNumberParameter = &#34;/build:&#34;&#34;$BuildNumber&#34;&#34;&#34;
    if ([string]::IsNullOrEmpty($BuildNumber))
    {
        $buildNumberParameter = [string]::Empty
    }
    $includeParameter = &#39;/include&#39;
    if ($RemoveIncludeParameter)
    {
        $includeParameter = [string]::Empty
    }
    $settingsNameParameter = &#34;/settingsname:&#34;&#34;$SettingsName&#34;&#34;&#34;
    if ([string]::IsNullOrEmpty($SettingsName))
    {
        $settingsNameParameter = [string]::Empty
    }

    ##################################################################################
    # Create the test run.
    write-verbose &#34;`nCreating test run ...&#34;
    $testRunId = &amp; &#34;$tcmExe&#34; run /create /title:&#34;$Title&#34; /login:$LoginCreds /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId /collection:&#34;$Collection&#34; /teamproject:&#34;$TeamProject&#34; $testEnvironmentParameter $buildDirectoryParameter $buildDefinitionParameter $buildNumberParameter $settingsNameParameter $includeParameter
    if ($testRunId -match &#39;.+:\s(?&lt;TestRunId&gt;\d+)\.&#39;)
    {
        # The test run ID is identified as a property in the match collection
        # so we can access it directly by using the group name from the regular
        # expression (i.e. TestRunId).
        $testRunId = $matches.TestRunId

        write-verbose &#34;Waiting for test run $testRunId to complete ...&#34;
        $waitingForTestRunCompletion = $true
        while ($waitingForTestRunCompletion)
        {
            Start-Sleep -s $TestRunWaitDelay
            $testRunStatus = &amp; &#34;$tcmExe&#34; run /list /collection:&#34;$collection&#34; /login:$LoginCreds /teamproject:&#34;$TeamProject&#34; /querytext:&#34;SELECT * FROM TestRun WHERE TestRunId=$testRunId&#34;
            if ($testRunStatus.Count -lt 3 -or ($testRunStatus.Count -gt 2 -and $testRunStatus.GetValue(2) -match &#39;.+(?&lt;DateCompleted&gt;\d+[/]\d+[/]\d+)&#39;))
            {
                $waitingForTestRunCompletion = $false
            }
        }

        write-verbose &#34;Evaluating test run $testRunId results...&#34;
        # We do a small pause since the results might not be published yet.
        Start-Sleep -s $TestRunWaitDelay

        $testRunResultsTrxFileName = &#34;TestRunResults$testRunId.trx&#34;
        &amp; &#34;$tcmExe&#34; run /export /id:$testRunId /collection:&#34;$collection&#34; /login:$LoginCreds /teamproject:&#34;$TeamProject&#34; /resultsfile:&#34;$testRunResultsTrxFileName&#34; | Out-Null
        if (Test-path($testRunResultsTrxFileName))
        {
            # Load the XML document contents.
            [xml]$testResultsXml = Get-Content &#34;$testRunResultsTrxFileName&#34;

            # Extract the results of the test run.
            $total = $testResultsXml.TestRun.ResultSummary.Counters.total
            $passed = $testResultsXml.TestRun.ResultSummary.Counters.passed
            $failed = $testResultsXml.TestRun.ResultSummary.Counters.failed
            $inconclusive = $testResultsXml.TestRun.ResultSummary.Counters.inconclusive

            # Output the results of the test run.
            write-verbose &#34;`n========== Test: $total tests ran, $passed succeeded, $failed failed, $inconclusive inconclusive ==========&#34;

            # Determine if there were any failed tests during the test run execution.
            if ($failed -eq 0 -and (-not $InconclusiveFailsTests -or $inconclusive -eq 0))
            {
                # Update this script&#39;s exit code.
                $exitCode = 0
            }

            # Remove the test run results file.
            remove-item($testRunResultsTrxFileName) | Out-Null
        }
        else
        {
            write-error &#34;`nERROR: Unable to export test run results file for analysis.&#34;
        }
    }
}
else
{
    write-error &#34;`nERROR: Unable to locate $tcmExe&#34;
}

##################################################################################
# Indicate the resulting exit code to the calling process.
if ($exitCode -gt 0)
{
    write-error &#34;`nERROR: Operation failed with error code $exitCode.&#34;
}
write-verbose &#34;`nDone.&#34;
exit $exitCode
</code></pre><p>Once this script is placed into source control in such a way that it ends up in the drops location for the build you can call it as a standard script item in your pipeline, targeting the VM that has TCM installed. Remember, you get the test environment name and various IDs required from MTM. Check the <a href="https://msdn.microsoft.com/en-us/library/jj155799.aspx">TCM command line</a> for more details.</p>
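<p>For reference, a manual console test of the script might be invoked like this (every value shown is illustrative; the real plan, suite and configuration IDs come from MTM as described above):</p>

```powershell
# Hypothetical values for a manual smoke test of the script;
# substitute your own collection URL, project, IDs, environment and credentials.
.\TcmExecWithLogin.ps1 -Collection 'https://tfs.example.com/tfs/DefaultCollection' `
    -TeamProject 'MyProject' -PlanId 42 -SuiteId 123 -ConfigId 2 `
    -TestEnvironment 'MyLabEnvironment' -LoginCreds 'testdomain\svc-test,P@ssw0rd' `
    -BuildDirectory '\\store\drops\MyBuild\MyBuild_1.2.3.4'
```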
<p><a href="/wp-content/uploads/sites/2/historic/image_258.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_254.png" title="image"></a></p>
<p>However <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/07/25/Lessons-learnt-using-simple-PowerShell-scripts-with-vNext-Release-Management.aspx">we hit a problem: RM sets PowerShell variables, not the parameters for the script</a>. So I find it easiest to use a wrapper script, also stored in source control, that converts the variables to the needed parameters. This also gives the opportunity to use RM-set runtime variables and build more complex objects such as the credentials</p>
<pre tabindex="0"><code># Output execution parameters.
$VerbosePreference =&#39;Continue&#39; # equiv to -verbose
$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition

write-verbose &#34;Running $folder\TcmExecWithLogin.ps1&#34;

&amp; &#34;$folder\TcmExecWithLogin.ps1&#34; -Collection $Collection -Teamproject $Teamproject -PlanId $PlanId -SuiteId $SuiteId -ConfigId $ConfigId -BuildDirectory $PackageLocation -TestEnvironment $TestEnvironment -LoginCreds &#34;$TestUserUid,$TestUserPwd&#34; -SettingsName $SettingsName
</code></pre><h2 id="step-5--run-it-all">Step 5 – Run it all</h2>
<p>If you have everything in place you should now be able to trigger your deployment and have the tests run.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_259.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_255.png" title="image"></a></p>
<h2 id="finishing-up-and-one-final-gotcha">Finishing Up and One final gotcha</h2>
<p>I had hoped that my integration test run would be associated with my build. Normally when triggering tests via TCM you do this by adding the following parameters to the TCM command line</p>
<pre tabindex="0"><code>TCM [all the other params] -BuildNumber &#39;My.Build.CI_1.7.25.29773&#39; -BuildDefinition &#39;My.Build.CI&#39;
</code></pre><p>However this will not work in the scenario above. This is because you can only use these flags to associate with successful builds; at the time TCM is run in the pipeline the build has not finished, so it is not marked as successful. This does somewhat limit the end-to-end reporting. However, I think for now I can accept this limitation, as the deployment completing is a suitable marker that the tests were passed.</p>
<p>The only workaround I can think of is not to trigger the release directly from the build but to use the TFS events system to allow the build to finish first and then trigger the release. <a href="https://tfsalertsdsl.codeplex.com/">You could use my TFS DSL Alert processor for that.</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Few issues a few days on with my Windows 10 upgrade</title>
      <link>https://blog.richardfennell.net/posts/few-issues-a-few-days-on-with-my-windows-10-upgrade/</link>
      <pubDate>Sun, 02 Aug 2015 10:31:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/few-issues-a-few-days-on-with-my-windows-10-upgrade/</guid>
      <description>&lt;p&gt;A few days in and I have solved the few problems I have had&lt;/p&gt;
&lt;h2 id=&#34;can-apply-update-security-update-for-windows-10-for-x64-based-systems-kb3074683&#34;&gt;Can’t apply update: Security Update for Windows 10 for x64-based Systems (KB3074683)&lt;/h2&gt;
&lt;p&gt;My system tried to apply the KB3074683 patch a couple of times, rolling it back each time. A search of the &lt;a href=&#34;http://superuser.com/questions/948316/windows-10-we-couldnt-complete-the-updates-undoing-changes&#34;&gt;forums found the answer to this one&lt;/a&gt;. As in the forum post I have an Nvidia video card; in fact it caused the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/07/30/Upgrade-from-Windows-81-to-Windows-10-on-my-Lenovo-W520.aspx&#34;&gt;problems during the update&lt;/a&gt;, so the fix was to delete the UpdatusUser registry entry under &lt;code&gt;HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList&lt;/code&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A few days in and I have solved the few problems I have had</p>
<h2 id="can-apply-update-security-update-for-windows-10-for-x64-based-systems-kb3074683">Can’t apply update: Security Update for Windows 10 for x64-based Systems (KB3074683)</h2>
<p>My system tried to apply the KB3074683 patch a couple of times, rolling it back each time. A search of the <a href="http://superuser.com/questions/948316/windows-10-we-couldnt-complete-the-updates-undoing-changes">forums found the answer to this one</a>. As in the forum post I have an Nvidia video card; in fact it caused the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/07/30/Upgrade-from-Windows-81-to-Windows-10-on-my-Lenovo-W520.aspx">problems during the update</a>, so the fix was to delete the UpdatusUser registry entry under <code>HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList</code>.</p>
<p>Once this was deleted the update applied without any issues.</p>
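<p>If you prefer not to edit the registry by hand, the stray profile entry can be found and removed with a little PowerShell. This is only a sketch (the match on ProfileImagePath is my assumption about how the UpdatusUser entry is named); keep the -WhatIf switch until you have confirmed it only matches the entry you expect:</p>
<pre tabindex="0"><code># Find the profile entry whose path mentions UpdatusUser and remove it.
# Back up the key first; drop -WhatIf only once you are sure of the match.
$profileList = &#39;HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList&#39;
Get-ChildItem $profileList |
    Where-Object { (Get-ItemProperty $_.PSPath).ProfileImagePath -like &#39;*UpdatusUser*&#39; } |
    Remove-Item -Recurse -WhatIf
</code></pre>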
<h2 id="windows-defender-wont-start">Windows Defender won’t start</h2>
<p>Every time my PC started I got the error that Windows Defender would not start.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_255.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_251.png" title="image"></a></p>
<p>After much searching and fiddling with settings, it turned out this was a red herring. Defender was not starting because I had another AV product in place, <a href="https://technet.microsoft.com/en-us/library/hh508836.aspx">System Center End Point Protection</a>, just as the dialog said. End Point Protection is installed by our IT team as part of our standard setup. So the actual issue was that the Defender tooltray app was trying to autostart, giving the error as it failed to connect to the background services which were not running. Strange, as this appeared not to be an issue on Windows 8.1.</p>
<p>The answer was to use <a href="https://technet.microsoft.com/en-us/sysinternals/bb963902.aspx">SysInternal AutoRuns</a> to disable the loading of the tooltray application.</p>
<h2 id="can-access-a-data-dedupd-disk">Can’t access a Data DeDup’d disk</h2>
<p>On Windows 8.1 I used the <a href="https://weikingteh.wordpress.com/2013/01/15/how-to-enable-data-deduplication-in-windows-8/">Data DeDup hack</a> on one of the disks that I use for Hyper-V VMs; I got a 71% disk space saving as there is so much common data between the various VMs. At the time of writing I could not find a matching set of DISM packages for Windows 10; they need to come from the equivalent release of Server 2016, which is still in CTP/Preview.</p>
<p>After some fiddling with feature packs from preview builds, I decided to just stop using the Data DeDup feature for now. So I attached my disk to an 8.1 machine with DeDup enabled, copied the contents off, re-formatted the disk, copied the data back and then put the disk back in my laptop.</p>
<p>I do hope Microsoft chooses to add Data DeDup to Windows 10 in the future; it is of great use to me and anyone else who uses plenty of local VMs.</p>
<p>So I think I am there now, let us see how reliable it is day to day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrade from Windows 8.1 to Windows 10 on my Lenovo W520</title>
      <link>https://blog.richardfennell.net/posts/upgrade-from-windows-8-1-to-windows-10-on-my-lenovo-w520/</link>
      <pubDate>Thu, 30 Jul 2015 09:44:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrade-from-windows-8-1-to-windows-10-on-my-lenovo-w520/</guid>
      <description>&lt;p&gt;I have just done an in place upgrade on my Lenovo W520 from Windows 8.1 to Windows 10. Something I had not tried during the beta programme, sticking to running Windows 10 in VMs (mostly on Azure).&lt;/p&gt;
&lt;p&gt;I have to say the process was pretty smooth. I only hit one issue, and this was the usual NVidia Optimus problems I saw installing &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/12/The-battle-of-the-Lenovo-W520-and-projectors.aspx&#34;&gt;Windows 8&lt;/a&gt; and &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/15/Issues-repaving-the-Lenovo-W520-with-Windows-81-again.aspx&#34;&gt;8.1&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is what happened&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;With Windows 8.1 running mounted the Windows 10 Enterprise ISO&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just done an in place upgrade on my Lenovo W520 from Windows 8.1 to Windows 10. Something I had not tried during the beta programme, sticking to running Windows 10 in VMs (mostly on Azure).</p>
<p>I have to say the process was pretty smooth. I only hit one issue, and this was the usual NVidia Optimus problems I saw installing <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/12/The-battle-of-the-Lenovo-W520-and-projectors.aspx">Windows 8</a> and <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/15/Issues-repaving-the-Lenovo-W520-with-Windows-81-again.aspx">8.1</a>.</p>
<p>This is what happened</p>
<ol>
<li>
<p>With Windows 8.1 running mounted the Windows 10 Enterprise ISO</p>
</li>
<li>
<p>Ran the setup</p>
</li>
<li>
<p>It did a few checks and eventually asked if I wanted to keep everything – I said yes</p>
</li>
<li>
<p>It showed a percentage complete gauge</p>
</li>
<li>
<p>It copied files OK (about 30%)</p>
</li>
<li>
<p>It said it had found 5% of drivers (32% overall) and stopped – I left it a couple of hours, no disk or network activity</p>
</li>
</ol>
<p>At this point I was a bit worried, but I guessed it was the same problem as I had seen on Windows 8.x: the installer needs to access the Intel GPU as well as the NVidia GPU or it gets confused and hangs. A disabled GPU is not a removed GPU.</p>
<p>So I</p>
<ol>
<li>
<p>Rebooted (via the power switch)</p>
</li>
<li>
<p>Booted into the BIOS (by pressing the ThinkVantage button)</p>
</li>
<li>
<p>Selected the Enable Nvidia Optimus in the graphics options</p>
</li>
<li>
<p>Saved and rebooted</p>
</li>
<li>
<p>The PC rolled back the Windows 10 update (very quickly, less than 5 minutes)<br>
<strong>Note:</strong> I had expected to be challenged for a Bitlocker code due to the BIOS setting change during the reboot but I wasn’t</p>
</li>
<li>
<p>With Windows 8.1 running again I re-mounted the Windows 10 Enterprise ISO</p>
</li>
<li>
<p>Ran the setup again</p>
</li>
<li>
<p>It did the same few checks and eventually asked if I wanted to keep everything – I said yes again</p>
</li>
<li>
<p>This time it completed without error, it took around an hour</p>
</li>
</ol>
<p>So now I had an upgraded PC, and everything seemed OK, including my biometric login – which surprised me as this had been a <a href="https://technet.microsoft.com/en-us/library/dn344916.aspx">problem to set up in the past</a>.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/12/The-battle-of-the-Lenovo-W520-and-projectors.aspx">The only issue was with my external screen</a>, so I went back into the BIOS to disable NVidia Optimus again. This time it did prompt me to re-enter the Bitlocker key. Once this was done I could use external screens with no issues, as before.</p>
<p>So a smooth upgrade from our standard Windows 8.1 dev machine image, and a good stop gap until our IT team builds a Windows 10 image in System Center.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Lessons learnt using simple PowerShell scripts with vNext Release Management</title>
      <link>https://blog.richardfennell.net/posts/lessons-learnt-using-simple-powershell-scripts-with-vnext-release-management/</link>
      <pubDate>Sat, 25 Jul 2015 15:23:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lessons-learnt-using-simple-powershell-scripts-with-vnext-release-management/</guid>
      <description>&lt;p&gt;If you are using &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx&#34;&gt;basic PowerShell scripts as opposed to DSC with Release Management&lt;/a&gt; there are a few gotcha’s I have found.&lt;/p&gt;
&lt;h2 id=&#34;you-cannot-pass-parameters&#34;&gt;You cannot pass parameters&lt;/h2&gt;
&lt;p&gt;Let’s look at a sample script that we would like to run via Release Manager&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;

param  
(  
    $param1   
)

 

write-verbose -verbose &amp;#34;Start&amp;#34;  
write-verbose -verbose &amp;#34;Got var1 [$var1]&amp;#34;  
write-verbose -verbose &amp;#34;Got param1 [$param1]&amp;#34;  
write-verbose -verbose &amp;#34;End&amp;#34;  
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;In Release Manager we have the following vNext workflow&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_254.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_250.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>If you are using <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx">basic PowerShell scripts as opposed to DSC with Release Management</a> there are a few gotchas I have found.</p>
<h2 id="you-cannot-pass-parameters">You cannot pass parameters</h2>
<p>Let’s look at a sample script that we would like to run via Release Manager</p>
<pre tabindex="0"><code>

param  
(  
    $param1   
)

 

write-verbose -verbose &#34;Start&#34;  
write-verbose -verbose &#34;Got var1 [$var1]&#34;  
write-verbose -verbose &#34;Got param1 [$param1]&#34;  
write-verbose -verbose &#34;End&#34;  
</code></pre><p>In Release Manager we have the following vNext workflow</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_254.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_250.png" title="image"></a></p>
<p>You can see we are setting two custom values which we intend to use within our script, one is a script parameter (<strong>Param1</strong>), the other one is just a global variable (<strong>Var1</strong>).</p>
<p>If we do a deployment we get the log</p>
<pre tabindex="0"><code>

Copying recursively from \\storedropsrm4583e318-abb2-4f21-9289-9cb0264a3542152 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.

Start

Got var1 [XXXvar1]

Got param1 []

End
</code></pre><p>You can see the problem: <strong>$var1</strong> is set, <strong>$param1</strong> is not. It took me a while to get my head around this; the problem is that the RM activity’s <strong>PSScriptPath</strong> is just that, a script path, not a command line that will be executed. Unlike the PowerShell activities in the vNext build tools, you don’t have a pair of settings, one for the path to the script and another for the arguments. Here we have no way to set the command line arguments.</p>
<p><strong>Note:</strong> The <strong>PSConfigurationPath</strong> is just for <a href="http://colinsalmcorner.com/post/using-webdeploy-in-vnext-releases">DSC configurations as discussed elsewhere</a>.</p>
<p>So in effect the <strong>Param1</strong> is not set, as we did not call</p>
<pre tabindex="0"><code>test -param1 &#34;some value&#34;
</code></pre><p>This means there is no point using parameters in the script you wish to use with RM vNext. But wait, I bet you are thinking ‘<em>I want to run my script externally to Release Manager to test it, and using parameters with validation rules is best practice, I don’t want to lose that advantage</em>’</p>
<p>The best workaround I have found is to use a wrapper script that takes the variable and makes them parameters, something like this</p>
<pre tabindex="0"><code>$folder = Split-Path -Parent $MyInvocation.MyCommand.Definition  
&amp; &#34;$folder\test.ps1&#34; -param1 $param1
</code></pre><p><strong>Another gotcha:</strong> note that I need to find the path the wrapper script is running in and use it to build the path to my actual script. If I don’t do this I get an error that the <strong>test.ps1</strong> script can’t be found.</p>
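<p>Keeping parameter validation in the called script still pays off when you run it stand-alone. A minimal sketch of what such a <strong>test.ps1</strong> could look like (the validation attributes here are my suggestion, not part of the original script):</p>
<pre tabindex="0"><code># test.ps1 - still testable from a normal PowerShell prompt
param  
(  
    [Parameter(Mandatory=$true)]  
    [ValidateNotNullOrEmpty()]  
    [string]$param1   
)  

$VerbosePreference = &#39;Continue&#39; # equiv to -verbose  

write-verbose &#34;Got param1 [$param1]&#34;  
</code></pre>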
<p>After altering my pipeline to use the wrapper and rerunning the deployment I get the log file I wanted</p>
<pre tabindex="0"><code>

Copying recursively from \\storedropsrm4583e318-abb2-4f21-9289-9cb0264a3542160 to C:\Windows\DtlDownloads\ISS vNext Drops succeeded.

Start

Got var1 [XXXvar1]

Got param1 [XXXparam1]

End
</code></pre><p>This is all a bit ugly, but works.</p>
<p>Looking forward, this appears not to be too much of an issue. The next version of <a href="https://channel9.msdn.com/Events/Build/2015/2-615">Release Management as shown at Build</a> is based around the vNext TFS build tooling, which seems to always allow you to pass true PowerShell command line arguments. So this problem should go away in the not too distant future.</p>
<h2 id="dont-write-to-the-console">Don’t write to the console</h2>
<p>The other big problem is any script that writes to or reads from the console. Usually this means a <strong>write-host</strong> call in a script that causes an error along the lines of</p>
<pre tabindex="0"><code>A command that prompts the user failed because the host program or the command type does not support user interaction. Try a host program that supports user interaction, such as the Windows PowerShell Console or Windows PowerShell ISE, and remove prompt-related commands from command types that do not support user interaction, such as Windows PowerShell workflows.  
At C:\Windows\DtlDownloads\ISS vNext Drops\scripts\test.ps1:7 char:1  
+ Write-Host &#34;hello 1&#34; -ForegroundColor red
</code></pre><p>But also watch out for any <strong>CLS</strong> calls; that has caught me out. I have found it can be hard to track down the offending lines, especially if there are PowerShell modules loading other modules.</p>
<p>The best recommendation is to just use <strong>write-verbose</strong> and <strong>write-error</strong>.</p>
<ul>
<li><strong>write-error</strong> if your script has errored. This will let RM know the script has failed, thus failing the deployment – just what we want</li>
<li><strong>write-verbose</strong> for any logging</li>
</ul>
<p>Any other form of PowerShell output will not be passed to RM, be warned!</p>
<p>You might also notice in my sample script that I am passing the <strong>-verbose</strong> argument to the <strong>write-verbose</strong> command; again, you have to have this maximal level of logging on for the messages to make it out to the RM logs. Probably a better solution, if you think you might vary the level of logging, is to change the script to set the <strong>$VerbosePreference</strong></p>
<pre tabindex="0"><code>param  
(  
    $param1   
)

$VerbosePreference = &#39;Continue&#39; # equiv to -verbose 

write-verbose &#34;Start&#34;  
write-verbose &#34;Got var1 \[$var1\]&#34;  
write-verbose &#34;Got param1 \[$param1\]&#34;  
write-verbose &#34;End&#34;  
</code></pre><p>So hopefully a few pointers to make your deployments a bit smoother</p>
]]></content:encoded>
    </item>
    <item>
      <title>Changes in VS/TFS licensing you really need to be aware of</title>
      <link>https://blog.richardfennell.net/posts/changes-in-vstfs-licensing-you-really-need-to-be-aware-of/</link>
      <pubDate>Tue, 21 Jul 2015 22:23:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/changes-in-vstfs-licensing-you-really-need-to-be-aware-of/</guid>
      <description>&lt;p&gt;With the release of Visual Studio 2015 there are some significant changes to Visual Studio and TFS licensing, you can find the &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2015/07/21/licensing-and-packaging-changes-for-tfs-2015.aspx&#34;&gt;details of Brian Harry’s blog&lt;/a&gt;. These changes can make a serious change in what you need to purchase for different roles, so it could well be worth a look.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>With the release of Visual Studio 2015 there are some significant changes to Visual Studio and TFS licensing; you can find the <a href="http://blogs.msdn.com/b/bharry/archive/2015/07/21/licensing-and-packaging-changes-for-tfs-2015.aspx">details on Brian Harry’s blog</a>. These changes can make a serious difference to what you need to purchase for different roles, so it could well be worth a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Stray white space in a ‘path to custom test adaptors’ will cause tests to fail on VSO vNext build</title>
      <link>https://blog.richardfennell.net/posts/stray-white-space-in-a-path-to-custom-test-adaptors-will-cause-tests-to-fail-on-vso-vnext-build/</link>
      <pubDate>Mon, 13 Jul 2015 16:59:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/stray-white-space-in-a-path-to-custom-test-adaptors-will-cause-tests-to-fail-on-vso-vnext-build/</guid>
      <description>&lt;p&gt;If you are providing a path to a custom test adaptor such as nUnit or Chutzpah for a TFS/VSO vNext build e.g. &lt;strong&gt;$(Build.SourcesDirectory)packages,&lt;/strong&gt; make sure you have no leading whitespace in the data entry form.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_253.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_249.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you do have a space you will see an error log like this as the adaptor cannot be found as the command line generated is malformed&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;2015-07-13T16:11:32.8986514Z Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\tasks\VSTest\1.0.16\VSTest.ps1
2015-07-13T16:11:33.0727047Z ##[debug]Calling Invoke-VSTest for all test assemblies
2015-07-13T16:11:33.0756512Z Working folder: C:\a\549426d
2015-07-13T16:11:33.0777083Z Executing C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe &amp;#34;C:\a\549426d\UnitTestDemo\WebApp.Tests\Scripts\mycode.tests.js&amp;#34;  /TestAdapterPath: C:\a\549426d\UnitTestDemo\Chutzpah /logger:trx
2015-07-13T16:11:34.3495987Z Microsoft (R) Test Execution Command Line Tool Version 12.0.30723.0
2015-07-13T16:11:34.3505995Z Copyright (c) Microsoft Corporation.  All rights reserved.
2015-07-13T16:11:34.3896000Z ##[error]Error: The /TestAdapterPath parameter requires a value, which is path of a location containing custom test adapters. Example:  /TestAdapterPath:c:\MyCustomAdapters
2015-07-13T16:11:36.5808275Z ##[error]Error: The test source file &amp;#34;C:\a\549426d\UnitTestDemo\Chutzpah&amp;#34; provided was not found.
2015-07-13T16:11:37.0004574Z ##[error]VSTest Test Run failed with exit code: 1
2015-07-13T16:11:37.0094570Z ##[warning]No results found to publish.
&lt;/code&gt;&lt;/pre&gt;</description>
<content:encoded><![CDATA[<p>If you are providing a path to a custom test adaptor such as nUnit or Chutzpah for a TFS/VSO vNext build, e.g. <strong>$(Build.SourcesDirectory)\packages</strong>, make sure you have no leading whitespace in the data entry form.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_253.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_249.png" title="image"></a></p>
<p>If you do have a space you will see an error log like this as the adaptor cannot be found as the command line generated is malformed</p>
<pre tabindex="0"><code>2015-07-13T16:11:32.8986514Z Executing the powershell script: C:\LR\MMS\Services\Mms\TaskAgentProvisioner\Tools\tasks\VSTest\1.0.16\VSTest.ps1
2015-07-13T16:11:33.0727047Z ##[debug]Calling Invoke-VSTest for all test assemblies
2015-07-13T16:11:33.0756512Z Working folder: C:\a\549426d
2015-07-13T16:11:33.0777083Z Executing C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe &#34;C:\a\549426d\UnitTestDemo\WebApp.Tests\Scripts\mycode.tests.js&#34;  /TestAdapterPath: C:\a\549426d\UnitTestDemo\Chutzpah /logger:trx
2015-07-13T16:11:34.3495987Z Microsoft (R) Test Execution Command Line Tool Version 12.0.30723.0
2015-07-13T16:11:34.3505995Z Copyright (c) Microsoft Corporation.  All rights reserved.
2015-07-13T16:11:34.3896000Z ##[error]Error: The /TestAdapterPath parameter requires a value, which is path of a location containing custom test adapters. Example:  /TestAdapterPath:c:\MyCustomAdapters
2015-07-13T16:11:36.5808275Z ##[error]Error: The test source file &#34;C:\a\549426d\UnitTestDemo\Chutzpah&#34; provided was not found.
2015-07-13T16:11:37.0004574Z ##[error]VSTest Test Run failed with exit code: 1
2015-07-13T16:11:37.0094570Z ##[warning]No results found to publish.
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Cannot run Pester unit tests in Visual Studio but they work Ok from the command prompt</title>
      <link>https://blog.richardfennell.net/posts/cannot-run-pester-unit-tests-in-visual-studio-but-they-work-ok-from-the-command-prompt/</link>
      <pubDate>Tue, 07 Jul 2015 09:53:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-run-pester-unit-tests-in-visual-studio-but-they-work-ok-from-the-command-prompt/</guid>
      <description>&lt;p&gt;I have been using &lt;a href=&#34;https://github.com/pester/Pester&#34;&gt;Pester for some PowerShell tests&lt;/a&gt;. From the command prompt all is good, but I kept getting the error &lt;em&gt;‘module cannot be loaded because scripts is disabled on this system’&lt;/em&gt; when I tried to run them via the Visual Studio Test Explorer&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_251.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_247.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://stackoverflow.com/questions/6142914/powershell-executionpolicy-is-wrong-when-run-through-visualstudio&#34;&gt;I found the solution on StackOverflow&lt;/a&gt;; I had forgotten that Visual Studio is 32-bit, so you need to set the 32-bit execution policy. Opening the default PowerShell command prompt and setting the policy only affects the 64-bit instance.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been using <a href="https://github.com/pester/Pester">Pester for some PowerShell tests</a>. From the command prompt all is good, but I kept getting the error <em>‘module cannot be loaded because scripts is disabled on this system’</em> when I tried to run them via the Visual Studio Test Explorer</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_251.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_247.png" title="image"></a></p>
<p><a href="http://stackoverflow.com/questions/6142914/powershell-executionpolicy-is-wrong-when-run-through-visualstudio">I found the solution on StackOverflow</a>; I had forgotten that Visual Studio is 32-bit, so you need to set the 32-bit execution policy. Opening the default PowerShell command prompt and setting the policy only affects the 64-bit instance.</p>
<ol>
<li>Open C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe</li>
<li>Run the command Set-ExecutionPolicy RemoteSigned</li>
<li>My tests passed (without restarting Visual Studio)</li>
</ol>
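<p>To check the two policies side by side you can ask each host directly; a quick sketch (run from a 64-bit session, since Windows redirects SysWOW64 for 32-bit processes):</p>
<pre tabindex="0"><code># Policy seen by 32-bit hosts such as Visual Studio&#39;s test runner
&amp; &#34;$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe&#34; -Command Get-ExecutionPolicy

# Policy seen by the default 64-bit console
&amp; &#34;$env:windir\System32\WindowsPowerShell\v1.0\powershell.exe&#34; -Command Get-ExecutionPolicy
</code></pre>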
<p><a href="/wp-content/uploads/sites/2/historic/image_252.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_248.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Overwriting your own parameters in Release Management can cause Powershell remoting problems</title>
      <link>https://blog.richardfennell.net/posts/overwriting-your-own-parameters-in-release-management-can-cause-powershell-remoting-problems/</link>
      <pubDate>Tue, 30 Jun 2015 20:58:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/overwriting-your-own-parameters-in-release-management-can-cause-powershell-remoting-problems/</guid>
      <description>&lt;p&gt;I have been doing &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx&#34;&gt;some work on vNext Release Management&lt;/a&gt;; I managed to waste a good hour today with a stupid error.&lt;/p&gt;
&lt;p&gt;In vNext process templates you provide a username and password to be used as the Powershell remoting credentials (in the red box below)&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_250.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_246.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;My Powershell script also took a username parameter, so this was provided as a custom configuration too (the green box). This was the issue. Not surprisingly, having two parameters with the same name is a problem. You might get away with it if they are the same value (I did on one stage, which caused more confusion), but if they differ (as mine did in my production stage) the last one set wins, which meant my remote Powershell returned the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been doing <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/06/18/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts.aspx">some work on vNext Release Management</a>; I managed to waste a good hour today with a stupid error.</p>
<p>In vNext process templates you provide a username and password to be used as the Powershell remoting credentials (in the red box below)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_250.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_246.png" title="image"></a></p>
<p>My Powershell script also took a username parameter, so this was provided as a custom configuration too (the green box). This was the issue. Not surprisingly, having two parameters with the same name is a problem. You might get away with it if they are the same value (I did on one stage, which caused more confusion), but if they differ (as mine did in my production stage) the last one set wins, which meant my remote Powershell returned the error</p>
<blockquote>
<p><em>System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. &mdash;&gt; System.AggregateException: One or more errors occurred. &mdash;&gt; Microsoft.TeamFoundation.Release.Common.Helpers.OperationFailedException: Permission denied while trying to connect to the target machine Gadila.blackmarble.co.uk on the port:5985 via power shell remoting.</em></p></blockquote>
<p>Easy to fix once you realise the problem; a logon failure is logged on the target machine in the event log. Just make sure you have unique parameter names.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Leeds DevOps on the 21st of July</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-leeds-devops-on-the-21st-of-july/</link>
      <pubDate>Tue, 30 Jun 2015 08:19:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-leeds-devops-on-the-21st-of-july/</guid>
      <description>&lt;p&gt;I will be speaking at &lt;a href=&#34;http://www.leedsdevops.org.uk/post/122787096355/meetup-tuesday-21st-july-2015-at-the-odi-node-in&#34;&gt;Leeds DevOps on the 21st of July&lt;/a&gt; on the subject of &lt;a href=&#34;http://searchwindowsserver.techtarget.com/definition/Microsoft-Windows-PowerShell-DSC-Desired-State-Configuration&#34;&gt;Desired State Configuration (DSC)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;‘In the Windows world, due to its API based architecture, deployment is too often not as simple as copying an EXE and updating a text configuration file. Desired State Configuration is an attempt to ease the pain we suffer in this space. Providing a set of tools that can be leveraged by any set of deployment tools whether in a Windows or heterogeneous environment. In this session we will look at what DSC is, what resource are available and how to write your own’.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking at <a href="http://www.leedsdevops.org.uk/post/122787096355/meetup-tuesday-21st-july-2015-at-the-odi-node-in">Leeds DevOps on the 21st of July</a> on the subject of <a href="http://searchwindowsserver.techtarget.com/definition/Microsoft-Windows-PowerShell-DSC-Desired-State-Configuration">Desired State Configuration (DSC)</a>.</p>
<p><em>‘In the Windows world, due to its API based architecture, deployment is too often not as simple as copying an EXE and updating a text configuration file. Desired State Configuration is an attempt to ease the pain we suffer in this space. Providing a set of tools that can be leveraged by any set of deployment tools whether in a Windows or heterogeneous environment. In this session we will look at what DSC is, what resource are available and how to write your own’.</em></p>
<p>The event is at the <a href="http://theodi.org/nodes/leeds">The Node in Leeds</a>, tickets are free and are available over on <a href="http://leedsdevops-jul-15.eventbrite.co.uk/">Eventbrite</a> or <a href="http://leedsdevops-jul-15.eventbrite.co.uk/">meetup.com</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Release Management vNext templates when you don’t want to use DSC scripts</title>
      <link>https://blog.richardfennell.net/posts/using-release-management-vnext-templates-when-you-dont-want-to-use-dsc-scripts/</link>
      <pubDate>Thu, 18 Jun 2015 20:35:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-release-management-vnext-templates-when-you-dont-want-to-use-dsc-scripts/</guid>
      <description>&lt;p&gt;Update 21 Aug 2015 - This post contains all the basic information, but there is an improved PowerShell script discussed in &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/21/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts-A-better-script.aspx&#34;&gt;Using Release Management vNext templates when you don’t want to use DSC scripts – A better script&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Many web sites are basically forms over data, so you need to deploy some DB schema and an MVC website. Even for this ‘bread and butter’ work it is important to have an automated process to avoid human error. Hence the rise in the use of release tools to run your DACPAC and MSDeploy packages.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Update 21 Aug 2015 - This post contains all the basic information, but there is an improved PowerShell script discussed in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/08/21/Using-Release-Management-vNext-templates-when-you-dont-want-to-use-DSC-scripts-A-better-script.aspx">Using Release Management vNext templates when you don’t want to use DSC scripts – A better script</a></p>
<hr>
<p>Many web sites are basically forms over data, so you need to deploy some DB schema and an MVC website. Even for this ‘bread and butter’ work it is important to have an automated process to avoid human error. Hence the rise in the use of release tools to run your DACPAC and MSDeploy packages.</p>
<p>In the Microsoft space this might lead to the question of how <a href="http://blogs.technet.com/b/privatecloud/archive/2013/08/30/introducing-powershell-desired-state-configuration-dsc.aspx">Desired State Configuration (DSC)</a> can help. I, and <a href="http://colinsalmcorner.com/post/using-webdeploy-in-vnext-releases">others</a>, have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/12/24/Thoughts-in-vNext-deployment-in-Release-Management.aspx">posted in the past</a> about how DSC can be used to achieve this type of deployment, but this can be complex, and you have to ask whether DSC is the best way to manage DACPAC and MSDeploy packages, or whether DSC is better suited to just the configuration of your infrastructure/OS features.</p>
<p>You might ask why you would not want to use DSC; the most common reason I see is that you need to provide deployment scripts to end clients who don’t use DSC, or you have just decided you want basic PowerShell. Only you can judge which is best for your systems, but I thought it worth outlining an alternative way to deploy these packages using Release Management vNext pipelines that does not make use of DSC.</p>
<h2 id="background">Background</h2>
<p>Let us assume we have a system with a SQL server and an IIS web server that have been added to the Release Management vNext environment. These already have SQL and IIS enabled; maybe you used DSC for that?</p>
<p>The vNext release template allows you to run either DSC or PowerShell on the machines, we will <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/12/24/Thoughts-in-vNext-deployment-in-Release-Management.aspx">ignore DSC</a>, so what can you do if you want to use simple PowerShell scripts?</p>
<h2 id="where-do-i-put-my-scripts">Where do I put my Scripts?</h2>
<p>We will place the PowerShell scripts (and maybe any tools they call) under source control such that they end up in the build drops location, thus making it easy for Release Management to find them, and allowing the scripts (and tools) to be versioned.</p>
<h2 id="deploying-a-dacpac">Deploying a DACPAC</h2>
<p>The script I have been using to deploy DACPACs is as follows</p>
<pre tabindex="0"><code># find the script folder  
$folder = Split-Path -parent $MyInvocation.MyCommand.Definition  
Write-Verbose &#34;Deploying DACPAC $SOURCEFILE using script in &#39;$folder&#39;&#34;  
&amp; &#34;$folder\sqlpackage.exe&#34; /Action:Publish /SourceFile:&#34;$folder\..\$SOURCEFILE&#34; /TargetServerName:$TARGETSERVERNAME /TargetDatabaseName:$TARGETDATABASENAME | Write-Verbose -Verbose
</code></pre><p>Note that:</p>
<ol>
<li>First it finds the folder it is running in; this is the easiest way to find the other resources I need</li>
<li>The only way any logging will end up in the Release Management logs is if it is logged at the verbose level i.e. <em>write-verbose “your message” –verbose</em></li>
<li>I have used a simple <em>&amp; my.exe</em> to execute my command, but pass the output via the <em>write-verbose</em> cmdlet to make sure we see the results. The alternative would be to use <em>Start-Process</em></li>
<li>SQLPACKAGE.EXE (and its associated DLLs) are located in the same SCRIPTS folder as the PowerShell script and are under source control. Of course you could instead make sure any tools you need are already installed on the target machine.</li>
</ol>
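<p>If you were driving SQLPACKAGE.EXE from another scripting language the shape would be the same: locate the tool relative to the script, then shell out. A minimal Python sketch of building that command line, mirroring the layout above (this is an illustration, not the actual deployment code):</p>

```python
import os

def build_sqlpackage_command(script_folder, source_file, server, database):
    # sqlpackage.exe is assumed to sit alongside the deployment script,
    # with the DACPAC one folder up, mirroring the drop layout above
    exe = os.path.join(script_folder, "sqlpackage.exe")
    dacpac = os.path.join(script_folder, "..", source_file)
    return [exe, "/Action:Publish",
            "/SourceFile:" + dacpac,
            "/TargetServerName:" + server,
            "/TargetDatabaseName:" + database]

# subprocess.run(build_sqlpackage_command(...), check=True) would then run it
```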
<p>I pass the three parameters needed for the script as custom configuration</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_248.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_244.png" title="image"></a></p>
<p>Remember that you don’t have to run SQLPACKAGE.EXE on the SQL server; it can be run remotely (that is why in the screenshot above the ServerName is IIS8, not SQL as you might expect)</p>
<h2 id="deploying-a-msdeploy-package">Deploying a MSDeploy Package</h2>
<p>The script I use to deploy the WebDeploy package is as follows</p>
<pre tabindex="0"><code>function Update-ParametersFile  
{  
    param  
    (  
        $paramFilePath,  
        $paramsToReplace  
    )

    write-verbose &#34;Updating parameters file &#39;$paramFilePath&#39;&#34; -verbose  
    $content = get-content $paramFilePath  
    $paramsToReplace.GetEnumerator() | % {  
        Write-Verbose &#34;Replacing value for key &#39;$($_.Key)&#39;&#34; -Verbose  
        $content = $content.Replace($_.Key, $_.Value)  
    }  
    set-content -Path $paramFilePath -Value $content
}

# the script folder  
$folder = Split-Path -parent $MyInvocation.MyCommand.Definition  
write-verbose &#34;Deploying Website &#39;$package&#39; using script in &#39;$folder&#39;&#34; -verbose

Update-ParametersFile -paramFilePath &#34;$folder\..\_PublishedWebsites\$($package)_Package\$package.SetParameters.xml&#34; -paramsToReplace @{  
    &#34;__DataContext__&#34; = $datacontext  
    &#34;__SiteName__&#34; = $siteName  
    &#34;__Domain__&#34; = $Domain  
    &#34;__AdminGroups__&#34; = $AdminGroups  
}

write-verbose &#34;Calling &#39;$package.deploy.cmd&#39;&#34; -verbose  
&amp; &#34;$folder\..\_PublishedWebsites\$($package)_Package\$package.deploy.cmd&#34; /Y | Write-Verbose -verbose
</code></pre><p>Note that:</p>
<ol>
<li>First I declare a function that I use to replace the contents of the <em>package.setparameters.xml</em> file, <a href="http://colinsalmcorner.com/post/webdeploy-and-release-management--the-proper-way">a key step in using binary promotion and WebDeploy</a></li>
<li>Again I find the folder the script is running in so I can locate other resources</li>
<li>I then declare the parameters I need to replace and call the replacement function </li>
<li>Finally I call the <em>package.deploy.cmd</em> command, piping the output via <em>write-verbose</em> so it reaches the Release Management logs</li>
</ol>
<p>This is called as follows</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_249.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_245.png" title="image"></a></p>
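<p>The token replacement at the heart of the script is the piece most worth reusing; as an illustration, the same idea in Python (the token names are just examples, as in the script above):</p>

```python
# Sketch of the Update-ParametersFile idea: swap each __Token__
# placeholder in the SetParameters content for its real value.
def update_parameters(content, replacements):
    for token, value in replacements.items():
        content = content.replace(token, value)
    return content

fragment = "setParameter name=SiteName value=__SiteName__"
print(update_parameters(fragment, {"__SiteName__": "MyWebSite"}))
# prints: setParameter name=SiteName value=MyWebSite
```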
<h2 id="summary">Summary</h2>
<p>So I think these reusable scripts give a fairly easy way to make use of vNext Release Management pipelines. They can also easily be given to clients who just want to run something manually.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for 500 internal errors when trying to trigger a Release Management pipeline from a build via the REST API</title>
      <link>https://blog.richardfennell.net/posts/fix-for-500-internal-errors-when-trying-to-trigger-a-release-management-pipeline-from-a-build-via-the-rest-api/</link>
      <pubDate>Wed, 17 Jun 2015 15:03:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-500-internal-errors-when-trying-to-trigger-a-release-management-pipeline-from-a-build-via-the-rest-api/</guid>
<description>&lt;p&gt;With the help of the Release Management team at Microsoft I now have a working REST based automated TFS Build to Release Management pipeline. Previously we were using a TFS automated build and then manually triggering our agent based Release Management pipeline. When we moved to a vNext PS/DSC based RM pipeline I took the chance to automate the link using REST via a &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2014/10/10/trigger-release-from-build-with-release-management-for-visual-studio-2013-update-3.aspx&#34;&gt;PowerShell script&lt;/a&gt; to trigger the initial deployment. However, I hit problems: first a stupid 401 permission error and later a much stranger 500 internal server error.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>With the help of the Release Management team at Microsoft I now have a working REST based automated TFS Build to Release Management pipeline. Previously we were using a TFS automated build and then manually triggering our agent based Release Management pipeline. When we moved to a vNext PS/DSC based RM pipeline I took the chance to automate the link using REST via a <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2014/10/10/trigger-release-from-build-with-release-management-for-visual-studio-2013-update-3.aspx">PowerShell script</a> to trigger the initial deployment. However, I hit problems: first a stupid 401 permission error and later a much stranger 500 internal server error.</p>
<h2 id="fixing-the-401-error">Fixing the 401 error</h2>
<p>The first problem was that the <strong>InitiateReleaseFromBuild.ps1</strong> script defaults to a hardcoded username and password, when you should really be using the current credentials. To do this make sure the lines around line 60 in the script are as shown below (or enter valid credentials if you don’t want to use default credentials)</p>
<pre tabindex="0"><code>$wc = New-Object System.Net.WebClient  
$wc.UseDefaultCredentials = $true  
# rmuser should be part of the RM users list and should have permission to trigger the release.  
#$wc.Credentials = new-object System.Net.NetworkCredential(&#34;rmuser&#34;, &#34;rmuserpassword&#34;, &#34;rmuserdomain&#34;)  
</code></pre><h2 id="fixing-the-500-error">Fixing the 500 error</h2>
<p>The 500 error was stranger. Turns out the issue was the registration of our TFS server in Release Management.</p>
<p>Using the dialogs in the RM client we had registered our TFS server, which had generated the URL <a href="https://tfs.domain.com:443/tfs">https://tfs.domain.com:443/tfs</a>. If we ran the <strong>InitiateReleaseFromBuild.ps1</strong> script with this URL set as a parameter we got the 500 error, and the RM logs showed the workflow could not start. Eventually we realised it was because RM thought it could not access the TFS server. The problem was that at some point between the script being run and the RM server processing the URL the :443 had been removed, presumably because this is the default for HTTPS and some layer was being ‘helpful’. This meant that the RM server was trying to string match the URL <a href="https://tfs.domain.com/tfs">https://tfs.domain.com/tfs</a> against <a href="https://tfs.domain.com:443/tfs">https://tfs.domain.com:443/tfs</a>, which failed, hence the workflow failed.</p>
<p>The fix was to edit the TFS registration in RM to remove the port number, leaving the field empty (not that obvious, as the dialog completes this field for you when you select HTTPS)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_247.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_243.png" title="image"></a></p>
<p>Once this was done the URL matching worked and the release pipeline triggered as expected.</p>
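<p>The underlying issue was a naive string comparison of URLs, which breaks as soon as a default port is added or stripped anywhere along the way. A defensive comparison would normalise default ports before matching; a small Python illustration of the idea (not the actual RM code):</p>

```python
from urllib.parse import urlsplit

# Default ports that can be dropped from a URL without changing its meaning
DEFAULT_PORTS = {"http": 80, "https": 443}

def normalise(url):
    """Return the URL with any redundant default port removed."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    port = parts.port
    if port is not None and DEFAULT_PORTS.get(parts.scheme) != port:
        host = f"{host}:{port}"  # keep non-default ports
    return f"{parts.scheme}://{host}{parts.path}"

# The two forms that tripped up Release Management now compare equal
print(normalise("https://tfs.domain.com:443/tfs") == normalise("https://tfs.domain.com/tfs"))
# prints: True
```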
]]></content:encoded>
    </item>
    <item>
      <title>Failing Ping tests on Application Insights</title>
      <link>https://blog.richardfennell.net/posts/failing-ping-tests-on-application-insights/</link>
      <pubDate>Mon, 08 Jun 2015 21:35:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/failing-ping-tests-on-application-insights/</guid>
      <description>&lt;p&gt;Whilst setting up &lt;a href=&#34;https://azure.microsoft.com/en-us/documentation/articles/app-insights-get-started/&#34;&gt;Application Insights&lt;/a&gt; on one of our web sites I hit a problem. The target site appeared to be working OK, but if I setup a &lt;a href=&#34;https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/&#34;&gt;ping test&lt;/a&gt; it failed.&lt;/p&gt;
&lt;p&gt;Digging into the failure, as with much of Application Insights just keep clicking to go deeper, I found the issue was that a CSS file was failing to load.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_246.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_242.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Presumably on this Umbraco site the CSS file is meant to be loaded for the site but none of the styles are actually used, hence the site renders OK.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Whilst setting up <a href="https://azure.microsoft.com/en-us/documentation/articles/app-insights-get-started/">Application Insights</a> on one of our web sites I hit a problem. The target site appeared to be working OK, but if I set up a <a href="https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/">ping test</a> it failed.</p>
<p>Digging into the failure (as with much of Application Insights, you just keep clicking to go deeper), I found the issue was that a CSS file was failing to load.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_246.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_242.png" title="image"></a></p>
<p>Presumably on this Umbraco site the CSS file is meant to be loaded for the site but none of the styles are actually used, hence the site renders OK.</p>
<p>The fix was to make sure the video.css file was present on the server. So Application Insights found a problem with a production system – just as it is meant to!</p>
<p>So it is important to remember that the ping test is not the simple thing I thought it was; it is actually a full page load, checking that only 200 OK responses are seen.</p>
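<p>In other words, the pass/fail rule spans every request the page load makes, not just the page itself. A trivial sketch of that rule (the statuses are illustrative, matching the failure above):</p>

```python
def ping_test_passes(responses):
    # An Application Insights ping test is really a full page load: the
    # page plus every referenced resource (CSS, JS, images) must return
    # 200 OK. 'responses' maps resource URL to HTTP status code.
    return all(status == 200 for status in responses.values())

print(ping_test_passes({"/": 200, "/css/video.css": 404}))
# prints: False
```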
]]></content:encoded>
    </item>
    <item>
      <title>Windows Media Center issues again</title>
      <link>https://blog.richardfennell.net/posts/windows-media-center-issues-again/</link>
      <pubDate>Mon, 08 Jun 2015 19:43:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-media-center-issues-again/</guid>
      <description>&lt;p&gt;Today was my day for semi annual Media Center (MCE) problems. As usual they seemed to start with an unexpected power issue, a local power cut, maybe the answer is a UPS for the TV setup? Once the PC was rebooted it had forgotten it had any tuners. If I tried to view live TV or re-setup the TV signal it just hung with a spinning ‘toilet bowl of death’ cursor. Corrupt TV data DB I suspect, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/category/MCE.aspx&#34;&gt;I have seen it before&lt;/a&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Today was my day for semi-annual Media Center (MCE) problems. As usual they seemed to start with an unexpected power issue, this time a local power cut; maybe the answer is a UPS for the TV setup? Once the PC was rebooted it had forgotten it had any tuners. If I tried to view live TV or re-setup the TV signal it just hung with a spinning ‘toilet bowl of death’ cursor. A corrupt TV data DB I suspect, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/category/MCE.aspx">I have seen it before</a></p>
<p>I tried clearing the DB content in <em>C:\programdata\windows\ehome</em>, but no luck. In the end I did the dirty fix of</p>
<ul>
<li>Going into Window features</li>
<li>Remove media center</li>
<li>Reboot</li>
<li>Re-add media center</li>
<li>Re-run MCE setup – this took over an hour, it is slow to find Freeview channels</li>
</ul>
<p>The downside of this approach is that it resets all the series settings, media locations etc., but it does tend to work.</p>
<p>My MCE seems to have been getting slower and generally needing more reboots for a while, which is strange as it has been on the same dedicated hardware for a few years. <a href="http://www.theregister.co.uk/2015/05/05/no_windows_media_center_win_10/">Given Windows 10 is on the horizon and it has no MCE</a> I guess it is time to revisit an MCE replacement (or leave my MCE box on Windows 8). Last time I looked the issues were PVR support for Freeview and general ‘wife friendly operation’. It does seem that fewer and fewer people are prioritising terrestrial broadcast as a media source; it all seems to be about streaming. I just don’t think I am there yet, I like my PVR. But there is no harm in a trawl of the other current offerings, I might be surprised</p>
<p><strong>Updated 9pm when the setup wizard actually finished</strong> – it turns out my media library settings were not lost, just the series recording settings</p>
]]></content:encoded>
    </item>
    <item>
      <title>Strange TFS build process template editing issue with Typemock</title>
      <link>https://blog.richardfennell.net/posts/strange-tfs-build-process-template-editing-issue-with-typemock/</link>
      <pubDate>Tue, 02 Jun 2015 14:30:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/strange-tfs-build-process-template-editing-issue-with-typemock/</guid>
      <description>&lt;p&gt;Had a strange issue today while editing our standard TFS 2013 XAML build process template to add an optional post drop script block to &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2014/10/10/trigger-release-from-build-with-release-management-for-visual-studio-2013-update-3.aspx&#34;&gt;allow a Release Management pipeline to be triggered via REST&lt;/a&gt;. Our standard template includes a block for &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx&#34;&gt;enabling and disabling Typemock&lt;/a&gt;, after editing our template to add the new script block (nowhere near the Typemock section) our builds failed with the error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;TF215097: An error occurred while initializing a build for build definition BMISS.Expenses.Main.CI: Exception Message: Cannot set unknown member &amp;#39;TypeMock.TFS2013.TypeMockStart.DisableAutoLink&amp;#39;. (type XamlObjectWriterException) Exception Stack Trace: at System.Xaml.XamlObjectWriter.WriteStartMember(XamlMember property) 
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;It took ages to find the issue, we hunted for badly formed XAML, but the issue turned out to be that when ever we opened the template in Visual Studio 2013 it added the highlighted property&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I had a strange issue today while editing our standard TFS 2013 XAML build process template to add an optional post drop script block to <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2014/10/10/trigger-release-from-build-with-release-management-for-visual-studio-2013-update-3.aspx">allow a Release Management pipeline to be triggered via REST</a>. Our standard template includes a block for <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx">enabling and disabling Typemock</a>. After editing our template to add the new script block (nowhere near the Typemock section), our builds failed with the error</p>
<pre tabindex="0"><code>TF215097: An error occurred while initializing a build for build definition BMISS.Expenses.Main.CI: Exception Message: Cannot set unknown member &#39;TypeMock.TFS2013.TypeMockStart.DisableAutoLink&#39;. (type XamlObjectWriterException) Exception Stack Trace: at System.Xaml.XamlObjectWriter.WriteStartMember(XamlMember property) 
</code></pre><p>It took ages to find the issue; we hunted for badly formed XAML, but the problem turned out to be that whenever we opened the template in Visual Studio 2013 it added the highlighted property</p>
<pre tabindex="0"><code>&lt;If Condition=&#34;\[UseTypemock = True\]&#34; DisplayName=&#34;If using Typemock&#34; sap2010:WorkflowViewState.IdRef=&#34;If\_8&#34;&gt;  
  &lt;If.Then&gt;  
   &lt;Sequence DisplayName=&#34;Enabling Typemock&#34; sap2010:WorkflowViewState.IdRef=&#34;Sequence\_16&#34;&gt;  
      &lt;tt:TypeMockRegister AutoDeployDir=&#34;\[TypemockAutoDeployDir\]&#34; Company=&#34;\[TypemockCompany\]&#34; sap2010:WorkflowViewState.IdRef=&#34;TypeMockRegister\_1&#34; License=&#34;\[TypemockLicense\]&#34; /&gt;  
      &lt;tt:TypeMockStart DisableAutoLink=&#34;{x:Null}&#34; EvaluationFolder=&#34;{x:Null}&#34; Link=&#34;{x:Null}&#34; LogLevel=&#34;{x:Null}&#34; LogPath=&#34;{x:Null}&#34; ProfilerLaunchedFirst=&#34;{x:Null}&#34; Target=&#34;{x:Null}&#34; Verbosity=&#34;{x:Null}&#34; Version=&#34;{x:Null}&#34; AutoDeployDir=&#34;\[TypemockAutoDeployDir\]&#34; sap2010:WorkflowViewState.IdRef=&#34;TypeMockStart\_1&#34; /&gt;  
     &lt;/Sequence&gt;  
  &lt;/If.Then&gt;  
&lt;/If&gt;  
</code></pre><p>It should have been</p>
<pre tabindex="0"><code>&lt;If Condition=&#34;\[UseTypemock = True\]&#34; DisplayName=&#34;If using Typemock&#34; sap2010:WorkflowViewState.IdRef=&#34;If\_8&#34;&gt;  
  &lt;If.Then&gt;  
    &lt;Sequence DisplayName=&#34;Enabling Typemock&#34; sap2010:WorkflowViewState.IdRef=&#34;Sequence\_16&#34;&gt;  
       &lt;tt:TypeMockRegister AutoDeployDir=&#34;\[TypemockAutoDeployDir\]&#34; Company=&#34;\[TypemockCompany\]&#34; sap2010:WorkflowViewState.IdRef=&#34;TypeMockRegister\_1&#34; License=&#34;\[TypemockLicense\]&#34; /&gt;  
       &lt;tt:TypeMockStart EvaluationFolder=&#34;{x:Null}&#34; Link=&#34;{x:Null}&#34; LogLevel=&#34;{x:Null}&#34; LogPath=&#34;{x:Null}&#34; ProfilerLaunchedFirst=&#34;{x:Null}&#34; Target=&#34;{x:Null}&#34; Verbosity=&#34;{x:Null}&#34; Version=&#34;{x:Null}&#34; AutoDeployDir=&#34;\[TypemockAutoDeployDir\]&#34; sap2010:WorkflowViewState.IdRef=&#34;TypeMockStart\_1&#34; /&gt;  
    &lt;/Sequence&gt;  
  &lt;/If.Then&gt;  
&lt;/If&gt;
</code></pre><p>All I can assume is that this is due to some assembly mismatch between the Typemock DLLs linked to the XAML build process template and those on my development PC.</p>
<p>The fix for now is to do the editing in a text editor, or at least to check the file before it is checked in to make sure the property has not been re-added.</p>
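<p>That pre-check-in check is easy to automate; a hedged Python sketch of such a guard (the regex is illustrative, keyed on the attribute named in the TF215097 error above):</p>

```python
import re

def template_is_clean(xaml_text):
    # Fail if Visual Studio has re-added the DisableAutoLink property
    # to the TypeMockStart activity in the build process template
    return re.search(r"TypeMockStart[^/]*?DisableAutoLink", xaml_text) is None

print(template_is_clean('tt:TypeMockStart DisableAutoLink="{x:Null}" Link="{x:Null}"'))
# prints: False
```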
]]></content:encoded>
    </item>
    <item>
      <title>Build and Ignite Event Sessions</title>
      <link>https://blog.richardfennell.net/posts/build-and-ignite-event-sessions/</link>
      <pubDate>Thu, 28 May 2015 14:24:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/build-and-ignite-event-sessions/</guid>
      <description>&lt;p&gt;If you came to our &lt;a href=&#34;http://www.blackmarble.co.uk/events&#34;&gt;re:Build and re:Ignite events&lt;/a&gt; last week and want more information on the subjects we covered, remember that all the sessions from both the Microsoft Build and Ignite events are available at &lt;a href=&#34;http://channel9.msdn.com/Events/Build/2015&#34;&gt;http://channel9.msdn.com/Events/Build/2015&lt;/a&gt; and &lt;a href=&#34;http://channel9.msdn.com/Events/Ignite/2015&#34;&gt;http://channel9.msdn.com/Events/Ignite/2015&lt;/a&gt;. On these sites you can find videos of the sessions and slide stacks, hours of family viewing.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you came to our <a href="http://www.blackmarble.co.uk/events">re:Build and re:Ignite events</a> last week and want more information on the subjects we covered, remember that all the sessions from both the Microsoft Build and Ignite events are available at <a href="http://channel9.msdn.com/Events/Build/2015">http://channel9.msdn.com/Events/Build/2015</a> and <a href="http://channel9.msdn.com/Events/Ignite/2015">http://channel9.msdn.com/Events/Ignite/2015</a>. On these sites you can find videos of the sessions and slide stacks, hours of family viewing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Free Black Marble Build and Ignite round up event in Yorkshire</title>
      <link>https://blog.richardfennell.net/posts/free-black-marble-build-and-ignite-round-up-event-in-yorkshire/</link>
      <pubDate>Thu, 07 May 2015 12:28:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/free-black-marble-build-and-ignite-round-up-event-in-yorkshire/</guid>
      <description>&lt;p&gt;Did you miss content from Microsoft  &lt;a href=&#34;http://www.buildwindows.com/&#34;&gt;Build&lt;/a&gt; and &lt;a href=&#34;http://ignite.microsoft.com/&#34;&gt;Ignite&lt;/a&gt; events?&lt;/p&gt;
&lt;p&gt;Well you can catch up on &lt;a href=&#34;http://channel9.msdn.com/&#34;&gt;Channel9&lt;/a&gt;, but we at Black Marble are running free round up events on the 20th May in Leeds.&lt;/p&gt;
&lt;p&gt;Why not come and talk to Black Marble and Microsoft staff who attended the events?&lt;/p&gt;
&lt;p&gt;To register go to the &lt;a href=&#34;http://www.blackmarble.co.uk/events&#34;&gt;Black Marble events site&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Did you miss content from Microsoft  <a href="http://www.buildwindows.com/">Build</a> and <a href="http://ignite.microsoft.com/">Ignite</a> events?</p>
<p>Well you can catch up on <a href="http://channel9.msdn.com/">Channel9</a>, but we at Black Marble are running free round up events on the 20th May in Leeds.</p>
<p>Why not come and talk to Black Marble and Microsoft staff who attended the events?</p>
<p>To register go to the <a href="http://www.blackmarble.co.uk/events">Black Marble events site</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Generating MsTest wrappers for nUnit tests</title>
      <link>https://blog.richardfennell.net/posts/generating-mstest-wrappers-for-nunit-tests/</link>
      <pubDate>Thu, 07 May 2015 09:32:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/generating-mstest-wrappers-for-nunit-tests/</guid>
      <description>&lt;p&gt;Recently whilst at a clients one of our consultants came across an interesting issue; the client was using &lt;a href=&#34;http://www.seleniumhq.org/&#34;&gt;Selenium&lt;/a&gt; to write web tests, they wanted to trigger them both from Microsoft Test Manager (MTM) as local automated tests, and also run them using &lt;a href=&#34;https://www.browserstack.com/automate/c-sharp&#34;&gt;BrowserStack&lt;/a&gt; for multi browser regression testing. The problem was to import the tests into MTM they needed to be written in MsTest and for BrowserStack nUnit.&lt;/p&gt;
&lt;p&gt;As they did not want to duplicate each test, what could they do?&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Recently, whilst at a client’s, one of our consultants came across an interesting issue; the client was using <a href="http://www.seleniumhq.org/">Selenium</a> to write web tests, and wanted to trigger them both from Microsoft Test Manager (MTM) as local automated tests, and also run them using <a href="https://www.browserstack.com/automate/c-sharp">BrowserStack</a> for multi browser regression testing. The problem was that to import the tests into MTM they needed to be written in MsTest, and for BrowserStack, nUnit.</p>
<p>As they did not want to duplicate each test, what could they do?</p>
<p>After a bit of thought <a href="https://msdn.microsoft.com/en-us/library/bb126445.aspx">T4 templates</a> came to the rescue; it was fairly easy to write a proof of concept T4 template that generates an MsTest wrapper for each nUnit test at compile time. This is what we did, and the gotchas we discovered.</p>
<h3 id="prerequisites">Prerequisites</h3>
<ul>
<li>Read the tutorial and resources on <a href="http://www.olegsych.com/tag/t4/">Oleg Sych’s blog on T4 Templating</a> – a brilliant T4 resource</li>
<li>Install the <a href="http://www.microsoft.com/en-gb/download/details.aspx?id=40758">Visual Studio 2013 SDK</a></li>
<li>Install the <a href="http://www.microsoft.com/en-us/download/confirmation.aspx?id=40754">Visual Studio 2013 Modeling and Visualization SDK</a></li>
</ul>
<h3 id="process">Process</h3>
<p>[To make life easier this code has all been made available on <a href="https://github.com/rfennell/T4GenerateMsTestWrappersForNunitTests">GitHub</a>]</p>
<ol>
<li>
<p>Create a solution containing a class library with some nUnit tests as test data</p>
</li>
<li>
<p>Add a MsTest Unit Test project to this solution.</p>
</li>
<li>
<p>Add a T4 ‘Text Template’ item to the MsTest project</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_244.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_240.png" title="image"></a></p>
</li>
<li>
<p>Write the T4 template that uses reflection to find the nUnit tests in the solution and generates the MsTest wrappers. <a href="https://github.com/rfennell/T4GenerateMsTestWrappersForNunitTests/blob/master/T4GenerateMsTestWrappersForNunitTests/GeneratedMstests/GenerateTestWrapper.tt">See the source for the template on Github</a></p>
</li>
<li>
<p>Once this is done both the nUnit and MsTest can now be run inside Visual Studio</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_245.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_241.png" title="image"></a></p>
</li>
<li>
<p>You can now add the tests to either MTM or BrowserStack as needed, each product using the unit tests it can see.</p>
</li>
</ol>
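<p>At its heart step 4 is just string templating over the discovered tests; as a rough illustration of the kind of wrapper the T4 template emits, here is the idea in Python (the class and method names are made up, and the real template uses reflection for discovery):</p>

```python
# Sketch of what the T4 template does: for each discovered nUnit test,
# emit an MsTest [TestMethod] that simply calls the nUnit test.
def make_wrapper(fixture, test_names):
    lines = ["[TestClass]", f"public class {fixture}Wrapper", "{"]
    for name in test_names:
        lines += ["    [TestMethod]",
                  f"    public void {name}()",
                  "    {",
                  f"        new {fixture}().{name}();",
                  "    }"]
    lines.append("}")
    return "\n".join(lines)

print(make_wrapper("CalculatorTests", ["AddsTwoNumbers"]))
```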
<h3 id="the-gotcha--you-have-two-build-engines">The Gotcha – you have two build engines</h3>
<p>The main issues I had were due to me not realising the implications of the T4 template being processed in different ways between Visual Studio and MSBuild.</p>
<p>By default the template is processed whenever the .TT file is edited in Visual Studio; for me this is not the behaviour required, as I wanted the template processed every time the nUnit tests are altered. The easiest way to do this is to always regenerate the .CS file from the template on a compile. <a href="http://www.olegsych.com/2010/04/understanding-t4-msbuild-integration/">Oleg again provides great documentation on how to do this</a>; you end up editing the .CSPROJ file.</p>
<pre tabindex="0"><code>&lt;!-- Include the T$ processing targets--&gt;  
 &lt;Import Project=&#34;$(VSToolsPath)TextTemplatingMicrosoft.TextTemplating.targets&#34; /&gt;  
   
 &lt;!-- Set parameters we want to access in the transform --&gt;  
 &lt;ItemGroup&gt;  
   &lt;T4ParameterValues Include=&#34;slnDir&#34;&gt;  
     &lt;Value&gt;$(MSBuildProjectDirectory)..&lt;/Value&gt;  
     &lt;Visible&gt;false&lt;/Visible&gt;  
   &lt;/T4ParameterValues&gt;  
  &lt;/ItemGroup&gt; 

 &lt;ItemGroup&gt;  
   &lt;T4ParameterValues Include=&#34;configuration&#34;&gt;  
     &lt;Value&gt;$(Configuration)&lt;/Value&gt;  
     &lt;Visible&gt;false&lt;/Visible&gt;  
   &lt;/T4ParameterValues&gt;  
 &lt;/ItemGroup&gt;

 

 &lt;ItemGroup&gt;  
   &lt;T4ParameterValues Include=&#34;projectName&#34;&gt;  
     &lt;Value&gt;$(MSBuildProjectName)&lt;/Value&gt;  
     &lt;Visible&gt;false&lt;/Visible&gt;  
   &lt;/T4ParameterValues&gt;  
 &lt;/ItemGroup&gt;  
   
 &lt;!-- Tell the MSBuild T4 task to make the property available: --&gt;  
 &lt;PropertyGroup&gt;  
   &lt;!-- do the transform --&gt;  
   &lt;TransformOnBuild&gt;true&lt;/TransformOnBuild&gt;  
   &lt;!-- Force a complete reprocess --&gt;  
   &lt;TransformOutOfDateOnly&gt;false&lt;/TransformOutOfDateOnly&gt;  
 &lt;/PropertyGroup&gt;  
</code></pre><p>I thought that after editing my .CSPROJ file to call the required MSBuild targets and expose the properties I needed from MSBuild, all would be good. However, I quickly found that although building my solution with MSBuild from the command line was fine, a build inside Visual Studio failed. It turns out I had to make my template support both forms of building.</p>
<p>This meant the .TT file assumes it is being built by MSBuild and, if it gets nulls for the required property values, switches to the Visual Studio way of working, e.g.</p>
<pre tabindex="0"><code>    // get the msbuild variables if we can  
    var configName = Host.ResolveParameterValue(&#34;-&#34;, &#34;-&#34;, &#34;configuration&#34;);  

    if (String.IsNullOrEmpty(configName))  
    {  
        WriteLine (&#34;// Generated from Visual Studio&#34;);

        // Get the VS instance  
        IServiceProvider serviceProvider = (IServiceProvider)this.Host;  
        DTE dte = serviceProvider.GetService(typeof(DTE)) as DTE;    
        configName = dte.Solution.SolutionBuild.ActiveConfiguration.Name;
    } else  
    {    
        WriteLine (&#34;// Generated from MSBuild&#34;);  
    }
</code></pre><p>Once this was done, I made sure I could get a successful build both inside Visual Studio and from a command prompt in the folder containing my .SLN file (in my case passing in the Visual Studio version, as I was using a VS2015RC command prompt but only had the VS2013 SDKs installed) e.g.</p>
<blockquote>
<p>msbuild /p:VisualStudioVersion=12.0</p></blockquote>
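<p>Stripped of the T4 plumbing, the dual-host support boils down to a simple fallback. A minimal sketch of that logic in Python (illustrative only, with hypothetical names; the real check lives in the .TT file above):</p>

```python
# A minimal sketch (not the blog's actual code) of the fallback logic the
# .TT file uses: prefer the parameter MSBuild passed in via T4ParameterValues,
# and only fall back to asking the Visual Studio DTE when it comes back empty.
def resolve_configuration(msbuild_value, vs_active_configuration):
    """Return (configuration, source-of-value)."""
    if not msbuild_value:  # null/empty means we are not running under MSBuild
        return vs_active_configuration, "Visual Studio"
    return msbuild_value, "MSBuild"

# Under MSBuild the T4ParameterValues entry is populated...
print(resolve_configuration("Release", "Debug"))  # -> ('Release', 'MSBuild')
# ...inside Visual Studio it is empty, so the DTE's active configuration wins
print(resolve_configuration(None, "Debug"))       # -> ('Debug', 'Visual Studio')
```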
<h3 id="so-where-are-we-now">So where are we now?</h3>
<p>Now I have a nice little proof of concept on <a href="https://github.com/rfennell/T4GenerateMsTestWrappersForNunitTests">GitHub</a>. To use it add the <strong>GeneratedMstests</strong> project to your solution and in this project add references to any nUnit projects. Once this is done you should be able to generate wrappers for nUnit tests.</p>
<p>I am sure I could do a better job of test discovery and of adding references to assemblies, and it would be a good idea to turn the sample code into a Visual Studio template, but it is a start; let’s see if it actually does what is needed.</p>
]]></content:encoded>
    </item>
    <item>
      <title>MSDeploy Parameters.xml can only replace web.config values if a value is already set</title>
      <link>https://blog.richardfennell.net/posts/msdeploy-parameters-xml-can-only-replace-web-config-values-is-a-value-is-already-set/</link>
      <pubDate>Fri, 01 May 2015 13:37:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/msdeploy-parameters-xml-can-only-replace-web-config-values-is-a-value-is-already-set/</guid>
      <description>&lt;p&gt;If you are using a &lt;a href=&#34;https://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/04/28/A-Visual-Studio-Extension-to-create-MSDeploy-parametersxml-files.aspx&#34;&gt;parameters.xml file to set values with MSDeploy&lt;/a&gt;, I have just found a gotcha. You need some value in the web.config file, not just an empty XML tag, else the replacement fails. So to explain…&lt;/p&gt;
&lt;p&gt;I had the following parameters.xml file, and used Release Management to replace the __TAG__ values at deployment time.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;parameters&amp;gt;  
  &amp;lt;parameter name=&amp;#34;Domain&amp;#34; description=&amp;#34;Please enter the name of the domain&amp;#34; defaultvalue=&amp;#34;\_\_Domain\_\_&amp;#34; tags=&amp;#34;&amp;#34;&amp;gt;  
    &amp;lt;parameterentry kind=&amp;#34;XmlFile&amp;#34; scope=&amp;#34;\\web.config$&amp;#34; match=&amp;#34;/configuration/applicationSettings/Web.Properties.Settings/setting\[@name=&amp;#39;Domain&amp;#39;\]/value/text()&amp;#34; /&amp;gt;  
  &amp;lt;/parameter&amp;gt;

  &amp;lt;parameter name=&amp;#34;AdminGroups&amp;#34; description=&amp;#34;Please enter the name of the admin group&amp;#34; defaultvalue=&amp;#34;\_\_AdminGroups\_\_&amp;#34; tags=&amp;#34;&amp;#34;&amp;gt;  
    &amp;lt;parameterentry kind=&amp;#34;XmlFile&amp;#34; scope=&amp;#34;\\web.config$&amp;#34; match=&amp;#34;/configuration/applicationSettings/Web.Properties.Settings/setting\[@name=&amp;#39;AdminGroups&amp;#39;\]/value/text()&amp;#34; /&amp;gt;  
  &amp;lt;/parameter&amp;gt;  
&amp;lt;/parameters&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If my web.config file (in the MSDeploy package to be transformed) was set to&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are using a <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/04/28/A-Visual-Studio-Extension-to-create-MSDeploy-parametersxml-files.aspx">parameters.xml file to set values with MSDeploy</a>, I have just found a gotcha. You need some value in the web.config file, not just an empty XML tag, else the replacement fails. So to explain…</p>
<p>I had the following parameters.xml file, and used Release Management to replace the __TAG__ values at deployment time.</p>
<pre tabindex="0"><code>&lt;parameters&gt;  
  &lt;parameter name=&#34;Domain&#34; description=&#34;Please enter the name of the domain&#34; defaultvalue=&#34;\_\_Domain\_\_&#34; tags=&#34;&#34;&gt;  
    &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/applicationSettings/Web.Properties.Settings/setting\[@name=&#39;Domain&#39;\]/value/text()&#34; /&gt;  
  &lt;/parameter&gt;

  &lt;parameter name=&#34;AdminGroups&#34; description=&#34;Please enter the name of the admin group&#34; defaultvalue=&#34;\_\_AdminGroups\_\_&#34; tags=&#34;&#34;&gt;  
    &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/applicationSettings/Web.Properties.Settings/setting\[@name=&#39;AdminGroups&#39;\]/value/text()&#34; /&gt;  
  &lt;/parameter&gt;  
&lt;/parameters&gt;
</code></pre><p>If my web.config file (in the MSDeploy package to be transformed) was set to</p>
<pre tabindex="0"><code>&lt;applicationSettings&gt;  
    &lt;Web.Properties.Settings&gt;  
      &lt;setting name=&#34;Domain&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value&gt;Blackmarble&lt;/value&gt;  
      &lt;/setting&gt;  
      &lt;setting name=&#34;AdminGroups&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value /&gt;  
      &lt;/setting&gt;  
    &lt;/Web.Properties.Settings&gt;  
  &lt;/applicationSettings&gt;
</code></pre><p>or</p>
<pre tabindex="0"><code>&lt;applicationSettings&gt;  
    &lt;Web.Properties.Settings&gt;  
      &lt;setting name=&#34;Domain&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value&gt;Blackmarble&lt;/value&gt;  
      &lt;/setting&gt;  
      &lt;setting name=&#34;AdminGroups&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value&gt;&lt;/value&gt;  
      &lt;/setting&gt;  
    &lt;/Web.Properties.Settings&gt;  
  &lt;/applicationSettings&gt;
</code></pre><p>only the Domain setting was set.</p>
<p>To get both set I had to have a value for each property, even though they were being reset at deployment.</p>
<pre tabindex="0"><code>&lt;applicationSettings&gt;  
    &lt;Web.Properties.Settings&gt;  
      &lt;setting name=&#34;Domain&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value&gt;DummyDomain&lt;/value&gt;  
      &lt;/setting&gt;  
      &lt;setting name=&#34;AdminGroups&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value&gt;DummyAdmins&lt;/value&gt;  
      &lt;/setting&gt;  
    &lt;/Web.Properties.Settings&gt;  
  &lt;/applicationSettings&gt;
</code></pre><p>Never seen that one before.</p>
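<p>My assumption as to why: the <strong>parameterentry</strong> match ends in <code>/value/text()</code>, and an empty element has no text node at all, so the XPath selects nothing for MSDeploy to replace. Python’s standard library shows the same distinction (an illustration of the XML behaviour, not of MSDeploy itself):</p>

```python
import xml.etree.ElementTree as ET

# An empty element has no text node, so an XPath match ending in
# /value/text() finds nothing to replace; a populated element does.
empty = ET.fromstring("<setting><value /></setting>")
filled = ET.fromstring("<setting><value>DummyDomain</value></setting>")

print(empty.find("value").text)   # -> None: no text node exists
print(filled.find("value").text)  # -> DummyDomain
```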
]]></content:encoded>
    </item>
    <item>
      <title>A Visual Studio Extension to create MSDeploy parameters.xml files</title>
      <link>https://blog.richardfennell.net/posts/a-visual-studio-extension-to-create-msdeploy-parameters-xml-files/</link>
      <pubDate>Tue, 28 Apr 2015 15:18:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-visual-studio-extension-to-create-msdeploy-parameters-xml-files/</guid>
      <description>&lt;p&gt;When you are using &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/05/01/Changing-WCF-bindings-for-MSDeploy-packages-when-using-Release-Management.aspx&#34;&gt;MSDeploy&lt;/a&gt; you should create a parameters.xml file that exposes your web.config settings at the time of installation. This enables good deployment habits: build the product once, then set system-specific values using deployment tools. The problem is that this parameters.xml file is a pain to write; it is a series of XML blocks that contain XPath to find the entries to replace, and typos are easy to introduce.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you are using <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/05/01/Changing-WCF-bindings-for-MSDeploy-packages-when-using-Release-Management.aspx">MSDeploy</a> you should create a parameters.xml file that exposes your web.config settings at the time of installation. This enables good deployment habits: build the product once, then set system-specific values using deployment tools. The problem is that this parameters.xml file is a pain to write; it is a series of XML blocks that contain XPath to find the entries to replace, and typos are easy to introduce.</p>
<p>A ripe candidate for automation, but I could not find a tool to do it, so I wrote one for Visual Studio 2013 and 2015. You can find the source on <a href="https://github.com/rfennell/ParametersXmlAddin">GitHub</a> and the actual VSIX package in the <a href="https://visualstudiogallery.msdn.microsoft.com/cbf2764d-d205-49d6-810f-25324402c3a9?SRC=Home">Visual Studio Gallery</a>.</p>
<p>So what does it do?</p>
<p>Once it is installed, if you right-click on a web.config file you will see a context menu option to generate a parameters.xml file. Click it and, if the file does not exist, it will be generated and added to the current project. Entries will be made for all <strong>appSettings</strong> and any custom <strong>applicationSettings</strong> blocks found in the web.config. The actual web.config values will be replaced with __TAGS__ to be set via Release Management or your tool of choice.</p>
<p>So the web.config file</p>
<pre tabindex="0"><code>&lt;configuration&gt;  
  &lt;applicationSettings&gt;  
    &lt;Service.Properties.Settings&gt;  
      &lt;setting name=&#34;Directory1&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value&gt;C:\ABC1111&lt;/value&gt;  
      &lt;/setting&gt;  
      &lt;setting name=&#34;Directory2&#34; serializeAs=&#34;String&#34;&gt;  
        &lt;value&gt;C:\abc2222&lt;/value&gt;  
      &lt;/setting&gt;  
    &lt;/Service.Properties.Settings&gt;  
  &lt;/applicationSettings&gt;  
  &lt;appSettings&gt;  
    &lt;add key=&#34;APPSETTING1&#34; value=&#34;123&#34; /&gt;  
    &lt;add key=&#34;AppSetting2&#34; value=&#34;456&#34; /&gt;  
  &lt;/appSettings&gt;  
&lt;/configuration&gt;   
</code></pre><p>it generates the parameters.xml</p>
<pre tabindex="0"><code>&lt;parameters&gt;  
  &lt;parameter name=&#34;APPSETTING1&#34; description=&#34;Description for APPSETTING1&#34; defaultvalue=&#34;\_\_APPSETTING1\_\_&#34; tags=&#34;&#34;&gt;  
    &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/appSettings/add\[@key=&#39;APPSETTING1&#39;\]/@value&#34; /&gt;  
  &lt;/parameter&gt; 

  &lt;parameter name=&#34;AppSetting2&#34; description=&#34;Description for AppSetting2&#34; defaultvalue=&#34;\_\_APPSETTING2\_\_&#34; tags=&#34;&#34;&gt;  
    &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/appSettings/add\[@key=&#39;AppSetting2&#39;\]/@value&#34; /&gt;  
  &lt;/parameter&gt;

  &lt;parameter name=&#34;Directory1&#34; description=&#34;Description for Directory1&#34; defaultvalue=&#34;\_\_DIRECTORY1\_\_&#34; tags=&#34;&#34;&gt;  
    &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/applicationSettings/Service.Properties.Settings/setting\[@name=&#39;Directory1&#39;\]/value/text()&#34; /&gt;  
  &lt;/parameter&gt;

  &lt;parameter name=&#34;Directory2&#34; description=&#34;Description for Directory2&#34; defaultvalue=&#34;\_\_DIRECTORY2\_\_&#34; tags=&#34;&#34;&gt;  
    &lt;parameterentry kind=&#34;XmlFile&#34; scope=&#34;\\web.config$&#34; match=&#34;/configuration/applicationSettings/Service.Properties.Settings/setting\[@name=&#39;Directory2&#39;\]/value/text()&#34; /&gt;  
  &lt;/parameter&gt;

&lt;/parameters&gt;
</code></pre><p>If a parameters.xml file already exists you are first asked if you wish to replace it; if you say no, you are then asked whether to add any new entries found in the web.config, or to do nothing.</p>
<p>All the work is done via an XSL Transform, so if you need to transform extra settings just add to the embedded XSLT resource and rebuild the VSIX package.</p>
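<p>Conceptually, for each <strong>appSettings</strong> key the transform emits one parameter block whose XPath match and __TAG__ default are derived from the key name. A rough sketch of that generation step in Python (illustrative only; the extension itself does this with XSLT, and the function name is hypothetical):</p>

```python
# Illustrative sketch of what the generation boils down to for an appSettings
# entry: one parameter element per key, with an XPath match on the add
# element's value attribute and a __TAG__ default built from the key.
def app_setting_parameter(key):
    return (
        f'<parameter name="{key}" description="Description for {key}" '
        f'defaultvalue="__{key.upper()}__" tags="">'
        f'<parameterentry kind="XmlFile" scope="\\\\web.config$" '
        f'match="/configuration/appSettings/add[@key=\'{key}\']/@value" />'
        f"</parameter>"
    )

print(app_setting_parameter("APPSETTING1"))
```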
<p>So the tool won’t do everything, but should get you close to the file you need.</p>
]]></content:encoded>
    </item>
    <item>
      <title>May is a busy time for events</title>
      <link>https://blog.richardfennell.net/posts/may-is-a-busy-time-for-events/</link>
      <pubDate>Thu, 23 Apr 2015 15:16:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/may-is-a-busy-time-for-events/</guid>
      <description>&lt;p&gt;Mid May is a busy time for me presenting-wise:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On the 12th/13th I am presenting at &lt;a href=&#34;http://www.techorama.be/agenda-2015/&#34;&gt;Techorama&lt;/a&gt; in Belgium&lt;/li&gt;
&lt;li&gt;And on the 14th I will be presenting at a &lt;a href=&#34;http://www.greymatter.com/corporate/showcase/visual-studio/visual-studio-2015-event/&#34;&gt;Microsoft/GreyMatter&lt;/a&gt;  event at Microsoft’s Reading office.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And after that there are also the &lt;a href=&#34;http://www.blackmarble.co.uk/events&#34;&gt;Black Marble Re:Build and Re:Ignite events&lt;/a&gt;. I am sure I will be involved at those, but we have to wait a couple of weeks until after &lt;a href=&#34;http://www.buildwindows.com/&#34;&gt;Build&lt;/a&gt; and &lt;a href=&#34;http://ignite.microsoft.com/&#34;&gt;Ignite&lt;/a&gt; to find out what we will be talking about.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Mid May is a busy time for me presenting-wise:</p>
<ul>
<li>On the 12th/13th I am presenting at <a href="http://www.techorama.be/agenda-2015/">Techorama</a> in Belgium</li>
<li>And on the 14th I will be presenting at a <a href="http://www.greymatter.com/corporate/showcase/visual-studio/visual-studio-2015-event/">Microsoft/GreyMatter</a>  event at Microsoft’s Reading office.</li>
</ul>
<p>And after that there are also the <a href="http://www.blackmarble.co.uk/events">Black Marble Re:Build and Re:Ignite events</a>. I am sure I will be involved at those, but we have to wait a couple of weeks until after <a href="http://www.buildwindows.com/">Build</a> and <a href="http://ignite.microsoft.com/">Ignite</a> to find out what we will be talking about.</p>
<p>I think there are spaces at all these events, so why not have a look?</p>
]]></content:encoded>
    </item>
    <item>
      <title>After a few days living with a Microsoft Band…</title>
      <link>https://blog.richardfennell.net/posts/after-a-few-days-living-with-a-microsoft-band/</link>
      <pubDate>Sun, 05 Apr 2015 15:15:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/after-a-few-days-living-with-a-microsoft-band/</guid>
      <description>&lt;p&gt;I have worn a Polar s610 heart rate monitor as my watch for years (probably 15+). As I write, it needs another battery swap, which means sending it to Polar, something I have done a couple of times in the past for servicing. The batteries in the watch last 5 years or so; in the associated heart rate monitor strap maybe a bit more, depending on usage.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSJaA1XqMQcgQYGiHwENzEHcQwN2XCHrItPuwrtnGOR4j82V-lo&#34;&gt;&lt;/p&gt;
&lt;p&gt;The point is I am used to having a device that ‘can’ give heart rate information, it seems normal, but I do need to remember to put on the heart rate strap, something I would only usually do for a race or specific training set. Now don’t get me wrong, having the s610 does not mean I am a talented athlete, but I do have a good idea of my heart rate for any given sporting activity.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have worn a Polar s610 heart rate monitor as my watch for years (probably 15+). As I write, it needs another battery swap, which means sending it to Polar, something I have done a couple of times in the past for servicing. The batteries in the watch last 5 years or so; in the associated heart rate monitor strap maybe a bit more, depending on usage.</p>
<p><img loading="lazy" src="https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSJaA1XqMQcgQYGiHwENzEHcQwN2XCHrItPuwrtnGOR4j82V-lo"></p>
<p>The point is I am used to having a device that ‘can’ give heart rate information, it seems normal, but I do need to remember to put on the heart rate strap, something I would only usually do for a race or specific training set. Now don’t get me wrong, having the s610 does not mean I am a talented athlete, but I do have a good idea of my heart rate for any given sporting activity.</p>
<p>So this week I got hold of a Microsoft Band, nice timing as my s610 needs that new battery, but how has the Band done?</p>
<p><img alt="Microsoft Band in Watch Mode" loading="lazy" src="https://compass-ssl.surface.com/assets/66/3f/663fd08d-0b02-4ffc-af49-dd2c4e8ed65a.png#microsoft-band-watch-mode-325.png"></p>
<p>Whilst in the USA last year I had tried on other people’s Bands and felt them cumbersome, but mine is a small (previously I had only tried on larges or mediums). Actually I find it hardly any worse on the wrist than the s610. So a tip here – it seems sizing is critical, especially if you have spindly little wrists like me. Like compression sportswear, if in doubt err towards a smaller size.</p>
<p>Beyond being a watch, I have been using it to track my runs and cycling, and the HR seems accurate; at least the numbers are within a few beats of what I would expect on the s610, and it is so much easier than remembering the monitor strap for my Polar. It is great that it gathers so much information so easily, and that it can push it onto other services as I want, without me having to play with the modem-style, audio-based Polar SonicLink (I did say my s610 is old).</p>
<p>But of course I do have some issues:</p>
<ul>
<li>The obvious one is battery life. I am seeing 6 to 48 hours depending on how much GPS I use. For the first two days, when I used it just as a watch, it was great: no charging needed. However, yesterday I did a 5K <a href="http://www.parkrun.org.uk/">Parkrun</a> and a 40K bike ride, so after less than 3 hours of GPS, HR and screen-on time it needed a charge. I can live with this I think; I do need to make sure the screen is off and see how much that helps. I don’t want it dying on a half-day cycle ride.</li>
<li>Turns out I glance at my watch a lot – as by default the screen is off I have to press a button to see the time – it took me back to the LED watches of the 70s. Again this comes back to battery life. I know I can leave the screen on, but it just needs too much power.</li>
<li>When running the splits are in km; it would be nice to have my own trigger, e.g. laps. For example on my local Parkrun we do 3 laps, and I know my target splits. On the Polar I have a big red button to press for each lap to get a lap time; on the Band I have to do maths in my head. Now there might be a way to do this, but I have not found it yet.</li>
<li>Finally, my major issue is that it is not waterproof, so I can’t wear it to swim, making it of no use in a triathlon as it is just something else to have to put on in T1. Also I use the time splits in the pool on my s610 when training, again counting laps not km (I don’t swim that far!). Not sure how they would make it fully waterproof, but it would be a great feature.</li>
</ul>
<p>So first thoughts: loads better than I expected, and all my niggles are minor and more to do with current battery technology than the device itself. For the price, <a href="//ws-eu.amazon-adsystem.com/widgets/q?ServiceVersion=20070822&amp;OneJS=1&amp;Operation=GetAdHtml&amp;MarketPlace=GB&amp;source=ss&amp;ref=ss_til&amp;ad_type=product_link&amp;tracking_id=buitwoonmypc-21&amp;marketplace=amazon&amp;region=GB&amp;placement=B00UOUFMP4&amp;asins=B00UOUFMP4&amp;linkId=GQPKSWDU5AIM6HGP&amp;show_border=true&amp;link_opens_in_new_window=true">£169 at Amazon UK for pre-order</a>, it is an interesting alternative to <a href="http://www.garmin.com/en-GB">Garmin</a> or <a href="http://www.polar.com/uk-en">Polar</a>. It certainly got some interest at the 10K race I did this morning.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running StyleCop from the command line and in a TFS 2015 vNext build</title>
      <link>https://blog.richardfennell.net/posts/running-stylecop-from-the-command-line-and-in-a-tfs-2015-vnext-build/</link>
      <pubDate>Fri, 03 Apr 2015 13:50:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-stylecop-from-the-command-line-and-in-a-tfs-2015-vnext-build/</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Updated 6 Feb 2016&lt;/strong&gt; - See the &lt;a href=&#34;https://blog.richardfennell.net/blogs/rfennell/post/2016/02/06/A-VSTS-vNext-build-task-to-run-StyleCop.aspx&#34;&gt;newer post&lt;/a&gt; about the new &lt;a href=&#34;https://github.com/rfennell/vNextBuild/wiki/StyleCop-Runner-Task&#34;&gt;vNext build task&lt;/a&gt; I have written to do the same job&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Virtually any automated build will require some customisation beyond a basic compile. So as part of my upcoming &lt;a href=&#34;http://www.techorama.be/&#34;&gt;Techorama&lt;/a&gt; session on TFS 2015 vNext build I need a demo of using a custom script as part of the build process. Two common customisations we use are version stamping of assemblies and running code analysis tools. For vNext build there is already a &lt;a href=&#34;http://vsalmdocs.azurewebsites.net/tfs/build/scripts/&#34;&gt;sample of version stamping&lt;/a&gt;, so I thought getting &lt;a href=&#34;https://stylecop.codeplex.com/&#34;&gt;StyleCop&lt;/a&gt; running would be a good sample.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em><strong>Updated 6 Feb 2016</strong> - See the <a href="/blogs/rfennell/post/2016/02/06/A-VSTS-vNext-build-task-to-run-StyleCop.aspx">newer post</a> about the new <a href="https://github.com/rfennell/vNextBuild/wiki/StyleCop-Runner-Task">vNext build task</a> I have written to do the same job</em></p>
<p>Virtually any automated build will require some customisation beyond a basic compile. So as part of my upcoming <a href="http://www.techorama.be/">Techorama</a> session on TFS 2015 vNext build I need a demo of using a custom script as part of the build process. Two common customisations we use are version stamping of assemblies and running code analysis tools. For vNext build there is already a <a href="http://vsalmdocs.azurewebsites.net/tfs/build/scripts/">sample of version stamping</a>, so I thought getting <a href="https://stylecop.codeplex.com/">StyleCop</a> running would be a good sample.</p>
<h3 id="the-problem">The problem</h3>
<p>Customisation in vNext build is based around running a script; in the case of Windows-based build agents this is a PowerShell script. The problem with StyleCop is that it does not provide a command line interface. The <a href="https://stylecop.codeplex.com/">StyleCop CodePlex project</a> provides only a Visual Studio add-in. There is also the ALM Ranger’s <a href="https://github.com/tfsbuildextensions/CustomActivities/wiki/Getting%20started%20with%20the%20StyleCop%20activity">TFS community custom build activity</a>, but I could find no current command line interface projects.</p>
<p>So I needed to build one.</p>
<h3 id="step-1--create-a-command-line">Step 1 – Create a command line</h3>
<p>So my first step was to create a command line version of StyleCop. I chose to use the community build activity as a starting point. I had planned to do this all in PowerShell, but quickly found that the conversion of parameter object types and the handling of the events StyleCop uses was a bit messy. So I decided to write a wrapper class in C# that presented the same parameters as the old TFS build activity, basically take the old code and remove the Windows Workflow logic. I then provided a <strong>Main (args)</strong> method to expose the object to the command line such that it was easy to provide the required parameters.</p>
<p>This can all be found on my <a href="https://github.com/rfennell/StyleCopCmdLine">GitHub</a> site.</p>
<p><em><strong>Note on solution structure</strong>: As I wanted this to work from both PowerShell and the command prompt, I had to place the <strong>Main(args[])</strong> method .EXE entry point in a project that builds an EXE and all the rest of the wrapper code in one that builds a .DLL. This is because you cannot load a type in PowerShell using <strong>add-type</strong> from an assembly built as an EXE; you get an EXTENSION_NOT_SUPPORTED exception. It means there are two projects (a DLL and an EXE) when I would really have liked a single one (the EXE).</em></p>
<p>So I now had a command line I could call from my PowerShell script</p>
<pre tabindex="0"><code>StyleCopCmdLine --f=&#34;File1.cs&#34; &#34;File2.cs&#34; --s=&#34;AllSettingsEnabled.StyleCop&#34;
</code></pre><p>A good starting point. However, for a TFS build it makes more sense to call StyleCop directly in the PowerShell script; why shell out to a command prompt to run an EXE when you can run the code directly in PowerShell?</p>
<h3 id="step-2--create-a-simple-powershell-script">Step 2 – Create a simple PowerShell script</h3>
<p>The PowerShell required to run StyleCop using the wrapper is simple, just providing the same parameters as used for the EXE.</p>
<pre tabindex="0"><code>Add-Type -Path &#34;StyleCopWrapper.dll&#34;

$scanner = new-object StyleCopWrapper.Wrapper
$scanner.MaximumViolationCount = 1000
$scanner.ShowOutput = $true
$scanner.CacheResults = $false
$scanner.ForceFullAnalysis = $true
$scanner.XmlOutputFile = &#34;$pwd\out.xml&#34;
$scanner.LogFile = &#34;$pwd\log.txt&#34;
$scanner.SourceFiles = @(&#34;file1.cs&#34;, &#34;file2.cs&#34;)
$scanner.SettingsFile = &#34;settings.stylecop&#34;
$scanner.AdditionalAddInPaths = @(&#34;C:\Program Files (x86)\StyleCop 4.7&#34;)
$scanner.TreatViolationsErrorsAsWarnings = $false

$scanner.Scan()

write-host (&#34;Succeeded [{0}]&#34; -f $scanner.Succeeded)
write-host (&#34;Violation count [{0}]&#34; -f $scanner.ViolationCount)
</code></pre>
<p>See the <a href="https://github.com/rfennell/StyleCopCmdLine">GitHub</a> site’s WIKI for the usage details.</p>
<h3 id="step-3--create-a-vnext-build-powershell-script">Step 3 – Create a vNext build PowerShell script</h3>
<p>So now we have the basic tools we need to run StyleCop from a TFS vNext build, but we do need a more complex script.</p>
<p>The script you use is up to you; mine looks for .csproj files and runs StyleCop recursively from the directories containing them. This means I can have a different <strong>settings.stylecop</strong> file for each project. In general I have stricter rules for production code than for unit tests, e.g. for unit tests I am not bothered about the XML method documentation, but for production code I make sure the comments are present and match the method parameters.</p>
<p><strong>Note:</strong> As the script just uses parameters and environment variables, it is easy to test outside a TFS build, a great improvement over the old build system.</p>
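<p>The guard at the top of the script below is what makes that local testing possible: everything the script needs comes from parameters or environment variables you can set by hand. A minimal restatement of that guard in Python (illustrative only; the environment variable names are the real TFS ones, the function is hypothetical):</p>

```python
import os

# The script only depends on two TFS-provided environment variables, so a
# local test just means setting them by hand before running it.
def get_build_folders(env=None):
    env = os.environ if env is None else env
    src = env.get("BUILD_SOURCESDIRECTORY")
    staging = env.get("BUILD_STAGINGDIRECTORY")
    if not (src and staging):
        raise RuntimeError(
            "Set BUILD_SOURCESDIRECTORY and BUILD_STAGINGDIRECTORY "
            "to test this script interactively"
        )
    return src, staging

print(get_build_folders({"BUILD_SOURCESDIRECTORY": r"C:\code\MySolution",
                         "BUILD_STAGINGDIRECTORY": r"C:\drops"}))
```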
<pre tabindex="0"><code>#  
# Script to allow StyleCop to be run as part of the TFS vNext build  
#  
[CmdletBinding()]  
param  
(  
    # We have to pass this boolean flag as a string, we cast it before we use it  
    # have to use 0 or 1, true or false  
    [string]$TreatStyleCopViolationsErrorsAsWarnings = &#39;False&#39;  
)

# local test values, should be commented out in production
#$Env:BUILD_STAGINGDIRECTORY = &#34;C:\drops&#34;
#$Env:BUILD_SOURCESDIRECTORY = &#34;C:\code\MySolution&#34;

if (-not ($Env:BUILD_SOURCESDIRECTORY -and $Env:BUILD_STAGINGDIRECTORY))
{
    Write-Error &#34;You must set the following environment variables&#34;
    Write-Error &#34;to test this script interactively.&#34;
    Write-Host &#39;$Env:BUILD_SOURCESDIRECTORY - For example, enter something like:&#39;
    Write-Host &#39;$Env:BUILD_SOURCESDIRECTORY = &#34;C:\code\MySolution&#34;&#39;
    Write-Host &#39;$Env:BUILD_STAGINGDIRECTORY - For example, enter something like:&#39;
    Write-Host &#39;$Env:BUILD_STAGINGDIRECTORY = &#34;C:\drops&#34;&#39;
    exit 1
}

# pick up the build locations from the environment
$stagingfolder = $Env:BUILD_STAGINGDIRECTORY
$sourcefolder = $Env:BUILD_SOURCESDIRECTORY

# have to convert the string flag to a boolean
$treatViolationsErrorsAsWarnings = [System.Convert]::ToBoolean($TreatStyleCopViolationsErrorsAsWarnings)

Write-Host (&#34;Source folder (`$Env)  [{0}]&#34; -f $sourcefolder) -ForegroundColor Green
Write-Host (&#34;Staging folder (`$Env) [{0}]&#34; -f $stagingfolder) -ForegroundColor Green
Write-Host (&#34;Treat violations as warnings (Param) [{0}]&#34; -f $treatViolationsErrorsAsWarnings) -ForegroundColor Green

# the overall results across all sub scans
$overallSuccess = $true
$projectsScanned = 0
$totalViolations = 0

# load the StyleCop classes, this assumes that StyleCop.dll, StyleCop.CSharp.dll and
# StyleCop.CSharp.Rules.dll are in the same folder as StyleCopWrapper.dll
$folder = Split-Path -parent $MyInvocation.MyCommand.Definition
Write-Host (&#34;Loading from folder [{0}]&#34; -f $folder) -ForegroundColor Green
$dllPath = [System.IO.Path]::Combine($folder, &#34;StyleCopWrapper.dll&#34;)
Write-Host (&#34;Loading DLLs from [{0}]&#34; -f $dllPath) -ForegroundColor Green
Add-Type -Path $dllPath

$scanner = new-object StyleCopWrapper.Wrapper

# Set the common scan options
$scanner.MaximumViolationCount = 1000
$scanner.ShowOutput = $true
$scanner.CacheResults = $false
$scanner.ForceFullAnalysis = $true
$scanner.AdditionalAddInPaths = @($pwd) # in the local path as we place stylecop.csharp.rules.dll here
$scanner.TreatViolationsErrorsAsWarnings = $treatViolationsErrorsAsWarnings

# look for .csproj files
foreach ($projfile in Get-ChildItem $sourcefolder -Filter *.csproj -Recurse)
{
    write-host (&#34;Processing the folder [{0}]&#34; -f $projfile.Directory)

    # find the set of rules closest to the .csproj file
    $settings = Join-Path -path $projfile.Directory -childpath &#34;settings.stylecop&#34;
    if (Test-Path $settings)
    {
        write-host &#34;Using settings.stylecop file in same folder as .csproj file&#34;
        $scanner.SettingsFile = $settings
    } else
    {
        $settings = Join-Path -path $sourcefolder -childpath &#34;settings.stylecop&#34;
        if (Test-Path $settings)
        {
            write-host &#34;Using settings.stylecop file in solution folder&#34;
            $scanner.SettingsFile = $settings
        } else
        {
            write-host &#34;Cannot find a local settings.stylecop file, using default rules&#34;
            $scanner.SettingsFile = &#34;.&#34; # we have to pass something as this is a required param
        }
    }

    $scanner.SourceFiles = @($projfile.Directory)
    $scanner.XmlOutputFile = (join-path $stagingfolder $projfile.BaseName) + &#34;.stylecop.xml&#34;
    $scanner.LogFile = (join-path $stagingfolder $projfile.BaseName) + &#34;.stylecop.log&#34;

    # Do the scan
    $scanner.Scan()

    # Display the results
    Write-Host (&#34;`n&#34;)
    write-host (&#34;Base folder`t[{0}]&#34; -f $projfile.Directory) -ForegroundColor Green
    write-host (&#34;Settings `t[{0}]&#34; -f $scanner.SettingsFile) -ForegroundColor Green
    write-host (&#34;Succeeded `t[{0}]&#34; -f $scanner.Succeeded) -ForegroundColor Green
    write-host (&#34;Violations `t[{0}]&#34; -f $scanner.ViolationCount) -ForegroundColor Green
    Write-Host (&#34;Log file `t[{0}]&#34; -f $scanner.LogFile) -ForegroundColor Green
    Write-Host (&#34;XML results`t[{0}]&#34; -f $scanner.XmlOutputFile) -ForegroundColor Green

    $totalViolations += $scanner.ViolationCount
    $projectsScanned++

    if ($scanner.Succeeded -eq $false)
    {
        # any failure fails the whole run
        $overallSuccess = $false
    }
}

# the output summary
Write-Host (&#34;`n&#34;)
if ($overallSuccess -eq $false)
{
    Write-Error (&#34;StyleCop found [{0}] violations across [{1}] projects&#34; -f $totalViolations, $projectsScanned)
}
elseif ($totalViolations -gt 0 -and $treatViolationsErrorsAsWarnings -eq $true)
{
    Write-Warning (&#34;StyleCop found [{0}] violation warnings across [{1}] projects&#34; -f $totalViolations, $projectsScanned)
}
else
{
    Write-Host (&#34;StyleCop found [{0}] violations across [{1}] projects&#34; -f $totalViolations, $projectsScanned) -ForegroundColor Green
}
</code></pre>
<h3 id="step-4--adding-a-the-script-to-the-repo">Step 4 – Adding the script to the repo</h3>
<p>To use the script, it (and any associated files) needs to be placed in your source control. In my case this meant creating a folder called <strong>StyleCop</strong> off the root of my TFS 2015 CTP’s Git repo and placing in it the following files</p>
<ul>
<li>PowerShell.ps1 – my script file</li>
<li>StyleCop.dll – the main StyleCop assembly taken from <strong>C:\Program Files (x86)\StyleCop 4.7</strong>. By placing it here it means we don’t need to actually install StyleCop on the build machine</li>
<li>StyleCop.csharp.dll – also from <strong>C:\Program Files (x86)\StyleCop 4.7</strong></li>
<li>StyleCop.csharp.rules.dll – also from <strong>C:\Program Files (x86)\StyleCop 4.7</strong></li>
<li>StyleCopWrapper.dll – the wrapper assembly from my <a href="https://github.com/rfennell/StyleCopCmdLine">GitHub</a> site</li>
</ul>
<h3 id="step-5--adding-the-script-to-a-build-process">Step 5 – Adding the script to a build process</h3>
<p>Once the script is in the repo adding a new step to a vNext build is easy.</p>
<ul>
<li>
<p>In a browser select the Build.vNext menu options</p>
</li>
<li>
<p>The build explorer will be shown; right-click on the build you wish to add a step to and select edit</p>
</li>
<li>
<p>Press the ‘Add build step’ button. The list of steps will be shown; pick PowerShell</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_237.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_233.png" title="image"></a></p>
</li>
<li>
<p>As the script is in the repo we can reference it in the new step. In my case I set the script file name to</p>
<p><code>StyleCop/PowerShell.ps1</code></p>
</li>
<li>
<p>My script takes one parameter: should we treat StyleCop violations as warnings? This is set as the script argument. Note I am using a build variable <em><strong>$(ViolationsAsWarnings)</strong></em> set to a string value ‘True’ or ‘False’, so I have one setting for the whole build script. Though a boolean parameter would be nicer, it seems I can only pass in strings as build variables, so I do the conversion to a boolean inside the script.</p>
<p><code>-TreatStyleCopViolationsErrorsAsWarnings $(ViolationsAsWarnings)</code></p>
<p><a href="/blogs/rfennell/image.axd?picture=image_238.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_234.png" title="image"></a></p>
</li>
</ul>
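<p>As a sketch only (the exact top of <strong>PowerShell.ps1</strong> is not shown here, so this is my assumption of its shape), the script can receive the string argument and convert it like this; the parameter name matches the argument above, the rest is illustrative:</p>
<pre><code># sketch of the top of the script - the real file may differ
param(
    # build variables arrive as strings, e.g. 'True' or 'False'
    [string]$TreatStyleCopViolationsErrorsAsWarnings = 'False'
)

# convert the string to a real boolean for later use by the scanner
$treatViolationsErrorsAsWarnings = [System.Convert]::ToBoolean($TreatStyleCopViolationsErrorsAsWarnings)
</code></pre>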
<h3 id="step-6---running-the-build">Step 6 - Running the build</h3>
<p>My test solution has two projects, with different <strong>settings.stylecop</strong> files. Once the new step was added to my build I could queue a build; by altering the <em><strong>$(ViolationsAsWarnings)</strong></em> variable I could make the build pass or fail.</p>
<p><a href="/blogs/rfennell/image.axd?picture=image_241.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_237.png" title="image"></a></p>
<p>       <a href="/blogs/rfennell/image.axd?picture=image_243.png"><img alt="image" loading="lazy" src="/blogs/rfennell/image.axd?picture=image_thumb_239.png" title="image"></a></p>
<p>The detailed StyleCop results are available in the build log and are also placed in the drops folder in XML format.</p>
<p><strong>Note</strong>: One strange behaviour is that when you run the script outside TFS build you get a .XML and a .LOG file for each project scanned, but within TFS build you only see the .XML file in the drops folder. I think this is because the .LOG output has been redirected into the main TFS vNext build logs.</p>
<h3 id="summary">Summary</h3>
<p>So now I have a way to run StyleCop within a TFS vNext build.</p>
<p>Using these techniques there is no end of tools that can be wired into the build process, and I must say it is far easier than the TFS 2010, 2012 and 2013 style workflow customisation.</p>
]]></content:encoded>
    </item>
    <item>
      <title>All change - new SKUs for Visual Studio 2015</title>
      <link>https://blog.richardfennell.net/posts/all-change-new-skus-for-visual-studio-2015/</link>
      <pubDate>Tue, 31 Mar 2015 17:42:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/all-change-new-skus-for-visual-studio-2015/</guid>
      <description>&lt;p&gt;All change again on Visual Studio SKU licensing, &lt;a href=&#34;http://blogs.msdn.com/b/visualstudio/archive/2015/03/31/announcing-the-visual-studio-2015-product-line.aspx&#34;&gt;have a look at the blog post on new SKUs for the 2015 release&lt;/a&gt; and also the &lt;a href=&#34;https://www.visualstudio.com/products/vs-2015-product-editions&#34;&gt;2015 product site&lt;/a&gt;. Basically it is a simplification.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-01-29-92-metablogapi/4442.VisualStudio2015ProductOfferings2_5F00_33936BA7.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;You can find the detailed feature breakdown &lt;a href=&#34;https://www.visualstudio.com/products/compare-visual-studio-2015-products-vs&#34; title=&#34;https://www.visualstudio.com/products/compare-visual-studio-2015-products-vs&#34;&gt;on the comparison site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>All change again on Visual Studio SKU licensing, <a href="http://blogs.msdn.com/b/visualstudio/archive/2015/03/31/announcing-the-visual-studio-2015-product-line.aspx">have a look at the blog post on new SKUs for the 2015 release</a> and also the <a href="https://www.visualstudio.com/products/vs-2015-product-editions">2015 product site</a>. Basically it is a simplification.</p>
<p><img loading="lazy" src="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-01-29-92-metablogapi/4442.VisualStudio2015ProductOfferings2_5F00_33936BA7.png"></p>
<p>You can find the detailed feature breakdown <a href="https://www.visualstudio.com/products/compare-visual-studio-2015-products-vs" title="https://www.visualstudio.com/products/compare-visual-studio-2015-products-vs">on the comparison site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Cross platform build with TFS 2015 vNext Build</title>
      <link>https://blog.richardfennell.net/posts/cross-platform-build-with-tfs-2015-vnext-build/</link>
      <pubDate>Mon, 30 Mar 2015 19:11:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cross-platform-build-with-tfs-2015-vnext-build/</guid>
      <description>&lt;p&gt;I have been preparing for my &lt;a href=&#34;http://www.techorama.be/&#34;&gt;Techorama&lt;/a&gt; session on &lt;a href=&#34;http://vsalmdocs.azurewebsites.net/tfs/build&#34;&gt;TFS vNext build&lt;/a&gt;. One of the demo’s I am planning is to use the &lt;a href=&#34;https://github.com/Microsoft/vso-agent&#34;&gt;Node based cross platform build agent&lt;/a&gt; to build something on a Linux VM. Turns out this takes a few undocumented steps to get this going with the &lt;a href=&#34;https://www.visualstudio.com/en-us/news/tfs2015-vs.aspx&#34;&gt;CTP of TFS 2015&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The process I followed was:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I installed a &lt;a href=&#34;http://www.linuxmint.com/download.php&#34;&gt;Mint 17&lt;/a&gt; VM&lt;/li&gt;
&lt;li&gt;On the VM, I installed the Node VSOAgent as detailed in the &lt;a href=&#34;https://www.npmjs.com/package/vsoagent-installer&#34;&gt;npm documentation&lt;/a&gt; (or I could have built it from &lt;a href=&#34;https://github.com/Microsoft/vso-agent&#34;&gt;from source from GitHub&lt;/a&gt; to get the bleeding edge version)&lt;/li&gt;
&lt;li&gt;I created a new agent instance&lt;br&gt;
         &lt;em&gt;vsoagent-installer&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;I then tried to run the configuration, but hit a couple of issues&lt;br&gt;
        &lt;em&gt;node vsoagent&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id=&#34;url-error&#34;&gt;URL error&lt;/h3&gt;
&lt;p&gt;The first problem was I was told the URL I provided was invalid. I had tried the URL of my local TFS 2015 CTP VM&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been preparing for my <a href="http://www.techorama.be/">Techorama</a> session on <a href="http://vsalmdocs.azurewebsites.net/tfs/build">TFS vNext build</a>. One of the demo’s I am planning is to use the <a href="https://github.com/Microsoft/vso-agent">Node based cross platform build agent</a> to build something on a Linux VM. Turns out this takes a few undocumented steps to get this going with the <a href="https://www.visualstudio.com/en-us/news/tfs2015-vs.aspx">CTP of TFS 2015</a></p>
<p>The process I followed was:</p>
<ul>
<li>I installed a <a href="http://www.linuxmint.com/download.php">Mint 17</a> VM</li>
<li>On the VM, I installed the Node VSOAgent as detailed in the <a href="https://www.npmjs.com/package/vsoagent-installer">npm documentation</a> (or I could have built it from <a href="https://github.com/Microsoft/vso-agent">from source from GitHub</a> to get the bleeding edge version)</li>
<li>I created a new agent instance<br>
         <em>vsoagent-installer</em></li>
<li>I then tried to run the configuration, but hit a couple of issues<br>
        <em>node vsoagent</em></li>
</ul>
<h3 id="url-error">URL error</h3>
<p>The first problem was I was told the URL I provided was invalid. I had tried the URL of my local TFS 2015 CTP VM</p>
<blockquote>
<p><a href="http://typhoontfs:8080/tfs">http://typhoontfs:8080/tfs</a></p></blockquote>
<p>The issue is that the vsoagent was initially developed for <a href="https://www.visualstudio.com/en-gb/products/what-is-visual-studio-online-vs">VSO</a> and is expecting a fully qualified URL. To get around this, as I was on a local test network, I just added an entry to my Linux OS’s local <em>/etc/hosts</em> file, so I could call</p>
<blockquote>
<p><a href="http://typhoontfs.local:8080/tfs">http://typhoontfs.local:8080/tfs</a></p></blockquote>
<p>This URL was accepted</p>
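<p>For what it is worth, the hosts entry in question was just a line like the following (the IP address here is an example; use whatever address your TFS VM actually has):</p>
<pre><code># /etc/hosts on the Linux VM - IP address is an example
192.168.1.50    typhoontfs.local
</code></pre>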
<h3 id="401-permissions-error">401 Permissions Error</h3>
<p>Once the URL was accepted, the next problem was I got a 401 permission error.</p>
<p>Now the release notes make it clear that you have to enable alternate credentials on your VSO account, but this is not an option for on-premises TFS.</p>
<p>The solution is easy though (at least for a trial system). In IIS Manager on your TFS server enable basic authentication for the TFS application, you are warned this is not secure as passwords are sent in clear text, so probably not something to do on a production system</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_236.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_232.png" title="image"></a></p>
<p>Once this was set, the configuration of the client worked and I had a vsoagent running on my Linux client.</p>
<p>I could then go into the web based TFS Build.vNext interface and create a new empty build, adding the build tool I required, in my case Ant, using an Ant script stored with my project code in my TFS based Git repo.</p>
<p>When I ran the build it errored, as expected: my Linux VM was missing all the build tools. This was fixed by running <em>apt-get</em> on my Linux VM to install ant, ant-optional and the Java JDK. Obviously you need to install whichever tools your build requires.</p>
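<p>For reference, the install on the Mint VM was along these lines; the exact package names, especially for the JDK, are an assumption and vary by distro version:</p>
<pre><code># install Ant, its optional tasks and a Java JDK on the Linux build VM
# package names are an assumption for Mint/Ubuntu-era repositories
sudo apt-get update
sudo apt-get install ant ant-optional default-jdk
</code></pre>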
<p>So I have a working demo: my Java application builds and the resultant files are dropped back into TFS. OK, the configuration is not perfect at present, but from the GitHub site you can see the client is being rapidly iterated</p>
]]></content:encoded>
    </item>
    <item>
      <title>We are hosting a free event on Visual Studio ALM on the 18th March</title>
      <link>https://blog.richardfennell.net/posts/we-are-hosting-a-free-event-on-visual-studio-alm-on-the-18th-march/</link>
      <pubDate>Sat, 28 Feb 2015 14:20:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/we-are-hosting-a-free-event-on-visual-studio-alm-on-the-18th-march/</guid>
      <description>&lt;p&gt;Black Marble are hosting, at our offices, a Microsoft event on the 18th March on ALM with Visual Studio &amp;amp; TFS 2013. This is a repeat of the sell out event we hosted in the autumn.&lt;/p&gt;
&lt;p&gt;For details of the event and registration see the &lt;a href=&#34;http://bit.ly/ALM18March2015&#34;&gt;booking site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Black Marble are hosting, at our offices, a Microsoft event on the 18th March on ALM with Visual Studio &amp; TFS 2013. This is a repeat of the sell out event we hosted in the autumn.</p>
<p>For details of the event and registration see the <a href="http://bit.ly/ALM18March2015">booking site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>My DSC session is up at TechDays Online 2015 On-Demand</title>
      <link>https://blog.richardfennell.net/posts/my-dsc-session-is-up-at-techdays-online-2015-on-demand/</link>
      <pubDate>Tue, 17 Feb 2015 11:45:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-dsc-session-is-up-at-techdays-online-2015-on-demand/</guid>
      <description>&lt;p&gt;A couple of weeks ago I presented on DSC and Release Management as part of the Microsoft UK TechDays Online 2015 event. All the sessions from this three day event are now available on demand at &lt;a href=&#34;http://l.email.microsoft.co.uk/rts/go2.aspx?h=36055&amp;amp;tp=i-H43-84-oQ-8AA4-2G-3bw-1c-89YP-32yzA&amp;amp;x=%7c3126%7c1945688&#34;&gt;TechDays Online 2015 on-demand sessions&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;You do seem to have to register/login to see the content, so I can’t deep link to my session, but browsing the catalogue is a good idea as there are some great sessions&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A couple of weeks ago I presented on DSC and Release Management as part of the Microsoft UK TechDays Online 2015 event. All the sessions from this three day event are now available on demand at <a href="http://l.email.microsoft.co.uk/rts/go2.aspx?h=36055&amp;tp=i-H43-84-oQ-8AA4-2G-3bw-1c-89YP-32yzA&amp;x=%7c3126%7c1945688">TechDays Online 2015 on-demand sessions</a>.</p>
<p>You do seem to have to register/login to see the content, so I can’t deep link to my session, but browsing the catalogue is a good idea as there are some great sessions</p>
]]></content:encoded>
    </item>
    <item>
      <title>Build arguments are not returned for a build definition via the TFS API if they are left as default values</title>
      <link>https://blog.richardfennell.net/posts/build-arguments-are-not-returned-for-a-build-definition-via-the-tfs-api-if-they-are-left-as-default-values/</link>
      <pubDate>Wed, 11 Feb 2015 16:15:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/build-arguments-are-not-returned-for-a-build-definition-via-the-tfs-api-if-they-are-left-as-default-values/</guid>
<description>&lt;p&gt;We use my &lt;a href=&#34;https://tfsalertsdsl.codeplex.com/&#34;&gt;TFS Alerts DSL&lt;/a&gt; to perform tasks when our TFS builds complete; one of these is a job to increment the minor version number and reset the version start date (the value that generates the third field – days since a point in time) if a build is set to the quality &amp;lsquo;release&amp;rsquo;, e.g. &lt;strong&gt;1.2.99.[unique build id]&lt;/strong&gt;, where 99 is the days count since some past date, could change to &lt;strong&gt;1.3.0.[unique build id]&lt;/strong&gt; (&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/07/27/Making-the-drops-location-for-a-TFS-build-match-the-assembly-version-number.aspx&#34;&gt;see this old post on how we do this in the build process&lt;/a&gt;)&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>We use my <a href="https://tfsalertsdsl.codeplex.com/">TFS Alerts DSL</a> to perform tasks when our TFS builds complete; one of these is a job to increment the minor version number and reset the version start date (the value that generates the third field – days since a point in time) if a build is set to the quality ‘release’, e.g. <strong>1.2.99.[unique build id]</strong>, where 99 is the days count since some past date, could change to <strong>1.3.0.[unique build id]</strong> (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/07/27/Making-the-drops-location-for-a-TFS-build-match-the-assembly-version-number.aspx">see this old post on how we do this in the build process</a>)</p>
<p>I have just found a bug (feature?) in the way the DSL does this; it turns out that if you did not set the major and minor version argument values in the build editor (you just left them at their default values of 1 and 0) then the DSL fails, as defaulted arguments are not returned in the property set of the build definition we process in the DSL. You would expect to get a 0 back, but you in fact get a null.</p>
<p>So if you have a build where you expect the version to increment and it does not, check the build definition and make sure the <strong>MajorVersion</strong>, <strong>MinorVersion</strong> (or whatever you called them) and <strong>version start date</strong> are all in <strong>bold</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002_4.jpg"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002_thumb_4.jpg" title="clip_image002"></a></p>
<p>I have updated the code on Codeplex so that it gives a better error message in the event log if a problem occurs with a build.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Choices DDD South West or the Maker Faire?</title>
      <link>https://blog.richardfennell.net/posts/choices-ddd-south-west-or-the-maker-faire/</link>
      <pubDate>Mon, 09 Feb 2015 12:50:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/choices-ddd-south-west-or-the-maker-faire/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.dddsouthwest.com/Blog/Post/58/Speaker-Registration-is-now-open!&#34;&gt;Speaker registration for DDD South West&lt;/a&gt; has opened, problem is the event is the same weekend as the &lt;a href=&#34;http://www.makerfaireuk.com/&#34;&gt;Maker Faire in Newcastle&lt;/a&gt;, don’t think I can do both.&lt;/p&gt;
&lt;p&gt;Both are great options though, good to have choices&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.makerfaireuk.com/&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_234.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;          &lt;a href=&#34;http://www.dddsouthwest.com/&#34;&gt;&lt;img alt=&#34;dddsw_large.jpg&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddsouthwest.com/SiteAssets/badge/dddsw_large.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.dddsouthwest.com/Blog/Post/58/Speaker-Registration-is-now-open!">Speaker registration for DDD South West</a> has opened, problem is the event is the same weekend as the <a href="http://www.makerfaireuk.com/">Maker Faire in Newcastle</a>, don’t think I can do both.</p>
<p>Both are great options though, good to have choices</p>
<p><a href="http://www.makerfaireuk.com/"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_234.png" title="image"></a>          <a href="http://www.dddsouthwest.com/"><img alt="dddsw_large.jpg" loading="lazy" src="http://www.dddsouthwest.com/SiteAssets/badge/dddsw_large.jpg"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows 10 to be an option on the Raspberry PI 2</title>
      <link>https://blog.richardfennell.net/posts/windows-10-to-be-an-option-on-the-raspberry-pi-2/</link>
      <pubDate>Mon, 02 Feb 2015 13:05:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-10-to-be-an-option-on-the-raspberry-pi-2/</guid>
<description>&lt;p&gt;&lt;a href=&#34;http://www.theregister.co.uk/2015/02/02/microsoft_eyes_slice_of_raspberry_pi_with_free_windows_10/&#34;&gt;Seems Windows 10 will be an option on the new Raspberry PI 2&lt;/a&gt;, which provides some more nice IoT options.&lt;/p&gt;
&lt;p&gt;It will be interesting to see which tools Microsoft look to target onto this board; I guess remote development from a PC as per their Galileo board. But there might be enough power for a bit more, we shall see&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><a href="http://www.theregister.co.uk/2015/02/02/microsoft_eyes_slice_of_raspberry_pi_with_free_windows_10/">Seems Windows 10 will be an option on the new Raspberry PI 2</a>, which provides some more nice IoT options.</p>
<p>It will be interesting to see which tools Microsoft look to target onto this board; I guess remote development from a PC as per their Galileo board. But there might be enough power for a bit more, we shall see</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for timeout exporting a SQL Azure DB using PowerShell or SQLPackage.exe</title>
      <link>https://blog.richardfennell.net/posts/fix-for-timeout-exporting-a-sql-azure-db-using-powershell-or-sqlpackage-exe/</link>
      <pubDate>Mon, 02 Feb 2015 12:55:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-timeout-exporting-a-sql-azure-db-using-powershell-or-sqlpackage-exe/</guid>
      <description>&lt;p&gt;I have been trying to export a SQL Azure DB as a .BACPAC using the command line&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe&amp;quot;&lt;br&gt;
                              /action:Export&lt;br&gt;
                             /sourceservername:myserver.database.windows.net&lt;br&gt;
                             /sourcedatabasename:websitecontentdb&lt;br&gt;
                             /sourceuser:sa@myserver /sourcepassword:&lt;password&gt; /targetfile:db.bacpac&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The problem is the command times out after around an hour, at the ‘Extracting schema from database’ stage.&lt;/p&gt;
&lt;p&gt;I got exactly the same issue if I use &lt;a href=&#34;http://fabriccontroller.net/blog/posts/backup-and-restore-your-sql-azure-database-using-powershell/&#34;&gt;PowerShell as discussed in Sandrino Di Mattia’s post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The issue is the Azure service tier level I am running the SQL DB on.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to export a SQL Azure DB as a .BACPAC using the command line</p>
<blockquote>
<p>&quot;C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe&quot;<br>
                              /action:Export<br>
                             /sourceservername:myserver.database.windows.net<br>
                             /sourcedatabasename:websitecontentdb<br>
                             /sourceuser:sa@myserver /sourcepassword:<password> /targetfile:db.bacpac</p></blockquote>
<p>The problem is the command times out after around an hour, at the ‘Extracting schema from database’ stage.</p>
<p>I got exactly the same issue if I use <a href="http://fabriccontroller.net/blog/posts/backup-and-restore-your-sql-azure-database-using-powershell/">PowerShell as discussed in Sandrino Di Mattia’s post</a>.</p>
<p>The issue is the Azure service tier level I am running the SQL DB on.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_233.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_230.png" title="image"></a></p>
<p>If it is set to Basic I get the error; if it is set to Standard (even at the lowest setting) it works, and in my case the backup takes a couple of minutes.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/10/15/Unable-to-reconnect-to-database-Timeout-expired-error-when-using-SQLPackageexe-to-deploy-to-Azure-SQL.aspx">I have seen a similar problem trying to deploy a DACPAC to SQL Azure</a>, and as I said in that post</p>
<p><em>‘Now the S0 instance is just over</em> <a href="http://azure.microsoft.com/en-us/pricing/details/sql-database/"><em>2x the cost of a Basic</em></a> <em>, so if I was really penny pinching I could consider moving it back to Basic now the deployment is done.’</em></p>
<p>So the choice is mine: change the tier each time I want an export, or pay the extra cost</p>
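<p>If you script the export, the tier change can be wrapped around it. This is only a sketch using the classic Azure Service Management PowerShell module that was current at the time; the cmdlet and parameter names (and the server and database names) are my assumptions and may differ in your module version:</p>
<pre><code># sketch - assumes the classic Azure (Service Management) PowerShell module
$ctx = New-AzureSqlDatabaseServerContext -ServerName "myserver" -UseSubscription

# bump the DB from Basic to Standard so the export does not time out
Set-AzureSqlDatabase -ConnectionContext $ctx -DatabaseName "websitecontentdb" -Edition Standard

# ... run SqlPackage.exe /action:Export here ...

# drop back to Basic once the .BACPAC is exported, to keep the cost down
Set-AzureSqlDatabase -ConnectionContext $ctx -DatabaseName "websitecontentdb" -Edition Basic
</code></pre>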
]]></content:encoded>
    </item>
    <item>
      <title>Wrong package location when reusing a Release Management component</title>
      <link>https://blog.richardfennell.net/posts/wrong-package-location-when-reusing-a-release-management-component/</link>
      <pubDate>Sat, 31 Jan 2015 15:46:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/wrong-package-location-when-reusing-a-release-management-component/</guid>
<description>&lt;p&gt;Whilst setting up a new agent based deployment pipeline in Release Management I decided to reuse an existing component as it already had the correct package location set and correct transforms for the MSDeploy package. Basically this pipeline was a new copy of an existing website with different branding (css file etc.), but the same configuration options.&lt;/p&gt;
&lt;p&gt;I had just expected this to work, but I kept getting ‘file not found’ errors when MSDeploy was run. On investigation I found that the package location for the component was wrong, it was the build drop root, not the sub folder I had specified.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Whilst setting up a new agent based deployment pipeline in Release Management I decided to reuse an existing component as it already had the correct package location set and correct transforms for the MSDeploy package. Basically this pipeline was a new copy of an existing website with different branding (css file etc.), but the same configuration options.</p>
<p>I had just expected this to work, but I kept getting ‘file not found’ errors when MSDeploy was run. On investigation I found that the package location for the component was wrong, it was the build drop root, not the sub folder I had specified.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_232.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_229.png" title="image"></a> </p>
<p>I have no idea why.</p>
<p>The fix was to copy the component, and use this copy in the pipeline. It is probably what I should have done anyway, as I expect this web site to diverge from original one, so I will need to edit the web.config transforms, but not something I thought I would have had to do now to get it working.</p>
]]></content:encoded>
    </item>
    <item>
<title>Fix for cannot run Windows 8.1 unit tests on a TFS 2013 Build Agent</title>
      <link>https://blog.richardfennell.net/posts/fix-for-cannot-run-windows-8-1-units-test-on-a-tfs-2013-build-agent/</link>
      <pubDate>Sat, 31 Jan 2015 12:20:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-cannot-run-windows-8-1-units-test-on-a-tfs-2013-build-agent/</guid>
      <description>&lt;p&gt;I recently hit a problem that on one of our TFS 2013 build agents we could not run Windows 8.1 unit tests. Now as we know the build agent needs some care and attention to build Windows 8.1 at all, &lt;a href=&#34;https://msdn.microsoft.com/en-us/library/hh691189.aspx&#34;&gt;but we had followed this process&lt;/a&gt;. However, we still saw the issue that the project compiled but the tests failed with the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;‘&lt;em&gt;Unit tests for Windows Store apps cannot be run with Limited User Account disabled. Enable it to run tests.’&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently hit a problem that on one of our TFS 2013 build agents we could not run Windows 8.1 unit tests. Now as we know the build agent needs some care and attention to build Windows 8.1 at all, <a href="https://msdn.microsoft.com/en-us/library/hh691189.aspx">but we had followed this process</a>. However, we still saw the issue that the project compiled but the tests failed with the error</p>
<blockquote>
<p>‘<em>Unit tests for Windows Store apps cannot be run with Limited User Account disabled. Enable it to run tests.’</em></p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_231.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_228.png" title="image"></a></p>
<p>I checked the UAC settings and the build account’s rights (it ran as a local admin), all to no effect.</p>
<p>The answer it seems, thanks to the product group for the pointer, is that you have to check the registry setting</p>
<blockquote>
<p>HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System</p>
<p>&quot;EnableLUA&quot; = 1</p></blockquote>
<p>On my failing VM this was set to zero.</p>
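<p>The value can be checked, and reset, from an elevated PowerShell session; this is just a sketch of the approach, and a reboot is still needed afterwards:</p>
<pre><code># check the current EnableLUA value (run elevated)
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
(Get-ItemProperty -Path $key -Name EnableLUA).EnableLUA

# set it back to 1 to re-enable UAC, then reboot the VM
Set-ItemProperty -Path $key -Name EnableLUA -Value 1
</code></pre>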
<p>I then had to reboot the VM and also delete all the contents of the C:\builds folder on my VM as, due to the change in UAC setting, these old files had become read-only to the build process.</p>
<p>Once this was all done my Windows 8.1 builds work correctly. Hope this post saves some other people some time</p>
]]></content:encoded>
    </item>
    <item>
      <title>Living with a DD-WRT virtual router – three months and one day on (static DHCP leases)</title>
      <link>https://blog.richardfennell.net/posts/living-with-a-dd-wrt-virtual-router-three-months-and-one-day-on-static-dhcp-leases/</link>
      <pubDate>Sat, 31 Jan 2015 11:59:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/living-with-a-dd-wrt-virtual-router-three-months-and-one-day-on-static-dhcp-leases/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 28 Feb 2015 – Added bit on static addresses&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_211.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/01/29/Living-with-a-DD-WRT-virtual-router-three-months-on.aspx&#34;&gt;When using a DD-WRT virtual router&lt;/a&gt;, I have realised it is worth setting a static MAC address in Hyper-V and a DHCP lease on the router for any server VMs you want access to from your base system OS. In my case this is a TFS demo VM I connect to all the time.&lt;/p&gt;
&lt;p&gt;If you don’t do this the address of the VM seems to vary more than you might expect. So you keep having to edit the HOSTS file on your base OS to reference the VM by name.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 28 Feb 2015 – Added bit on static addresses</strong></p>
<p><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_211.png"></p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/01/29/Living-with-a-DD-WRT-virtual-router-three-months-on.aspx">When using a DD-WRT virtual router</a>, I have realised it is worth setting a static MAC address in Hyper-V and a DHCP lease on the router for any server VMs you want to access from your base system OS. In my case this is a TFS demo VM I connect to all the time.</p>
<p>If you don’t do this the address of the VM seems to vary more than you might expect. So you keep having to edit the HOSTS file on your base OS to reference the VM by name.</p>
<p>You set the static MAC address in the Hyper-V setting</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_229.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_226.png" title="image"></a></p>
<p>And set the DHCP lease in the router’s Services tab; to make it a permanent lease, leave the time field empty</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_230.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_227.png" title="image"></a></p>
<p>And finally, add an entry to the hosts file</p>
<blockquote>
<p># For the VM 00:15:5d:0b:27:05<br>
192.168.1.99        typhoontfs</p></blockquote>
<p>One downside of this is that if you are using <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/01/29/Living-with-a-DD-WRT-virtual-router-three-months-on.aspx">snapshots as I am to address DHCP WiFi issues</a>, you need to add the lease to any old snapshots you have, but once it is set there should be no more hosts file editing</p>
<p><strong>Updated 28 Feb 2015</strong></p>
<p>I have still found problems with strange routes in my routing table due to the internal switch issuing an address (and gateway) via DHCP; these seem to cause problems for my Microsoft DirectAccess (a VPN). Today I had the realisation that I can avoid this problem by using a static address for my host PC’s connection to the internal router, e.g. 192.168.1.50 set on the Windows adaptor, as opposed to DHCP. By making it static I avoid the issue of extra routes or DNS entries by simply not adding them.</p>
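<p>The static address can be set via the adaptor’s GUI properties as shown below, or scripted. This PowerShell sketch assumes the adaptor for the internal virtual switch is named ‘vEthernet (Internal)’ (check yours with Get-NetAdapter); no gateway or DNS servers are set, which is exactly what keeps the routing table clean:</p>

```powershell
# Fixed address for the host's connection to the virtual router's network;
# -DefaultGateway is deliberately omitted so no extra routes are created
New-NetIPAddress -InterfaceAlias 'vEthernet (Internal)' `
                 -IPAddress 192.168.1.50 -PrefixLength 24
```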
<p><a href="/wp-content/uploads/sites/2/historic/image_235.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_231.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Living with a DD-WRT virtual router – three months on</title>
      <link>https://blog.richardfennell.net/posts/living-with-a-dd-wrt-virtual-router-three-months-on/</link>
      <pubDate>Thu, 29 Jan 2015 21:17:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/living-with-a-dd-wrt-virtual-router-three-months-on/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/26/Living-with-a-DD-WRT-virtual-router-one-month-on.aspx&#34;&gt;I posted in the past on my experience with DD-WRT router running in Hyper-V to allow my VMs internet access&lt;/a&gt;. A couple of months on I am still using it and I think have got around the worst of the issues.&lt;/p&gt;
&lt;p&gt;The big problem is not with the DD-WRT router, but the way Hyper-V virtual switches use WiFi for some operating systems. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/07/DHCP-does-not-seem-to-work-on-Ubuntu-for-wireless-based-Hyper-V-virtual-switches.aspx&#34;&gt;Basically the summary is DHCP does not work for Linux VMs&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/26/Living-with-a-DD-WRT-virtual-router-one-month-on.aspx">I posted in the past on my experience with DD-WRT router running in Hyper-V to allow my VMs internet access</a>. A couple of months on I am still using it and I think have got around the worst of the issues.</p>
<p>The big problem is not with the DD-WRT router, but the way Hyper-V virtual switches use WiFi for some operating systems. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/07/DHCP-does-not-seem-to-work-on-Ubuntu-for-wireless-based-Hyper-V-virtual-switches.aspx">Basically the summary is DHCP does not work for Linux VMs</a>.</p>
<p>The best solution I have found to this problem is to use Hyper-V snapshots in which I hard code the correct IP settings for various networks, thus removing the need for DHCP.</p>
<p>At present I have three snapshots that I swap between as needed</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_228.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_225.png" title="image"></a></p>
<ul>
<li>One is set to use DHCP – I use this when my ‘external’ virtual switch is linked to a non-WiFi adaptor, usually the Ethernet in the office</li>
<li>One is hard coded for an IP address on my home router’s network, with suitable gateway and DNS setting</li>
<li>The final one is hard coded for my phone when it is acting as a MiFi</li>
</ul>
<p>I can add more as I need them, but as I am using hotel and client WiFi less and less now that I am on an ‘all you can eat’ 4G mobile contract, I doubt I will need many more.</p>
<p>It seems to be working; I will report back if I learn more</p>
]]></content:encoded>
    </item>
    <item>
      <title>When trying to load Office document from SharePoint I keep ending up in the Office Web Application</title>
      <link>https://blog.richardfennell.net/posts/when-trying-to-load-office-document-from-sharepoint-i-keep-ending-up-in-the-office-web-application/</link>
      <pubDate>Tue, 27 Jan 2015 16:53:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/when-trying-to-load-office-document-from-sharepoint-i-keep-ending-up-in-the-office-web-application/</guid>
      <description>&lt;p&gt;Whenever I tried to load an Office 2013 document from our SharePoint 2010 instance I kept ending up in the Office Web Application, the Office application was not being launched.&lt;/p&gt;
&lt;p&gt;If I tried to use the ‘Open in Word’ context menu I got the following error (and before you ask, yes I was in IE, IE11 in fact, and Office 2013 was installed)&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_226.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_223.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;My PC had been built from our standard System Center managed image; others using the same base image seemed OK, so what had gone wrong for me?&lt;/p&gt;
      <content:encoded><![CDATA[<p>Whenever I tried to load an Office 2013 document from our SharePoint 2010 instance I kept ending up in the Office Web Application, the Office application was not being launched.</p>
<p>If I tried to use the ‘Open in Word’ context menu I got the following error (and before you ask, yes I was in IE, IE11 in fact, and Office 2013 was installed)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_226.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_223.png" title="image"></a></p>
<p>My PC had been built from our standard System Center managed image; others using the same base image seemed OK, so what had gone wrong for me?</p>
<p>The launching of Office application features is managed by the ‘SharePoint OpenDocument Class’ IE add-in (IE &gt; Settings &gt; Manage Add-ins). On my PC this whole add-in was missing; I don’t know why.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_227.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_224.png" title="image"></a></p>
<p>The fix, it turns out, was to go into Control Panel &gt; Add/Remove Programs &gt; Office 2013 &gt; Change and do a repair and a reboot. Once this was done Office launched as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Techorama in May</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-techorama-in-may/</link>
      <pubDate>Tue, 27 Jan 2015 09:15:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-techorama-in-may/</guid>
      <description>&lt;p&gt;I am really pleased to be &lt;a href=&#34;http://www.techorama.be/agenda-2015/?utm_source=TECHORAMA&amp;#43;Members&amp;amp;utm_campaign=dd63635b07-Techorama_2015_50_of_agenda_live_1_26_2015&amp;amp;utm_medium=email&amp;amp;utm_term=0_3c47b62345-dd63635b07-101818625&#34;&gt;speaking at Techorama again this year&lt;/a&gt;. This is a great friendly conference that covers a wide range of subjects.&lt;/p&gt;
&lt;p&gt;Like last year the conference is in Mechelen, Belgium on the 12 and 13th of May. &lt;a href=&#34;http://www.techorama.be/agenda-2015/?utm_source=TECHORAMA&amp;#43;Members&amp;amp;utm_campaign=dd63635b07-Techorama_2015_50_of_agenda_live_1_26_2015&amp;amp;utm_medium=email&amp;amp;utm_term=0_3c47b62345-dd63635b07-101818625&#34;&gt;Hope to see some of you there&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/website-header-22%5B1%5D_1.jpg&#34;&gt;&lt;img alt=&#34;website-header-22[1]&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/website-header-22%5B1%5D_thumb_1.jpg&#34; title=&#34;website-header-22[1]&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really pleased to be <a href="http://www.techorama.be/agenda-2015/?utm_source=TECHORAMA&#43;Members&amp;utm_campaign=dd63635b07-Techorama_2015_50_of_agenda_live_1_26_2015&amp;utm_medium=email&amp;utm_term=0_3c47b62345-dd63635b07-101818625">speaking at Techorama again this year</a>. This is a great friendly conference that covers a wide range of subjects.</p>
<p>Like last year the conference is in Mechelen, Belgium on the 12 and 13th of May. <a href="http://www.techorama.be/agenda-2015/?utm_source=TECHORAMA&#43;Members&amp;utm_campaign=dd63635b07-Techorama_2015_50_of_agenda_live_1_26_2015&amp;utm_medium=email&amp;utm_term=0_3c47b62345-dd63635b07-101818625">Hope to see some of you there</a></p>
<p><a href="/wp-content/uploads/sites/2/historic/website-header-22%5B1%5D_1.jpg"><img alt="website-header-22[1]" loading="lazy" src="/wp-content/uploads/sites/2/historic/website-header-22%5B1%5D_thumb_1.jpg" title="website-header-22[1]"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for ‘An unexpected error occurred. Close the windows and try again’ error adding Azure subscription to Visual Studio Release Management Tools</title>
      <link>https://blog.richardfennell.net/posts/fix-for-an-unexpected-error-occurred-close-the-windows-and-try-again-error-adding-azure-subscription-to-visual-studio-release-management-tools/</link>
      <pubDate>Mon, 19 Jan 2015 20:59:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-an-unexpected-error-occurred-close-the-windows-and-try-again-error-adding-azure-subscription-to-visual-studio-release-management-tools/</guid>
      <description>&lt;p&gt;In preparation for my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/01/10/Speaking-at-Microsoft-Techdays-on-the-5th-of-February.aspx&#34;&gt;Techdays session&lt;/a&gt; next month, I have been sorting demos using the various Release Management clients.&lt;/p&gt;
&lt;p&gt;When I tried to &lt;a href=&#34;http://www.visualstudio.com/en-us/get-started/deploy-to-azure-vs&#34;&gt;create a release from within Visual Studio&lt;/a&gt; using the ‘Release Management tools for Visual Studio’, I found I could not add my Azure subscriptions. I saw the error ‘An unexpected error occurred. Close the windows and try again’&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_222.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_219.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I could download and import the subscription file, it showed the available storage accounts, but when I pressed save I got the rather unhelpful error ‘Object reference not set to an instance of an object’&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In preparation for my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/01/10/Speaking-at-Microsoft-Techdays-on-the-5th-of-February.aspx">Techdays session</a> next month, I have been sorting demos using the various Release Management clients.</p>
<p>When I tried to <a href="http://www.visualstudio.com/en-us/get-started/deploy-to-azure-vs">create a release from within Visual Studio</a> using the ‘Release Management tools for Visual Studio’, I found I could not add my Azure subscriptions. I saw the error ‘An unexpected error occurred. Close the windows and try again’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_222.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_219.png" title="image"></a></p>
<p>I could download and import the subscription file, it showed the available storage accounts, but when I pressed save I got the rather unhelpful error ‘Object reference not set to an instance of an object’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_223.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_220.png" title="image"></a></p>
<p>Turns out the issue was a simple one: rights. The LiveID I had signed into Visual Studio with had no rights for Release Management on the VSO account running the Release Management service, even though it was a TPC administrator.</p>
<p>It is easier to understand the problem in the Release Management client. When I tried to set the Release Management Server Url (RM &gt; Administration &gt; Settings) to the required VSO Url, as the LiveID I was using in Visual Studio, I got the nice clear error shown below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_224.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_221.png" title="image"></a></p>
<p>The solution was, in the Release Management client, to use the LiveID of the VSO account owner. I could then connect to the Url in the Release Management client and add my previously failing LiveID as a user for the release service.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_225.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_222.png" title="image"></a></p>
<p>Once this was done I was able to use this original LiveID in Visual Studio without a problem.</p>
]]></content:encoded>
    </item>
    <item>
      <title>VSO gets ISO 27001 Certification and European Model Clauses</title>
      <link>https://blog.richardfennell.net/posts/vso-gets-iso-27001-certification-and-european-model-clauses/</link>
      <pubDate>Mon, 19 Jan 2015 16:06:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vso-gets-iso-27001-certification-and-european-model-clauses/</guid>
      <description>&lt;p&gt;If the reason you have not been using &lt;a href=&#34;http://tfs.visualstudio.com&#34;&gt;VSO&lt;/a&gt; was concern over where it is hosted, then last week Microsoft made an announcement that could ease some of your worries, or at least your legal departments.  VSO now has ISO 27001 Certification and European Model Clauses; &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2015/01/15/visual-studio-online-iso-27001-certification-and-european-model-clauses.aspx&#34;&gt;for more details see Brian Harry’s blog&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This added to the fact that since the end of last October you have been able to choose to &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2014/10/28/visual-studio-online-is-in-europe.aspx&#34;&gt;host your VSO instance in Europe&lt;/a&gt; could well make VSO a more compelling option for many organisations who don’t want to have their own TFS servers on premises&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If the reason you have not been using <a href="http://tfs.visualstudio.com">VSO</a> was concern over where it is hosted, then last week Microsoft made an announcement that could ease some of your worries, or at least your legal departments.  VSO now has ISO 27001 Certification and European Model Clauses; <a href="http://blogs.msdn.com/b/bharry/archive/2015/01/15/visual-studio-online-iso-27001-certification-and-european-model-clauses.aspx">for more details see Brian Harry’s blog</a>.</p>
<p>This, added to the fact that since the end of last October you have been able to choose to <a href="http://blogs.msdn.com/b/bharry/archive/2014/10/28/visual-studio-online-is-in-europe.aspx">host your VSO instance in Europe</a>, could well make VSO a more compelling option for many organisations who don’t want to have their own TFS servers on premises</p>
]]></content:encoded>
    </item>
    <item>
      <title>Guest post on the Microsoft UK Developers Site  on DSC and Release Management prior to Microsoft TechDays event</title>
      <link>https://blog.richardfennell.net/posts/guest-post-on-the-microsoft-uk-developers-site-on-dsc-and-release-management-prior-to-microsoft-techdays-event/</link>
      <pubDate>Fri, 16 Jan 2015 08:47:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/guest-post-on-the-microsoft-uk-developers-site-on-dsc-and-release-management-prior-to-microsoft-techdays-event/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/12/24/Thoughts-in-vNext-deployment-in-Release-Management.aspx&#34;&gt;One of my blog posts&lt;/a&gt; has been re-posted on the Microsoft  UK Developers blog  &lt;a href=&#34;http://www.microsoft.com/en-gb/developers/articles/week03jan15/resolving-the-dcs-two-step-release-management-vnext-deployment&#34;&gt;Resolving the DCS two-step - Release Management vNext deployment&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This post covers part of what I will be talking about at &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/01/10/Speaking-at-Microsoft-Techdays-on-the-5th-of-February.aspx&#34;&gt;Microsoft Techdays on the 5th of February&lt;/a&gt; .&lt;/p&gt;
&lt;p&gt;You can &lt;a href=&#34;https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032604684&amp;amp;Culture=en-GB&amp;amp;community=0&#34;&gt;register here for this free online event&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/12/24/Thoughts-in-vNext-deployment-in-Release-Management.aspx">One of my blog posts</a> has been re-posted on the Microsoft  UK Developers blog  <a href="http://www.microsoft.com/en-gb/developers/articles/week03jan15/resolving-the-dcs-two-step-release-management-vnext-deployment">Resolving the DCS two-step - Release Management vNext deployment</a></p>
<p>This post covers part of what I will be talking about at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2015/01/10/Speaking-at-Microsoft-Techdays-on-the-5th-of-February.aspx">Microsoft Techdays on the 5th of February</a> .</p>
<p>You can <a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032604684&amp;Culture=en-GB&amp;community=0">register here for this free online event</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Microsoft Techdays on the 5th of February</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-microsoft-techdays-on-the-5th-of-february/</link>
      <pubDate>Sat, 10 Jan 2015 16:15:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-microsoft-techdays-on-the-5th-of-february/</guid>
      <description>&lt;p&gt;On the 5th of February I will be presenting at &lt;a href=&#34;http://thebeebs.co.uk/tech-days-online-5th-of-february/&#34;&gt;Microsoft’s online Techdays event.&lt;/a&gt; My session is entitled ‘How are you going to deploy that? A look at configuration as code’&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“It does not matter what platform you are developing for, you are going to have to deploy your product in the end. Too often in the past the question of deployment has been an afterthought. But it need not be this way, there are tools available that can help with deployment of your code and importantly the provisioning the underlying systems they need too. In this session we will look at Visual Studio Release Management’s vNext release pipeline seeing how it can leverage Desired State Configuration to provision environments and deploy applications”&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On the 5th of February I will be presenting at <a href="http://thebeebs.co.uk/tech-days-online-5th-of-february/">Microsoft’s online Techdays event.</a> My session is entitled ‘How are you going to deploy that? A look at configuration as code’</p>
<blockquote>
<p><em>“It does not matter what platform you are developing for, you are going to have to deploy your product in the end. Too often in the past the question of deployment has been an afterthought. But it need not be this way, there are tools available that can help with deployment of your code and importantly the provisioning the underlying systems they need too. In this session we will look at Visual Studio Release Management’s vNext release pipeline seeing how it can leverage Desired State Configuration to provision environments and deploy applications”</em></p></blockquote>
<p>There are loads of great sessions spread across the three days of this online conference, so why not <a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032604684&amp;Culture=en-GB&amp;community=0">register for the event online</a> now? I am sure there will be something of interest.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to edit registered Release Management deployment agent IP addresses if a VMs IP address changes</title>
      <link>https://blog.richardfennell.net/posts/how-to-edit-registered-release-management-deployment-agent-ip-addresses-if-a-vms-ip-address-changes/</link>
      <pubDate>Wed, 07 Jan 2015 14:04:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-edit-registered-release-management-deployment-agent-ip-addresses-if-a-vms-ip-address-changes/</guid>
      <description>&lt;p&gt;I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx&#34;&gt;posted in the past&lt;/a&gt; that we have a number of agent based deployments using Release Management 2013.4 that point to network isolated Lab Management environments. Over Christmas we did some maintenance on our underlying Hyper-V servers, so everything got fully stopped and restarted. When the network isolated environment were restarted their DHCP assigned IP addresses on our company domain all changed (maybe we should have had longer DHCP lease times set?)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">posted in the past</a> that we have a number of agent based deployments using Release Management 2013.4 that point to network isolated Lab Management environments. Over Christmas we did some maintenance on our underlying Hyper-V servers, so everything got fully stopped and restarted. When the network isolated environment were restarted their DHCP assigned IP addresses on our company domain all changed (maybe we should have had longer DHCP lease times set?)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_219.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_216.png" title="image"></a></p>
<p>Worst of all some addresses were reused and were actually swapped between environments, so an IP address that used to connect to a <strong>Server1</strong> in environment <strong>Lab1</strong> could be assigned to <strong>Server2</strong> in environment <strong>Lab2</strong>. So basically all our deployments failed, usually because the server could not connect to the agent, but sometimes because the wrong VM responded.</p>
<p>Now for general Lab Management operations this was not an issue; inside the environment nothing had changed, the network range was still 192.168.23.x, and externally SCVMM, MTM and the Test Controllers all knew what was going on and sorted themselves out. The problem was the Release Management deployment agents’ registration with the Release Management server. As I detailed in my previous <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">post</a> you have to manually register the agents using shadow accounts. This means they are registered with their IP address at the time of registration, and it does not change if the VM’s IP address is reassigned by DHCP. It is up to you to fix it.</p>
<p>But how?</p>
<p>And that is the problem: there is no way to edit the IP addresses of the registered servers’ deployment agents inside the Release Management admin tool. The only option I could find was to delete the registered servers and re-add them, but this requires them to be removed from any release pipelines. Something I did not want to do; too much work when I just wanted to fix an IP address.</p>
<p>The solution I found was to edit the <strong>IPAddress</strong> column in the underlying <strong>Server</strong> table in the <strong>ReleaseManagement</strong> DB. I did this with SQL Management Studio, nothing special. The only thing to note is that you cannot have duplicate IP addresses, so they had to be edited in an order to avoid duplication, using a temporary IP address during the edit process as I shuffled addresses around.</p>
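<p>If you need to do the same shuffle, the SQL is nothing special. This PowerShell sketch (the server instance name and the Name column used to pick rows are illustrative assumptions; only the Server table and IPAddress column are as described above) swaps two agents’ addresses via a spare temporary address to avoid the duplicate check:</p>

```powershell
$db = @{ ServerInstance = 'RMSQLSERVER'; Database = 'ReleaseManagement' }

# Park Server1 on a spare address, move Server2 onto Server1's old
# address, then give Server1 its new one - avoiding a duplicate IP
Invoke-Sqlcmd @db -Query "UPDATE Server SET IPAddress = '192.168.1.250' WHERE Name = 'Server1'"
Invoke-Sqlcmd @db -Query "UPDATE Server SET IPAddress = '192.168.1.101' WHERE Name = 'Server2'"
Invoke-Sqlcmd @db -Query "UPDATE Server SET IPAddress = '192.168.1.102' WHERE Name = 'Server1'"
```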
<p><a href="/wp-content/uploads/sites/2/historic/image_220.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_217.png" title="image"></a></p>
<p>Once this was done everything leapt into life. I did not even need to restart the Release Management server; I just pressed the refresh button on the Server tab and saw all the agents had reconnected.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_221.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_218.png" title="image"></a></p>
<p>So a good dirty fix, but something that would have been easier if the tools had provided a means to edit the IP addresses</p>
<p><strong>Note</strong>: This problem is specific to agent based deployment in Release Management. If you are using <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/12/24/vNext-Release-Management-and-Network-Isolation.aspx">vNext DSC based deployment to network isolated VMs</a>, the VMs are registered using their DNS names on the corporate LAN e.g. <strong>VSLM-1344-e7858e28-77cf-4163-b6ba-1df2e91bfcab.lab.blackmarble.co.uk</strong>, so the problem does not occur</p>
]]></content:encoded>
    </item>
    <item>
      <title>Failing to unblock downloaded ZIP files causes really strange errors</title>
      <link>https://blog.richardfennell.net/posts/failing-to-unblock-downloaded-zip-files-causes-really-strange-errors/</link>
      <pubDate>Mon, 05 Jan 2015 21:55:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/failing-to-unblock-downloaded-zip-files-causes-really-strange-errors/</guid>
      <description>&lt;p&gt;Twice recently I have hit the problem that I needed to a unblock ZIP files downloaded from a VSO source repository before I extracted the contents. One was a DSC modules, the other a PowerShell script with associated .NET assemblies.&lt;/p&gt;
&lt;p&gt;In both cases the error messages I got were confusing and misleading. In the case of the DSC module the error was &amp;ldquo;cannot be loaded because you opted not to run this software now&amp;rdquo;. The other project just suffered mixed .NET assembly loading errors.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Twice recently I have hit the problem that I needed to a unblock ZIP files downloaded from a VSO source repository before I extracted the contents. One was a DSC modules, the other a PowerShell script with associated .NET assemblies.</p>
<p>In both cases the error messages I got were confusing and misleading. In the case of the DSC module the error was &ldquo;cannot be loaded because you opted not to run this software now&rdquo;. The other project just suffered mixed .NET assembly loading errors.</p>
<p>So really try to remember, after a download, to right click the file, open its properties and check whether the ZIP file needs unblocking</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_218.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_215.png" title="image"></a></p>
<p>If it does, click the unblock button prior to extracting the ZIP, else you will see similar strange errors to the ones I have seen</p>
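<p>The unblock can also be scripted, which is handy if you download a lot of these; Unblock-File is the cmdlet that clears the marker shown in the properties dialog. A sketch with a hypothetical path (and note Expand-Archive needs PowerShell 5 or later):</p>

```powershell
# Clear the 'downloaded from the internet' marker before extracting,
# so none of the extracted files inherit the block
Unblock-File -Path 'C:\Downloads\xDscModule.zip'
Expand-Archive -Path 'C:\Downloads\xDscModule.zip' -DestinationPath 'C:\Modules'
```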
]]></content:encoded>
    </item>
    <item>
      <title>vNext Release Management and Network Isolation</title>
      <link>https://blog.richardfennell.net/posts/vnext-release-management-and-network-isolation/</link>
      <pubDate>Wed, 24 Dec 2014 11:12:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vnext-release-management-and-network-isolation/</guid>
      <description>&lt;p&gt;If you are trying to use Release Management or any deployment tool with a network isolated Lab Management setup you will have authentication issues. Your isolated domain is not part of your production domain, so you have to provide credentials. In the past this meant &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx&#34;&gt;Shadow Accounts&lt;/a&gt; or the simple expedient of  running a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx&#34;&gt;NET USE&lt;/a&gt; at the start of your deployment script to provide a login to the drops location.&lt;/p&gt;
&lt;p&gt;In &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/11/what-s-new-in-release-management-for-vs-2013-update-4.aspx&#34;&gt;Release Management 2013.4&lt;/a&gt; we get a new option to address this issue if you are using DSC based deployment. This is &lt;strong&gt;Deploy from a build drop using a shared UNC path&lt;/strong&gt;. In this model the Release Management server copies the contents of the drops folder to a known share and passes credentials to access it down to the DSC client (you set these as parameters on the server).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are trying to use Release Management or any deployment tool with a network isolated Lab Management setup you will have authentication issues. Your isolated domain is not part of your production domain, so you have to provide credentials. In the past this meant <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">Shadow Accounts</a> or the simple expedient of  running a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">NET USE</a> at the start of your deployment script to provide a login to the drops location.</p>
<p>In <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/11/what-s-new-in-release-management-for-vs-2013-update-4.aspx">Release Management 2013.4</a> we get a new option to address this issue if you are using DSC based deployment. This is <strong>Deploy from a build drop using a shared UNC path</strong>. In this model the Release Management server copies the contents of the drops folder to a known share and passes credentials to access it down to the DSC client (you set these as parameters on the server).</p>
<p>This is a nice formalisation of the tricks we had to pull by hand in the past, and something I had missed when Update 4 came out.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Thoughts in vNext deployment in Release Management</title>
      <link>https://blog.richardfennell.net/posts/thoughts-in-vnext-deployment-in-release-management/</link>
      <pubDate>Wed, 24 Dec 2014 10:58:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thoughts-in-vnext-deployment-in-release-management/</guid>
      <description>&lt;h3 id=&#34;the-dcs-two-step&#34;&gt;The DSC two step&lt;/h3&gt;
&lt;p&gt;When working with DSC a difficult concept can be that your desired state script is ‘compiled’ to a MOF file that is then ‘run’ by the desired state manager on the target machine. This is a two step affair and you have no real input on the second part. This is made more complex when Release Management is involved.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://colinsalmcorner.com/post/using-webdeploy-in-vnext-releases&#34;&gt;Colin Dembovsky did an excellent pair of posts on getting Release Management working with DSC based vNext templates.&lt;/a&gt; The core of the first post is that he made use of the Script DSC resource to run SQLPackage.EXE and MSDeploy to do the actual deployment of a system as well as using it to manage the transformation of the configuration files in his MSDeploy package.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="the-dcs-two-step">The DSC two step</h3>
<p>When working with DSC a difficult concept can be that your desired state script is ‘compiled’ to a MOF file that is then ‘run’ by the desired state manager on the target machine. This is a two step affair and you have no real input on the second part. This is made more complex when Release Management is involved.</p>
<p><a href="http://colinsalmcorner.com/post/using-webdeploy-in-vnext-releases">Colin Dembovsky did an excellent pair of posts on getting Release Management working with DSC based vNext templates.</a> The core of the first post is that he made use of the Script DSC resource to run SQLPackage.EXE and MSDeploy to do the actual deployment of a system as well as using it to manage the transformation of the configuration files in his MSDeploy package.</p>
<p>This is where the two step issue raises its head. Even with his post I managed to get myself very confused.</p>
<p>The problem is that Release Management passes variables into the DSC script, and these are evaluated when the MOF is compiled. For most resources this is fine, but for the Script resource you have to be very aware that the string that is the actual script is treated as just a string, not code, so its variables are not evaluated until the MOF is run by the desired state manager, by which point there are no Release Management variables set (they are long gone).</p>
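<p>As a minimal sketch of the pitfall (the resource and variable names here are illustrative, not from the real deployment script): the File resource below bakes in the value of $siteName, because normal resource properties are evaluated when the MOF is compiled, while the Script resource’s SetScript is carried into the MOF as a string and only evaluated when the desired state manager runs it, by which time $siteName is empty.</p>
<pre tabindex="0"><code>Configuration VariableEvaluationSketch  
{  
    # evaluated at MOF compile time - $siteName still holds the RM value  
    File SiteMarker  
    {  
        DestinationPath = &#34;C:\sites\sitename.txt&#34;  
        Contents = $siteName  
    }  
  
    Script SiteMarkerScript  
    {  
        GetScript = { @{} }  
        TestScript = { $false }  
        # the script block is stored as a string and evaluated when the  
        # MOF runs - by then the RM provided $siteName is long gone  
        SetScript = { Set-Content -Path &#34;C:\sites\sitename.txt&#34; -Value $siteName }  
    }  
}
</code></pre>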
<p><a href="http://www.colinsalmcorner.com/post/real-config-handling-for-dsc-in-rm">Colin provides an answer to this problem with his cScriptWithParams resource</a>. This resource takes the Release Management provided properties and passes them as parameters into the MOF compilation, forcing their evaluation and neatly side stepping the problem. He uses this technique for the SetParameters.XML transform.</p>
<p>This is all good, but it got me thinking: his post has a number of hard coded paths, and also copies the deployment files to a ‘known location’. Is this really all required if we can pass in the Release Management $ApplicationPath?</p>
<p>So I swapped all my Script resources to use the cScriptWithParams resource, passing in $ApplicationPath and thus removing the need to copy the files from their default location.</p>
<pre tabindex="0"><code>      cScriptWithParams SetConStringDeployParam  
        {  
            GetScript = { @{ Name = &#34;SetDeployParams&#34; } }  
            TestScript = { $false }  
            SetScript = {  
                $paramFilePath = &#34;$folder\_PublishedWebsites\WcfService_Package\WcfService.SetParameters.xml&#34;  

                $paramsToReplace = @{  
                      &#34;__DBContext__&#34; = $context  
                      &#34;__SiteName__&#34; = $siteName  
                }  

                $content = gc $paramFilePath  
                $paramsToReplace.GetEnumerator() | % {  
                    $content = $content.Replace($_.Key, $_.Value)  
                }  
                sc -Path $paramFilePath -Value $content  
            }  
            cParams =  
            @{  
                context = $context;  
                siteName = $siteName;  
                folder = $ApplicationPath;  
            }  
        }  

        cScriptWithParams DeploySite  
        {  
            GetScript = { @{ Name = &#34;DeploySite&#34; } }  
            TestScript = { $false }  
            SetScript = {  
                &amp; &#34;$folder\_PublishedWebsites\WcfService_Package\WcfService.deploy.cmd&#34; /Y  
            }  
            cParams =  
            @{  
                folder = $ApplicationPath;  
            }  
            DependsOn = &#34;[cScriptWithParams]SetConStringDeployParam&#34;  
        } 
</code></pre><p>I think this gave an easier to follow script, though I do wonder about my naming convention; maybe I need to adopt a nomenclature for inner script variables as opposed to global ones.</p>
<h3 id="where-are-my-parameters-stored">Where are my parameters stored?</h3>
<p>However, this does raise the question of where these ‘global’ parameters come from. We have two options:</p>
<ul>
<li>A PowerShell Configuration data file (the standard DSC way)</li>
<li>Release management parameters</li>
</ul>
<p>Either is valid; if you want to source control all your configuration the first option is good. However, what happens if you need to store secrets? In this case the ability to store a value encrypted within Release Management is useful.</p>
<p>In reality I expect we will use a combination, maybe with everything bar secrets in the configuration file.</p>
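<p>For reference, a minimal sketch of the first option, a standard DSC configuration data file (the file name and settings are illustrative):</p>
<pre tabindex="0"><code># ConfigurationData.psd1 - passed to the configuration via -ConfigurationData  
@{  
    AllNodes = @(  
        @{  
            NodeName = &#34;*&#34;  
            SiteName = &#34;WcfService&#34;  
        }  
    )  
}
</code></pre>
<p>The secrets would then stay out of source control, held as encrypted Release Management parameters instead.</p>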
]]></content:encoded>
    </item>
    <item>
      <title>Setting a build version in a JAR file from TFS build</title>
      <link>https://blog.richardfennell.net/posts/setting-a-build-version-in-a-jar-file-from-tfs-build/</link>
      <pubDate>Tue, 16 Dec 2014 13:36:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-a-build-version-in-a-jar-file-from-tfs-build/</guid>
      <description>&lt;p&gt;Whilst helping a Java based team (part of a larger organisation that used many sets of both Microsoft and non-Microsoft tools) to migrate from Subversion to TFS I had to tackle their Jenkins/Ant based builds.&lt;/p&gt;
&lt;p&gt;They could have stayed on Jenkins and switched to the &lt;a href=&#34;https://wiki.jenkins-ci.org/display/JENKINS/Team&amp;#43;Foundation&amp;#43;Server&amp;#43;Plugin&#34;&gt;TFS source provider&lt;/a&gt;, but they wanted to at least look at how TFS build would better allow them to  trace their builds against TFS work items.&lt;/p&gt;
&lt;p&gt;All went well, we set up a build controller and agent specifically for their team and installed Java onto it as well as the &lt;a href=&#34;https://visualstudiogallery.msdn.microsoft.com/2011f516-15a7-4f9a-8b86-1e0894a75739&#34;&gt;TFS build extensions&lt;/a&gt;. We were very quickly able to get our test Java project building on the new build system.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst helping a Java based team (part of a larger organisation that used many sets of both Microsoft and non-Microsoft tools) to migrate from Subversion to TFS I had to tackle their Jenkins/Ant based builds.</p>
<p>They could have stayed on Jenkins and switched to the <a href="https://wiki.jenkins-ci.org/display/JENKINS/Team&#43;Foundation&#43;Server&#43;Plugin">TFS source provider</a>, but they wanted to at least look at how TFS build would better allow them to  trace their builds against TFS work items.</p>
<p>All went well, we set up a build controller and agent specifically for their team and installed Java onto it as well as the <a href="https://visualstudiogallery.msdn.microsoft.com/2011f516-15a7-4f9a-8b86-1e0894a75739">TFS build extensions</a>. We were very quickly able to get our test Java project building on the new build system.</p>
<p>One feature that their old Ant scripts used was to store the build name/number into the Manifest of any JAR files created, a good plan as it is always good to know where something came from.</p>
<p>When asked how to do this with TFS build I thought ‘no problem, I will just use a <a href="http://msdn.microsoft.com/en-gb/library/hh850448.aspx">TFS build environment variable</a>’ and add something like the following</p>
<pre tabindex="0"><code>&lt;property environment=&#34;env&#34;/&gt; 

&lt;target name=&#34;jar&#34;&gt;  
        &lt;jar destfile=&#34;${basedir}/javasample.jar&#34; basedir=&#34;${basedir}/bin&#34;&gt;  
            &lt;manifest&gt;  
                &lt;attribute name=&#34;Implementation-Version&#34; value=&#34;${env.TF_BUILD_BUILDNUMBER}&#34; /&gt;  
            &lt;/manifest&gt;      
        &lt;/jar&gt;  
&lt;/target&gt;
</code></pre><p>But this did not work; I just saw the text ${env.TF_BUILD_BUILDNUMBER} in my manifest. Basically, the environment variable could not be resolved.</p>
<p>After a bit more of a think I realised the problem: the Ant/Maven build extensions for TFS are based on TFS 2008 style builds, and the build environment variables are a TFS 2012 and later feature, so of course they are not set.</p>
<p>A quick look in the automatically generated TFSBuild.proj file for the build showed that the MSBuild $(BuildNumber) was passed into the Ant script as a property, so it could be referenced in the Ant Jar target (note the brackets change from () to {})</p>
<pre tabindex="0"><code>&lt;target name=&#34;jar&#34;&gt;  
        &lt;jar destfile=&#34;${basedir}/javasample.jar&#34; basedir=&#34;${basedir}/bin&#34;&gt;  
            &lt;manifest&gt;  
                &lt;attribute name=&#34;Implementation-Version&#34; value=&#34;${BuildNumber}&#34; /&gt;  
            &lt;/manifest&gt;      
        &lt;/jar&gt;  
&lt;/target&gt;
</code></pre><p>Once this change was made I got the manifest I expected, including the build number:</p>
<pre tabindex="0"><code>Manifest-Version: 1.0  
Ant-Version: Apache Ant 1.9.4  
Created-By: 1.8.0_25-b18 (Oracle Corporation)  
Implementation-Version: JavaSample.Ant.Manual_20141216.7  
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Great book full of easily accessible tips to apply the concept of user stories to your team</title>
      <link>https://blog.richardfennell.net/posts/great-book-full-of-easily-accessible-tips-to-apply-the-concept-of-user-stories-to-your-team/</link>
      <pubDate>Thu, 04 Dec 2014 13:17:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/great-book-full-of-easily-accessible-tips-to-apply-the-concept-of-user-stories-to-your-team/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.amazon.co.uk/Fifty-Quick-Ideas-Improve-Stories/dp/0993088104/ref=as_sl_pc_ss_til?tag=buitwoonmypc-21&amp;amp;linkCode=w01&amp;amp;linkId=AYSQ7OIGICPH33HV&amp;amp;creativeASIN=0993088104&#34;&gt;&lt;img alt=&#34;Book cover&#34; loading=&#34;lazy&#34; src=&#34;https://m.media-amazon.com/images/I/51hYnA4nsAL._SX342_SY445_PQ25_.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;As with many concepts it is not the idea that is hard but its application. ‘&lt;a href=&#34;http://www.amazon.co.uk/Fifty-Quick-Ideas-Improve-Stories/dp/0993088104/ref=as_sl_pc_ss_til?tag=buitwoonmypc-21&amp;amp;linkCode=w01&amp;amp;linkId=AYSQ7OIGICPH33HV&amp;amp;creativeASIN=0993088104&#34;&gt;Fifty Quick Ideas to Improve Your User Stories’&lt;/a&gt; by &lt;a href=&#34;http://www.amazon.co.uk/Gojko-Adzic/e/B004P9W8G6/ref=dp_byline_cont_book_1&#34;&gt;Gojko Adzic&lt;/a&gt; and &lt;a href=&#34;http://www.amazon.co.uk/David-Evans/e/B00OM4JKQU/ref=dp_byline_cont_book_2&#34;&gt;David Evans&lt;/a&gt; provides some great tips to apply the concept of user stories to real world problems. It highlights where they work and where they don’t, and what you can do about it.&lt;/p&gt;
&lt;p&gt;I think this book is well worth a read for anyone, irrespective of their role in a team; its short chapters (usually a couple of pages per idea) mean it is easy to pick up and put down when you get a few minutes. Perfect for that commute.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.amazon.co.uk/Fifty-Quick-Ideas-Improve-Stories/dp/0993088104/ref=as_sl_pc_ss_til?tag=buitwoonmypc-21&amp;linkCode=w01&amp;linkId=AYSQ7OIGICPH33HV&amp;creativeASIN=0993088104"><img alt="Book cover" loading="lazy" src="https://m.media-amazon.com/images/I/51hYnA4nsAL._SX342_SY445_PQ25_.jpg"></a></p>
<p>As with many concepts it is not the idea that is hard but its application. ‘<a href="http://www.amazon.co.uk/Fifty-Quick-Ideas-Improve-Stories/dp/0993088104/ref=as_sl_pc_ss_til?tag=buitwoonmypc-21&amp;linkCode=w01&amp;linkId=AYSQ7OIGICPH33HV&amp;creativeASIN=0993088104">Fifty Quick Ideas to Improve Your User Stories’</a> by <a href="http://www.amazon.co.uk/Gojko-Adzic/e/B004P9W8G6/ref=dp_byline_cont_book_1">Gojko Adzic</a> and <a href="http://www.amazon.co.uk/David-Evans/e/B00OM4JKQU/ref=dp_byline_cont_book_2">David Evans</a> provides some great tips to apply the concept of user stories to real world problems. It highlights where they work and where they don’t, and what you can do about it.</p>
<p>I think this book is well worth a read for anyone, irrespective of their role in a team; its short chapters (usually a couple of pages per idea) mean it is easy to pick up and put down when you get a few minutes. Perfect for that commute.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can’t build SSDT projects in a TFS build</title>
      <link>https://blog.richardfennell.net/posts/cant-build-ssdt-projects-in-a-tfs-build/</link>
      <pubDate>Wed, 03 Dec 2014 13:24:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cant-build-ssdt-projects-in-a-tfs-build/</guid>
      <description>&lt;p&gt;Whilst building a new TFS build agent VM using our standard scripts I hit a problem that SSDT projects would not build, though they were fine on our existing agents. The error was&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets (513): The &amp;ldquo;SqlBuildTask&amp;rdquo; task failed unexpectedly.&lt;br&gt;
System.MethodAccessException: Attempt by method &amp;lsquo;Microsoft.Data.Tools.Schema.Sql.Build.SqlTaskHost.OnCreateCustomSchemaData(System.String, System.Collections.Generic.Dictionary`2&amp;lt;System.String,System.String&amp;gt;)&amp;rsquo; to access method &amp;lsquo;Microsoft.Data.Tools.Components.Diagnostics.SqlTracer.ShouldTrace(System.Diagnostics.TraceEventType)&amp;rsquo; failed.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;The problem was fixed by doing an update via the Visual Studio &amp;gt; Tools &amp;gt; Extensions and Updates. Once this was completed the build was fine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst building a new TFS build agent VM using our standard scripts I hit a problem that SSDT projects would not build, though they were fine on our existing agents. The error was</p>
<blockquote>
<p>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets (513): The &ldquo;SqlBuildTask&rdquo; task failed unexpectedly.<br>
System.MethodAccessException: Attempt by method &lsquo;Microsoft.Data.Tools.Schema.Sql.Build.SqlTaskHost.OnCreateCustomSchemaData(System.String, System.Collections.Generic.Dictionary`2&lt;System.String,System.String&gt;)&rsquo; to access method &lsquo;Microsoft.Data.Tools.Components.Diagnostics.SqlTracer.ShouldTrace(System.Diagnostics.TraceEventType)&rsquo; failed.</p></blockquote>
<p>The problem was fixed by doing an update via the Visual Studio &gt; Tools &gt; Extensions and Updates. Once this was completed the build was fine.</p>
<p><a href="http://stackoverflow.com/questions/25505887/visual-studio-2013-database-project-msbuild-error">Seems there may have been an issue with the Update 3 generation SSDT tools</a>, so older and newer versions seem OK. Our existing agents had already been patched.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Living with a DD-WRT virtual router – one month on</title>
      <link>https://blog.richardfennell.net/posts/living-with-a-dd-wrt-virtual-router-one-month-on/</link>
      <pubDate>Wed, 26 Nov 2014 12:09:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/living-with-a-dd-wrt-virtual-router-one-month-on/</guid>
      <description>&lt;h4 id=&#34;i-posted-a-month-or-so-ago-about-my-experiences-using-a-dd-wrt-router-with-hyper-v-well-i-have-been-living-with-it-over-a-month-how-has-it-been-going&#34;&gt;I posted a month or so ago about my ‘&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/10/07/Experiences-using-a-DD-WRT-router-with-Hyper-V.aspx&#34;&gt;Experiences using a DD-WRT router with Hyper-V&lt;/a&gt;’; I have now been living with it for over a month. How has it been going?&lt;/h4&gt;
&lt;p&gt;Like the &lt;a href=&#34;http://en.wikipedia.org/wiki/Curate%27s_egg&#34;&gt;curate’s egg, ‘good in parts’&lt;/a&gt;. It seemed OK for a while and then everything would get a bit slow, or even stop.&lt;/p&gt;
&lt;p&gt;Just as a reminder this is what I had ended up with&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_214.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_211.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In essence, a pair of virtual switches, one internal using DHCP on the DD-WRT virtual router, and a second one connected to an active external network (usually Ethernet, as &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/07/DHCP-does-not-seem-to-work-on-Ubuntu-for-wireless-based-Hyper-V-virtual-switches.aspx&#34;&gt;DHCP with virtual switches and WIFI in Hyper-V seem a very hit and miss affair&lt;/a&gt;).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h4 id="i-posted-a-month-or-so-ago-about-my-experiences-using-a-dd-wrt-router-with-hyper-v-well-i-have-been-living-with-it-over-a-month-how-has-it-been-going">I posted a month or so ago about my ‘<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/10/07/Experiences-using-a-DD-WRT-router-with-Hyper-V.aspx">Experiences using a DD-WRT router with Hyper-V</a>’; I have now been living with it for over a month. How has it been going?</h4>
<p>Like the <a href="http://en.wikipedia.org/wiki/Curate%27s_egg">curate’s egg, ‘good in parts’</a>. It seemed OK for a while and then everything would get a bit slow, or even stop.</p>
<p>Just as a reminder this is what I had ended up with</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_214.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_211.png" title="image"></a></p>
<p>In essence, a pair of virtual switches, one internal using DHCP on the DD-WRT virtual router, and a second one connected to an active external network (usually Ethernet, as <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/07/DHCP-does-not-seem-to-work-on-Ubuntu-for-wireless-based-Hyper-V-virtual-switches.aspx">DHCP with virtual switches and WIFI in Hyper-V seem a very hit and miss affair</a>).</p>
<p>From my Hyper-V VMs the virtual router seems to be fine; they all have a single network adaptor linked to the virtual switch that issues IP addresses via DHCP. The issues have been for the host operating system. I wanted to connect this to the internal virtual switch to allow easy access to my VMs (without the management complexity of punching holes in the router firewall), but when I did this I got inconsistent performance (made harder to diagnose due to moving house from a fast Virgin cable based Internet connection to a slow BT ADSL based link whose performance profile varies greatly based on the hour of the day. I was never sure if it was a problem with my router or BT’s service).</p>
<p>The main problem I saw was that it seemed the first time I accessed a site it was slow, but then was often OK. So a lookup issue, DNS?</p>
<p>Reaching back into my distant memory as a network engineer (early 90s, some IP but mostly IPX and NETBIOS) I suspected a routing or DNS look up issue. Routing you can do something about via routing tables and metrics, but DNS is harder to control with multiple network connections.</p>
<p>The best option to manage DNS appeared to be <a href="http://support.microsoft.com/kb/894564" title="http://support.microsoft.com/kb/894564">changing the binding order for my various physical and virtual network adaptors</a> so the virtual switches were the lowest priority.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_215.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_212.png" title="image"></a></p>
<p>This at least made most DNS requests go via physical devices.</p>
<p>Note: I also told the Virtual Network Switch adaptor on the host machine not to use the DNS settings provided by the virtual router, but this seemed to have little effect: when using <strong>nslookup</strong> it still picked the virtual router, until I changed the binding order.</p>
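<p>A quick way to check which DNS server is actually answering (the domain queried and the output shown here are just illustrative):</p>
<pre tabindex="0"><code>C:\&gt; nslookup www.bing.com  
Server:  UnKnown  
Address:  192.168.1.1  
</code></pre>
<p>If the address reported is the virtual router (192.168.1.1 in my setup) rather than a physical adaptor’s DNS server, the binding order still needs attention.</p>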
<p>On the routing front, I set the manual metric on IPv4 traffic via the virtual router adaptor to a large number, to make it the least likely route anywhere. Doing this should mean that only traffic to the internal 192.168.1.x network uses that adaptor.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_216.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_213.png" title="image"></a></p>
<p>This meant my routing table on my host operating system looks as follows when the system is working OK</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_217.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_214.png" title="image"></a></p>
<h3 id="outstanding-issues">Outstanding Issues</h3>
<p><strong>Routing</strong></p>
<p>I did see some problems if the route via the virtual switch appeared first in the list; this can happen when you change WIFI hotspot. The fix is to delete the unwanted route (0.0.0.0 to 192.168.1.1)</p>
<blockquote>
<p>route delete 0.0.0.0 MASK 0.0.0.0 192.168.1.1</p></blockquote>
<p>But most of the time fixing the binding order seemed enough, so I did not need to do this.</p>
<p><strong>External DHCP Refresh</strong></p>
<p>If you swap networks, going from work to home, your external network will have a different IP address. You have to restart the router VM (or manually renew DHCP) to get a new address.</p>
<p><strong>DHCP and WIFI</strong></p>
<p>There is still the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/10/07/Experiences-using-a-DD-WRT-router-with-Hyper-V.aspx">problem getting DHCP working over Hyper-V virtual switches</a>. You can do some <a href="http://www.hurryupandwait.io/blog/running-an-ubuntu-guest-on-hyper-v-assigned-an-ip-via-dhcp-over-a-wifi-connection">tricks with bridging</a>, but it is not great.</p>
<p>The solution I have used is Hyper-V checkpoints on my router VM: one set for DHCP and another with the static IP settings for my home network. Again not great, but workable for me most of the time. I am happier editing the router VM rather than many guest VMs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why am I getting ‘cannot access outlook.ost’ issues with Office 365 Lync?</title>
      <link>https://blog.richardfennell.net/posts/why-am-i-getting-cannot-access-outlook-ost-issues-with-office-365-lync/</link>
      <pubDate>Mon, 24 Nov 2014 17:13:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-am-i-getting-cannot-access-outlook-ost-issues-with-office-365-lync/</guid>
      <description>&lt;p&gt;We use O365 to provide Lync messaging. So when I rebuilt my PC I thought I needed to re-install the client; so I logged into the O365 web site and selected the install option. Turns out this was a mistake. I had Office 2013 installed, so I already had the client, I just had not noticed.&lt;/p&gt;
&lt;p&gt;If you do install the O365 Lync client (as well as the Office 2013 one) you get file access errors reported with your outlook.ost files. If this occurs, just un-install the O365 client and use the one in Office 2013; the errors go away.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We use O365 to provide Lync messaging. So when I rebuilt my PC I thought I needed to re-install the client; so I logged into the O365 web site and selected the install option. Turns out this was a mistake. I had Office 2013 installed, so I already had the client, I just had not noticed.</p>
<p>If you do install the O365 Lync client (as well as the Office 2013 one) you get file access errors reported with your outlook.ost files. If this occurs, just un-install the O365 client and use the one in Office 2013; the errors go away.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS announcements roundup</title>
      <link>https://blog.richardfennell.net/posts/tfs-announcements-roundup/</link>
      <pubDate>Sat, 22 Nov 2014 12:18:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-announcements-roundup/</guid>
      <description>&lt;p&gt;There have been a load of announcements about TFS, VSO and Visual Studio in general in the past couple of weeks, mostly at the &lt;a href=&#34;http://channel9.msdn.com/events/Visual-Studio/Connect-event-2014&#34;&gt;Connect() event&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Just to touch on a few items&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://go.microsoft.com/fwlink/?linkid=390465&#34;&gt;Visual Studio and Team Foundation Server 2013 Update 4&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;The first public &lt;a href=&#34;http://go.microsoft.com/fwlink/?linkid=517106&#34;&gt;preview of Visual Studio 2015 and .NET 2015&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The .NET Framework is going &lt;a href=&#34;https://github.com/dotnet/corefx&#34;&gt;open source&lt;/a&gt; and cross platform&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://go.microsoft.com/fwlink/?LinkId=518338&#34;&gt;Visual Studio Community 2013&lt;/a&gt; – a new edition of Visual Studio that combines everything in all the Express products and adds extensibility support.  &lt;a href=&#34;http://www.visualstudio.com/news/vs2013-community-vs&#34;&gt;Learn more…&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.visualstudio.com/news/news-overview-vs&#34;&gt;New improvements to Visual Studio Online&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Release Management as service on VSO&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you have not had a chance to have a look at these features try the &lt;a href=&#34;http://channel9.msdn.com/events/Visual-Studio/Connect-event-2014&#34;&gt;videos of all the sessions on Channel9, the keynotes are a good place to start&lt;/a&gt;. Also look, as usual, at the various &lt;a href=&#34;http://blogs.msdn.com/b/bharry/&#34;&gt;posts on Brian Harry’s Blog&lt;/a&gt;. It is a time of rapid change in ALM tooling.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There have been a load of announcements about TFS, VSO and Visual Studio in general in the past couple of weeks, mostly at the <a href="http://channel9.msdn.com/events/Visual-Studio/Connect-event-2014">Connect() event</a>.</p>
<p>Just to touch on a few items</p>
<ul>
<li><a href="http://go.microsoft.com/fwlink/?linkid=390465">Visual Studio and Team Foundation Server 2013 Update 4</a>. </li>
<li>The first public <a href="http://go.microsoft.com/fwlink/?linkid=517106">preview of Visual Studio 2015 and .NET 2015</a></li>
<li>The .NET Framework is going <a href="https://github.com/dotnet/corefx">open source</a> and cross platform</li>
<li><a href="http://go.microsoft.com/fwlink/?LinkId=518338">Visual Studio Community 2013</a> – a new edition of Visual Studio that combines everything in all the Express products and adds extensibility support.  <a href="http://www.visualstudio.com/news/vs2013-community-vs">Learn more…</a></li>
<li><a href="http://www.visualstudio.com/news/news-overview-vs">New improvements to Visual Studio Online</a></li>
<li>Release Management as service on VSO</li>
</ul>
<p>If you have not had a chance to have a look at these features try the <a href="http://channel9.msdn.com/events/Visual-Studio/Connect-event-2014">videos of all the sessions on Channel9, the keynotes are a good place to start</a>. Also look, as usual, at the various <a href="http://blogs.msdn.com/b/bharry/">posts on Brian Harry’s Blog</a>. It is a time of rapid change in ALM tooling.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Errors running tests via TCM as part of a Release Management pipeline</title>
      <link>https://blog.richardfennell.net/posts/errors-running-tests-via-tcm-as-part-of-a-release-management-pipeline/</link>
      <pubDate>Fri, 21 Nov 2014 11:48:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/errors-running-tests-via-tcm-as-part-of-a-release-management-pipeline/</guid>
      <description>&lt;p&gt;Whilst getting integration tests running as part of a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx&#34;&gt;Release Management pipeline within Lab Management&lt;/a&gt; I hit a problem that TCM triggered tests failed as the tool claimed it could not access the TFS build drops location, and that no .TRX (test results) files were being produced. This was strange as it used to work (the RM system had worked when it was 2013.2; it seems to have started to be an issue with 2013.3 and 2013.4, but this might be a coincidence).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst getting integration tests running as part of a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">Release Management pipeline within Lab Management</a> I hit a problem that TCM triggered tests failed as the tool claimed it could not access the TFS build drops location, and that no .TRX (test results) files were being produced. This was strange as it used to work (the RM system had worked when it was 2013.2; it seems to have started to be an issue with 2013.3 and 2013.4, but this might be a coincidence).</p>
<p>The issue was twofold..</p>
<h3 id="permissionspath-problems-accessing-the-build-drops-location">Permissions/Path Problems accessing the build drops location</h3>
<p>The build drops location is passed into the component using the argument <strong>$(PackageLocation)</strong>. This is pulled from the component properties; it is the TFS provided build drop path with a \ appended on the end.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_213.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_210.png" title="image"></a> </p>
<p>Note that the \ in the text box is there because the textbox cannot be empty. It tells the component to use the root of the drops location. This is the issue: when you are in a network isolated environment and have had to use NET USE to authenticate with the TFS drops share, the trailing \ causes a permissions error (it might occur in other scenarios too, I have not tested it).</p>
<p>Removing the trailing slash, or adding a . (period) after the \, fixes the path issue, so:</p>
<ul>
<li>\\server\Drops\Services.Release\Services.Release_1.0.227.19779        -  works</li>
<li>\\server\Drops\Services.Release\Services.Release_1.0.227.19779\      - fails </li>
<li>\\server\Drops\Services.Release\Services.Release_1.0.227.19779\.     - works </li>
</ul>
<p>So the answer is to add a . (period) in the pipeline workflow component, so the build location is <strong>$(PackageLocation).</strong> as opposed to <strong>$(PackageLocation)</strong>, or to edit the PS1 file that is run so it validates the path and strips out any trailing slashes. I chose the latter, making the edit:</p>
<pre tabindex="0"><code>if ([string]::IsNullOrEmpty($BuildDirectory))
    {
        $buildDirectoryParameter = [string]::Empty
    } else
    {
        # make sure we remove any trailing slashes as they cause permission issues
        $BuildDirectory = $BuildDirectory.Trim()
        while ($BuildDirectory.EndsWith(&#34;\&#34;))
        {
            $BuildDirectory = $BuildDirectory.Substring(0,$BuildDirectory.Length-1)
        }
        $buildDirectoryParameter = &#34;/builddir:&#34;&#34;$BuildDirectory&#34;&#34;&#34;
    }
</code></pre>
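<p>The same trimming logic can be sketched outside PowerShell; here is a minimal Python illustration (the function name and sample paths are mine, not part of the pipeline scripts):</p>

```python
# Mirror of the PowerShell edit above: trim whitespace, then remove any
# trailing backslashes from the drops path, as they cause permission issues.
def clean_build_dir(build_directory: str) -> str:
    build_directory = build_directory.strip()
    # strip trailing backslashes one at a time, like the EndsWith loop
    while build_directory.endswith("\\"):
        build_directory = build_directory[:-1]
    return build_directory
```

<p>A path such as \\server\Drops\Build_1.0\ then comes back as \\server\Drops\Build_1.0, which TCM accepts.</p>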
<h3 id="cannot-find-the-trx-file-even-though-it-is-present">Cannot find the TRX file even though it is present</h3>
<p>Once the tests were running I still had an issue: even though TCM had run the tests, produced a .TRX file and published its contents back to TFS, the script claimed the file did not exist and so could not pass the test results back to Release Management.</p>
<p>The issue was the call being used to check for the file&rsquo;s existence.</p>
<blockquote>
<p><strong>[System.IO.File]::Exists($testRunResultsTrxFileName)</strong></p></blockquote>
<p>As soon as I swapped to the recommended PowerShell way to check for files</p>
<blockquote>
<p><strong>Test-Path($testRunResultsTrxFileName)</strong></p></blockquote>
<p>it all worked.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Passed the Scaled Agile Framework assessment</title>
      <link>https://blog.richardfennell.net/posts/passed-the-scaled-agile-framework-assessment/</link>
      <pubDate>Fri, 21 Nov 2014 11:15:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/passed-the-scaled-agile-framework-assessment/</guid>
      <description>&lt;p&gt;Whilst over at the MVP Summit I had the chance to do a &lt;a href=&#34;http://scaledagileframework.com/&#34;&gt;Scaled Agile Framework&lt;/a&gt; workshop. I am pleased to say I passed the assessment.&lt;/p&gt;
&lt;p&gt;Certainly has some interesting ideas in scaling Agile to larger teams working on a single product stream where a simple Scrum of Scrums is not enough.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://www.dessler.de/media/2013/05/scaled_agile_framework.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst over at the MVP Summit I had the chance to do a <a href="http://scaledagileframework.com/">Scaled Agile Framework</a> workshop. I am pleased to say I passed the assessment.</p>
<p>Certainly has some interesting ideas in scaling Agile to larger teams working on a single product stream where a simple Scrum of Scrums is not enough.</p>
<p><img loading="lazy" src="http://www.dessler.de/media/2013/05/scaled_agile_framework.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>‘Test run must be created with at least one test case’ error when using TCM</title>
      <link>https://blog.richardfennell.net/posts/test-run-must-be-created-with-at-least-one-test-case-error-when-using-tcm/</link>
      <pubDate>Thu, 20 Nov 2014 22:30:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/test-run-must-be-created-with-at-least-one-test-case-error-when-using-tcm/</guid>
      <description>&lt;p&gt;I have been setting up some integration tests as part of a release pipeline. I am using TCM.EXE to trigger tests from the command line. Something along the lines&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;TCM.exe run /create /title:&amp;#34;EventTests&amp;#34; /collection:&amp;#34;http://myserver:8080/tfs&amp;#34; /teamproject:myteamproject /testenvironment:&amp;#34;Integration&amp;#34; /builddir:&amp;#34;\\server\Drops\Build_1.0.226.1975&amp;#34;  /include /planid:26989  /suiteid:27190 /configid:1
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I kept getting the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;‘A test run must be created with at least one test case’&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Strange thing was my test suite did contain a number of tests, &lt;a href=&#34;http://stackoverflow.com/questions/11797341/a-test-run-must-be-created-with-at-least-one-test-case&#34;&gt;and they were marked as active.&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been setting up some integration tests as part of a release pipeline. I am using TCM.EXE to trigger tests from the command line. Something along the lines</p>
<pre tabindex="0"><code>TCM.exe run /create /title:&#34;EventTests&#34; /collection:&#34;http://myserver:8080/tfs&#34; /teamproject:myteamproject /testenvironment:&#34;Integration&#34; /builddir:&#34;\\server\Drops\Build_1.0.226.1975&#34;  /include /planid:26989  /suiteid:27190 /configid:1
</code></pre><p>I kept getting the error</p>
<blockquote>
<p>‘A test run must be created with at least one test case’</p></blockquote>
<p>Strange thing was my test suite did contain a number of tests, <a href="http://stackoverflow.com/questions/11797341/a-test-run-must-be-created-with-at-least-one-test-case">and they were marked as active.</a></p>
<p>The issue was actually the configid; it was wrong, and there is no easy way to check these IDs from the UI. Use the following command to get a list of valid IDs:</p>
<pre tabindex="0"><code>TCM.exe configs /list   /collection:&#34;http://myserver:8080/tfs&#34; /teamproject:myteamproject
</code></pre><blockquote>
<p>Id        Name<br>
--------- ---------------------------------------------------<br>
35        Windows 8.1 ARM<br>
36        Windows 8.1 64bit<br>
37        Windows 8.1 ATOM<br>
38        Default configuration created @ 11/03/2014 12:58:15<br>
39        Windows Phone 8.1</p></blockquote>
<p>You can now use the correct ID, rather than one you had to guess.</p>
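<p>If you are scripting the pipeline anyway, you can look the ID up from the captured listing rather than hard-coding it. A minimal Python sketch (the function, variable names and sample listing are mine, for illustration only):</p>

```python
# Given the captured text output of `TCM.exe configs /list`, return the
# numeric Id for a named configuration. The sample mirrors the listing above.
listing = """Id        Name
--------- ---------------
35        Windows 8.1 ARM
36        Windows 8.1 64bit
39        Windows Phone 8.1
"""

def config_id(listing: str, name: str) -> int:
    # skip the header and separator rows, then split the Id from the Name
    for line in listing.splitlines()[2:]:
        ident, _, cfg_name = line.partition(" ")
        if cfg_name.strip() == name:
            return int(ident)
    raise KeyError(name)
```
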
]]></content:encoded>
    </item>
    <item>
      <title>Linking VSO  to your Azure Subscription and Azure Active Directory</title>
      <link>https://blog.richardfennell.net/posts/linking-vso-to-your-azure-subscription-and-azure-active-directory/</link>
      <pubDate>Thu, 20 Nov 2014 12:10:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/linking-vso-to-your-azure-subscription-and-azure-active-directory/</guid>
      <description>&lt;p&gt;I have a few old Visual Studio Online (VSO) accounts (dating back to TFSPreview.com days). We use them to collaborate with third parties, and it was long overdue that I tidied them up. A problem historically has been that all access to VSO has been via Microsoft Accounts (LiveID, MSA); these are hard to police, especially if users mix personal and business ones.&lt;/p&gt;
&lt;p&gt;The solution is to link your VSO instance to an Azure Active Directory (AAD). This means that only users listed in the AAD can connect to the VSO instance. As this AAD can be federated to an on-prem company AD it means that the VSO users can be either&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have a few old Visual Studio Online (VSO) accounts (dating back to TFSPreview.com days). We use them to collaborate with third parties, and it was long overdue that I tidied them up. A problem historically has been that all access to VSO has been via Microsoft Accounts (LiveID, MSA); these are hard to police, especially if users mix personal and business ones.</p>
<p>The solution is to link your VSO instance to an Azure Active Directory (AAD). This means that only users listed in the AAD can connect to the VSO instance. As this AAD can be federated to an on-prem company AD it means that the VSO users can be either</p>
<ul>
<li>Company domain users</li>
<li>MSA accounts specifically added to AAD</li>
</ul>
<p>Either way it gives the AAD administrator an easy way to manage access to VSO. A user with an MSA, even if an administrator in VSO, cannot add any unknown users to VSO. <a href="http://www.visualstudio.com/get-started/manage-organization-access-for-your-account-vs">For details see MSDN.</a> All straightforward, you would think, but I had a few issues.</p>
<p>The problem was I had set up my VSO accounts using an MSA in the form <a href="mailto:user@mycompany.co.uk">user@mycompany.co.uk</a>, which was also linked to my MSDN subscription. As part of the VSO/AAD linking process I needed to add the MSA <a href="mailto:user@mycompany.co.uk">user@mycompany.co.uk</a> to our AAD, but I could not. The AAD was set up for federation of accounts in the mycompany.com domain, so you would have thought I would be OK, but back in our on-prem AD (the one it was federated to) I had <a href="mailto:user@mycompany.co.uk">user@mycompany.co.uk</a> as an email alias for <a href="mailto:user@mycompany.com">user@mycompany.com</a>. This blocked the adding of the user to AAD, hence I could not link VSO to Azure.</p>
<p>The answer was to</p>
<ol>
<li>Add another MSA account to the VSO instance, one unknown to our AD even as an alias e.g. <a href="mailto:user@live.co.uk">user@live.co.uk</a> </li>
<li>Make this user the owner of the VSO instance.</li>
<li>Add the <a href="mailto:user@live.co.uk">user@live.co.uk</a> MSA to the AAD directory</li>
<li>Make them an Azure Subscription administrator.</li>
<li>Login to the Azure portal as this MSA; once this was done the VSO instance could be linked to the AAD directory.</li>
<li>I could then make an AAD user (<a href="mailto:user@mycompany.com">user@mycompany.com</a>) a VSO user and then the VSO owner</li>
<li>The <a href="mailto:user@live.co.uk">user@live.co.uk</a> MSA could then be deleted from VSO and AAD</li>
<li>I could then login to VSO as my <a href="mailto:user@mycompany.com">user@mycompany.com</a> AAD account, as opposed to the old <a href="mailto:user@mycompany.co.uk">user@mycompany.co.uk</a> MSA account</li>
</ol>
<p>Simple wasn’t it!</p>
<p>We still had one problem, and that was that <a href="mailto:user@mycompany.com">user@mycompany.com</a> was showing as a basic user in VSO; if you tried to set it to MSDN eligible, it flipped back to basic.</p>
<p>The problem here was we had not associated the AAD account <a href="mailto:user@mycompany.com">user@mycompany.com</a> with the MSA account <a href="mailto:user@mycompany.co.uk">user@mycompany.co.uk</a> in the MSDN portal (see <a href="http://www.visualstudio.com/get-started/link-msdn-subscription-to-organizational-account-vs">MSDN</a>).</p>
<p>Once this was done it all worked as expected, VSO picking up that my AAD account had a full MSDN subscription.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Video card issues during install of Windows 8.1 causes very strange issues</title>
      <link>https://blog.richardfennell.net/posts/video-card-issues-during-install-of-windows-8-1-causes-very-strange-issues/</link>
      <pubDate>Wed, 19 Nov 2014 17:54:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/video-card-issues-during-install-of-windows-8-1-causes-very-strange-issues/</guid>
      <description>&lt;p&gt;Whilst repaving my Lenovo W520 I had some &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/15/Issues-repaving-the-Lenovo-W520-with-Windows-81-again.aspx&#34;&gt;issues with video cards&lt;/a&gt;. During the initial setup of Windows the PC hung. I rebooted, re-enabled the problematic video card in the BIOS and thought all was OK. The installation appeared to pick up where it left off. However, I started to get some very strange problems.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;My LiveID settings did not sync from my other Windows 8.1 devices&lt;/li&gt;
&lt;li&gt;I could not change my profile picture&lt;/li&gt;
&lt;li&gt;I could not change my desktop background&lt;/li&gt;
&lt;li&gt;I could not change my screen saver&lt;/li&gt;
&lt;li&gt;And most importantly Windows Update would not run&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I found a few posts that said all of these problems could be seen when Windows was not activated, but that was not the issue for me: it showed as being activated, and changing the product key had no effect.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst repaving my Lenovo W520 I had some <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/15/Issues-repaving-the-Lenovo-W520-with-Windows-81-again.aspx">issues with video cards</a>. During the initial setup of Windows the PC hung. I rebooted, re-enabled the problematic video card in the BIOS and thought all was OK. The installation appeared to pick up where it left off. However, I started to get some very strange problems.</p>
<ul>
<li>My LiveID settings did not sync from my other Windows 8.1 devices</li>
<li>I could not change my profile picture</li>
<li>I could not change my desktop background</li>
<li>I could not change my screen saver</li>
<li>And most importantly Windows Update would not run</li>
</ul>
<p>I found a few posts that said all of these problems could be seen when Windows was not activated, but that was not the issue for me: it showed as being activated, and changing the product key had no effect.</p>
<p>In the end I re-paved my PC again, making sure my video cards were correctly enabled so there was no hanging, and this time I seem to have a good Windows installation</p>
]]></content:encoded>
    </item>
    <item>
      <title>Issues repaving the Lenovo W520 with Windows 8.1 - again</title>
      <link>https://blog.richardfennell.net/posts/issues-repaving-the-lenovo-w520-with-windows-8-1-again/</link>
      <pubDate>Sat, 15 Nov 2014 14:37:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/issues-repaving-the-lenovo-w520-with-windows-8-1-again/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 19th Nov 2014&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Every few months I find a PC needs to be re-paved – just too much beta code has accumulated. I reached this point again on my main 4 year old Lenovo W520 recently. Yes it is getting on a bit in computer years but it does the job; the keyboard is far nicer than the W530 or W540’s we have and until an ultrabook is shipped with 16Gb of memory (I need local VMs, too many places I go to don’t allow me to get to VMs on Azure) I am keeping it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 19th Nov 2014</strong></p>
<p>Every few months I find a PC needs to be re-paved – just too much beta code has accumulated. I reached this point again on my main 4 year old Lenovo W520 recently. Yes it is getting on a bit in computer years but it does the job; the keyboard is far nicer than the W530 or W540’s we have and until an ultrabook is shipped with 16Gb of memory (I need local VMs, too many places I go to don’t allow me to get to VMs on Azure) I am keeping it.</p>
<p>I have posted in the past about the issue with the W520 (or any laptop that uses the Nvidia Optimus system), well that struck again, with a slight twist to confuse me.</p>
<p>Our IT team have moved to System Center to give a self-provisioning system to our staff, so I …</p>
<ul>
<li>Connected my PC (that had Windows 8.1 on it) to the LAN with Ethernet</li>
<li>Booted using PXE boot (pressed the blue ThinkVantage button, then F12 to pick the boot device)</li>
<li>As the PC was registered with our System Center it found a boot image, reformatted my disk and loaded our standard Windows 8.1 image</li>
<li>It rebooted and then it hung….</li>
</ul>
<p>It was the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/14/first-try-with-windows8-and-it-won-t-boot.aspx">old video card issue</a>. The W520 has an Intel GPU on the i7 CPU and also a separate Nvidia Quadro GPU. Previously I had the Intel GPU disabled in the BIOS, as I have found that having both enabled makes it very hard to connect to a projector when presenting (remember, though, that you do need both GPUs enabled if you wish to use two external monitors and the laptop display, but I don’t do this). However, you do need the Intel GPU to install Windows. The problem is Windows setup gets confused if it just sees the Nvidia GPU for some reason. You would expect it to treat it as basic VGA until it gets drivers, but it just locks.</p>
<ul>
<li>So I rebooted the PC, enabled the Intel GPU in the BIOS (leaving the Nvidia enabled too) and Windows setup picked up where it left off; I thought I had rebuilt my PC.</li>
</ul>
<p>Even with the problems this was a very quick way to get a domain-joined PC. I then started to install the applications using a mixture of System Center Software Center and <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/11/05/Upgrading-my-Lenovo-W520-to-Windows-81.aspx">Chocolatey</a>.</p>
<p>However, I knew I would hit the same problem with projectors, so I went back into the BIOS and disabled the Intel GPU. The PC booted fine, worked for a minute or two, then hung. This was strange, as this same configuration had been working with Windows 8.1 before the re-format!</p>
<p>So I re-enabled the Intel GPU, and all seemed OK, until I tried to use Visual Studio 2013. This loaded OK, but crashed within a few seconds. The error log showed</p>
<pre tabindex="0"><code>Faulting application name: devenv.exe, version: 12.0.30723.0, time stamp: 0x53cf6f00  
Faulting module name: igdumd32.dll_unloaded, version: 9.17.10.3517, time stamp: 0x532b0b5b
</code></pre><p>The igdumd32.dll is an Intel driver, so I disabled the Intel adaptor, this time via Admin Tools &gt; Computer Manager &gt; Device Manager. Visual Studio now loaded OK. I found I could re-enable the Intel GPU after Visual Studio loaded without issue, so the problem was something to do with the extended load process.</p>
<p>So I had a usable system, but still had problems when using a projector.</p>
<p>The solution in the end was simple – remove the Intel Drivers</p>
<ul>
<li>In Admin Tools &gt; Computer Manager &gt; Device Manager delete the Intel GPU – Select the option to delete the drivers too</li>
<li>Reboot the PC and in BIOS disable the integrated Intel GPU</li>
<li>When the PC reboots it will just use the Nvidia GPU</li>
</ul>
<p>The key here is to delete the Intel drivers; the simple fact of their presence, whether running or not, causes the problems, either for the operating system or for Visual Studio, depending on your BIOS settings.</p>
<p><strong>Updated 19th Nov 2014</strong></p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/11/19/Video-card-issues-during-install-of-Windows-81-causes-very-strange-issues.aspx">Turns out my repave had other issues</a>, the issue with the Intel drivers during initial Windows setup had corrupted the OS, I had to start again. On this second attempt I got different results. I found <strong>I DID NOT</strong> have to remove the Intel drivers. I just needed to disable the Intel GPU in the BIOS to get my laptop working OK with projectors</p>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft Connect() event on the 12 and 13 November</title>
      <link>https://blog.richardfennell.net/posts/microsoft-connect-event-on-the-12-and-13-november/</link>
      <pubDate>Mon, 10 Nov 2014 14:48:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-connect-event-on-the-12-and-13-november/</guid>
      <description>&lt;p&gt;Microsoft are running a free web-based event &lt;a href=&#34;http://blogs.msdn.com/b/visualstudio/archive/2014/11/10/save-the-date-connect-november-12-amp-13-is-almost-here.aspx&#34;&gt;Connect()&lt;/a&gt; on the 12th and 13th of November. Should be well worth a watch to see the planned direction of developer tooling.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Microsoft are running a free web-based event <a href="http://blogs.msdn.com/b/visualstudio/archive/2014/11/10/save-the-date-connect-november-12-amp-13-is-almost-here.aspx">Connect()</a> on the 12th and 13th of November. Should be well worth a watch to see the planned direction of developer tooling.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot see a TFS drops location from inside a network isolated environment for Release Management</title>
      <link>https://blog.richardfennell.net/posts/cannot-see-a-tfs-drops-location-from-inside-a-network-isolated-environment-for-release-management/</link>
      <pubDate>Fri, 31 Oct 2014 15:21:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-see-a-tfs-drops-location-from-inside-a-network-isolated-environment-for-release-management/</guid>
      <description>&lt;p&gt;I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx&#34;&gt;posted before about using Release Management with Lab Management network isolation&lt;/a&gt;. The key is that you must issue a NET USE command at the start of the pipeline to allow the VMs in the isolated environment to see the TFS drops location.&lt;/p&gt;
&lt;p&gt;I hit a problem today that a build failed even though I had issued the NET USE.&lt;/p&gt;
&lt;p&gt;I got the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Package location &amp;lsquo;\\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.5.178.19130\&amp;rsquo; does not exist or Deployer user does not have access.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">posted before about using Release Management with Lab Management network isolation</a>. The key is that you must issue a NET USE command at the start of the pipeline to allow the VMs in the isolated environment to see the TFS drops location.</p>
<p>I hit a problem today that a build failed even though I had issued the NET USE.</p>
<p>I got the error</p>
<blockquote>
<p>Package location &lsquo;\\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.5.178.19130\&rsquo; does not exist or Deployer user does not have access.</p></blockquote>
<p>Turns out the problem was I had issued the NET USE as <code>\\store.blackmarble.co.uk\drops</code> not <code>\\store\drops</code>; it is vital the names match up. Once I changed my NET USE to match, all was OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Ordering rows that use the format 1.2.3 in a SQL query</title>
      <link>https://blog.richardfennell.net/posts/ordering-rows-that-use-the-format-1-2-3-in-a-sql-query/</link>
      <pubDate>Thu, 30 Oct 2014 11:31:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ordering-rows-that-use-the-format-1-2-3-in-a-sql-query/</guid>
      <description>&lt;p&gt;Whilst working on a SSRS based report I hit a sort order problem. Entries in a main report tablix needed to be sorted by their &lt;em&gt;OrderNumber&lt;/em&gt;, but this was in the form 1.2.3, so neither a numeric nor an alpha sort gave the correct order: a numeric sort fails as 1.2.3 is not a number, and an alpha sort works but gives the incorrect order 1.3, 1.3.1, 1.3.10, 1.3.11, 1.3.12, 1.3.2.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst working on a SSRS based report I hit a sort order problem. Entries in a main report tablix needed to be sorted by their <em>OrderNumber</em>, but this was in the form 1.2.3, so neither a numeric nor an alpha sort gave the correct order: a numeric sort fails as 1.2.3 is not a number, and an alpha sort works but gives the incorrect order 1.3, 1.3.1, 1.3.10, 1.3.11, 1.3.12, 1.3.2.</p>
<p>When I checked the underlying SQL it turned out the <em>OrderNumber</em> was being generated; it was not a table column. The raw data was in a single table that contained all the leaf nodes in the hierarchy, and the returned data was built by a SPROC using a recursive call.</p>
<p>The solution was to also calculate a <em>SortOrder</em> as well as the <em>OrderNumber</em>. I did this using a Power function on each block of the <em>OrderNumber</em> and added the results together. In the code shown below we can have any number of entries in the first block and up to 999 entries in the second or third block. You could have more by altering the Power function parameters.</p>
<pre tabindex="0"><code>declare @EntryID as nvarchar(50) = &#39;ABC1234&#39;;

WITH SummaryList AS
    (
        SELECT
        NI.ItemID,
        NI.CreationDate,
        &#39;P&#39; as ParentOrChild,
        cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as nvarchar(100)) as OrderNumber,
        cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) * Power(10,6) as int) as SortOrder
        FROM dbo.NotebookItem AS NI
        WHERE NI.ParentID IS NULL AND NI.EntryID = @EntryID

        UNION ALL

        SELECT
        NI.ItemID,
        NI.CreationDate,
        &#39;L&#39; as ParentOrChild,
        cast(SL.OrderNumber + &#39;.&#39; + cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as nvarchar(100)) as nvarchar(100)) as OrderNumber,
        SL.SortOrder + (cast(ROW_NUMBER() OVER(ORDER BY NI.CreationDate) as int) * power(10, 6 - (3 * LEN(REPLACE(SL.OrderNumber, &#39;.&#39;, &#39;&#39;))))) as SortOrder
        FROM dbo.NotebookItem AS NI
        INNER JOIN SummaryList as SL
            ON NI.ParentID = SL.ItemID
    )

    SELECT
        SL.ItemID,
        SL.ParentOrChild,
        SL.CreationDate,
        SL.OrderNumber,
        SL.SortOrder
    FROM SummaryList AS SL
    ORDER BY SL.SortOrder
</code></pre>
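<p>The weighting arithmetic can also be illustrated outside SQL. This small Python sketch (sample values are mine) shows why an alphabetic sort misorders the dotted numbers and how the power-of-ten key fixes it:</p>

```python
# Each dotted block holds up to 999 entries, so block i (0-based) is
# weighted by 10 ** (6 - 3 * i), matching the SQL Power(10, ...) terms.
def sort_order(order_number: str) -> int:
    blocks = [int(b) for b in order_number.split(".")]
    return sum(b * 10 ** (6 - 3 * i) for i, b in enumerate(blocks))

order_numbers = ["1.3", "1.3.1", "1.3.10", "1.3.11", "1.3.12", "1.3.2"]

# A plain string sort leaves 1.3.10 before 1.3.2, the incorrect order
# described above; sorting on the integer key gives the intended order.
alpha_sorted = sorted(order_numbers)
key_sorted = sorted(order_numbers, key=sort_order)
```
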
<p>This query returns the following, with all the data correctly ordered:</p>
<table>
<thead>
<tr><th>ItemID</th><th>ParentOrChild</th><th>CreationDate</th><th>OrderNumber</th><th>SortOrder</th></tr>
</thead>
<tbody>
<tr><td>22F72F9E-E34C-45AB-A4D9-C7D9B742CD2C</td><td>P</td><td>29 October 2014</td><td>1</td><td>1000000</td></tr>
<tr><td>E0B74D61-4B69-46B0-B0A9-F08BE2886675</td><td>L</td><td>29 October 2014</td><td>1.1</td><td>1001000</td></tr>
<tr><td>CB90233C-4940-4312-81D1-A26CB540DF2A</td><td>L</td><td>29 October 2014</td><td>1.2</td><td>1002000</td></tr>
<tr><td>35CCC2A1-E00F-43C6-9CB3-732342EE18DA</td><td>L</td><td>29 October 2014</td><td>1.3</td><td>1003000</td></tr>
<tr><td>7A920ABE-A2E2-4CF1-B36E-DE177A7B8681</td><td>L</td><td>29 October 2014</td><td>1.3.1</td><td>1003001</td></tr>
<tr><td>C5E863A1-5A92-4F64-81C6-6946146F2ABA</td><td>L</td><td>29 October 2014</td><td>1.3.2</td><td>1003002</td></tr>
<tr><td>23D89CFF-C9A3-405E-A7EE-7CAACCA58CC2</td><td>L</td><td>29 October 2014</td><td>1.3.3</td><td>1003003</td></tr>
<tr><td>CE4F9F6B-3A58-4F78-9C1F-4780883F6995</td><td>L</td><td>29 October 2014</td><td>1.3.4</td><td>1003004</td></tr>
<tr><td>8B2A137F-C311-419A-8812-76E87D8CFA40</td><td>L</td><td>29 October 2014</td><td>1.3.5</td><td>1003005</td></tr>
<tr><td>F8487463-302E-4225-8A06-7C8CDCC23B45</td><td>L</td><td>29 October 2014</td><td>1.3.6</td><td>1003006</td></tr>
<tr><td>D365A402-D3CC-4242-B1B9-356FB41AABC1</td><td>L</td><td>29 October 2014</td><td>1.3.7</td><td>1003007</td></tr>
<tr><td>DFD4D688-080C-4FF0-B1D0-EBE63B6F99FD</td><td>L</td><td>29 October 2014</td><td>1.3.8</td><td>1003008</td></tr>
<tr><td>272A46C6-E326-47E8-AEE4-952AF746A866</td><td>L</td><td>29 October 2014</td><td>1.3.9</td><td>1003009</td></tr>
<tr><td>F073AFFA-F9A1-46ED-AC4B-E92A7160EB21</td><td>L</td><td>29 October 2014</td><td>1.3.10</td><td>1003010</td></tr>
<tr><td>140744E7-4950-43F8-BA0C-0E541550F14B</td><td>L</td><td>29 October 2014</td><td>1.3.11</td><td>1003011</td></tr>
<tr><td>93AA3C05-E95A-4201-AE03-190DDBF31B47</td><td>L</td><td>29 October 2014</td><td>1.3.12</td><td>1003012</td></tr>
<tr><td>5CED791D-4695-440F-ABC4-9127F1EE2A55</td><td>L</td><td>29 October 2014</td><td>1.4</td><td>1004000</td></tr>
<tr><td>FBC38F00-E2E8-4724-A716-AE419907A681</td><td>L</td><td>29 October 2014</td><td>1.5</td><td>1005000</td></tr>
<tr><td>2862FED9-8916-4139-9577-C858F75C768A</td><td>P</td><td>29 October 2014</td><td>2</td><td>2000000</td></tr>
<tr><td>8265A2BE-D2DD-4825-AE0A-7930671D4641</td><td>P</td><td>29 October 2014</td><td>3</td><td>3000000</td></tr>
</tbody>
</table>
]]></content:encoded>
    </item>
    <item>
      <title>VSO now available in a European Azure Data Center</title>
      <link>https://blog.richardfennell.net/posts/vso-now-available-in-a-european-azure-data-center/</link>
      <pubDate>Tue, 28 Oct 2014 15:02:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vso-now-available-in-a-european-azure-data-center/</guid>
      <description>&lt;p&gt;&lt;em&gt;I don’t normally do posts that are just re-posts of TFS announcements, as it is much better to get the information first hand from the original post, but this one is significant for us in Europe…&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Up to now there has been a barrier to the adoption of &lt;a href=&#34;http://www.visualstudio.com/en-us/products/what-is-visual-studio-online-vs.aspx&#34;&gt;VSO&lt;/a&gt;: the underlying data is hosted in the USA. Now there are all the usual Microsoft Azure guarantees about data security, but this has not been enough for some clients for legal, regulatory or their own reasons. This has made VSO a non-starter for many Europeans, where it at first appears a great match.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>I don’t normally do posts that are just re-posts of TFS announcements, as it is much better to get the information first hand from the original post, but this one is significant for us in Europe…</em></p>
<p>Up to now there has been a barrier to the adoption of <a href="http://www.visualstudio.com/en-us/products/what-is-visual-studio-online-vs.aspx">VSO</a>: the underlying data is hosted in the USA. Now there are all the usual Microsoft Azure guarantees about data security, but this has not been enough for some clients for legal, regulatory or their own reasons. This has made VSO a non-starter for many Europeans, where it at first appears a great match.</p>
<p>As of today you can now choose to host your new VSO account in Europe (Amsterdam data center). It won’t remove everyone&rsquo;s worries of cloud hosting, but certainly is a major step in the right direction from a European point of view, addressing many regulatory barriers.</p>
<p>Unfortunately we will have to wait a few sprints to be able to migrate any existing VSO instances, but you can’t have everything in one go!</p>
<p>For the full details have a look at <a href="http://blogs.msdn.com/b/bharry/archive/2014/10/28/visual-studio-online-is-in-europe.aspx">Brian Harry’s</a> and <a href="http://visualstudio.com/en-us/news/2014-oct-28-vso">Jamie Cool’s</a> posts</p>
]]></content:encoded>
    </item>
    <item>
      <title>Having fun with my HDMIPI</title>
      <link>https://blog.richardfennell.net/posts/having-fun-with-my-hdmipi/</link>
      <pubDate>Sun, 26 Oct 2014 22:03:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/having-fun-with-my-hdmipi/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://hdmipi.com/&#34;&gt;HDMIPI&lt;/a&gt; screen I supported on &lt;a href=&#34;https://www.kickstarter.com/projects/697708033/hdmipi-affordable-9-high-def-screen-for-the-raspbe/posts&#34;&gt;KickStarter&lt;/a&gt; arrived this week. After a bit of a false start with a driver board that failed after a couple of minutes, it is now all up and running. The replacement board arrived in 48 hours – great support service.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20141026_002.jpg&#34;&gt;&lt;img alt=&#34;WP_20141026_002&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20141026_002_thumb.jpg&#34; title=&#34;WP_20141026_002&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Now to get down to some projects – that toy robot does look like it needs automating via an IR transmitter and my &lt;a href=&#34;http://www.piface.org.uk/&#34;&gt;PiFace&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://hdmipi.com/">HDMIPI</a> screen I supported on <a href="https://www.kickstarter.com/projects/697708033/hdmipi-affordable-9-high-def-screen-for-the-raspbe/posts">KickStarter</a> arrived this week. After a bit of a false start with a driver board that failed after a couple of minutes, it is now all up and running. The replacement board arrived in 48 hours – great support service</p>
<p><a href="/wp-content/uploads/sites/2/historic/WP_20141026_002.jpg"><img alt="WP_20141026_002" loading="lazy" src="/wp-content/uploads/sites/2/historic/WP_20141026_002_thumb.jpg" title="WP_20141026_002"></a></p>
<p>Now to get down to some projects – that toy robot does look like it needs automating via an IR transmitter and my <a href="http://www.piface.org.uk/">PiFace</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio crashes when trying to add an item to a TFS build workflow</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-crashes-when-trying-to-add-an-item-to-a-tfs-build-workflow/</link>
      <pubDate>Fri, 24 Oct 2014 15:56:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-crashes-when-trying-to-add-an-item-to-a-tfs-build-workflow/</guid>
      <description>&lt;p&gt;There has long been an issue where Visual Studio can crash when you &lt;a href=&#34;https://github.com/tfsbuildextensions/CustomActivities/wiki/Integrate%20Build%20Activities&#34;&gt;try to add a new activity to the toolbox&lt;/a&gt; while editing a TFS build workflow. I have seen it many times and never got to the bottom of it. It seems to be machine specific, as one machine can work while another, supposedly identical, one will fail, but I could never track down the issue.&lt;/p&gt;
&lt;p&gt;Today I was on a machine that was failing, but …&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There has long been an issue where Visual Studio can crash when you <a href="https://github.com/tfsbuildextensions/CustomActivities/wiki/Integrate%20Build%20Activities">try to add a new activity to the toolbox</a> while editing a TFS build workflow. I have seen it many times and never got to the bottom of it. It seems to be machine specific, as one machine can work while another, supposedly identical, one will fail, but I could never track down the issue.</p>
<p>Today I was on a machine that was failing, but …</p>
<p>But I found a workaround in a really old <a href="http://www.tech-archive.net/Archive/VisualStudio/microsoft.public.vsnet.ide/2008-03/msg00083.html">forum post</a>. The workaround is to load the IDE from the command line with the /safemode flag</p>
<pre tabindex="0"><code>C:Program Files (x86)Microsoft Visual Studio 12.0Common7IDEdevenv.exe /safemode
</code></pre><p>Once you do this you can edit the contents of your toolbox without crashes, and also your template if you wish. The best part is that once you exit the IDE and reload it as normal your new toolbox contents are still there.</p>
<p>Not perfect, but a good workaround</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting Release Management to fail a release when using a custom PowerShell component</title>
      <link>https://blog.richardfennell.net/posts/getting-release-management-to-fail-a-release-when-using-a-custom-powershell-component/</link>
      <pubDate>Mon, 20 Oct 2014 12:55:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-release-management-to-fail-a-release-when-using-a-custom-powershell-component/</guid>
      <description>&lt;p&gt;If you have a custom PowerShell script you wish to run, you can create a tool in Release Management (Inventory &amp;gt; Tools) for the script which deploys the .PS1, .PSM1 files etc. and defines the command line to run it.&lt;/p&gt;
&lt;p&gt;The problem we hit was that our script failed, but did not fail the build step as the PowerShell.EXE running the script exited without error. The script had thrown an exception which was in the output log file, but it was marked as a completed step.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you have a custom PowerShell script you wish to run, you can create a tool in Release Management (Inventory &gt; Tools) for the script which deploys the .PS1, .PSM1 files etc. and defines the command line to run it.</p>
<p>The problem we hit was that our script failed, but did not fail the build step as the PowerShell.EXE running the script exited without error. The script had thrown an exception which was in the output log file, but it was marked as a completed step.</p>
<p>The solution was to use a try/catch in the .PS1 script that, as well as writing a message via Write-Error, sets the exit code to something other than 0 (zero). So you end up with something like the following in your .PS1 file</p>
<pre tabindex="0"><code>param  
(  
\[string\]$Param1 ,  
\[string\]$Param2 ) 

try  
{  
    # some logic here

 

} catch  
{  
    Write-Error $\_.Exception.Message  
    exit 1 # to get an error flagged so it can be seen by RM  
}  
</code></pre><p>Once this change was made, an exception in the PowerShell script caused the release step to fail as required. The output from the script appeared as the Command Output.</p>
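<p>Release Management is not alone here: any orchestrator that shells out to a script generally detects failure only via the process exit code, not via text written to an error stream. A minimal Python sketch (purely illustrative, not part of Release Management; the <code>run_step</code> helper is hypothetical) of a caller making that distinction:</p>

```python
import subprocess
import sys

def run_step(args):
    """Run a deployment step; success is judged solely by the exit code."""
    result = subprocess.run(args, capture_output=True, text=True)
    # Text on stderr alone does not fail the step; only the exit code counts.
    return result.returncode == 0

# A script that writes an error message but exits 0 still looks 'successful'.
looks_ok = run_step([sys.executable, "-c",
                     "import sys; sys.stderr.write('oops'); sys.exit(0)"])

# A script that also sets a non-zero exit code is correctly flagged as failed.
flagged_ok = run_step([sys.executable, "-c",
                       "import sys; sys.stderr.write('oops'); sys.exit(1)"])
```

<p>This is exactly why the <code>exit 1</code> in the catch block above matters: without it, PowerShell.EXE returns 0 and the step is marked complete.</p>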
]]></content:encoded>
    </item>
    <item>
      <title>ALM Ranger’s release DevOps Guidance for PowerShell DSC – perfectly timed for DDDNorth</title>
      <link>https://blog.richardfennell.net/posts/alm-rangers-release-devops-guidance-for-powershell-dsc-perfectly-timed-for-dddnorth/</link>
      <pubDate>Sat, 18 Oct 2014 11:59:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-rangers-release-devops-guidance-for-powershell-dsc-perfectly-timed-for-dddnorth/</guid>
      <description>&lt;p&gt;In a beautiful synchronicity the &lt;a href=&#34;https://vsardevops.codeplex.com/releases&#34;&gt;ALM Rangers DevOps guidance for PowerShell DSC has been released&lt;/a&gt; at the same time as I am doing my DDDNorth session ‘&lt;a href=&#34;http://www.dddnorth.co.uk/Sessions/Details/97&#34;&gt;What is Desired State Configuration and how does it help me?’&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This Rangers project has been really interesting to work on, and provided much of the core of my session for DDDNorth.&lt;/p&gt;
&lt;p&gt;Well worth a look if you want to create your own DSC resources.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In a beautiful synchronicity the <a href="https://vsardevops.codeplex.com/releases">ALM Rangers DevOps guidance for PowerShell DSC has been released</a> at the same time as I am doing my DDDNorth session ‘<a href="http://www.dddnorth.co.uk/Sessions/Details/97">What is Desired State Configuration and how does it help me?’</a></p>
<p>This Rangers project has been really interesting to work on, and provided much of the core of my session for DDDNorth.</p>
<p>Well worth a look if you want to create your own DSC resources.</p>
]]></content:encoded>
    </item>
    <item>
      <title>&#34;The handshake failed due to an unexpected packet format&#34; with Release Management</title>
      <link>https://blog.richardfennell.net/posts/the-handshake-failed-due-to-an-unexpected-packet-format-with-release-management/</link>
      <pubDate>Fri, 17 Oct 2014 21:47:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-handshake-failed-due-to-an-unexpected-packet-format-with-release-management/</guid>
      <description>&lt;p&gt;Whilst configuring a Release Management 2013.3 system I came across a confusing error. All seemed OK: the server, client and deployment agents were all installed and seemed to be working, but when I tried to select a build to deploy, both the Team Projects and Build drop downs were empty.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_209.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_206.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A check of the Windows event log on the server showed the errors&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The underlying connection was closed: An unexpected error occurred on a send&lt;br&gt;
The handshake failed due to an unexpected packet format&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst configuring a Release Management 2013.3 system I came across a confusing error. All seemed OK: the server, client and deployment agents were all installed and seemed to be working, but when I tried to select a build to deploy, both the Team Projects and Build drop downs were empty.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_209.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_206.png" title="image"></a></p>
<p>A check of the Windows event log on the server showed the errors</p>
<blockquote>
<p>The underlying connection was closed: An unexpected error occurred on a send<br>
The handshake failed due to an unexpected packet format</p></blockquote>
<p>Turns out the issue was an incorrectly set value when the Release Management server was configured. HTTPS had been incorrectly selected; in fact there was no SSL certificate on the box, so HTTPS could not work.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_210.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_207.png" title="image"></a></p>
<p>As this had been done in error we did not use HTTPS at any other point in the installation. We always used the URL <a href="http://typhoontfs:1000">http://typhoontfs:1000</a>. The strange part of the problem was that the only place this mistake caused an issue was the Team Project drop down; everything else seemed fine, clients and deployment agents could all see the server.</p>
<p>Once the Release Management server was reconfigured with the correct HTTP setting, all was OK.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_211.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_208.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot build a SSRS project in TFS build due to expired license</title>
      <link>https://blog.richardfennell.net/posts/cannot-build-a-ssrs-project-in-tfs-build-due-to-expired-license/</link>
      <pubDate>Wed, 15 Oct 2014 12:31:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-build-a-ssrs-project-in-tfs-build-due-to-expired-license/</guid>
      <description>&lt;p&gt;If you want to get your TFS build process to produce SSRS RDL files you need to call the vsDevEnv custom activity to run Visual Studio (&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/04/24/Getting-SQL-2012-SSIS-packages-built-on-TFS-20122.aspx&#34;&gt;just like for SSIS packages&lt;/a&gt;). On our new TFS2013.3 based build agents this step started to fail; turns out the issue was not incorrect versions of DLLs or some badly applied update, but that the license for Visual Studio on the build agent had expired.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you want to get your TFS build process to produce SSRS RDL files you need to call the vsDevEnv custom activity to run Visual Studio (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/04/24/Getting-SQL-2012-SSIS-packages-built-on-TFS-20122.aspx">just like for SSIS packages</a>). On our new TFS2013.3 based build agents this step started to fail; turns out the issue was not incorrect versions of DLLs or some badly applied update, but that the license for Visual Studio on the build agent had expired.</p>
<p>I found it by looking at diagnostic logs in the TFS build web UI.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_208.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_205.png" title="image"></a></p>
<p>To be able to build BI projects with Visual Studio you do need a licensed copy of Visual Studio on the build agent. You can use a trial license, but it will expire. Also remember, if you license VS by logging in with your MSDN Live ID, that too needs to be refreshed from time to time (that is what got me), so it is better to use a product key.</p>
]]></content:encoded>
    </item>
    <item>
      <title>‘Unable to reconnect to database: Timeout expired’ error when using SQLPackage.exe to deploy to Azure SQL</title>
      <link>https://blog.richardfennell.net/posts/unable-to-reconnect-to-database-timeout-expired-error-when-using-sqlpackage-exe-to-deploy-to-azure-sql/</link>
      <pubDate>Wed, 15 Oct 2014 12:05:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/unable-to-reconnect-to-database-timeout-expired-error-when-using-sqlpackage-exe-to-deploy-to-azure-sql/</guid>
      <description>&lt;p&gt;I have been trying to update an Azure-hosted SQL DB using Release Management and the SSDT SQLPackage tools. All worked fine on my test Azure instance, but when I wanted to deploy to production I got the following error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;*** An error occurred during deployment plan generation. Deployment cannot continue.
Failed to import target model [dbname]. Detailed message Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Unable to reconnect to database: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The wait operation timed out
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Turns out SQLPackage.exe was connecting OK; if I entered an invalid password it gave a different error, so it had made the connection and then died.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to update an Azure-hosted SQL DB using Release Management and the SSDT SQLPackage tools. All worked fine on my test Azure instance, but when I wanted to deploy to production I got the following error</p>
<pre tabindex="0"><code>\*\*\* An error occurred during deployment plan generation. Deployment cannot continue.  
Failed to import target model \[dbname\]. Detailed message Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.  
Unable to reconnect to database: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.  
Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.  
The wait operation timed out
</code></pre><p>Turns out SQLPackage.exe was connecting OK; if I entered an invalid password it gave a different error, so it had made the connection and then died.</p>
<p>Seems I am not alone in seeing this problem, and most people seem to suggest changing a <a href="https://social.msdn.microsoft.com/Forums/vstudio/en-US/b00f58d4-cb6e-42ba-b48e-594517288555/ssdt-timeout-when-deploying-a-database?forum=ssdt">timeout in the registry</a> or your exported DACPAC. However, neither of these techniques worked for me.</p>
<p>I compared my test and production Azure DB instances, and found the issue. My test SQL DB was created using a SQL export from the production Azure subscription imported into my MSDN one. I had done a quick ‘next &gt; next &gt; next’ import and the DB had been set up on the Standard (S2) service tier. The production DB had been running as the old retired Web service tier, but had recently been swapped to the Basic tier (it is a very small DB). When I re-imported my backup, but this time set it to the Basic tier, I got exactly the same error message.</p>
<p>So on my test DB I changed its service tier from Basic to Standard (S0) and my deployment worked. The same solution worked for my production DB.</p>
<p>Now the S0 instance is just over <a href="http://azure.microsoft.com/en-us/pricing/details/sql-database/">2x the cost of a Basic</a>, so if I was really penny pinching I could consider moving it back to Basic now the deployment is done. I did try this, and the deployment error returned; so it feels like a false economy, as I want a stable release pipeline, until Microsoft sort out why I cannot use SSDT to deploy to a Basic instance.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft ALM event to Black Marble offices on the 26th of November</title>
      <link>https://blog.richardfennell.net/posts/microsoft-alm-event-to-black-marble-offices-on-the-26th-of-november/</link>
      <pubDate>Wed, 15 Oct 2014 07:58:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-alm-event-to-black-marble-offices-on-the-26th-of-november/</guid>
      <description>&lt;p&gt;Black Marble is hosting a free Microsoft ALM event at our offices on the 26th of November. For once I will not be the speaker, &lt;a href=&#34;http://blogs.msdn.com/b/visualstudiouk/&#34;&gt;Giles and Colin from the Microsoft UK Visual Studio Team&lt;/a&gt; are coming up to deliver the session.&lt;/p&gt;
&lt;p&gt;For more details and registration see &lt;a href=&#34;http://bit.ly/ALMLeeds&#34;&gt;http://bit.ly/ALMLeeds&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Black Marble is hosting a free Microsoft ALM event at our offices on the 26th of November. For once I will not be the speaker, <a href="http://blogs.msdn.com/b/visualstudiouk/">Giles and Colin from the Microsoft UK Visual Studio Team</a> are coming up to deliver the session.</p>
<p>For more details and registration see <a href="http://bit.ly/ALMLeeds">http://bit.ly/ALMLeeds</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences using a DD-WRT router with Hyper-V</title>
      <link>https://blog.richardfennell.net/posts/experiences-using-a-dd-wrt-router-with-hyper-v/</link>
      <pubDate>Tue, 07 Oct 2014 21:41:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-using-a-dd-wrt-router-with-hyper-v/</guid>
      <description>&lt;p&gt;I have been playing around with the idea of using a &lt;a href=&#34;http://www.dd-wrt.com/site/support/router-database&#34;&gt;DD-WRT router&lt;/a&gt; on a Hyper-V VM to connect my local virtual machines to the Internet as discussed by &lt;a href=&#34;http://nakedalm.com/run-router-hyper-v/&#34;&gt;Martin Hinshlewood in his blog post&lt;/a&gt;. I learned a few things that might be of use to others trying the same setup.&lt;/p&gt;
&lt;h3 id=&#34;what-i-used-to-do&#34;&gt;What I used to do&lt;/h3&gt;
&lt;p&gt;Prior to using the router I had been using three virtual switches on my Windows 8 Hyper-V setup with multiple network adaptors to connect both my VMs and host machine to the switches and networks&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been playing around with the idea of using a <a href="http://www.dd-wrt.com/site/support/router-database">DD-WRT router</a> on a Hyper-V VM to connect my local virtual machines to the Internet as discussed by <a href="http://nakedalm.com/run-router-hyper-v/">Martin Hinshlewood in his blog post</a>. I learned a few things that might be of use to others trying the same setup.</p>
<h3 id="what-i-used-to-do">What I used to do</h3>
<p>Prior to using the router I had been using three virtual switches on my Windows 8 Hyper-V setup with multiple network adaptors to connect both my VMs and host machine to the switches and networks</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_202.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_199.png" title="image"></a></p>
<p>So I had</p>
<ul>
<li>
<p>One internal virtual switch only accessible on my host machine and my VMs</p>
</li>
<li>
<p>Two external virtual switches</p>
</li>
<li>
<p>one linked to my physical Ethernet adaptor</p>
</li>
<li>
<p>the other linked to my physical WiFi adaptor</p>
</li>
</ul>
<p>Arguably I could have had just one ‘public’ virtual switch and connected it to either my Ethernet or WiFi as needed. However, I found it easier to swap the virtual switch in the VM settings rather than swap the network adaptor inside the virtual switch settings. I cannot really think of a compelling reason to pick one method over the other, just personal taste or habit I guess.</p>
<p>This setup had worked OK; if I needed to access a VM from my host PC I used the internal switch. This switch had no DHCP server on it, so I used the <a href="http://technet.microsoft.com/en-us/library/cc725638.aspx">alternate configuration IP addresses</a> assigned by Windows, managing machine IP addresses via a local hosts file. To allow the VMs to access the Internet I added a second network adaptor to each VM, which I bound to one of the externally connected switches, the choice being dependent on which Internet connection I had at any given time.</p>
<p>However, all was not perfect. I have had <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/09/10/Moving-from-Ubuntu-to-Mint-for-my-TEE-demos.aspx">problems</a> with some Linux distributions running in Hyper-V not getting an IP address via DHCP over WiFi. There was also the complexity of having to add second network adaptors to each VM.</p>
<p>So would a virtual router help? I thought it worth a try, so <a href="http://nakedalm.com/run-router-hyper-v/">I followed Martin’s process</a>, but hit a few problems.</p>
<h3 id="setting-up-the-dd-wrt-router">Setting up the DD-WRT router</h3>
<p>As <a href="http://nakedalm.com/run-router-hyper-v/">Martin said in his post</a>, more work was needed to fully configure the router to allow external access. The problem I had for a long time was that as soon as I enabled the WAN port I seemed to lose the connection. After much fiddling, this was the process that worked for me:</p>
<ol>
<li>
<p>Install the router as detailed in <a href="http://nakedalm.com/run-router-hyper-v/">Martin’s post</a></p>
</li>
<li>
<p>Link your internal Hyper-V switch to the first Ethernet (Eth0) port on the router VM. This seems a bit counter intuitive as the <a href="http://www.dd-wrt.com/wiki/index.php/Main_Page">DD-WRT wiki</a> says the first port is for the WAN – more on that later</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_203.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_200.png" title="image"></a></p>
</li>
<li>
<p>Boot the router; you should be able to login at the address <strong>192.168.1.1</strong> as <strong>root</strong> with the password <strong>admin</strong>, both on the console and via a web browser from your host PC</p>
</li>
<li>
<p>On the basic setup tab (the default page) enable the WAN by selecting ‘Automatic Configuration (DHCP)’ and save the change</p>
<p>It was at this point I kept getting disconnected. I then realised it was because the ports were being reassigned: at this point Eth0 had indeed become the WAN port and Eth1 the internal port</p>
</li>
<li>
<p>So in Hyper-V manager</p>
</li>
</ol>
<ul>
<li>
<p>Re-assign the first Ethernet port (Eth0) to the external Hyper-V switch (in turn connected to your Internet connection)</p>
</li>
<li>
<p>Assign the second Ethernet port (Eth1) to the internal virtual switch</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_204.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_201.png" title="image"></a></p>
</li>
</ul>
<ol start="7">
<li>You can now re-connect to 192.168.1.1 in a browser from your host machine to complete your configuration</li>
</ol>
<p>So now all my VMs connected to the virtual switch could get a 192.168.1.x address via DHCP (using their single network adaptors), but they could not see the Internet. However, on the plus side, DHCP seemed to work OK for all operating systems, so my Linux issues seemed to be fixed</p>
<p>It is fair to say I now had a fairly complex network, so it was not surprising that I had routing issues.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_205.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_202.png" title="image"></a></p>
<p>The issue seems to have been that the VMs were not being passed the correct default router and DNS entries by DHCP. I had expected this to be set by default by the router, but that was not the case. They seem to need to be set by hand, as shown below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_206.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_203.png" title="image"></a></p>
<p>Once these were set, the change saved on the router, and the VMs had renewed their DHCP settings, they had Internet access.</p>
<p>At one point I thought I had also lost Internet access from my host PC, or at least that it was much slower. I thought I had developed a routing loop, with all traffic passing through the router whether it was needed or not. However, once the above router gateway IP settings were set these problems went away.</p>
<p>When I checked my Windows 8 host’s routing table using <strong>netstat –r</strong> it showed two default routes (0.0.0.0): my primary one (192.168.0.1), my home router, and my Hyper-V router (192.168.1.1). The second one had a much higher metric, so it should not be used unless sending packets to the 192.168.100.x network; all other traffic should go out the primary link.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_207.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_204.png" title="image"></a></p>
<p>It was at this time I noticed that the problem of getting a DHCP-based IP address over WiFi had not gone away completely. If I had my router’s WAN port connected to my WiFi virtual switch then, depending on the model/setup of the WiFi router, DHCP sometimes worked and sometimes did not. I think this was mostly down to an authentication issue; not a major issue, as thus far the only place I have a problem is our office, where our WiFi is secured via RADIUS server based AD authentication. Here I just switched to using either our guest WiFi or our Ethernet, which both worked.</p>
<h3 id="so-is-this-a-workable-solution">So is this a workable solution?</h3>
<p>It seems to be OK thus far, but there were more IP address/routing issues during the setup than I would like; you need to know your IPv4.</p>
<p>There are many options on the DD-WRT console I am unfamiliar with. By default it is running just like a home router, in Network Address Translation (NAT) mode. This has the advantage of hiding the internal switch, but I did wonder whether it would be easier to run the DD-WRT as a simple router.</p>
<p>The problem with that mode of operation is that I need to make sure my internal virtual LAN does not conflict with anything on the networks I connect to, and with automated routing protocols such as RIP it could get interesting fast; making me a few enemies with the IT managers whose networks I connect to.</p>
<p>A niggle is that whenever I connect my PC to a new network I need to remember to do a DHCP renew of my WAN port (Status &gt; WAN &gt; DHCP Renew); it does not automatically detect the change in connection.</p>
<p>Also, I still need to manage my VMs’ IP addresses with a hosts file on the host Windows PC. As I don’t want to edit this file too often, it is a good idea to increase the DHCP lease time on the router (Setup &gt; Basic Setup) to a few days instead of a day.</p>
<p>As to how well this works we shall see, but it seems OK for now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Version stamping Windows 8 Store App manifests in TFS build</title>
      <link>https://blog.richardfennell.net/posts/version-stamping-windows-8-store-app-manifests-in-tfs-build/</link>
      <pubDate>Tue, 07 Oct 2014 17:44:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/version-stamping-windows-8-store-app-manifests-in-tfs-build/</guid>
      <description>&lt;p&gt;We have for a long time used the &lt;a href=&#34;https://github.com/tfsbuildextensions/CustomActivities/wiki/Getting%20started%20with%20the%20TfsVersion%20activity&#34;&gt;TFSVersion custom build activity&lt;/a&gt; to stamp all our TFS builds with a unique version number that matches our build number. However, this only edits the &lt;strong&gt;AssemblyInfo.cs&lt;/strong&gt; file. As we are now building more and more Windows 8 Store Apps we also need to edit the XML in the &lt;strong&gt;Package.appxmanifest&lt;/strong&gt; files used to build the packages too. Just as with a Wix MSI project, it is a good idea that the package version matches some aspect of the assemblies it contains. We need to automate the update of this manifest as people too often forget to increment the version, causing confusion all down the line.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have for a long time used the <a href="https://github.com/tfsbuildextensions/CustomActivities/wiki/Getting%20started%20with%20the%20TfsVersion%20activity">TFSVersion custom build activity</a> to stamp all our TFS builds with a unique version number that matches our build number. However, this only edits the <strong>AssemblyInfo.cs</strong> file. As we are now building more and more Windows 8 Store Apps we also need to edit the XML in the <strong>Package.appxmanifest</strong> files used to build the packages too. Just as with a Wix MSI project, it is a good idea that the package version matches some aspect of the assemblies it contains. We need to automate the update of this manifest as people too often forget to increment the version, causing confusion all down the line.</p>
<p>Now I could have written a new TFS custom activity to do the job, or edited the existing one, but both options seemed a poor choice. We all know that <a href="http://vsarbuildguide.codeplex.com/">custom activity writing is awkward</a> and a pain to support going forward. So I decided to use the hooks in the 2013 generation build process template to just call a custom PowerShell script to do the job.</p>
<p>I added a PreBuildScript.PS1 file as a solution item to my solution.</p>
<p>I placed the following code in the file. It uses the <a href="http://msdn.microsoft.com/en-us/library/hh850448.aspx">TFS environment variables</a> to get the build location and version, using these to find and edit the manifest files. The only gotcha is that files on the build box are read only (it is a server workspace), so the manifest file has to be set to allow it to be written back.</p>
<pre tabindex="0"><code>

\# get the build number, we assume the format is Myproject.Main.CI\_1.0.0.18290  
\# where the version is set using the TFSVersion custom build activity (see other posts)  
$buildnum = $env:TF\_BUILD\_BUILDNUMBER.Split(&#39;\_&#39;)\[1\]  
\# get the manifest file paths  
$files = Get-ChildItem -Path $env:TF\_BUILD\_BUILDDIRECTORY -Filter &#34;Package.appxmanifest&#34; -Recurse  
foreach ($filepath in $files)   
{  
    Write-Host &#34;Updating the Store App Package &#39;$filepath&#39; to version &#39; $buildnum &#39;&#34;   
    # update the identity value  
    $XMLfile=NEW-OBJECT XML  
    $XMLfile.Load($filepath.Fullname)  
    $XMLFile.Package.Identity.Version=$buildnum  
    # set the file as read write  
    Set-ItemProperty $filepath.Fullname -name IsReadOnly -value $false  
    $XMLFile.save($filepath.Fullname)  
}
</code></pre><p>Note that any output sent via Write-Host will only appear in the diagnostic log of TFS. If you use Write-Error (or errors are thrown) these messages will appear in the build summary; the build will not fail, but will be marked as a partial success.</p>
<p>Once this file was checked in I was able to reference the file in the build template</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_201.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_198.png" title="image"></a></p>
<p>The build could now be run, and I got my Windows 8 Store packages with the required version number.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updated blog server to BlogEngine.NET 3.1</title>
      <link>https://blog.richardfennell.net/posts/updated-blog-server-to-blogengine-net-3-1/</link>
      <pubDate>Wed, 01 Oct 2014 09:02:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updated-blog-server-to-blogengine-net-3-1/</guid>
      <description>&lt;p&gt;Last night I upgraded this blog server to &lt;a href=&#34;https://blogengine.codeplex.com/releases/view/133254&#34;&gt;BlogEngine.NET 3.1&lt;/a&gt;. I used the new built in automated update tool, in an offline backup copy of course.&lt;/p&gt;
&lt;p&gt;It did most of the job without any issues. The only extra things I needed to do were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Removed a &amp;lt;add name=&amp;ldquo;XmlRoleProvider&amp;rdquo; …&amp;gt; entry in the web.config. I have had to do this before on every install.&lt;/li&gt;
&lt;li&gt;Run the SortOrderUpdate.sql script to add the missing column and index (see &lt;a href=&#34;https://blogengine.codeplex.com/workitem/12543&#34;&gt;issue 12543&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once done and tested locally, I uploaded the site to my production server. One point to note: the upgrade creates some backup ZIPs of your site beforehand; you don’t need to copy these around as they are large.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Last night I upgraded this blog server to <a href="https://blogengine.codeplex.com/releases/view/133254">BlogEngine.NET 3.1</a>. I used the new built in automated update tool, in an offline backup copy of course.</p>
<p>It did most of the job without any issues. The only extra things I needed to do were:</p>
<ul>
<li>Removed a &lt;add name=&ldquo;XmlRoleProvider&rdquo; …&gt; entry in the web.config. I have had to do this before on every install.</li>
<li>Run the SortOrderUpdate.sql script to add the missing column and index (see <a href="https://blogengine.codeplex.com/workitem/12543">issue 12543</a>)</li>
</ul>
<p>Once done and tested locally, I uploaded the site to my production server. One point to note: the upgrade creates some backup ZIPs of your site beforehand; you don’t need to copy these around as they are large.</p>
]]></content:encoded>
    </item>
    <item>
<title>Swapping the Word template in a VSTO project</title>
      <link>https://blog.richardfennell.net/posts/swapping-the-the-word-template-in-a-vsto-project/</link>
      <pubDate>Mon, 22 Sep 2014 22:09:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/swapping-the-the-word-template-in-a-vsto-project/</guid>
      <description>&lt;p&gt;We have recently swapped the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/07/Upgrading-a-VSTO-project-from-VS-2008-to-2013.aspx&#34;&gt;Word template we use to make sure all our proposals and other documents are consistent&lt;/a&gt;. The changes are all cosmetic, fonts, footers etc. to match our new &lt;a href=&#34;http://www.blackmarble.co.uk/&#34;&gt;website&lt;/a&gt;; it still makes use of the same VSTO automation to do much of the work. The problem was I needed to swap the .DOTX file within the VSTO Word Add-in project, we had not been editing the old DOTX template in the project, but had created a new one based on a copy outside of Visual Studio.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have recently swapped the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/07/Upgrading-a-VSTO-project-from-VS-2008-to-2013.aspx">Word template we use to make sure all our proposals and other documents are consistent</a>. The changes are all cosmetic, fonts, footers etc. to match our new <a href="http://www.blackmarble.co.uk/">website</a>; it still makes use of the same VSTO automation to do much of the work. The problem was I needed to swap the .DOTX file within the VSTO Word Add-in project, we had not been editing the old DOTX template in the project, but had created a new one based on a copy outside of Visual Studio.</p>
<p>To swap in the new .DOTX file for the VSTO project I had to&hellip;</p>
<ul>
<li>
<p>Copy the new TEMPLATE2014.DOTX file to project folder</p>
</li>
<li>
<p>Open the VSTO Word add-in .CSPROJ file in a text editor and replace all the occurrences of the old template name with the new, e.g. TEMPLATE.DOTX for TEMPLATE2014.DOTX</p>
</li>
<li>
<p>Reload the project in Visual Studio 2013; there should be no errors, and the new template is listed in place of the old</p>
</li>
<li>
<p>However, when I tried to compile the project I got a <em>DocumentAlreadyCustomizedException.</em> I did not know, but the template in a VSTO project needs to be a copy with no association with any VSTO automation. The automation links are applied during the build process, which makes sense when you think about it. As we had edited a copy of our old template outside of Visual Studio, our copy already had the old automation links embedded. These needed to be removed; the fix was to:</p>
</li>
<li>
<p>Open the .DOTX file in Word</p>
</li>
<li>
<p>On the File menu &gt; Info &gt; Right click on Properties (top right) to get the advanced list of properties</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_200.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_197.png" title="image"></a></p>
</li>
<li>
<p>Delete the <strong>_AssemblyName</strong> and <strong>_AssemblyLocation</strong> custom properties</p>
</li>
<li>
<p>Save the template</p>
</li>
<li>
<p>Open the VSTO project in Visual Studio and you should be able to build the project</p>
</li>
<li>
<p>The only other thing I had to do was make sure my VSTO project was the start-up project for the solution. Once this was done I could F5/Debug the template VSTO combination</p>
</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Using MSDEPLOY from Release Management to deploy Azure web sites</title>
      <link>https://blog.richardfennell.net/posts/using-msdeploy-from-release-management-to-deploy-azure-web-sites/</link>
      <pubDate>Thu, 18 Sep 2014 19:44:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-msdeploy-from-release-management-to-deploy-azure-web-sites/</guid>
      <description>&lt;p&gt;Whilst developing our new set of websites we have been using MSDeploy to package up the websites for deployment to test and production Azure accounts. These deployments were being triggered directly using Visual Studio. Now we know this is not best practice, you don’t want developers shipping to production from their development PCs, so I have been getting around to migrating these projects to Release Management.&lt;/p&gt;
&lt;p&gt;I wanted to minimise change, as we like MSDeploy, I just wanted to pass the extra parameters to allow a remote deployment as opposed to a local one using the built in WebDeploy component in Release Management&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst developing our new set of websites we have been using MSDeploy to package up the websites for deployment to test and production Azure accounts. These deployments were being triggered directly using Visual Studio. Now we know this is not best practice, you don’t want developers shipping to production from their development PCs, so I have been getting around to migrating these projects to Release Management.</p>
<p>I wanted to minimise change, as we like MSDeploy; I just wanted to pass the extra parameters to allow a remote deployment, as opposed to a local one, using the built-in WebDeploy component in Release Management.</p>
<p>To do this I created a new component based on the WebDeploy tool. I then altered the arguments to</p>
<pre tabindex="0"><code>__WebAppName__.deploy.cmd /y /m:&#34;__PublishUrl__&#34; -allowUntrusted /u:&#34;__PublishUser__&#34; /p:&#34;__PublishPassword__&#34; /a:Basic
</code></pre><p><a href="/wp-content/uploads/sites/2/historic/image_199.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_196.png" title="image"></a></p>
<p>With these three extra publish parameters I can target the deployment to an Azure instance, assuming WebDeploy is installed on the VM running the Release Management deployment client.</p>
<p>The required values for these parameters can be obtained from the .PublishSettings file you download from your Azure web site’s management page. If you open this file in a text editor you can read the values needed (the publishUrl, userName and userPWD attributes):</p>
<pre tabindex="0"><code>&lt;publishData&gt;
  &lt;publishProfile profileName=&#34;SomeSite - Web Deploy&#34; publishMethod=&#34;MSDeploy&#34; publishUrl=&#34;somesite.scm.azurewebsites.net:443&#34; msdeploySite=&#34;SomeSite&#34; userName=&#34;$SomeSite&#34; userPWD=&#34;m1234567890abcdefghijklmnopqrstu&#34; destinationAppUrl=&#34;http://somesite.azurewebsites.net&#34; SQLServerDBConnectionString=&#34;&#34; mySQLDBConnectionString=&#34;&#34; hostingProviderForumLink=&#34;&#34; controlPanelLink=&#34;http://windows.azure.com&#34;&gt;&lt;databases/&gt;&lt;/publishProfile&gt;
  &lt;publishProfile profileName=&#34;SomeSite - FTP&#34; publishMethod=&#34;FTP&#34; publishUrl=&#34;ftp://site.ftp.azurewebsites.windows.net/site/wwwroot&#34; ftpPassiveMode=&#34;True&#34; userName=&#34;SomeSite$SomeSite&#34; userPWD=&#34;m1234567890abcdefghijklmnopqrstu&#34; destinationAppUrl=&#34;http://somesite.azurewebsites.net&#34; SQLServerDBConnectionString=&#34;&#34; mySQLDBConnectionString=&#34;&#34; hostingProviderForumLink=&#34;&#34; controlPanelLink=&#34;http://windows.azure.com&#34;&gt;&lt;databases/&gt;&lt;/publishProfile&gt;
&lt;/publishData&gt;
</code></pre>
<p>These values are used as follows</p>
<ul>
<li>WebAppName – this is the name of the MSDeploy package, exactly the same as for a standard WebDeploy component.</li>
<li>PublishUrl – we need to add the https:// prefix and the /MsDeploy.axd suffix to the URL, e.g. <a href="https://somesite.scm.azurewebsites.net:443/MsDeploy.axd">https://somesite.scm.azurewebsites.net:443/MsDeploy.axd</a></li>
<li>PublishUser – e.g. $SomeSite</li>
<li>PublishPassword – this is set as an encrypted parameter so it cannot be viewed in the Release Management client, e.g. m1234567890abcdefghijklmnopqrstu</li>
</ul>
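<p>As a quick sketch (using the dummy values from the example .PublishSettings above), the PublishUrl parameter is just the publishUrl attribute with the scheme and the handler path added:</p>

```powershell
# Sketch: turn the publishUrl attribute from a .PublishSettings file into the
# MSDeploy endpoint passed as __PublishUrl__ (dummy value from the example above).
$publishUrl  = 'somesite.scm.azurewebsites.net:443'
$msDeployUrl = "https://$publishUrl/MsDeploy.axd"
Write-Host $msDeployUrl   # https://somesite.scm.azurewebsites.net:443/MsDeploy.axd
```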
<p>On top of these parameters, we can still pass in extra parameters to transform the web.config using the setparameters.xml file <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/05/01/Changing-WCF-bindings-for-MSDeploy-packages-when-using-Release-Management.aspx" title="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/05/01/Changing-WCF-bindings-for-MSDeploy-packages-when-using-Release-Management.aspx">as detailed in this other post</a>, allowing us to complete the configuration for the various environments in our pipeline.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at PreEmptive’s event on Application Analytics in London on the 30th of September</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-preemptives-event-on-application-analytics-in-london-on-the-30th-of-september/</link>
      <pubDate>Thu, 18 Sep 2014 19:02:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-preemptives-event-on-application-analytics-in-london-on-the-30th-of-september/</guid>
      <description>&lt;p&gt;I am speaking at PreEmptive’s event on Application Analytics at Microsoft’s office in London on the 30th of September. There are various speakers from PreEmptive, Microsoft and Black Marble at this free event.&lt;/p&gt;
&lt;p&gt;There are still spaces available, just follow the link: &lt;a href=&#34;https://www.eventbrite.co.uk/e/application-analytics-with-visual-studio-and-preemptive-analytics-event-tickets-13042288837&#34;&gt;Application Analytics with Visual Studio and PreEmptive Analytics&lt;/a&gt; for more details and registration.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at PreEmptive’s event on Application Analytics at Microsoft’s office in London on the 30th of September. There are various speakers from PreEmptive, Microsoft and Black Marble at this free event.</p>
<p>There are still spaces available, just follow the link: <a href="https://www.eventbrite.co.uk/e/application-analytics-with-visual-studio-and-preemptive-analytics-event-tickets-13042288837">Application Analytics with Visual Studio and PreEmptive Analytics</a> for more details and registration.</p>
]]></content:encoded>
    </item>
    <item>
      <title>“Communication with the deployer was lost during deployment” error with Release Management</title>
      <link>https://blog.richardfennell.net/posts/communication-with-the-deployer-was-lost-during-deployment-error-with-release-management/</link>
      <pubDate>Thu, 18 Sep 2014 12:55:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/communication-with-the-deployer-was-lost-during-deployment-error-with-release-management/</guid>
      <description>&lt;p&gt;Whilst developing a new Release Management pipeline, I hit a problem: a component that published MSDeploy packages to Azure started to fail. It had been working, then suddenly I started seeing ‘communication with the deployer was lost during deployment’ messages as shown below.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb%5B1%5D.png&#34;&gt;&lt;img alt=&#34;image_thumb[1]&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb%5B1%5D_thumb.png&#34; title=&#34;image_thumb[1]&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;No errors were shown in any logs I could find, and no files appeared on the deployment target (you would expect the files/scripts to be deployed to appear on the machine running a RM Deployment client in the &lt;em&gt;C:\users\[account]\local\temp\RM&lt;/em&gt; folder structure).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst developing a new Release Management pipeline, I hit a problem: a component that published MSDeploy packages to Azure started to fail. It had been working, then suddenly I started seeing ‘communication with the deployer was lost during deployment’ messages as shown below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_thumb%5B1%5D.png"><img alt="image_thumb[1]" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb%5B1%5D_thumb.png" title="image_thumb[1]"></a></p>
<p>No errors were shown in any logs I could find, and no files appeared on the deployment target (you would expect the files/scripts to be deployed to appear on the machine running a RM Deployment client in the <em>C:\users\[account]\local\temp\RM</em> folder structure).</p>
<p>Rebooting of the Release Management server and deployment client had no effect.</p>
<p>The client in this case was a VM we use to do remote deployments, e.g. to Azure, SQL clusters etc.; places where we cannot install the RM Deployment Client. It is used for a number of other pipelines, which were all working, so I doubted it was a communications issue.</p>
<p>In the end, out of frustration, I tried re-adding the component to the workflow and re-entering the parameters. Once this was done, and the old component instances deleted, it all leapt into life.</p>
<p>I am not sure why I had the problems, I was trying to remember what I did exactly between the working and failing releases. All I can think is that I may have changed the password parameter to encrypted (I had forgotten to do this at first). Now I should have tried this sooner as <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/05/01/Release-Management-components-fail-to-deploy-with-a-timeout-if-a-variable-is-changed-from-standard-to-encrypted.aspx">I have posted on this error message before</a>. All I can assume is that changing this parameter setting corrupted my component.</p>
<p>I should have read my own blog sooner.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving our BlogEngine.NET server to Azure</title>
      <link>https://blog.richardfennell.net/posts/moving-our-blogengine-net-server-to-azure/</link>
      <pubDate>Tue, 16 Sep 2014 13:35:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-our-blogengine-net-server-to-azure/</guid>
      <description>&lt;p&gt;As part of our IT refresh we have decided to move this BlogEngine.NET server from a Hyper-V VM in our office to an Azure website.&lt;/p&gt;
&lt;p&gt;BlogEngine.NET is now a gallery item for Azure websites, so a few clicks and you should be up and running.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_194.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_191.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;However, if you want to use SQL as opposed to XML as the datastore you need to do a bit more work. This process is well documented in the video ‘&lt;a href=&#34;https://www.youtube.com/watch?v=ynjax44fN-E&#34;&gt;Set BlogEngine.NET to use SQL provider in Azure&lt;/a&gt;’, but we found we needed to perform some extra steps due to where our DB was coming from.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As part of our IT refresh we have decided to move this BlogEngine.NET server from a Hyper-V VM in our office to an Azure website.</p>
<p>BlogEngine.NET is now a gallery item for Azure websites, so a few clicks and you should be up and running.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_194.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_191.png" title="image"></a></p>
<p>However, if you want to use SQL as opposed to XML as the datastore you need to do a bit more work. This process is well documented in the video ‘<a href="https://www.youtube.com/watch?v=ynjax44fN-E">Set BlogEngine.NET to use SQL provider in Azure</a>’, but we found we needed to perform some extra steps due to where our DB was coming from.</p>
<h3 id="database-fixes">Database Fixes</h3>
<p>The main issue was that our on premises installation of BlogEngine.NET used a SQL 2012 availability group. This, amongst other things, adds some extra settings that stop the ‘Deploy Database to Azure’ feature in SQL Management Studio from working. To address these issues I did the following:</p>
<p>I took a SQL backup of the DB from our production server and restored it to a local SQL 2012 Standard edition instance. I then tried the Deploy to Azure option</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_195.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_192.png" title="image"></a></p>
<p>But I got the errors I was expecting</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_196.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_193.png" title="image"></a></p>
<p>There were three types of error:</p>
<pre tabindex="0"><code>Error SQL71564: Element User: [BLACKMARBLE\AUser] has an unsupported property AuthenticationType set and is not supported when used as part of a data package.  
Error SQL71564: Element Column: [dbo].[be_Categories].[CategoryID] has an unsupported property IsRowGuidColumn set and is not supported when used as part of a data package.  
Error SQL71564: Table Table: [dbo].[be_CustomFields] does not have a clustered index.  Clustered indexes are required for inserting data in this version of SQL Server.  
</code></pre><p>The first was fixed by simply deleting the listed users in SQL Management Studio, or via the query</p>
<pre tabindex="0"><code>DROP USER [BLACKMARBLE\Auser]
</code></pre><p>The second was addressed by removing the ‘IsRowGuidColumn’ property in Management Studio</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_197.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_194.png" title="image"></a></p>
<p>or via the query</p>
<pre tabindex="0"><code>ALTER TABLE dbo.be_Categories ALTER COLUMN CategoryID DROP ROWGUIDCOL
</code></pre><p>Finally, I had to replace the non-clustered index with a clustered one. I got the required definition from the setup folder of our BlogEngine.NET installation, and ran the command</p>
<pre tabindex="0"><code>DROP INDEX [idx_be_CustomType_ObjectId_BlogId_Key] ON [dbo].[be_CustomFields]

CREATE CLUSTERED INDEX [idx_be_CustomType_ObjectId_BlogId_Key] ON [dbo].[be_CustomFields]   
(  
    [CustomType] ASC,  
    [ObjectId] ASC,  
    [BlogId] ASC,  
    [Key] ASC  
)  
</code></pre><p>Once all this was done in Management Studio I could use Deploy Database to Azure, and after a minute or two had a BlogEngine.NET DB on Azure</p>
<h3 id="azure-sql-login">Azure SQL Login</h3>
<p>The new DB did not have any user accounts associated with it, so I had to create one.</p>
<p>On the SQL server’s master DB I ran</p>
<pre tabindex="0"><code>CREATE LOGIN usrBlog WITH password=&#39;a_password&#39;;
</code></pre><p>And then on the new DB I ran</p>
<pre tabindex="0"><code>CREATE USER usrBlog FROM LOGIN usrBlog;  
EXEC sp_addrolemember N&#39;db_owner&#39;, usrBlog
</code></pre><h3 id="azure-website">Azure Website</h3>
<p>At this point we could have created a new Azure website using the BlogEngine.NET template in the gallery. However, I chose to create an empty site as our version of BlogEngine.NET (3.x) is newer than the version in the Azure gallery (2.9).</p>
<p>Due to the history of our blog server we have a non-default structure; the BlogEngine.NET code is not in the root. We retain some folders with redirection to allow old URLs to still work. So via an FTP client we created the following structure, copying up the content from our on premises server</p>
<ul>
<li>
<p>site\wwwroot - the root site, we have a redirect here to the blogs folder</p>
</li>
<li>
<p>site\wwwroot\bm-bloggers – again a redirect to the blogs folder, dating back to our first shared blog</p>
</li>
<li>
<p>site\wwwroot\blogs – our actual server, this needs to be a virtual application</p>
<p>Next I set the virtual application in the Configure section for the new website, right at the bottom of the page</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_198.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_195.png" title="image"></a></p>
<p>At this point I was back in line with the <a href="https://www.youtube.com/watch?v=ynjax44fN-E">video</a>, so I needed to link the web site to the DB. This is done using the link button on the Azure web site’s management page. I entered the credentials for the new SQL DB and the DB and web site were linked. I could then get the connection string for the DB and enter it into the web.config.</p>
</li>
</ul>
<p>Unlike in the video, the only edit I needed to make was to the connection string, as all the other edits had already been made for the on premises SQL</p>
<p>Once the revised web.config was uploaded the site started up, and you should be seeing it now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Publishing more than one Azure Cloud Service as part of a TFS build</title>
      <link>https://blog.richardfennell.net/posts/publishing-more-than-one-azure-cloud-service-as-part-of-a-tfs-build/</link>
      <pubDate>Wed, 10 Sep 2014 14:03:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/publishing-more-than-one-azure-cloud-service-as-part-of-a-tfs-build/</guid>
      <description>&lt;p&gt;Using the process in my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/07/14/Building-Azure-Cloud-Applications-on-TFS.aspx&#34;&gt;previous post&lt;/a&gt; you can get a TFS build to create the .CSCFG and .CSPKG files needed to publish a Cloud Service. However, you hit a problem if your solution contains more than one Cloud Service project; as opposed to a single cloud service project with multiple roles, which is not a problem.&lt;/p&gt;
&lt;p&gt;The method outlined in the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/07/14/Building-Azure-Cloud-Applications-on-TFS.aspx&#34;&gt;previous post&lt;/a&gt; drops the two files into a &lt;strong&gt;Packages&lt;/strong&gt; folder under the drops location. The .CSPKG files are fine, as they have unique names. However there is only one &lt;strong&gt;ServiceConfiguration.cscfg&lt;/strong&gt;, whichever one was created last.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Using the process in my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/07/14/Building-Azure-Cloud-Applications-on-TFS.aspx">previous post</a> you can get a TFS build to create the .CSCFG and .CSPKG files needed to publish a Cloud Service. However, you hit a problem if your solution contains more than one Cloud Service project; as opposed to a single cloud service project with multiple roles, which is not a problem.</p>
<p>The method outlined in the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/07/14/Building-Azure-Cloud-Applications-on-TFS.aspx">previous post</a> drops the two files into a <strong>Packages</strong> folder under the drops location. The .CSPKG files are fine, as they have unique names. However there is only one <strong>ServiceConfiguration.cscfg</strong>, whichever one was created last.</p>
<p>Looking in the cloud service projects I could find no way to rename the <strong>ServiceConfiguration</strong> file. It looks like it is treated like an <strong>app.config</strong> or <strong>web.config</strong> file, i.e. its name is hard coded.</p>
<p>The only solution I could find was to add a custom target that is set to run after the publish target. This was added to the end of each .<strong>CCPROJ</strong> file using a text editor, just before the closing <strong>&lt;/Project&gt;</strong></p>
<pre tabindex="0"><code> &lt;Target Name=&#34;CustomPostPublishActions&#34; AfterTargets=&#34;Publish&#34;&gt;  
    &lt;Exec Command=&#34;IF &#39;$(BuildingInsideVisualStudio)&#39;==&#39;true&#39; exit 0  
    echo Post-PUBLISH event: Active configuration is: $(ConfigurationName) renaming the .cscfg file to avoid name clashes  
    echo Renaming the .CSCFG file to match the project name $(ProjectName).cscfg  
    ren $(OutDir)Packages\ServiceConfiguration.*.cscfg $(ProjectName).cscfg  
    &#34; /&gt;  
  &lt;/Target&gt;  
   &lt;PropertyGroup&gt;  
    &lt;PostBuildEvent&gt;echo NOTE: This project has a post publish event&lt;/PostBuildEvent&gt;  
  &lt;/PropertyGroup&gt;
</code></pre><p>Using this I now get unique names for the .CSCFG files as well as for the .CSPKG files in my drops location, all ready for Release Management to pick up.</p>
<p>Notes:</p>
<ul>
<li>I echo out a message in the post build event too just as a reminder that I have added a custom target that cannot be seen in Visual Studio, so is hard to discover</li>
<li>I use an if test to make sure the commands are only run on the TFS build box, not on a local build. The main reason for this is the path names are different for local builds as opposed to TFS builds. If you do want a rename on a local build you need to change the <strong>$(OutDir)Packages</strong> path to <strong>$(OutDir)app.publish</strong>. However, it seemed more sensible to leave the default behaviour when running locally</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Getting the correct path and name for a project to pass as an MSBuild argument in TFS Build</title>
      <link>https://blog.richardfennell.net/posts/getting-the-correct-path-and-name-for-a-project-to-pass-as-an-msbuild-argument-in-tfs-build/</link>
      <pubDate>Wed, 10 Sep 2014 10:59:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-the-correct-path-and-name-for-a-project-to-pass-as-an-msbuild-argument-in-tfs-build/</guid>
      <description>&lt;p&gt;I have been sorting out some builds for use with Release Management that include Azure Cloud Solutions. To get the correct packages built by TFS I have followed the process in my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/07/14/Building-Azure-Cloud-Applications-on-TFS.aspx&#34;&gt;past blog post&lt;/a&gt;. The problem was I kept getting the build error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;The target &amp;#34;Azure Packages\BlackMarble.Win8AppBuilder.AzureApi&amp;#34; does not exist in the project.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The issue was I could not get the solution folder/project name right for the MSBUILD target parameter. Was it the spaces in the folder? I just did not know.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been sorting out some builds for use with Release Management that include Azure Cloud Solutions. To get the correct packages built by TFS I have followed the process in my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/07/14/Building-Azure-Cloud-Applications-on-TFS.aspx">past blog post</a>. The problem was I kept getting the build error</p>
<pre tabindex="0"><code>The target &#34;Azure Packages\BlackMarble.Win8AppBuilder.AzureApi&#34; does not exist in the project.
</code></pre><p>The issue was I could not get the solution folder/project name right for the MSBUILD target parameter. Was it the spaces in the folder? I just did not know.</p>
<p>The solution was to check the .PROJ file that was actually being run by MSBUILD. As you may know, a .SLN file is not in MSBUILD format, so you can’t just open it in Notepad and look (unlike .CSPROJ or .VBPROJ files); the MSBUILD project for a solution is generated on the fly. To see this generated code, run the following commands at a developer’s command prompt</p>
<pre tabindex="0"><code>cd c:\mysolutionroot  
Set MSBuildEmitSolution=1  
msbuild
</code></pre><p>When the MSBUILD command is run, whether the build works or not, there should be a <strong>mysolution.sln.metaproj</strong> file created. If you look in this file you will see the actual targets MSBUILD thinks it is dealing with.</p>
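<p>Rather than reading the whole generated file, the target names can be pulled out with a regular expression; a sketch (the here-string below is a stand-in for the content of mysolution.sln.metaproj, e.g. from Get-Content -Raw):</p>

```powershell
# Sketch: extract the Target names from .metaproj content.
# $metaproj stands in for (Get-Content mysolution.sln.metaproj -Raw).
$metaproj = @'
<Target Name="Azure Packages\BlackMarble_Win8AppBuilder_AzureApi:Publish">
<Target Name="Build">
'@
$targets = [regex]::Matches($metaproj, '<Target Name="([^"]+)"') |
    ForEach-Object { $_.Groups[1].Value }
$targets | ForEach-Object { Write-Host $_ }
```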
<p>In my case I could see</p>
<pre tabindex="0"><code>&lt;Target Name=&#34;Azure PackagesBlackMarble\_Win8AppBuilder\_AzureApi:Publish&#34;&gt;
</code></pre><p>So the first issue was that the periods in the project name had been replaced by underscores.</p>
<p>I changed my MSBUILD target argument to that shown in the file, but still had a problem. However, once I changed the space in the solution folder name to %20 all was OK. So my final MSBUILD argument was</p>
<pre tabindex="0"><code>/t:Azure%20Packages\BlackMarble_Win8AppBuilder_AzureApi:Publish
</code></pre><p><a href="/wp-content/uploads/sites/2/historic/image_195.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_192.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Deploying a Windows service with Release Management</title>
      <link>https://blog.richardfennell.net/posts/deploying-a-windows-service-with-release-management/</link>
      <pubDate>Tue, 09 Sep 2014 14:29:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/deploying-a-windows-service-with-release-management/</guid>
      <description>&lt;p&gt;I recently needed to deploy a Windows service as part of a Release Management pipeline. In the past, for our internal systems, I have only needed to deploy DBs (via SSDT DACPACs) and websites (via MSDeploy), so this was a new experience.&lt;/p&gt;
&lt;h3 id=&#34;wix-contents&#34;&gt;WIX Contents&lt;/h3&gt;
&lt;p&gt;The first step was to create an MSI installer for the service. This was done using &lt;a href=&#34;http://wixtoolset.org/&#34;&gt;WIX&lt;/a&gt;, with all the fun that usually entails. The key part was a component to do the actual registration and starting of the service&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently needed to deploy a Windows service as part of a Release Management pipeline. In the past, for our internal systems, I have only needed to deploy DBs (via SSDT DACPACs) and websites (via MSDeploy), so this was a new experience.</p>
<h3 id="wix-contents">WIX Contents</h3>
<p>The first step was to create an MSI installer for the service. This was done using <a href="http://wixtoolset.org/">WIX</a>, with all the fun that usually entails. The key part was a component to do the actual registration and starting of the service</p>
<pre tabindex="0"><code>&lt;Component Id =&#34;ModuleHostInstall&#34; Guid=&#34;{3DF13451-6A04-4B62-AFCB-731A572C12C9}&#34; Win64=&#34;yes&#34;&gt;  
   &lt;CreateFolder /&gt;  
   &lt;Util:User Id=&#34;ModuleHostServiceUser&#34; CreateUser=&#34;no&#34; Name=&#34;\[SERVICEUSER\]&#34; Password=&#34;\[PASSWORD\]&#34; LogonAsService=&#34;yes&#34; /&gt;  
   &lt;File Id=&#34;CandyModuleHostService&#34; Name =&#34;DataFeed.ModuleHost.exe&#34; Source=&#34;$(var.ModuleHost.TargetDir)ModuleHost.exe&#34; KeyPath=&#34;yes&#34; Vital=&#34;yes&#34;/&gt;  
   &lt;ServiceInstall Id=&#34;CandyModuleHostService&#34; Name =&#34;ModuleHost&#34; DisplayName=&#34;Candy Module Host&#34; Start=&#34;auto&#34; ErrorControl=&#34;normal&#34; Type=&#34;ownProcess&#34;  Account=&#34;\[SERVICEUSER\]&#34; Password=&#34;\[PASSWORD\]&#34; Description=&#34;Manages the deployment of Candy modules&#34; /&gt;   
   &lt;ServiceControl Id=&#34;CandyModuleHostServiceControl&#34; Name=&#34;ModuleHost&#34; Start=&#34;install&#34; Stop=&#34;both&#34; Wait=&#34;yes&#34; Remove=&#34;uninstall&#34;/&gt;
</code></pre><p>So nothing that special here, but it is worth remembering that if you miss out the <strong>ServiceControl</strong> block the service will not automatically start, nor be removed when the MSI is uninstalled</p>
<p>You can see that we pass in the service account to be used to run the service as a property. This is an important technique for using WIX with Release Management: you will want to be able to pass in, as a parameter, anything you may want to change at installation time. This means we ended up with a good few properties such as</p>
<pre tabindex="0"><code>  &lt;Property Id=&#34;DBSERVER&#34; Value=&#34;.sqlexpress&#34; /&gt;  
  &lt;Property Id=&#34;DBNAME&#34; Value =&#34;=CandyDB&#34; /&gt;  
  &lt;Property Id=&#34;SERVICEUSER&#34; Value=&#34;Domainserviceuser&#34; /&gt;  
  &lt;Property Id=&#34;PASSWORD&#34; Value=&#34;Password1&#34; /&gt;  
</code></pre><p>These tended to equate to app.config settings. In all cases I tried to set sensible default values so in most cases I could avoid passing in an override value.</p>
<p>These property values were then used to rewrite the app.config file after the files had been copied from the MSI onto the target server. This was done using the Util:XmlFile element and some XPath e.g.</p>
<pre tabindex="0"><code>&lt;Util:XmlFile Id=&#34;CacheDatabaseName&#34;   
     Action=&#34;setValue&#34;   
     Permanent=&#34;yes&#34;   
     File=&#34;\[#ModuleHost.exe.config\]&#34;   
     ElementPath=&#34;/configuration/applicationSettings/DataFeed.Properties.Settings/setting\[\[\]@name=&#39;CacheDatabaseName&#39;\[\]\]/value&#34; Value=&#34;\[CACHEDATABASENAME\]&#34; Sequence=&#34;1&#34; /&gt;  
 
</code></pre><h3 id="command-line-testing">Command Line Testing</h3>
<p>Once the MSI was built it could be tested from the command line using the form</p>
<pre tabindex="0"><code>msiexec /i Installer.msi /Lv msi.log SERVICEUSER=&#34;domain\svc_acc&#34; PASSWORD=&#34;Password1&#34; DBSERVER=&#34;dbserver&#34; DBNAME=&#34;myDB&#34; …..
</code></pre><p>I soon spotted a problem. As I was equating properties with app.config settings I was passing in connection strings and URLs, so the command line got very long very quickly and was really unwieldy to handle</p>
<p>A check of the log file I was creating, msi.log, showed the command line seemed to be truncated. This seemed to occur around 1000 characters. I am not sure if this was an artefact of the logging or the command line, but either way a good reason to try to shorten the property list.</p>
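<p>It is easy to guard against this by checking the length of the assembled property list before calling MSIEXEC. A minimal sketch (Python; the ~1000 character limit is just the figure observed in my log, not a documented one):</p>

```python
TRUNCATION_LIMIT = 1000  # approximate limit seen in the msi.log file

def build_property_args(properties):
    # assemble the PROPERTY=value pairs for the msiexec command line
    # and fail early if the list risks being truncated
    args = " ".join("{0}={1}".format(name, value)
                    for name, value in properties.items())
    if len(args) > TRUNCATION_LIMIT:
        raise ValueError("property list is %d characters; pass shorter "
                         "values and build the full strings in the installer"
                         % len(args))
    return args

print(build_property_args({"DBSERVER": "dbserver", "DBNAME": "myDB"}))
# DBSERVER=dbserver DBNAME=myDB
```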
<p>I therefore decided that I would not pass in whole connection strings, but just the parts that might change; this is especially effective for connection strings for things such as Entity Framework. This meant I did some string building in WIX during the transformation of the app.config file e.g.</p>
<pre tabindex="0"><code>&lt;Util:XmlFile Id=&#39;CandyManagementEntities1&#39;  
   Action=&#39;setValue&#39;  
   ElementPath=&#39;/configuration/connectionStrings/add\[\[\]@name=&#34;MyManagementEntities&#34;\[\]\]/@connectionString&#39;  
   File=&#39;\[#ModuleHost.exe.config\]&#39; Value=&#39;metadata=res://\*/MyEntities.csdl|res://\*/MyEntities.ssdl|res://\*/MyEntities.msl;provider=System.Data.SqlClient;provider connection string=&amp;quot;data source=\[DBSERVER\];initial catalog=\[DBNAME\];integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&amp;quot;&#39; /&gt;
</code></pre><p>This technique had another couple of advantages</p>
<ul>
<li>It meant I did not need to worry about spaces in strings, so I could lose the quotes in the command line. Turns out this is really important later.</li>
<li>As I was passing in just a ‘secret value’ as opposed to a whole URL I could use the encryption features of Release Management to hide certain values</li>
</ul>
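<p>To make the string building concrete, here is the same transformation sketched in Python (the entity and property names are the illustrative ones used above; at install time the work is actually done by the Util:XmlFile element):</p>

```python
def ef_connection_string(dbserver, dbname):
    # build the full Entity Framework connection string from just the
    # two short values that change between environments
    return (
        "metadata=res://*/MyEntities.csdl|res://*/MyEntities.ssdl|"
        "res://*/MyEntities.msl;provider=System.Data.SqlClient;"
        'provider connection string="data source=' + dbserver +
        ";initial catalog=" + dbname +
        ';integrated security=True;MultipleActiveResultSets=True;'
        'App=EntityFramework"'
    )

print(ef_connection_string(r".\sqlexpress", "CandyDB"))
```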
<p>It is at this point I was delayed for a long time. You have to be really careful when installing Windows services via an MSI that your service can actually start. If it cannot then you will get errors saying <em>&quot;… could not be installed. Verify that you have sufficient privileges to install system services&quot;</em>. This is probably not really a rights issue, just that some configuration setting is wrong so the service has failed to start. In my case it was down to an incorrect connection string, stray commas and quotes, and a missing DLL that should have been in the installer. You often end up working fairly blind at this point as Windows services don’t give too much information when they fail to load. Persistence, <a href="http://technet.microsoft.com/en-gb/sysinternals/bb545021.aspx">SysInternals Tools</a> and comparing to the settings/files on a working development PC are the best options</p>
<h3 id="release-management-component">Release Management Component</h3>
<p>Once I had a working command line I could create a component in Release Management. On the Configure Apps &gt; Components page I already had an MSI deployer, but this did not expose any properties. I therefore copied this component to create an MSI deployer specific to my new service installer and started to edit it.</p>
<p>All the edits were on the deployment tab, adding the extra properties that could be configured.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_194.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_191.png" title="image"></a></p>
<p><strong>Note</strong>: It might now be possible to do something with the <a href="http://www.colinsalmcorner.com/post/webdeploy-and-release-management--the-proper-way">pre/post deployment configuration variables as we do with MSDeploy</a>, allowing the MSI to run and then editing the app.config afterwards. However, given that MSI service installers tend to fail if they cannot start the new service, I think passing the correct properties into MSIEXEC is the better option. It also means the behaviour is consistent for anyone using the MSI via the command line.</p>
<p>On the Deployment tab I changed the Arguments to</p>
<pre tabindex="0"><code>-File ./msiexec.ps1 -MsiFileName &#34;__Installer__&#34;  -MsiCustomArgs ‘SERVICEUSER=”__SERVICEUSER__”  PASSWORD=”__PASSWORD__” DBSERVER=”__DBSERVER__”  DBNAME=”__DBNAME__” …. ’
</code></pre><p>I had initially assumed I needed the quotes around property values. Turns out I didn’t, and due to the way Release Management runs the component they made matters much, much worse. MSIEXEC kept failing instantly. If I ran the command line by hand on the target machine it actually showed the Help dialog, so I knew the command line was invalid.</p>
<p>Turns out the issue is Release Management calls PowerShell.EXE to run the script passing in the Arguments. This in turn calls a PowerShell Script which does some argument processing before running a process to run MSIEXEC.EXE with some parameters. You can see there are loads of places where the escaping and quotes around parameters could get confused.</p>
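<p>A rough sketch of the first of those layers, the token expansion, written in Python purely for illustration (the __TOKEN__ placeholder style is Release Management’s; the helper name is hypothetical):</p>

```python
import re

def expand_tokens(arguments, values):
    # replace each Release Management style __TOKEN__ placeholder with
    # its configured value before the arguments reach PowerShell.exe
    return re.sub(r"__([A-Z]+)__", lambda m: values[m.group(1)], arguments)

print(expand_tokens("SERVICEUSER=__SERVICEUSER__ DBNAME=__DBNAME__",
                    {"SERVICEUSER": "svc_acc", "DBNAME": "CandyDB"}))
# SERVICEUSER=svc_acc DBNAME=CandyDB
```

<p>Every layer after this expansion gets another chance to re-interpret quotes, which is why dropping the quotes altogether turned out to be the simplest fix.</p>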
<p>After much fiddling, swapping ‘ for “ I realised I could just forget most of the quotes. I had already edited my WIX package to build complex strings, so the actual values were simple with no spaces. Hence my command line became</p>
<pre tabindex="0"><code>-File ./msiexec.ps1 -MsiFileName &#34;__Installer__&#34;  -MsiCustomArgs “SERVICEUSER=__SERVICEUSER__  PASSWORD=__PASSWORD__ DBSERVER=__DBSERVER__  DBNAME=__DBNAME__ …. “
</code></pre><p>Once this was set my release pipeline worked, resulting in a system with DBs, web services and a Windows service all up and running.</p>
<p>As is often the case it took a while to get this first MSI running, but I am sure the next one will be much easier.</p>
]]></content:encoded>
    </item>
    <item>
      <title>PowerShell Summit Europe 2014</title>
      <link>https://blog.richardfennell.net/posts/powershell-summit-europe-2014/</link>
      <pubDate>Mon, 08 Sep 2014 09:58:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/powershell-summit-europe-2014/</guid>
      <description>&lt;p&gt;I find I am spending more time with PowerShell these days, as we aim to automate more of our releases, specifically with DSC in PowerShell 4, as I am sure many of us are&lt;/p&gt;
&lt;p&gt;Given that fact, the &lt;a href=&#34;http://eventmgr.azurewebsites.net/event/home/PSEU14&#34;&gt;PowerShell Summit Europe 2014&lt;/a&gt; at the end of the month looks interesting. I found out about it too late and have diary clashes, but it might be of interest to some of you. Looks like a really good hands-on event.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I find I am spending more time with PowerShell these days, as we aim to automate more of our releases, specifically with DSC in PowerShell 4, as I am sure many of us are</p>
<p>Given that fact, the <a href="http://eventmgr.azurewebsites.net/event/home/PSEU14">PowerShell Summit Europe 2014</a> at the end of the month looks interesting. I found out about it too late and have diary clashes, but it might be of interest to some of you. Looks like a really good hands-on event.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Got around to updating my Nokia 820 to WP81 Update 1</title>
      <link>https://blog.richardfennell.net/posts/got-around-to-updating-my-nokia-820-to-wp81-update-1/</link>
      <pubDate>Wed, 03 Sep 2014 17:44:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/got-around-to-updating-my-nokia-820-to-wp81-update-1/</guid>
      <description>&lt;p&gt;I had been suffering with the 0x80188308 error when I tried to update my Nokia 820 to the WP81 Update 1 because I had the developer preview installed. I had been putting off what appeared to be the only solution of &lt;a href=&#34;http://answers.microsoft.com/en-us/winphone/forum/wpdp-wpupdate/0x80188308-error-on-81-dp-update-1-install/ccfe96a7-4a60-4bbc-b211-7450b134b41b&#34;&gt;doing a reset as discussed in the forums&lt;/a&gt; as it seemed a bit drastic; I thought I would wait for Microsoft to sort out the process. I got bored waiting…&lt;/p&gt;
&lt;p&gt;Turns out as long as you do the backup first it is fairly painless, took about an hour of uploads and downloads over WiFi&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had been suffering with the 0x80188308 error when I tried to update my Nokia 820 to the WP81 Update 1 because I had the developer preview installed. I had been putting off what appeared to be the only solution of <a href="http://answers.microsoft.com/en-us/winphone/forum/wpdp-wpupdate/0x80188308-error-on-81-dp-update-1-install/ccfe96a7-4a60-4bbc-b211-7450b134b41b">doing a reset as discussed in the forums</a> as it seemed a bit drastic; I thought I would wait for Microsoft to sort out the process. I got bored waiting…</p>
<p>Turns out as long as you do the backup first it is fairly painless, took about an hour of uploads and downloads over WiFi</p>
<ol>
<li>Created a manual backup of the phone: Settings&gt;backup&gt;apps+settings&gt;backup now.</li>
<li>Reset the phone to factory settings (DP 8.1), leaving any SD card alone: Settings&gt;about&gt;reset your phone.</li>
<li>When prompted logged in with the same ID as used for the backup</li>
<li>Restored the phone using the backup just created.  </li>
<li>Reconnected to all of the other accounts and let the phone download all of the apps.</li>
<li>Signed back into the Preview for Developers app – else you won’t see the updates!</li>
<li>The update comes down without a problem as one large package</li>
</ol>
<p>Let’s have a go with a UK-aware version of Cortana….</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting ‘… is not a valid URL’ when using Git TF Clone</title>
      <link>https://blog.richardfennell.net/posts/getting-is-not-a-valid-url-when-using-git-tf-clone/</link>
      <pubDate>Tue, 02 Sep 2014 11:34:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-is-not-a-valid-url-when-using-git-tf-clone/</guid>
      <description>&lt;p&gt;I have been attempting to use the &lt;a href=&#34;http://www.microsoft.com/en-gb/developers/articles/week02mar2014/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible&#34;&gt;Git TF technique&lt;/a&gt; to migrate some content between TFS servers. I needed to move a folder structure that contains spaces in folder names from a TPC that also contains spaces in its name. So I thought my command line would be&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;git tf clone “http://tfsserver1:8080/tfs/My Tpc” “$/My Folder” oldrepo --deep
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;But this gave the error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;git-tf: “http://tfsserver1:8080/tfs/My Tpc” is not a valid URL
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;At first I suspected it was the quotes I was using, as I had &lt;a href=&#34;https://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/06/09/Cloning-tfs-repository-with-git-tf-gives-a-a-server-path-must-be-absolute.aspx&#34;&gt;had problems here before&lt;/a&gt;, but swapping from ‘ to “ made no difference.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been attempting to use the <a href="http://www.microsoft.com/en-gb/developers/articles/week02mar2014/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible">Git TF technique</a> to migrate some content between TFS servers. I needed to move a folder structure that contains spaces in folder names from a TPC that also contains spaces in its name. So I thought my command line would be</p>
<pre tabindex="0"><code>git tf clone “http://tfsserver1:8080/tfs/My Tpc” “$/My Folder” oldrepo --deep
</code></pre><p>But this gave the error</p>
<pre tabindex="0"><code>git-tf: “http://tfsserver1:8080/tfs/My Tpc” is not a valid URL
</code></pre><p>At first I suspected it was the quotes I was using, as I had <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/06/09/Cloning-tfs-repository-with-git-tf-gives-a-a-server-path-must-be-absolute.aspx">had problems here before</a>, but swapping from ‘ to “ made no difference.</p>
<p>The answer was to use the ASCII code %20 for the space, so this version of the command worked</p>
<pre tabindex="0"><code>git tf clone http://tfsserver1:8080/tfs/My%20Tpc “$/My Folder” oldrepo --deep
</code></pre><p>Interestingly you don’t need to use %20 for the folder name</p>
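<p>As an aside, if you ever need to encode more than a single space, Python’s standard library will do the percent-encoding for you (just a convenience, not part of the git-tf tooling):</p>

```python
from urllib.parse import quote

# percent-encode the collection name before building the git-tf URL
collection = quote("My Tpc")
print("http://tfsserver1:8080/tfs/" + collection)
# http://tfsserver1:8080/tfs/My%20Tpc
```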
]]></content:encoded>
    </item>
    <item>
      <title>Build failing post TFS 2013.3 upgrade with ‘Stack empty. (type InvalidOperationException)’</title>
      <link>https://blog.richardfennell.net/posts/build-failing-post-tfs-2013-3-upgrade-with-stack-empty-type-invalidoperationexception/</link>
      <pubDate>Mon, 01 Sep 2014 20:42:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/build-failing-post-tfs-2013-3-upgrade-with-stack-empty-type-invalidoperationexception/</guid>
      <description>&lt;p&gt;Just started seeing a build error on a build that was working until we upgraded the build agent to TFS 2013.3&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;Exception Message: Stack empty. (type InvalidOperationException)  
Exception Stack Trace:    at Microsoft.VisualStudio.TestImpact.Analysis.LanguageSignatureParser.NotifyEndType()  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseType()  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseRetType()  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseMethod(Byte num1)  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.Parse(Byte* blob, UInt32 len)  
   at Microsoft.VisualStudio.TestImpact.Analysis.LanguageSignatureParser.ParseMethodName(MethodProps methodProps, String&amp;amp; typeName, String&amp;amp; fullName)  
   at Microsoft.VisualStudio.TestImpact.Analysis.AssemblyMethodComparer.AddChangeToList(DateTime now, List`1 changes, CodeChangeReason reason, MethodInfo methodInfo, MetadataReader metadataReader, Guid assemblyIdentifier, SymbolReader symbolsReader, UInt32 sourceToken, LanguageSignatureParser&amp;amp; languageParser)  
   at Microsoft.VisualStudio.TestImpact.Analysis.AssemblyMethodComparer.CompareAssemblies(String firstPath, String secondPath, Boolean lookupSourceFiles)  
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.CompareBinary(CodeActivityContext context, String sharePath, String assembly, IList`1 codeChanges)  
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.CompareBuildBinaries(CodeActivityContext context, IBuildDefinition definition, IList`1 codeChanges)  
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.Execute(CodeActivityContext context)  
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)  
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I assume the issue is a DLL mismatch between what is installed in as part of the build agent and something in the 2012 generation build process template in use.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just started seeing a build error on a build that was working until we upgraded the build agent to TFS 2013.3</p>
<pre tabindex="0"><code>Exception Message: Stack empty. (type InvalidOperationException)  
Exception Stack Trace:    at Microsoft.VisualStudio.TestImpact.Analysis.LanguageSignatureParser.NotifyEndType()  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseType()  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseRetType()  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.ParseMethod(Byte num1)  
   at Microsoft.VisualStudio.TestImpact.Analysis.SigParser.Parse(Byte* blob, UInt32 len)  
   at Microsoft.VisualStudio.TestImpact.Analysis.LanguageSignatureParser.ParseMethodName(MethodProps methodProps, String&amp; typeName, String&amp; fullName)  
   at Microsoft.VisualStudio.TestImpact.Analysis.AssemblyMethodComparer.AddChangeToList(DateTime now, List`1 changes, CodeChangeReason reason, MethodInfo methodInfo, MetadataReader metadataReader, Guid assemblyIdentifier, SymbolReader symbolsReader, UInt32 sourceToken, LanguageSignatureParser&amp; languageParser)  
   at Microsoft.VisualStudio.TestImpact.Analysis.AssemblyMethodComparer.CompareAssemblies(String firstPath, String secondPath, Boolean lookupSourceFiles)  
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.CompareBinary(CodeActivityContext context, String sharePath, String assembly, IList`1 codeChanges)  
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.CompareBuildBinaries(CodeActivityContext context, IBuildDefinition definition, IList`1 codeChanges)  
   at Microsoft.TeamFoundation.TestImpact.BuildIntegration.BuildActivities.GetImpactedTests.Execute(CodeActivityContext context)  
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)  
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
</code></pre><p>I assume the issue is a DLL mismatch between what is installed as part of the build agent and something in the 2012 generation build process template in use.</p>
<p>The immediate fix, until I get a chance to swap the template to a newer one, was to disable Test Impact Analysis, which I was not using for this project anyway.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_193.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_190.png" title="image"></a></p>
<p>Once I did this my build completed OK and the tests ran OK</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDDNorth - agenda published - registration opens - and it is full</title>
      <link>https://blog.richardfennell.net/posts/dddnorth-agenda-published-registration-opens-and-it-is-full/</link>
      <pubDate>Mon, 01 Sep 2014 13:11:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dddnorth-agenda-published-registration-opens-and-it-is-full/</guid>
      <description>&lt;p&gt;This morning &lt;a href=&#34;http://www.dddnorth.co.uk/Schedule&#34;&gt;DDDNorth’s  agenda was published&lt;/a&gt;, &lt;a href=&#34;http://www.dddnorth.co.uk/Home/Register&#34;&gt;registration opened&lt;/a&gt; and it was full. All within a couple of hours.&lt;/p&gt;
&lt;p&gt;Looks like a good event if you managed to get a ticket. Glad I can get in as a speaker, else this morning’s meetings would have left me on the waiting list&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This morning <a href="http://www.dddnorth.co.uk/Schedule">DDDNorth’s  agenda was published</a>, <a href="http://www.dddnorth.co.uk/Home/Register">registration opened</a> and it was full. All within a couple of hours.</p>
<p>Looks like a good event if you managed to get a ticket. Glad I can get in as a speaker, else this morning’s meetings would have left me on the waiting list</p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>My DDDnorth session has been accepted</title>
      <link>https://blog.richardfennell.net/posts/my-dddnorth-session-has-been-accepted/</link>
      <pubDate>Sun, 31 Aug 2014 11:54:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-dddnorth-session-has-been-accepted/</guid>
      <description>&lt;p&gt;My &lt;a href=&#34;http://www.dddnorth.co.uk/&#34;&gt;DDDNorth&lt;/a&gt; session &amp;ldquo;What is Desired State Configuration and how does it help me?&amp;rdquo; has been accepted, looking forward to it&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My <a href="http://www.dddnorth.co.uk/">DDDNorth</a> session &ldquo;What is Desired State Configuration and how does it help me?&rdquo; has been accepted, looking forward to it</p>
]]></content:encoded>
    </item>
    <item>
      <title>Listing all the PBIs that have no acceptance criteria</title>
      <link>https://blog.richardfennell.net/posts/listing-all-the-pbis-that-have-no-acceptance-criteria/</link>
      <pubDate>Fri, 22 Aug 2014 12:49:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/listing-all-the-pbis-that-have-no-acceptance-criteria/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Update 24 Aug 2014:&lt;/strong&gt;  Changed the PowerShell to use a pipe based filter as opposed to nested foreach loops&lt;/p&gt;
&lt;p&gt;The TFS Scrum process template’s Product Backlog Item work item type has an acceptance criteria field. It is good practice to make sure any PBI has this field completed; however it is not always possible to enter this content when the work item is initially created, i.e. before it is approved. We often find we add a PBI that is basically a title and add the summary and acceptance criteria as the product is planned.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Update 24 Aug 2014:</strong>  Changed the PowerShell to use a pipe based filter as opposed to nested foreach loops</p>
<p>The TFS Scrum process template’s Product Backlog Item work item type has an acceptance criteria field. It is good practice to make sure any PBI has this field completed; however it is not always possible to enter this content when the work item is initially created, i.e. before it is approved. We often find we add a PBI that is basically a title and add the summary and acceptance criteria as the product is planned.</p>
<p>It would be really nice to have a TFS work item query that listed all the PBIs that did not have the acceptance criteria field complete. Unfortunately there is no way to check that a rich text or HTML field is empty in TFS queries. <a href="http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/3080939-introduce-is-empty-operator-in-tfs-wiql-for-rich">It has been requested via UserVoice</a>, but there is no sign of it appearing in the near future.</p>
<p>So we are left with the TFS API to save the day; the following PowerShell function does the job, returning a list of non-completed PBI work items that have empty Acceptance Criteria.</p>
<pre tabindex="0"><code># Load the assemblies we need; this might be more than we truly need for this single function
# but I usually keep all these functions in a single module so they share the references
$ReferenceDllLocation = &#34;C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ReferenceAssemblies\v2.0\&#34;
Add-Type -Path $ReferenceDllLocation&#34;Microsoft.TeamFoundation.Client.dll&#34; -ErrorAction Stop -Verbose
Add-Type -Path $ReferenceDllLocation&#34;Microsoft.TeamFoundation.Common.dll&#34; -ErrorAction Stop -Verbose
Add-Type -Path $ReferenceDllLocation&#34;Microsoft.TeamFoundation.WorkItemTracking.Client.dll&#34; -ErrorAction Stop -Verbose

function Get-TfsPBIWIthNoAcceptanceCriteria {
&lt;#
.SYNOPSIS
This function gets the list of PBI work items that have no acceptance criteria

.DESCRIPTION
This function allows a check to be made that all PBIs have a set of acceptance criteria

.PARAMETER CollectionUri
TFS Collection URI

.PARAMETER TeamProject
Team Project Name

.EXAMPLE
Get-TfsPBIWIthNoAcceptanceCriteria -CollectionUri &#34;http://server1:8080/tfs/defaultcollection&#34; -TeamProject &#34;My Project&#34;
#&gt;
    Param
    (
        [Parameter(Mandatory=$true)]
        [uri] $CollectionUri,

        [Parameter(Mandatory=$true)]
        [string] $TeamProject
    )

    # get the source TPC
    $teamProjectCollection = New-Object Microsoft.TeamFoundation.Client.TfsTeamProjectCollection($CollectionUri)
    try
    {
        $teamProjectCollection.EnsureAuthenticated()
    }
    catch
    {
        Write-Error &#34;Error occurred trying to connect to project collection: $_ &#34;
        exit 1
    }

    # get the work item store
    $wiService = $teamProjectCollection.GetService([Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItemStore])

    # find each candidate work item; we can&#39;t check the acceptance criteria state in the query itself
    $pbi = $wiService.Query(&#34;SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = &#39;{0}&#39; AND [System.WorkItemType] = &#39;Product Backlog Item&#39; AND [System.State] &lt;&gt; &#39;Done&#39; ORDER BY [System.Id]&#34; -f $teamproject)

    # use a single piped line to filter the work items
    $pbi | where-Object { $_.Fields | where-object {$_.ReferenceName -eq &#39;Microsoft.VSTS.Common.AcceptanceCriteria&#39; -and $_.Value -eq &#34;&#34;}}

    # the pipe above is equivalent to the following nested loops,
    # for those who prefer a more long-winded structure
    # $results = @()
    # foreach ($wi in $pbi)
    # {
    #    foreach ($field in $wi.Fields)
    #    {
    #        if ($field.ReferenceName -eq &#39;Microsoft.VSTS.Common.AcceptanceCriteria&#39; -and $field.Value -eq &#34;&#34;)
    #        {
    #            $results += $wi
    #        }
    #    }
    # }
    # $results
}
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Guest post on the Microsoft’s UK Developers site ‘Migrating a TFS TFVC based team project to a Git team project - a practical example’</title>
      <link>https://blog.richardfennell.net/posts/guest-post-on-the-microsofts-uk-developers-site-migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-a-practical-example/</link>
      <pubDate>Mon, 18 Aug 2014 12:14:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/guest-post-on-the-microsofts-uk-developers-site-migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-a-practical-example/</guid>
      <description>&lt;p&gt;I have just had an article published on the Microsoft’s UK Developers site &lt;a href=&#34;http://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project&#34;&gt;Migrating a TFS TFVC based team project to a Git team project - a practical example&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just had an article published on the Microsoft’s UK Developers site <a href="http://www.microsoft.com/en-gb/developers/articles/week03aug14/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project">Migrating a TFS TFVC based team project to a Git team project - a practical example</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Why is my TFS report not failing when I really think it should ?</title>
      <link>https://blog.richardfennell.net/posts/why-is-my-tfs-report-not-failing-when-i-really-think-it-should/</link>
      <pubDate>Fri, 15 Aug 2014 21:35:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-is-my-tfs-report-not-failing-when-i-really-think-it-should/</guid>
      <description>&lt;p&gt;Whilst creating some custom reports for a client we hit a problem: though the reports worked on my development system and on their old TFS server, they failed on their new one. The error was that Microsoft_VSTS_Scheduling_CompletedWork was an invalid column name&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_192.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_189.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Initially I suspected the problem was a warehouse reprocessing issue, but other reports worked so it could not have been that.&lt;/p&gt;
&lt;p&gt;It must really be that the column is missing, and that sort of makes sense. On the new server the team was using the Scrum process template, which does not include the Microsoft_VSTS_Scheduling_CompletedWork and Microsoft_VSTS_Scheduling_OriginalEstimate fields; the plan had been to add them to allow some analysis of estimate accuracy. This had been done on my development system, but not on the client’s new server. Once these fields were added to the Task work item the report leapt into life.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst creating some custom reports for a client we hit a problem: though the reports worked on my development system and on their old TFS server, they failed on their new one. The error was that Microsoft_VSTS_Scheduling_CompletedWork was an invalid column name.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_192.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_189.png" title="image"></a></p>
<p>Initially I suspected the problem was a warehouse reprocessing issue, but other reports worked so it could not have been that.</p>
<p>It must really be that the column is missing, and that sort of makes sense. On the new server the team was using the Scrum process template, which does not include the Microsoft_VSTS_Scheduling_CompletedWork and Microsoft_VSTS_Scheduling_OriginalEstimate fields; the plan had been to add them to allow some analysis of estimate accuracy. This had been done on my development system, but not on the client’s new server. Once these fields were added to the Task work item the report leapt into life.</p>
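<p>For reference, adding such a field is a two-step witadmin job; a rough sketch (the collection URL, project name and file name here are placeholders, not the client’s actual values):</p>
<pre tabindex="0"><code>REM export the Task work item type definition, edit it, then import it back
witadmin exportwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /n:Task /f:Task.xml

REM add an entry like this to the FIELDS section of Task.xml
REM &lt;FIELD name=&#34;Completed Work&#34; refname=&#34;Microsoft.VSTS.Scheduling.CompletedWork&#34; type=&#34;Double&#34; reportable=&#34;measure&#34; formula=&#34;sum&#34; /&gt;

witadmin importwitd /collection:http://tfsserver:8080/tfs/DefaultCollection /p:MyProject /f:Task.xml
</code></pre>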
<p>The question then is: why did this work on the old TFS server? The team project on the old server being used to test the reports did not have the customisation either. However, remember that the OLAP cube for the TFS warehouse is shared between ALL team projects on a server; as one of those other team projects was using the MSF Agile template the fields were present, hence the report worked.</p>
<p>Remember that shared OLAP cube; it can trip you up over and over again.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where have my freeview tuners gone?</title>
      <link>https://blog.richardfennell.net/posts/where-have-my-freeview-tuners-gone/</link>
      <pubDate>Thu, 14 Aug 2014 21:53:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-have-my-freeview-tuners-gone/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/category/MCE.aspx&#34;&gt;I have been a long time happy user of Windows Media Center&lt;/a&gt; since its XP days. My current system is Windows 8.1 on an &lt;a href=&#34;http://en.wikipedia.org/wiki/Acer_AspireRevo&#34;&gt;ATOM-based Acer Revo&lt;/a&gt; with a pair of &lt;a href=&#34;http://www.pctvsystems.com/Products/ProductsEuropeAsia/DVBTT2products/PCTVnanoStickT2/tabid/248/language/en-GB/Default.aspx&#34;&gt;USB PCTV Nanostick T2 Freeview HD tuners&lt;/a&gt;. For media storage I used a USB-attached &lt;a href=&#34;http://uk.startech.com/HDD/Enclosures/35in-4-Drive-eSATA-USB-FireWire-External-SATA-RAID-Enclosure~S354UFER&#34;&gt;StarTech RAID disk subsystem&lt;/a&gt;. This has been working well for a good couple of years, sitting in a cupboard under the stairs. However, I am about to move house and all the kit is going to have to go under the TV. The Revo is virtually silent, but the RAID crate was going to be an issue. It sounds like an aircraft taking off as the disks spin up.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/category/MCE.aspx">I have been a long time happy user of Windows Media Center</a> since its XP days. My current system is Windows 8.1 on an <a href="http://en.wikipedia.org/wiki/Acer_AspireRevo">ATOM-based Acer Revo</a> with a pair of <a href="http://www.pctvsystems.com/Products/ProductsEuropeAsia/DVBTT2products/PCTVnanoStickT2/tabid/248/language/en-GB/Default.aspx">USB PCTV Nanostick T2 Freeview HD tuners</a>. For media storage I used a USB-attached <a href="http://uk.startech.com/HDD/Enclosures/35in-4-Drive-eSATA-USB-FireWire-External-SATA-RAID-Enclosure~S354UFER">StarTech RAID disk subsystem</a>. This has been working well for a good couple of years, sitting in a cupboard under the stairs. However, I am about to move house and all the kit is going to have to go under the TV. The Revo is virtually silent, but the RAID crate was going to be an issue. It sounds like an aircraft taking off as the disks spin up.</p>
<p>A change of kit was needed….</p>
<p>I decided the best option was to move to a NAS, thus allowing the potentially noisy disks to be anywhere in the house. So I purchased a <a href="http://www.netgear.co.uk/home/products/connected-storage/RN10400.aspx">Netgear ReadyNAS 104</a>. It shows how prices have dropped over the past few years: this was about half the price of my StarTech RAID, yet holds well over twice as much and provides much more functionality. I wait to see if it is reliable; only time will tell!</p>
<p>So I popped the NAS on the LAN and started to copy over content from the RAID crate, at the same time (and this, it seems, was the mistake) reconfiguring MCE to point at the NAS. All seemed OK, MCE reconfigured and background copies running, until I tried to watch live TV. MCE said it was trying to find a tuner; I waited. In the end I gave up and went to bed, assuming all would be OK in the morning when the media copy was finished and I could reboot the PC.</p>
<p>Unfortunately it was not; after a reboot it still said it could find no tuner. If I tried to rescan for TV channels it just hung (for well over 48 hours, I left it while I went away). All the other functions of MCE seemed fine. I tried removing the USB tuners, both physically and by un-installing the drivers; it had no effect. It seemed I had corrupted the MCE DB, something I had done before, looking back at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/category/MCE.aspx">older posts.</a></p>
<p>In the end I had to reset MCE as detailed on <a href="http://bjdraw.com/2010/05/29/how-to-reset-windows-7-media-center/">Ben Drawbaugh’s blog</a>. Basically I deleted the contents of <em>C:\ProgramData\Microsoft\eHome</em> and reran the MCE Live TV setup wizard. I was not bothered about my channel list order, or series recording settings, so I did not bother with <a href="http://www.hack7mc.com/2010/04/mcbackup-3-0-brings-your-lineup-and-recordings-back.html">mcbackup</a> for the backup and restore steps.</p>
<p>Once this was done the tuners both worked again, though the channel scan took a good hour.</p>
<p>Interestingly, I had assumed clearing out the eHome folder would mean I lost all my MCE settings, including the media library settings, but I didn’t; my MCE was still pointing at the new NAS shares, so a small win.</p>
<p>One point I had not considered over the move to a NAS is that MCE cannot record TV to a network share. Previously I had written all media to the locally attached RAID crate. The solution was to let MCE save TV to the local C:, but use a scheduled job to run ROBOCOPY to move the files to the NAS overnight. Can’t see why it shouldn’t work; again, only time will tell.</p>
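<p>As a sketch, the overnight move is just a scheduled ROBOCOPY along these lines (paths, retry settings and log location here are illustrative, not my final configuration):</p>
<pre tabindex="0"><code>REM move recorded TV to the NAS; /MOV deletes the source after a successful copy,
REM /MINAGE:1 skips today&#39;s files in case MCE is still writing to them
robocopy &#34;C:\Users\Public\Recorded TV&#34; &#34;\\nas\media\Recorded TV&#34; *.wtv /MOV /MINAGE:1 /R:2 /W:30 /LOG:C:\Logs\tvmove.log
</code></pre>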
<p><strong>Update:</strong></p>
<p>Forgot to mention another advantage of moving to the NAS. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/12/03/Upgrading-my-Windows-7-Media-Center-to-Windows-8.aspx">Previously I had to use the Logitech media server to serve music to my old Roku 1000 unit connected to my even older HiFi</a>, now the Roku can use the NAS directly, thus making the system setup far easier</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting the Typemock TFS build activities to work on a TFS build agent running in interactive mode</title>
      <link>https://blog.richardfennell.net/posts/getting-the-typemock-tfs-build-activities-to-work-on-a-tfs-build-agent-running-in-interactive-mode/</link>
      <pubDate>Sat, 02 Aug 2014 17:21:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-the-typemock-tfs-build-activities-to-work-on-a-tfs-build-agent-running-in-interactive-mode/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://msdn.microsoft.com/en-us/library/hh691189.aspx&#34;&gt;Windows 8 store applications need to be built on a TFS build agent running in interactive mode if you wish to run any tests&lt;/a&gt;. So whilst rebuilding all our build systems I decided to try to have all the agents running interactively. As we tend to run one agent per VM, I thought this was not going to be a major issue.&lt;/p&gt;
&lt;p&gt;However, whilst testing we found that any of our builds that use the &lt;a href=&#34;http://docs.typemock.com/isolator/Default.aspx#%23Ref.chm/Documentation/TFS2013Build.html&#34;&gt;Typemock build activities&lt;/a&gt; failed when the build agent was running interactively, but worked perfectly when it was running as a service. The error was&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://msdn.microsoft.com/en-us/library/hh691189.aspx">Windows 8 store applications need to be built on a TFS build agent running in interactive mode if you wish to run any tests</a>. So whilst rebuilding all our build systems I decided to try to have all the agents running interactively. As we tend to run one agent per VM, I thought this was not going to be a major issue.</p>
<p>However, whilst testing we found that any of our builds that use the <a href="http://docs.typemock.com/isolator/Default.aspx#%23Ref.chm/Documentation/TFS2013Build.html">Typemock build activities</a> failed when the build agent was running interactively, but worked perfectly when it was running as a service. The error was</p>
<pre tabindex="0"><code> 

Exception Message: Access to the registry key &#39;HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\TypeMock&#39; is denied. (type UnauthorizedAccessException)  
Exception Stack Trace:    at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)  
   at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions)  
   at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey, RegistryKeyPermissionCheck permissionCheck)  
   at Configuration.RegistryAccess.CreateSubKey(RegistryKey reg, String subkey)  
   at TypeMock.Configuration.IsolatorRegistryManager.CreateTypemockKey()  
   at TypeMock.Deploy.AutoDeployTypeMock.Deploy(String rootDirectory)  
   at TypeMock.CLI.Common.TypeMockRegisterInfo.Execute()  
   at TypeMock.CLI.Common.TypeMockRegisterInfo..ctor()   at System.Activities.Statements.Throw.Execute(CodeActivityContext context)  
   at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)  
   at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)  
</code></pre><p>So the issue was registry access. Irrespective of whether running interactively or as a service I used the same domain service account, which was a local admin on the build agent. The only thing that changed was the mode of running.</p>
<p>After some thought I focused on UAC being the problem, but disabling it did not seem to fix the issue. I was stuck, or so I thought.</p>
<p>However, <a href="http://blogs.blackmarble.co.uk/blogs/rhancock/">Robert Hancock</a>, unknown to me, was suffering a similar problem with a TFS build that included a post-build event that was failing to xcopy a BizTalk custom functoid DLL to ‘Program Files’. He kept getting an ‘exit code 4 access denied’ error when the build agent was running interactively. It turns out the <a href="http://www.petri.com/disable-uac-in-windows-7.htm">solution he found</a> on <a href="http://www.petri.com/author/daniel-petri">Daniel Petri</a>’s blog also fixed my issue, as both were UAC/desktop interaction related.</p>
<p>The solution was to create a group policy for the build agent VMs that set the following</p>
<ul>
<li>User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode - Set its value to <strong>Elevate without prompting</strong>.</li>
<li>User Account Control: Detect application installations and prompt for elevation - Set its value to <strong>Disabled</strong>.</li>
<li>User Account Control: Only elevate UIAccess applications that are installed in secure locations - Set its value to <strong>Disabled</strong>.</li>
<li>User Account Control: Run all administrators in Admin Approval Mode - Set its value to <strong>Disabled</strong>.</li>
</ul>
<p>Once this GPO was pushed out to the build agent VMs and they were rebooted, my Typemock-based builds and Robert’s BizTalk builds all worked as expected.</p>
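<p>For the curious, those four policies map onto DWORD values under HKLM, so the same effect can be scripted directly if you cannot push a GPO. A sketch (the policy-to-registry mapping shown is my understanding, so verify it against your own environment before use):</p>
<pre tabindex="0"><code>REM behaviour of the elevation prompt for administrators = elevate without prompting
reg add &#34;HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System&#34; /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 0 /f
REM detect application installations and prompt for elevation = disabled
reg add &#34;HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System&#34; /v EnableInstallerDetection /t REG_DWORD /d 0 /f
REM only elevate UIAccess applications installed in secure locations = disabled
reg add &#34;HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System&#34; /v EnableSecureUIAPaths /t REG_DWORD /d 0 /f
REM run all administrators in Admin Approval Mode = disabled
reg add &#34;HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System&#34; /v EnableLUA /t REG_DWORD /d 0 /f
</code></pre>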
]]></content:encoded>
    </item>
    <item>
      <title>AddBizTalkHiddenReferences error in TFS build when installing ProjectBuildComponent via a command line setup</title>
      <link>https://blog.richardfennell.net/posts/addbiztalkhiddenreferences-error-in-tfs-build-when-installing-projectbuildcomponent-via-a-command-line-setup/</link>
      <pubDate>Sat, 02 Aug 2014 15:59:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/addbiztalkhiddenreferences-error-in-tfs-build-when-installing-projectbuildcomponent-via-a-command-line-setup/</guid>
      <description>&lt;p&gt;I have been trying to script the installation of all the tools and SDKs we need on our TFS Build Agent VMs. This included BizTalk. A quick check on &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/jj248690%28v=bts.80%29.aspx&#34;&gt;MSDN&lt;/a&gt; showed the setup command line parameter I need to install the build components was&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt; 

/ADDLOCAL ProjectBuildComponent
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So I ran this via my VMs setup PowerShell script, all appeared OK, but when I tried a build I got the error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt; 

C:\Program Files (x86)\MSBuild\Microsoft\BizTalk\BizTalkCommon.targets (189): The &amp;#34;AddBizTalkHiddenReferences&amp;#34; task failed unexpectedly.  
System.ArgumentNullException: Value cannot be null.  
Parameter name: path1  
   at System.IO.Path.Combine(String path1, String path2)  
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.InitializeHiddenReferences()  
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.get_HiddenReferences()  
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.GetHiddenReferencesNotAdded(IList`1 projectReferences)  
   at Microsoft.VisualStudio.BizTalkProject.BuildTasks.AddBizTalkHiddenReferences.Execute()  
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()  
   at Microsoft.Build.BackEnd.TaskBuilder.&amp;lt;ExecuteInstantiatedTask&amp;gt;d__20.MoveNext()
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The strange thing was that if I ran the BizTalk installer via the UI and selected just the ‘Project Build Components’, my build did not give this error.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to script the installation of all the tools and SDKs we need on our TFS Build Agent VMs. This included BizTalk. A quick check on <a href="http://msdn.microsoft.com/en-us/library/jj248690%28v=bts.80%29.aspx">MSDN</a> showed the setup command line parameter I needed to install the build components was</p>
<pre tabindex="0"><code> 

/ADDLOCAL ProjectBuildComponent
</code></pre><p>So I ran this via my VM setup PowerShell script; all appeared OK, but when I tried a build I got the error</p>
<pre tabindex="0"><code> 

C:\Program Files (x86)\MSBuild\Microsoft\BizTalk\BizTalkCommon.targets (189): The &#34;AddBizTalkHiddenReferences&#34; task failed unexpectedly.  
System.ArgumentNullException: Value cannot be null.  
Parameter name: path1  
   at System.IO.Path.Combine(String path1, String path2)  
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.InitializeHiddenReferences()  
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.get_HiddenReferences()  
   at Microsoft.VisualStudio.BizTalkProject.Base.HiddenReferencesHelper.GetHiddenReferencesNotAdded(IList`1 projectReferences)  
   at Microsoft.VisualStudio.BizTalkProject.BuildTasks.AddBizTalkHiddenReferences.Execute()  
   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()  
   at Microsoft.Build.BackEnd.TaskBuilder.&lt;ExecuteInstantiatedTask&gt;d__20.MoveNext()
</code></pre><p>The strange thing was that if I ran the BizTalk installer via the UI and selected just the ‘Project Build Components’, my build did not give this error.</p>
<p>On checking the BizTalk setup logs I saw that the UI-based install does not run</p>
<pre tabindex="0"><code> 

/ADDLOCAL ProjectBuildComponent
</code></pre><p>but</p>
<pre tabindex="0"><code> 

/ADDLOCAL WMI,BizTalk,AdditionalApps,ProjectBuildComponent 
</code></pre><p>Once this change was made to my PowerShell script the TFS build worked OK.</p>
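<p>In the setup script that amounts to something like this (the media path here is a placeholder, and any other switches the setup needs are omitted for brevity):</p>
<pre tabindex="0"><code># run the BizTalk installer with the same component list the UI install uses
Start-Process -FilePath &#34;D:\BizTalkServer\Setup.exe&#34; `
    -ArgumentList &#39;/ADDLOCAL WMI,BizTalk,AdditionalApps,ProjectBuildComponent&#39; `
    -Wait
</code></pre>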
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2013 wizard allows you to proceed to verification even if you have no SQL admin access</title>
      <link>https://blog.richardfennell.net/posts/tfs-2013-wizard-allows-you-to-proceed-to-verification-even-if-you-have-no-sql-admin-access/</link>
      <pubDate>Fri, 01 Aug 2014 21:13:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2013-wizard-allows-you-to-proceed-to-verification-even-if-you-have-no-sql-admin-access/</guid>
      <description>&lt;p&gt;I had an interesting issue during an upgrade from TFS 2012 to 2013.2 today. The upgrade of the files proceeded as expected and the wizard ran. It picked up the correct Data Tier, found the tfs_configuration DB and I was able to fill in the service account details.&lt;/p&gt;
&lt;p&gt;However, when I got to the reporting section it found the report server URLs, but when it tried to find the tfs_warehouse DB it seemed to lock up, though the test of the SQL instance on the same page worked OK.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had an interesting issue during an upgrade from TFS 2012 to 2013.2 today. The upgrade of the files proceeded as expected and the wizard ran. It picked up the correct Data Tier, found the tfs_configuration DB and I was able to fill in the service account details.</p>
<p>However, when I got to the reporting section it found the report server URLs, but when it tried to find the tfs_warehouse DB it seemed to lock up, though the test of the SQL instance on the same page worked OK.</p>
<p>In the end I used task manager to kill the config wizard.</p>
<p>I then re-ran the wizard, switching off the reporting. This time it got to the verification step, but seemed to hang again. After a very long wait it came back with an error that the account being used to do the upgrade did not have SysAdmin rights on the SQL instance.</p>
<p>On checking, this turned out to be true; the user’s rights had been removed by a DBA since the system was originally installed. Once the rights were re-added the upgrade proceeded perfectly, though interestingly the first page, where you confirm the tfs_configuration DB, now also had a check box about Always On, which it had not had before.</p>
<p>So the strange thing was not that it failed, I would expect that, but that any of the wizard worked at all. I would have expected a failure to even find the tfs_configuration DB at the start of the wizard, not to have to wait until the verification (or reporting) step.</p>
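<p>The moral: before starting the wizard it is worth a quick check that the upgrade account really is a SQL sysadmin. A sketch using sqlcmd (the instance and account names are placeholders):</p>
<pre tabindex="0"><code>REM returns 1 if the account is in the sysadmin role, 0 if not
sqlcmd -S TFSSQLSERVER -Q &#34;SELECT IS_SRVROLEMEMBER(&#39;sysadmin&#39;, &#39;DOMAIN\TfsSetup&#39;)&#34;

REM re-add the rights if needed (SQL Server 2012 and later syntax)
sqlcmd -S TFSSQLSERVER -Q &#34;ALTER SERVER ROLE [sysadmin] ADD MEMBER [DOMAIN\TfsSetup]&#34;
</code></pre>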
]]></content:encoded>
    </item>
    <item>
      <title>TFS Alert DSL documentation and downloads update</title>
      <link>https://blog.richardfennell.net/posts/tfs-alert-dsl-documentation-and-downloads-update/</link>
      <pubDate>Fri, 01 Aug 2014 21:00:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-alert-dsl-documentation-and-downloads-update/</guid>
      <description>&lt;p&gt;Having installed my &lt;a href=&#34;https://tfsalertsdsl.codeplex.com/&#34;&gt;TFS Alert DSL&lt;/a&gt; today onto a client’s TFS 2013.2, I have realised some of the documentation was a little unclear. So I have just updated the &lt;a href=&#34;https://tfsalertsdsl.codeplex.com/releases/view/122072&#34;&gt;download page&lt;/a&gt; to provide an easy means to get&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The actual DSL implementation (without rebuilding the source)&lt;/li&gt;
&lt;li&gt;A command line tool to create an event source in case you want to log to the Windows event log&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hope this helps&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Having installed my <a href="https://tfsalertsdsl.codeplex.com/">TFS Alert DSL</a> today onto a client’s TFS 2013.2, I have realised some of the documentation was a little unclear. So I have just updated the <a href="https://tfsalertsdsl.codeplex.com/releases/view/122072">download page</a> to provide an easy means to get</p>
<ul>
<li>The actual DSL implementation (without rebuilding the source)</li>
<li>A command line tool to create an event source in case you want to log to the Windows event log</li>
</ul>
<p>Hope this helps</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why is the Team Project drop down in Release Management empty?</title>
      <link>https://blog.richardfennell.net/posts/why-is-the-team-project-drop-down-in-release-management-empty/</link>
      <pubDate>Mon, 28 Jul 2014 16:41:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-is-the-team-project-drop-down-in-release-management-empty/</guid>
      <description>&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Today I found I had a problem when trying to associate a Release Management 2013.2 release pipeline with a TFS build. When I tried to select a team project the drop down for the release properties was empty.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_191.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_188.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The strange thing was this installation of Release Management has been working OK last week. What had changed?&lt;/p&gt;
&lt;p&gt;I suspected an issue connecting to TFS, so in the Release Management Client’s ‘Managing TFS’ tab I tried to verify the  active TFS server linked to the Release Management. As soon as I tried this I got the following error that the TFS server was not available.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>The problem</strong></p>
<p>Today I found I had a problem when trying to associate a Release Management 2013.2 release pipeline with a TFS build. When I tried to select a team project the drop down for the release properties was empty.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_191.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_188.png" title="image"></a></p>
<p>The strange thing was this installation of Release Management has been working OK last week. What had changed?</p>
<p>I suspected an issue connecting to TFS, so in the Release Management Client’s ‘Managing TFS’ tab I tried to verify the active TFS server linked to Release Management. As soon as I tried this I got the following error that the TFS server was not available.</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002_3.jpg"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002_thumb_3.jpg" title="clip_image002"></a></p>
<p>I switched the TFS URL from HTTPS to HTTP and retried the verification, and it worked. Going back to my release properties I could now see the build definitions again in the drop down. So I knew I had an SSL issue.</p>
<p>The strange thing was that we use SSL as our default connection, and none of our developers were complaining they could not connect via HTTPS.</p>
<p>However, on checking I found there was an issue on some of our build VMs. If on those VMs I tried to connect to TFS in a browser with an HTTPS URL, I got a certificate chain error.</p>
<p>But stranger, on my PC, where I was running the Release Management client, I could access TFS over HTTPS from a browser and Visual Studio, but the Release Management verification failed.</p>
<p><strong>The solution</strong></p>
<p>It turns out we had an intermediate certificate issue with our TFS server. An older DigiCert intermediate certificate had expired over the weekend, and though the new cert was in place, and had been for a good few months since we renewed our wildcard cert, the active wildcard cert insisted on using the old version of the intermediate cert on some machines.</p>
<p>As an immediate fix we ended up having to delete the old intermediate cert manually on machines showing the error. Once this was done the HTTPS connect worked again.</p>
<p>It turns out the real culprit was a group policy used to push out intermediate certs that are required to be trusted for some document automation we use. This old group policy was pushing the wrong version of the cert to some server VMs. Once this policy was updated with the correct cert and pushed out, it overwrote the problem cert and the problem went away.</p>
<p>One potentially confusing thing here is that the ‘verify the TFS link’ check in Release Management verifies that the Release Management server can see the TFS server, not that the PC running the Release Management client can. It was on the Release Management server that I had to delete the dead cert (and run a gpupdate /force to get the new policy); hence my confusion over my own PC working for Visual Studio but not for Release Management.</p>
<p>So I suspect an empty drop down is really always going to mean the Release Management server cannot see the TFS server for some reason, so check certs, permissions or basic network failure.</p>
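<p>As a first diagnostic step on the Release Management server, a couple of lines of PowerShell will show any expired certs lurking in the machine’s Intermediate Certification Authorities store (this just lists candidates; deciding which one to delete still needs human judgement):</p>
<pre tabindex="0"><code># list expired certificates in the machine&#39;s intermediate CA store
Get-ChildItem Cert:\LocalMachine\CA |
    Where-Object { $_.NotAfter -lt (Get-Date) } |
    Select-Object Subject, Thumbprint, NotAfter
</code></pre>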
]]></content:encoded>
    </item>
    <item>
      <title>I have just submitted a session for DDDNorth 2014</title>
      <link>https://blog.richardfennell.net/posts/i-have-just-submitted-a-session-for-dddnorth-2014/</link>
      <pubDate>Fri, 25 Jul 2014 14:47:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-have-just-submitted-a-session-for-dddnorth-2014/</guid>
      <description>&lt;p&gt;I have just submitted a session for DDDNorth 2014, which is at the University of Leeds on Saturday 18 October.&lt;/p&gt;
&lt;p&gt;There is still time for you to submit yours &lt;a href=&#34;http://bit.ly/1wwwqBA&#34;&gt;Session submission&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just submitted a session for DDDNorth 2014, which is at the University of Leeds on Saturday 18 October.</p>
<p>There is still time for you to submit yours: <a href="http://bit.ly/1wwwqBA">Session submission</a></p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Automating TFS Build Server deployment with SCVMM and PowerShell</title>
      <link>https://blog.richardfennell.net/posts/automating-tfs-build-server-deployment-with-scvmm-and-powershell/</link>
      <pubDate>Fri, 18 Jul 2014 16:47:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/automating-tfs-build-server-deployment-with-scvmm-and-powershell/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/07/17/Automating-TFS-Build-Server-deployment-with-SCVMM-and-PowerShell.aspx&#34;&gt;Rik recently posted&lt;/a&gt; about the work we have done to automatically provision TFS build agent VMs. This has come out of us having about 10 build agents on our TFS server all doing different jobs, with different SDKs etc. When we needed to increase capacity for a given build type we had a problem: could another agent run the build? What exactly was on the agent anyway? An audit of the boxes made for horrible reading; they were very inconsistent.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/07/17/Automating-TFS-Build-Server-deployment-with-SCVMM-and-PowerShell.aspx">Rik recently posted</a> about the work we have done to automatically provision TFS build agent VMs. This has come out of us having about 10 build agents on our TFS server all doing different jobs, with different SDKs etc. When we needed to increase capacity for a given build type we had a problem: could another agent run the build? What exactly was on the agent anyway? An audit of the boxes made for horrible reading; they were very inconsistent.</p>
<p>So Rik automated the provision of new VMs and I looked at providing a PowerShell script to install the base tools we needed  on our build agents, knowing this list is going to change a good deal over time. After some thought, for our first attempt we picked</p>
<ul>
<li>TFS itself (to provide the 2013.2 agent)</li>
<li>Visual Studio 2013.2 – you know you always end up installing it in the end to get SSDT, SDK and MSBuild targets etc.</li>
<li>WIX 3.8</li>
<li>Azure SDK 2.3 for Visual Studio 2013.2 – Virtually all our current projects need this. This is actually why we have had capacity issues on the old build agents, as it was only installed on one.</li>
</ul>
<p>Given this basic set of tools we can build probably 70-80% of our solutions. If we use this as the base for all build boxes we can then add extra tools if required manually, but we expect we will just end up adding to the list of items installed on all our build boxes, assuming the cost of installing the extra tools/SDKs is not too high. Also we will try to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx">auto deploy tools as part of our build templates where possible</a>, again reducing what needs to be placed on any given build agent.</p>
<p>Now the script I ended up with is a bit rough and ready but it does the job. I think in the future a move to <a href="http://blogs.technet.com/b/privatecloud/archive/2013/08/30/introducing-powershell-desired-state-configuration-dsc.aspx">DSC</a> might help in this process, but I did not have time to write the custom resources now. I am assuming this script is going to be a constant work in progress as it is modified for new releases and tools.  I did make the effort to make all the steps check to see if they needed to be done, thus allowing the re-running of the script to ‘repair’ the build agent. All the writing to the event log is to make life easier for Rik when working out what is going on with the script, especially useful due to the installs from ISOs being a bit slow to run.</p>
<pre tabindex="0"><code># make sure we have a working event logger with a suitable source
Create-EventLogSource -logname &#34;Setup&#34; -source &#34;Create-TfsBuild&#34;
write-eventlog -logname Setup -source Create-TfsBuild -eventID 6 -entrytype Information -message &#34;Create-Tfsbuild started&#34;

# add the build service account as a local admin; not essential, but makes life easier for some projects
Add-LocalAdmin -domain &#34;ourdomain&#34; -user &#34;Tfsbuilder&#34;

# Install TFS, by mounting the ISO over the network and running the installer.
# The command &#39;&amp; $isodrive + &#34;:\tfs_server.exe&#34; /quiet&#39; is run.
# In the function use a while loop to see when the tfsconfig.exe file appears and assume the installer
# is done - dirty, but it works, and allows me to use write-progress to give some indication the install is done.
Write-Output &#34;Installing TFS server&#34;
Add-Tfs &#34;\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_team_foundation_server_2013_with_update_2_x86_x64_dvd_4092433.iso&#34;

Write-Output &#34;Configuring TFS Build&#34;
# clear out any old config - I found this helped avoid errors when re-running the script.
# A System.Diagnostics.ProcessStartInfo object is used to run the tfsconfig command with the
# argument &#34;setup /uninstall:All&#34;. ProcessStartInfo is used so we can capture the error output
# and log it to the event log if required
Unconfigure-Tfs

# and reconfigure, again using tfsconfig, this time with the argument &#34;unattend /configure /unattendfile:config.ini&#34;,
# where the config.ini has been created with the tfsconfig unattend /create flag (check MSDN for the details)
Configure-Tfs &#34;\\store\ApplicationInstallers\TFSBuild\configs\build.ini&#34;

# install VS2013, again by mounting the ISO and running the installer, with a loop to check for a file appearing
Write-Output &#34;Installing Visual Studio&#34;
Add-VisualStudio &#34;\\store\ISO Images\Visual Studio\2013\2013.2\en_visual_studio_premium_2013_with_update_2_x86_dvd_4238022.iso&#34;

# install WiX by running the exe with the -q option, via ProcessStartInfo again
Write-Output &#34;Installing Wix&#34;
Add-Wix &#34;\\store\ApplicationInstallers\wix\wix38.exe&#34;

# install the Azure SDK using the Web Platform Installer, checking if the Web PI is present first and installing it if needed.
# The Web PI installer lets you ask to reinstall a package; if it is already installed it just ignores the request,
# so you don&#39;t need to check whether the Azure SDK is already installed
Write-Output &#34;Installing Azure SDK&#34;
Add-WebPIPackage &#34;VWDOrVs2013AzurePack&#34;

write-eventlog -logname Setup -source Create-TfsBuild -eventID 7 -entrytype Information -message &#34;Create-Tfsbuild ended&#34;
Write-Output &#34;End of script&#34;
</code></pre><p>So for a first pass this seems to work. I now need to make sure all our builds can use this cut-down build agent; if they can’t, do I need to modify the build template? Do I need to add more tools to our standard install? Or will the build need a special agent definition?</p>
<p>Once this is all done the hope is that when all the TFS build agents need patching for TFS 2013.x we will just redeploy new VMs, or run a modified script to silently do the update. We shall see if this delivers on that promise.</p>
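<p>For anyone wanting to build something similar, the ISO-based install helpers described in the comments above could be sketched roughly as follows. This is an illustrative reconstruction, not the actual script; the function name, parameter names and the use of <code>Mount-DiskImage</code> (available on Windows Server 2012 and later) are my assumptions.</p>
<pre tabindex="0"><code># Illustrative sketch of an ISO-mount-and-install helper - not the production script
function Add-FromIso
{
    param([string]$isoPath, [string]$setupExe, [string]$markerFile)

    # mount the ISO from the network share and work out its drive letter
    $image = Mount-DiskImage -ImagePath $isoPath -PassThru
    $drive = ($image | Get-Volume).DriveLetter

    # run the installer silently
    &amp; &#34;$($drive):\$setupExe&#34; /quiet

    # crude but effective: wait for a file the installer creates near the end
    while (-not (Test-Path $markerFile))
    {
        Write-Progress -Activity &#34;Installing from $isoPath&#34; -Status &#34;Waiting for $markerFile&#34;
        Start-Sleep -Seconds 30
    }

    Dismount-DiskImage -ImagePath $isoPath
}
</code></pre>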
]]></content:encoded>
    </item>
    <item>
      <title>Could not load file or assembly &#39;Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0’ when running a build on a new build agent on TFS 2013.2</title>
      <link>https://blog.richardfennell.net/posts/could-not-load-file-or-assembly-microsoft-teamfoundation-workitemtracking-common-version12-0-0-0-when-running-a-build-on-a-new-build-agent-on-tfs-2013-2/</link>
      <pubDate>Thu, 17 Jul 2014 10:03:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/could-not-load-file-or-assembly-microsoft-teamfoundation-workitemtracking-common-version12-0-0-0-when-running-a-build-on-a-new-build-agent-on-tfs-2013-2/</guid>
      <description>&lt;p&gt;I am currently rebuilding our TFS build infrastructure; we have too many build agents that are just too different, and they don’t need to be. So I am looking at a standard set of features on a build agent and the ability to auto-provision new instances to make scaling easier. More on this in a future post…&lt;/p&gt;
&lt;p&gt;Anyway whilst testing a new agent I had a problem. A build that had worked on a previous test agent failed with the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am currently rebuilding our TFS build infrastructure; we have too many build agents that are just too different, and they don’t need to be. So I am looking at a standard set of features on a build agent and the ability to auto-provision new instances to make scaling easier. More on this in a future post…</p>
<p>Anyway whilst testing a new agent I had a problem. A build that had worked on a previous test agent failed with the error</p>
<blockquote>
<p>Could not load file or assembly &lsquo;Microsoft.TeamFoundation.WorkItemTracking.Common, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a&rsquo; or one of its dependencies. The located assembly&rsquo;s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)</p></blockquote>
<p>The log showed it was failing to even do a get latest of the files to build, or anything on the build agent.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_190.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_187.png" title="image"></a></p>
<p>Turns out the issue was that the PowerShell script that installed all the TFS build components and SDKs had failed when trying to install the Azure SDK for VS2013; the Web Platform Installer was not present, so when the script tried to use the command line installer to add this package it failed.</p>
<p>I fixed the issue with the Web PI tools and re-ran the command line to install the Azure SDK, and all was OK.</p>
<p>Not sure why this happened; maybe a missing pre-req put on by Web PI itself was the issue. I know older versions did have a .NET 3.5 dependency. One to keep an eye on.</p>
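<p>If you hit a similar manifest-mismatch error, a quick way to check which version of an assembly is actually on the build agent is to query its manifest from PowerShell. The path below is illustrative, so check your agent’s actual install location:</p>
<pre tabindex="0"><code>[System.Reflection.AssemblyName]::GetAssemblyName(
    &#34;C:\Program Files\Microsoft Team Foundation Server 12.0\Tools\Microsoft.TeamFoundation.WorkItemTracking.Common.dll&#34;).Version
</code></pre>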
]]></content:encoded>
    </item>
    <item>
      <title>MSBuild targeting a project in a solution folder</title>
      <link>https://blog.richardfennell.net/posts/msbuild-targeting-a-project-in-a-solution-folder/</link>
      <pubDate>Mon, 14 Jul 2014 21:15:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/msbuild-targeting-a-project-in-a-solution-folder/</guid>
      <description>&lt;p&gt;Whilst working on an automated build where I needed to target a specific project I hit a problem. I would normally expect the MSBuild argument to be&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;/t:MyProject:Build&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Where I want to build the project &lt;strong&gt;MyProject&lt;/strong&gt; in my solution and perform the &lt;strong&gt;Build&lt;/strong&gt; target (which is probably the default anyway).&lt;/p&gt;
&lt;p&gt;However, my project was in a solution folder. The &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/ms171486.aspx&#34;&gt;documentation&lt;/a&gt; says that you should be able to use the form&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst working on an automated build where I needed to target a specific project I hit a problem. I would normally expect the MSBuild argument to be</p>
<blockquote>
<p>/t:MyProject:Build</p></blockquote>
<p>Where I want to build the project <strong>MyProject</strong> in my solution and perform the <strong>Build</strong> target (which is probably the default anyway).</p>
<p>However, my project was in a solution folder. The <a href="http://msdn.microsoft.com/en-us/library/ms171486.aspx">documentation</a> says that you should be able to use the form</p>
<blockquote>
<p>/t:TheSolutionFolder\MyProject:Build</p></blockquote>
<p>but I kept getting the error the project did not exist.</p>
<p>Once I changed to</p>
<blockquote>
<p>/t:TheSolutionFolder\MyProject</p></blockquote>
<p>it worked: the default build target was run, which was OK as this was <strong>Build</strong>, the one I wanted.</p>
<p>Not sure why this occurred; maybe I should steer clear of solution folders?</p>
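<p>For completeness, the invocation that worked for me looked like this (the solution, folder and project names are placeholders):</p>
<pre tabindex="0"><code>msbuild MySolution.sln /t:TheSolutionFolder\MyProject
</code></pre>
<p>One thing worth knowing when debugging these errors: MSBuild generates the solution’s target names from the project and solution folder names, replacing special characters such as ‘.’ with ‘_’, which can also produce ‘target does not exist’ errors if not accounted for.</p>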
]]></content:encoded>
    </item>
    <item>
      <title>Building Azure Cloud Applications on TFS</title>
      <link>https://blog.richardfennell.net/posts/building-azure-cloud-applications-on-tfs/</link>
      <pubDate>Mon, 14 Jul 2014 14:43:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/building-azure-cloud-applications-on-tfs/</guid>
      <description>&lt;p&gt;If you are doing any work with Azure Cloud Applications there is a very good chance you will want your automated build process to produce the .CSPKG deployment file, you might even want it to do the deployment too.&lt;/p&gt;
&lt;p&gt;On our TFS build system, it turns out this is not as straightforward as you might hope. The problem is that the MSBuild publish target creates the files in the &lt;strong&gt;$(build agent working folder)\source\myproject\bin\debug&lt;/strong&gt; folder, unlike the output of the build target, which goes into the &lt;strong&gt;$(build agent working folder)\binaries&lt;/strong&gt; folder that gets copied to the build drops location. Hence, though the files are created, they are not accessible to the team alongside the rest of the built items.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are doing any work with Azure Cloud Applications there is a very good chance you will want your automated build process to produce the .CSPKG deployment file, you might even want it to do the deployment too.</p>
<p>On our TFS build system, it turns out this is not as straightforward as you might hope. The problem is that the MSBuild publish target creates the files in the <strong>$(build agent working folder)\source\myproject\bin\debug</strong> folder, unlike the output of the build target, which goes into the <strong>$(build agent working folder)\binaries</strong> folder that gets copied to the build drops location. Hence, though the files are created, they are not accessible to the team alongside the rest of the built items.</p>
<p>I have battled to sort this for a while, trying to avoid the need to edit our customised TFS build process template. This is something we try to avoid where possible, favouring environment variables and MSbuild arguments where we can get away with it. There is no point denying that editing build process templates is a pain point on TFS.</p>
<h3 id="the-solution--editing-the-process-template">The solution – editing the process template</h3>
<p>Turns out a colleague had fixed the same problem a few projects ago and the functionality was already hidden in our standard TFS build process template. The problem was that it was not documented; a lesson for all of us that it is a very good idea to put customisation information in a searchable location, so others can find customisations that are not immediately obvious. Frankly this is one of the main purposes of this blog: somewhere I can find what I did years ago, as I won’t remember the details.</p>
<p>Anyway, the key is to make sure the publish target for MSBuild uses the correct location to create the files. This is done using a pair of MSBuild arguments in the advanced section of the build configuration</p>
<ul>
<li><strong>/t:MyCloudApp:Publish</strong> -  this tells MSbuild to perform the publish action for just the project <strong>MyCloudApp</strong>. You might be able to just go <strong>/t:Publish</strong> if only one project in your solution has a Publish target</li>
<li><strong>/p:PublishDir=$(OutDir)</strong> - this is the magic. We pass in the placeholder value <strong>$(OutDir)</strong>. At this point we don’t know the target binary location, as it is build agent/instance specific; customisation in the TFS build process template converts this placeholder to the correct path.</li>
</ul>
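<p>Putting the two together, the MSBuild Arguments field on the build definition ends up containing something like this (the project name is a placeholder):</p>
<pre tabindex="0"><code>/t:MyCloudApp:Publish /p:PublishDir=$(OutDir)
</code></pre>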
<p>In the build process template, in the <strong>Initialize Variable</strong> sequence within <strong>Run on Agent</strong>, add an <strong>If</strong> activity.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_189.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_186.png" title="image"></a></p>
<ul>
<li>Set the condition to <strong>MSBuildArguments.Contains("$(OutDir)")</strong></li>
<li>Within the true branch add an Assignment activity that sets the <strong>MSBuildArguments</strong> variable to <strong>MSBuildArguments.Replace("$(OutDir)", String.Format("{0}\{1}\", BinariesDirectory, "Packages"))</strong></li>
</ul>
<p>This will swap the <strong>$(OutDir)</strong> for the correct TFS binaries location within that build.</p>
<p>After that it all just works as expected. The CSPKG file etc. ends up in the drops location.</p>
<h3 id="other-things-that-did-not-work-prior-to-tfs-2013">Other things that did not work (prior to TFS 2013)</h3>
<p>I had also looked at running a PowerShell script at the end of the build process, and at adding an <strong>AfterPublish</strong> target within the MSBuild process (by adding it to the project file manually) that did a file copy. Both of these methods suffered the problem that when the MSBuild command ran it did not know the location to drop the files into. Hence the need for the customisation above.</p>
<p>Now I should point out that though we are running TFS 2013 this project was targeting the TFS 2012 build tools, so I had to use the solution outlined above, a process template edit. However, if we had been using the TFS 2013 process template as our base for customisation then we would have had another way to get around the problem.</p>
<p><a href="http://msdn.microsoft.com/en-us/library/hh850448.aspx">TFS 2013 exposes the current build settings as environment variables</a>. This would allow us to use an <strong>AfterPublish</strong> MSBuild target something like</p>
<pre tabindex="0"><code>&lt;Target Name=&#34;CustomPostPublishActions&#34; AfterTargets=&#34;AfterPublish&#34; Condition=&#34;&#39;$(TF\_BUILD\_DROPLOCATION)&#39; != &#39;&#39;&#34;&gt;   &lt;Exec Command=&#34;echo Post-PUBLISH event: Copying published files to: $(TF\_BUILD\_DROPLOCATION)&#34; /&gt;   &lt;Exec Command=&#34;xcopy &amp;quot;$(ProjectDir)bin$(ConfigurationName)app.publish&amp;quot; &amp;quot;$(TF\_BUILD\_DROPLOCATION)app.publish&amp;quot; /y &#34; /&gt; &lt;/Target&gt;
</code></pre><p>So maybe a simpler option for the future?</p>
<p>The moral of the story: document your customisations and let your whole team know they exist.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Interesting license change for VS Online  for ‘Stakeholder’ users</title>
      <link>https://blog.richardfennell.net/posts/interesting-license-change-for-vs-online-for-stakeholder-users/</link>
      <pubDate>Mon, 14 Jul 2014 08:39:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/interesting-license-change-for-vs-online-for-stakeholder-users/</guid>
      <description>&lt;p&gt;All teams have ‘Stakeholders’, the people who are driving a project forward and who want the new system to be able to do their job, but are often not directly involved in the production/testing of the system. In the past this has been an awkward group to provide TFS access for. If they want to see any detail of the project they need a TFS CAL, which is expensive for the occasional casual viewer.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>All teams have ‘Stakeholders’, the people who are driving a project forward and who want the new system to be able to do their job, but are often not directly involved in the production/testing of the system. In the past this has been an awkward group to provide TFS access for. If they want to see any detail of the project they need a TFS CAL, which is expensive for the occasional casual viewer.</p>
<p>Last week <a href="http://blogs.msdn.com/b/bharry/archive/2014/07/09/upcoming-vs-online-licensing-changes.aspx">Brian Harry announced there would be a licensing change in VSO</a> with a ‘Stakeholder’ license. It has limitations, but provides the key features they will need</p>
<ul>
<li>Full read/write/create on all work items</li>
<li>Create, run and save (to “My Queries”) work item queries</li>
<li>View project and team home pages</li>
<li>Access to the backlog, including add and update (but no ability to reprioritize the work)</li>
<li>Ability to receive work item alerts</li>
</ul>
<p>The best news is that it will be a free license, so there is no monthly cost to have as many ‘Stakeholders’ as you need on your VSO account.</p>
<p>Now most of my clients are using on-premise TFS, so this change does not affect them. However, the <a href="http://blogs.msdn.com/b/bharry/archive/2014/07/09/upcoming-vs-online-licensing-changes.aspx">same post</a> mentions that the “Work Item Web Access” TFS CAL exemption will be changed in future releases of TFS to bring it in line with the ‘Stakeholder’ license.</p>
<p>So good news all round, making TFS adoption easier and adding more ways for clients to access their ALM information.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A new badge for Channel9 Guy</title>
      <link>https://blog.richardfennell.net/posts/a-new-badge-for-channel9-guy/</link>
      <pubDate>Mon, 07 Jul 2014 13:01:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-new-badge-for-channel9-guy/</guid>
      <description>&lt;p&gt;My Channel9 Guy has a new MVP badge&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_188.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_185.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My Channel9 Guy has a new MVP badge</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_188.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_185.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Post by one of our Testers about experiences with CodeUI and Windows Store Apps on the MSDN blog</title>
      <link>https://blog.richardfennell.net/posts/post-by-one-of-our-testers-about-experiences-with-codeui-and-windows-store-apps-on-the-msdn-blog/</link>
      <pubDate>Thu, 03 Jul 2014 11:38:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-by-one-of-our-testers-about-experiences-with-codeui-and-windows-store-apps-on-the-msdn-blog/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.microsoft.com/en-gb/developers/articles/week05jun14/guidelines-for-resilient-automation-of-windows-store-xaml-apps-using-visual-studio-codedui&#34;&gt;Nice post&lt;/a&gt; by &lt;a href=&#34;https://twitter.com/CaptainShmaser&#34;&gt;Riccardo Viglianisi&lt;/a&gt;, one of Black Marble’s Testers, about his experiences with CodeUI and Windows Store Apps published on the MSDN UK Visual Studio blog.&lt;/p&gt;
&lt;p&gt;Well worth a read if you are looking at this technology.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.microsoft.com/en-gb/developers/articles/week05jun14/guidelines-for-resilient-automation-of-windows-store-xaml-apps-using-visual-studio-codedui">Nice post</a> by <a href="https://twitter.com/CaptainShmaser">Riccardo Viglianisi</a>, one of Black Marble’s Testers, about his experiences with CodeUI and Windows Store Apps published on the MSDN UK Visual Studio blog.</p>
<p>Well worth a read if you are looking at this technology.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Re-awarded Microsoft MVP for the 7th Year</title>
      <link>https://blog.richardfennell.net/posts/re-awarded-microsoft-mvp-for-the-7th-year/</link>
      <pubDate>Thu, 03 Jul 2014 08:38:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/re-awarded-microsoft-mvp-for-the-7th-year/</guid>
      <description>&lt;p&gt;I am really happy to say that I have had my &lt;a href=&#34;http://mvp.microsoft.com/en-us/mvp/Richard%20Fennell-4020304&#34;&gt;MVP for Visual Studio (ALM)&lt;/a&gt; re-awarded, so am an MVP for the 7th time. It is a privilege to get to work with such a great group of people as I have met via the MVP programme.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really happy to say that I have had my <a href="http://mvp.microsoft.com/en-us/mvp/Richard%20Fennell-4020304">MVP for Visual Studio (ALM)</a> re-awarded, so am an MVP for the 7th time. It is a privilege to get to work with such a great group of people as I have met via the MVP programme.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How long is my TFS 2010 to 2013 upgrade going to take – Part 2</title>
      <link>https://blog.richardfennell.net/posts/how-long-is-my-tfs-2010-to-2013-upgrade-going-to-take-part-2/</link>
      <pubDate>Fri, 27 Jun 2014 16:36:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-long-is-my-tfs-2010-to-2013-upgrade-going-to-take-part-2/</guid>
      <description>&lt;p&gt;Back in January I did a post &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx&#34;&gt;How long is my TFS 2010 to 2013 upgrade going to take?&lt;/a&gt; I have now done some more work with one of the clients and have more data. Specifically, the initial trial was 2010 &amp;gt; 2013 RTM on a single tier test VM; we have now done a test upgrade from 2010 &amp;gt; 2013.2 on the same VM and also one to a production quality dual tier system.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Back in January I did a post <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx">How long is my TFS 2010 to 2013 upgrade going to take?</a> I have now done some more work with one of the clients and have more data. Specifically, the initial trial was 2010 &gt; 2013 RTM on a single tier test VM; we have now done a test upgrade from 2010 &gt; 2013.2 on the same VM and also one to a production quality dual tier system.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_187.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_184.png" title="image"></a></p>
<p>The key lessons are</p>
<ul>
<li>There are 150 more steps to go from 2013 RTM to 2013.2, so it takes a good deal longer.</li>
<li>The dual tier production hardware is nearly twice as fast at doing the upgrade, though the initial step (step 31, moving the source code) is not that much faster; it is the steps after this that gain. We put it down to far better SQL throughput.</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Submissions open for DDDNorth 2014</title>
      <link>https://blog.richardfennell.net/posts/submissions-open-for-dddnorth-2014/</link>
      <pubDate>Wed, 25 Jun 2014 12:18:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/submissions-open-for-dddnorth-2014/</guid>
      <description>&lt;p&gt;DDD North is coming to the University of Leeds on Saturday 18 October.&lt;/p&gt;
&lt;p&gt;It is now open for &lt;a href=&#34;http://bit.ly/1wwwqBA&#34;&gt;Session submission&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>DDD North is coming to the University of Leeds on Saturday 18 October.</p>
<p>It is now open for <a href="http://bit.ly/1wwwqBA">Session submission</a></p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>List of TFS Widgets</title>
      <link>https://blog.richardfennell.net/posts/list-of-tfs-widgets/</link>
      <pubDate>Mon, 23 Jun 2014 20:41:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/list-of-tfs-widgets/</guid>
      <description>&lt;p&gt;The ALM Rangers are again producing a list of useful tools and widgets for TFS. It can be found at  &lt;a href=&#34;http://aka.ms/widgets&#34;&gt;aka.ms/widgets&lt;/a&gt; and should be updated regularly&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The ALM Rangers are again producing a list of useful tools and widgets for TFS. It can be found at  <a href="http://aka.ms/widgets">aka.ms/widgets</a> and should be updated regularly</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cloning tfs repository with git-tf gives a &#34;a server path must be absolute&#34;</title>
      <link>https://blog.richardfennell.net/posts/cloning-tfs-repository-with-git-tf-gives-a-a-server-path-must-be-absolute/</link>
      <pubDate>Mon, 09 Jun 2014 13:23:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cloning-tfs-repository-with-git-tf-gives-a-a-server-path-must-be-absolute/</guid>
      <description>&lt;p&gt;I am currently involved in &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/16/Using-git-tf-to-migrate-code-between-TFS-servers-retaining-history.aspx&#34;&gt;moving some TFS TFVC hosted source to a TFS Git repository&lt;/a&gt;.  The first step was to clone the source for a team project from TFS using the command&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$/My Project’ localrepo1
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;and it worked fine. However the next project I tried to move had no space in the source path&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$/MyProject’ localrepo2
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This gave the error&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;git-tf: A server path must be absolute.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Turns out the problem was the single quotes. Remove these and the command worked as expected&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am currently involved in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/16/Using-git-tf-to-migrate-code-between-TFS-servers-retaining-history.aspx">moving some TFS TFVC hosted source to a TFS Git repository</a>.  The first step was to clone the source for a team project from TFS using the command</p>
<pre tabindex="0"><code>git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$/My Project’ localrepo1
</code></pre><p>and it worked fine. However the next project I tried to move had no space in the source path</p>
<pre tabindex="0"><code>git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection ‘$/MyProject’ localrepo2
</code></pre><p>This gave the error</p>
<pre tabindex="0"><code>git-tf: A server path must be absolute.
</code></pre><p>Turns out the problem was the single quotes. Remove these and the command worked as expected</p>
<pre tabindex="0"><code>git tf clone --deep http://tfsserver01:8080/tfs/defaultcollection $/MyProject localrepo2
</code></pre><p>Seems you should only use the quotes when there are spaces in a path name.</p>
<p>Updated 11 June – After a bit more thought I think I have tracked down the true cause. It is not actually the single quote as such, but the fact the command line had been cut and pasted from Word. This meant the quote was a curly ‘ rather than a plain straight quote. Cutting and pasting from Word can always lead to similar problems, but it is still a strange error message; I would have expected an invalid character message.</p>
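<p>A quick way to spot this sort of pasted-character problem is to list any non-ASCII characters in the command line, e.g. with a PowerShell snippet like this (my sketch, not part of the original investigation):</p>
<pre tabindex="0"><code># paste the suspect command when prompted, then list any non-ASCII characters in it
$cmd = Read-Host &#34;Paste the command&#34;
$cmd.ToCharArray() | Where-Object { [int]$_ -gt 127 } |
    ForEach-Object { &#39;{0} = U+{1:X4}&#39; -f $_, [int]$_ }
</code></pre>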
]]></content:encoded>
    </item>
    <item>
      <title>‘The Circle’ a good read</title>
      <link>https://blog.richardfennell.net/posts/the-circle-a-good-read/</link>
      <pubDate>Sat, 07 Jun 2014 13:37:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-circle-a-good-read/</guid>
      <description>&lt;p&gt;Seven whole years ago &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2007/01/17/Time-to-revisit-Microserfs.aspx&#34;&gt;I wrote&lt;/a&gt; about re-reading &lt;em&gt;[corrected – getting old and forgetful not William Gibson’s it was]&lt;/em&gt;  Douglas Coupland’s  &lt;a href=&#34;//rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;amp;bc1=000000&amp;amp;IS2=1&amp;amp;bg1=FFFFFF&amp;amp;fc1=000000&amp;amp;lc1=0000FF&amp;amp;t=buitwoonmypc-21&amp;amp;o=2&amp;amp;p=8&amp;amp;l=as4&amp;amp;m=amazon&amp;amp;f=ifr&amp;amp;ref=ss_til&amp;amp;asins=0007179812&#34;&gt;Microserfs&lt;/a&gt; and how it compared to his then new book &lt;a href=&#34;http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;amp;bc1=000000&amp;amp;IS2=1&amp;amp;bg1=FFFFFF&amp;amp;fc1=000000&amp;amp;lc1=0000FF&amp;amp;t=buitwoonmypc-21&amp;amp;o=2&amp;amp;p=8&amp;amp;l=as4&amp;amp;m=amazon&amp;amp;f=ifr&amp;amp;ref=ss_til&amp;amp;asins=0747585873&#34;&gt;JPod&lt;/a&gt;. And how they both reflected the IT world at their time. Speculative fiction always says more about the time they are written than the future they predict.&lt;/p&gt;
&lt;p&gt;I have just read &lt;a href=&#34;http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;amp;bc1=000000&amp;amp;IS2=1&amp;amp;bg1=FFFFFF&amp;amp;fc1=000000&amp;amp;lc1=0000FF&amp;amp;t=buitwoonmypc-21&amp;amp;o=2&amp;amp;p=8&amp;amp;l=as4&amp;amp;m=amazon&amp;amp;f=ifr&amp;amp;ref=ss_til&amp;amp;asins=B00EODUWQ6&#34;&gt;‘The Circle’ by Dave Eggers&lt;/a&gt;, which in many ways is a similar book for our social media, big-brother-monitored age. I will leave it to you to decide whether it is a utopia or a dystopia, but it is well worth a read&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Seven whole years ago <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2007/01/17/Time-to-revisit-Microserfs.aspx">I wrote</a> about re-reading <em>[corrected – getting old and forgetful not William Gibson’s it was]</em>  Douglas Coupland’s  <a href="//rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;bc1=000000&amp;IS2=1&amp;bg1=FFFFFF&amp;fc1=000000&amp;lc1=0000FF&amp;t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as4&amp;m=amazon&amp;f=ifr&amp;ref=ss_til&amp;asins=0007179812">Microserfs</a> and how it compared to his then new book <a href="http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;bc1=000000&amp;IS2=1&amp;bg1=FFFFFF&amp;fc1=000000&amp;lc1=0000FF&amp;t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as4&amp;m=amazon&amp;f=ifr&amp;ref=ss_til&amp;asins=0747585873">JPod</a>. And how they both reflected the IT world at their time. Speculative fiction always says more about the time they are written than the future they predict.</p>
<p>I have just read <a href="http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;bc1=000000&amp;IS2=1&amp;bg1=FFFFFF&amp;fc1=000000&amp;lc1=0000FF&amp;t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as4&amp;m=amazon&amp;f=ifr&amp;ref=ss_til&amp;asins=B00EODUWQ6">‘The Circle’ by Dave Eggers</a>, which in many ways is a similar book for our social media, big-brother-monitored age. I will leave it to you to decide whether it is a utopia or a dystopia, but it is well worth a read</p>
<p><a href="http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;bc1=000000&amp;IS2=1&amp;bg1=FFFFFF&amp;fc1=000000&amp;lc1=0000FF&amp;t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as4&amp;m=amazon&amp;f=ifr&amp;ref=ss_til&amp;asins=B00EODUWQ6"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_186.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Our TFS Lab Management Infrastructure</title>
      <link>https://blog.richardfennell.net/posts/our-tfs-lab-management-infrastructure/</link>
      <pubDate>Thu, 05 Jun 2014 23:08:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/our-tfs-lab-management-infrastructure/</guid>
      <description>&lt;p&gt;After my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/&#34;&gt;session at Techorama&lt;/a&gt; last week I have been asked some questions over how we built our TFS Lab Management infrastructure. Well here is a bit more detail, thanks to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/&#34;&gt;Rik&lt;/a&gt; for helping correct what I had misremembered and providing much of the detail.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_185.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_183.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;For SQL we have two physical servers with Intel processors. Each has a pair of mirrored disks for the OS and a RAID5 group of disks for data. We use SQL 2012 Enterprise Always On for replication to keep the DBs in sync. The servers are part of a Windows cluster (needed for Always On) and we use a VM to provide a third server in the witness role. This is hosted on a production Hyper-V cloud. We have a number of availability groups on this platform, basically one per service we run. This allows us to split the read/write load between the two servers (unless they have failed over to a single box). If we had only one availability group for all the DBs, one node would be doing all the read/write work and the other would be read-only, so not that balanced.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After my <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/">session at Techorama</a> last week I have been asked some questions about how we built our TFS Lab Management infrastructure. Well, here is a bit more detail, thanks to <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/">Rik</a> for helping to correct what I had misremembered and for providing much of the detail.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_185.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_183.png" title="image"></a></p>
<p>For SQL we have two physical servers with Intel processors. Each has a pair of mirrored disks for the OS and a RAID5 group of disks for data. We use SQL 2012 Enterprise Always On for replication to keep the DBs in sync. The servers are part of a Windows cluster (needed for Always On) and we use a VM to provide a third server in the witness role. This is hosted on a production Hyper-V cloud. We have a number of availability groups on this platform, basically one per service we run. This allows us to split the read/write load between the two servers (unless they have failed over to a single box). If we had only one availability group for all the DBs, one node would be doing all the read/write work and the other would be read-only, so not that balanced.</p>
<p>SCVMM runs on a physical server with a pair of hardware-mirrored 2Tb disks for 2Tb of storage. That’s split into two partitions, as you can’t use data de-duplication on the OS volume of Windows. This allows us to have something like 5Tb of Lab VM images stored on the SCVMM library share that’s hosted on the SCVMM server. This share is for lab management use only.</p>
<p>We also have two physical servers that make up a Windows Cluster with a Cluster Shared Volume on an iSCSI SAN. This hosts a number of SCVMM libraries for ISO images, production VM images and test stuff. Data de-duplication is again giving us an 80% space saving on the SAN (ISO images of OSes and VHDs of installed OSes dedupe <em>really</em> well).</p>
<p>Our Lab cloud currently has three AMD based servers. They use the same disk setup as the SQL boxes, with a mirrored pair for OS and RAID5 for VM storage.</p>
<p>Our Production Hyper-V also has three servers, but this time in a Windows Cluster using a Cluster Shared Volume on our other iSCSI SAN for VM storage so it can do automated failover of VMs.</p>
<p>Each of the SQL servers, SCVMM servers and Lab Hyper-V servers uses Windows Server 2012 R2 NIC teaming to combine 2 x 1Gbit NICs, which gives us better throughput and failover. The lab servers have one team for VM traffic and one team for the Hyper-V management traffic that is used when deploying VMs. That means we can push VMs around pretty much as fast as the disks will move data in either direction, without needing expensive 10Gbit Ethernet.</p>
<p>So I hope that answers any questions.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What’s new in TFS from TechEd 2014?</title>
      <link>https://blog.richardfennell.net/posts/what-new-in-tfs-from-teched-2014/</link>
      <pubDate>Tue, 13 May 2014 19:19:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-new-in-tfs-from-teched-2014/</guid>
      <description>&lt;p&gt;If you use TFS then it is well worth a look at &lt;a href=&#34;http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/FDN04#fbid=&#34;&gt;Brian Harry’s Teched2014 session ‘Modern Application Lifecycle Management’&lt;/a&gt;. It goes through changes and new features with TFS both on-premise and in the cloud, including&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2014/05/12/migrating-your-data-from-tfs-to-visual-studio-online-with-new-free-utility-from-opshub.aspx&#34;&gt;Migrating Your Data from TFS to Visual Studio Online with New Free Utility from OpsHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Authentication with your corporate Active Directory account via AAD (planned for Summer 2014)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2014/05/12/a-new-api-for-visual-studio-online.aspx&#34;&gt;New API is based on REST, OAUTH, Json and Service Hooks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Release Management, PowerShell DSC, ability to link Azure VMs including client OS for dev/test&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2014/05/12/application-insights-extension-for-visual-studio-update-may-12.aspx&#34;&gt;Application Insights available inside Visual Studio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Azure vNext portal – bring it all together&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Not all these features are in 2013.2 (&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2014/05/12/vs-2013-update-2-and-other-teched-news.aspx&#34;&gt;which was released during the conference&lt;/a&gt;). However, in the session they said the Visual Studio 2013.3 CTP is going to be available next week, so there is not long to wait if you want a look at the latest features.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you use TFS then it is well worth a look at <a href="http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/FDN04#fbid=">Brian Harry’s TechEd 2014 session ‘Modern Application Lifecycle Management’</a>. It goes through changes and new features in TFS, both on-premises and in the cloud, including</p>
<ul>
<li><a href="http://blogs.msdn.com/b/visualstudioalm/archive/2014/05/12/migrating-your-data-from-tfs-to-visual-studio-online-with-new-free-utility-from-opshub.aspx">Migrating Your Data from TFS to Visual Studio Online with New Free Utility from OpsHub</a></li>
<li>Authentication with your corporate Active Directory account via AAD (planned for Summer 2014)</li>
<li><a href="http://blogs.msdn.com/b/bharry/archive/2014/05/12/a-new-api-for-visual-studio-online.aspx">New API is based on REST, OAUTH, Json and Service Hooks</a></li>
<li>Release Management, PowerShell DSC, ability to link Azure VMs including client OS for dev/test</li>
<li><a href="http://blogs.msdn.com/b/bharry/archive/2014/05/12/application-insights-extension-for-visual-studio-update-may-12.aspx">Application Insights available inside Visual Studio</a></li>
<li>Azure vNext portal – bring it all together</li>
</ul>
<p>Not all these features are in 2013.2 (<a href="http://blogs.msdn.com/b/bharry/archive/2014/05/12/vs-2013-update-2-and-other-teched-news.aspx">which was released during the conference</a>). However, in the session they said the Visual Studio 2013.3 CTP is going to be available next week, so there is not long to wait if you want a look at the latest features.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New release of TFS Alerts DSL that allows work item state rollup</title>
      <link>https://blog.richardfennell.net/posts/new-release-of-tfs-alerts-dsl-that-allows-work-item-state-rollup/</link>
      <pubDate>Sun, 11 May 2014 17:33:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-release-of-tfs-alerts-dsl-that-allows-work-item-state-rollup/</guid>
      <description>&lt;p&gt;A very common question I am asked at clients is&lt;/p&gt;
&lt;p&gt;“Is it possible for a parent TFS work item to be automatically be set to ‘done’ when all the child work items are ‘done’?”.&lt;/p&gt;
&lt;p&gt;The answer is that it is not available out of the box; there is no work item state rollup in TFS.&lt;/p&gt;
&lt;p&gt;However, it is possible via the API. I have modified my &lt;a href=&#34;https://tfsalertsdsl.codeplex.com/&#34;&gt;TFS Alerts DSL CodePlex project&lt;/a&gt; to expose this functionality. I have added a couple of methods that allow you to find the parent and children of a work item, and hence create your own rollup script.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A very common question I am asked at clients is</p>
<p>“Is it possible for a parent TFS work item to be automatically be set to ‘done’ when all the child work items are ‘done’?”.</p>
<p>The answer is that it is not available out of the box; there is no work item state rollup in TFS.</p>
<p>However, it is possible via the API. I have modified my <a href="https://tfsalertsdsl.codeplex.com/">TFS Alerts DSL CodePlex project</a> to expose this functionality. I have added a couple of methods that allow you to find the parent and children of a work item, and hence create your own rollup script.</p>
<p>To make use of this all you need to do is create a TFS Alert that calls a SOAP end point where the Alerts DSL is installed. This end point should be called whenever a work item changes state. It will in turn run a Python script similar to the following to perform the roll-up</p>
<pre tabindex="0"><code>import sys

# Expect 2 args: the event type and a unique ID for the wi
if sys.argv[0] == &#34;WorkItemEvent&#34;:
    wi = GetWorkItem(int(sys.argv[1]))
    parentwi = GetParentWorkItem(wi)
    if parentwi == None:
        LogInfoMessage(&#34;Work item &#39;&#34; + str(wi.Id) + &#34;&#39; has no parent&#34;)
    else:
        LogInfoMessage(&#34;Work item &#39;&#34; + str(wi.Id) + &#34;&#39; has parent &#39;&#34; + str(parentwi.Id) + &#34;&#39;&#34;)

        results = [c for c in GetChildWorkItems(parentwi) if c.State != &#34;Done&#34;]
        if len(results) == 0:
            LogInfoMessage(&#34;All child work items are &#39;Done&#39;&#34;)
            parentwi.State = &#34;Done&#34;
            UpdateWorkItem(parentwi)
            msg = &#34;Work item &#39;&#34; + str(parentwi.Id) + &#34;&#39; has been set as &#39;Done&#39; as all its child work items are done&#34;
            SendEmail(&#34;richard@typhoontfs&#34;, &#34;Work item &#39;&#34; + str(parentwi.Id) + &#34;&#39; has been updated&#34;, msg)
            LogInfoMessage(msg)
        else:
            LogInfoMessage(&#34;Not all child work items are &#39;Done&#39;&#34;)
else:
    LogErrorMessage(&#34;Was not expecting to get here&#34;)
    LogErrorMessage(sys.argv)
</code></pre><p>So now there is a fairly easy way to create your own rollups, based on your own rules.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting ‘The build directory of the test run either does not exist or access permission is required’ error when trying to run tests as part of the Release Management deployment</title>
      <link>https://blog.richardfennell.net/posts/getting-the-build-directory-of-the-test-run-either-does-not-exist-or-access-permission-is-required-error-when-trying-to-run-tests-as-part-of-the-release-management-deployment/</link>
      <pubDate>Mon, 05 May 2014 19:34:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-the-build-directory-of-the-test-run-either-does-not-exist-or-access-permission-is-required-error-when-trying-to-run-tests-as-part-of-the-release-management-deployment/</guid>
      <description>&lt;p&gt;Whilst running tests as part of a Release Management deployment I started seeing the error ‘The build directory of the test run either does not exist or access permission is required’, and hence all my tests failed. It seems that there are issues that can cause this problem, as mentioned in the comments &lt;a href=&#34;http://nakedalm.com/execute-tests-release-management-visual-studio-2013/&#34;&gt;in Martin Hinshelwood’s post on running tests in deployment&lt;/a&gt;, specially spaces in the build name can cause this problem, but this was not the case for me.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst running tests as part of a Release Management deployment I started seeing the error ‘The build directory of the test run either does not exist or access permission is required’, and hence all my tests failed. It seems that there are a number of issues that can cause this error, as mentioned in the comments <a href="http://nakedalm.com/execute-tests-release-management-visual-studio-2013/">in Martin Hinshelwood’s post on running tests in deployment</a>, especially spaces in the build name, but this was not the case for me.</p>
<p>The strangest point was that it used to work; what had I changed?</p>
<p>To debug the problem I logged into the test VM as the account the deployment service was running as (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/03/29/Getting-started-with-Release-Management-with-network-isolated-Lab-Management-environments.aspx">a shadow account as the environment was network isolated</a>). I got the command line that the component was trying to run by looking at the messages in the deployment log</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_181.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_179.png" title="image"></a></p>
<p>I then went to the deployment folder on the test VM</p>
<blockquote>
<p>%appdata%\local\temp\releasemanagement\[the release management component name]\[release number]</p></blockquote>
<p>and ran the same command line. Strange thing was this worked! All the tests ran and passed OK, TFS was updated, everything was good.</p>
<p>It seemed I only had an issue when triggering the tests via a Release Management deployment, very strange!</p>
<blockquote>
<p>A side note here: when I say the script ran OK, it did report an error and did not export and unpack the test results from the TRX file to pass back to the console/release management log. Turns out this is because the MTMExec.ps1 script uses the command <strong>[System.IO.File]::Exists(..)</strong> to check if the .TRX file has been produced. This fails when the script is run manually. This is because it relies on <strong>[Environment]::CurrentDirectory</strong>, which is not set the same way when run manually as when the script is called by the deployment service. When run manually it seems to default to <strong>c:\windows\system32</strong>, not the current folder.</p>
<p>If you are editing this script, and want it to work in both scenarios, then it is probably best to use the PowerShell <strong>Test-Path(..)</strong> cmdlet as opposed to <strong>[System.IO.File]::Exists(..)</strong></p></blockquote>
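<p>The same pitfall is easy to reproduce outside PowerShell. This Python sketch (purely illustrative, not part of the MTMExec.ps1 script; all file names are made up) shows why a relative file check depends on the process's current working directory, which is exactly what differs between a manual run and one triggered by the deployment service:</p>
```python
import os
import tempfile

# Create a results file in a known folder, as the test run would.
workdir = tempfile.mkdtemp()
trx_path = os.path.join(workdir, "results.trx")
open(trx_path, "w").close()

# Simulate the process having a different current directory, as happens
# when the script is run by the deployment service rather than manually.
os.chdir(tempfile.mkdtemp())

relative_check = os.path.exists("results.trx")  # depends on the CWD
absolute_check = os.path.exists(trx_path)       # independent of the CWD

print(relative_check, absolute_check)  # prints: False True
```
<p>Resolving the path explicitly, or anchoring the check to the script's own folder, makes the behaviour the same in both scenarios.</p>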
<p>So where to look for this problem, the error says something can’t access the drops location, but what?</p>
<p>A bit of thought as to who is doing what can help here</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_182.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_180.png" title="image"></a></p>
<p>When the deployment calls for a test to be run</p>
<ul>
<li>The Release Management deployment agent pulls the component down to the test VM from the Release Management Server</li>
<li>It then runs the PowerShell script</li>
<li>The PowerShell script runs TCM.exe to trigger the test run, passing in the credentials to access the TFS server and Test Controller</li>
<li>The Test Controller triggers the tests to be run on the Test Agent, providing it with the required DLLs from the TFS drops location – THIS IS THE STEP WHERE THE PROBLEM IS SEEN</li>
<li>The Test Agent runs the tests and passes the results back to TFS via the Test Controller</li>
<li>After the PowerShell script triggers the test run it loops until the test run is complete.</li>
<li>It then uses TCM again to extract the test results, which it parses and passes back to the Release Management server</li>
</ul>
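<p>The trigger-then-poll part of that flow can be sketched as follows; the functions here are hypothetical stand-ins for the real TCM.exe calls, not the actual PowerShell script:</p>
```python
import time

def trigger_test_run():
    # Stand-in for 'tcm.exe run /create ...', which returns a run id.
    return 42

def get_run_state(run_id, _calls=[0]):
    # Stand-in for polling the run state; here it completes on the 3rd poll.
    _calls[0] += 1
    return "Completed" if _calls[0] >= 3 else "InProgress"

def export_results(run_id):
    # Stand-in for 'tcm.exe run /export ...' producing a .trx results file.
    return {"run": run_id, "outcome": "Passed"}

run_id = trigger_test_run()
while get_run_state(run_id) != "Completed":
    time.sleep(0.01)  # the real script waits far longer between polls
results = export_results(run_id)
print(results["outcome"])  # prints: Passed
```
<p>The key point for debugging is that the script only sees the run state and exported results; the actual test execution, and the failing access to the drops share, happens between the Test Controller and Test Agent.</p>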
<p>So a good few places to check the logs.</p>
<p>Turns out the error was being reported on the Test Controller.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_183.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_181.png" title="image"></a></p>
<p><em>(QTController.exe, PID 1208, Thread 14) Could not use lab service account to access the build directory. Failure: Network path does not exist or is not accessible using following user: \\store\drops\Sabs.Main.CI\Sabs.Main.CI_2.3.58.11938 using blackmarble\tfslab. Error Code: 53</em></p>
<p>The error told me the folder and who couldn’t access it, the domain service account ‘tfslab’ the Test Agents use to talk back to the Test Controller.</p>
<p>I checked the drops location share and this user has adequate access rights. I even logged on to the Test Controller as this user and confirmed I could open the share.</p>
<p>I then had a thought: this was the account the Test Agents were using to communicate with the Test Controller, but was it the account the controller was running as? A check showed it was not; the controller was running as the default ‘Local System’. As soon as I swapped to using the lab service account (or, I think, any domain account with suitable rights) it all started to work.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_184.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_182.png" title="image"></a></p>
<p>So why did this problem occur?</p>
<p>All I can think of is that (to address another issue with Windows 8.1 Coded-UI testing) the Test Controller was upgraded to 2013.2RC, but the Test Agent in this lab environment was still at 2013RTM. Maybe the mismatch is the issue?</p>
<p>I may revisit and retest with the ‘Local System’ account when 2013.2 RTM’s and I upgrade all the controllers and agents, but I doubt it. I have no issue running the test controller as a domain account.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Setting the LocalSQLServer connection string in web deploy</title>
      <link>https://blog.richardfennell.net/posts/setting-the-localsqlserver-connection-string-in-web-deploy/</link>
      <pubDate>Fri, 02 May 2014 17:23:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-the-localsqlserver-connection-string-in-web-deploy/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/05/01/Changing-WCF-bindings-for-MSDeploy-packages-when-using-Release-Management.aspx&#34;&gt;If you are using Webdeploy&lt;/a&gt; you might wish to alter the connection string the for the LocalSQLServer that is used by the ASP.NET provider for web part personalisation. The default is to use ASPNETDB.mdf in the APP_Data folder, but in a production system you could well want to use a ‘real’ SQL server.&lt;/p&gt;
&lt;p&gt;If you look in your web.config, assuming you are not using the default ‘not set’ setting, it will look something like&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/05/01/Changing-WCF-bindings-for-MSDeploy-packages-when-using-Release-Management.aspx">If you are using Webdeploy</a> you might wish to alter the connection string for LocalSQLServer, which is used by the ASP.NET provider for web part personalisation. The default is to use ASPNETDB.mdf in the App_Data folder, but in a production system you could well want to use a ‘real’ SQL server.</p>
<p>If you look in your web.config, assuming you are not using the default ‘not set’ setting, it will look something like</p>
<blockquote>
&lt;connectionStrings&gt;
    &lt;clear /&gt;
    &lt;add name="LocalSQLServer" connectionString="Data Source=(LocalDB)\projects; Integrated Security=true; AttachDbFileName=|DataDirectory|\ASPNETDB.mdf" providerName="System.Data.SqlClient" /&gt;
&lt;/connectionStrings&gt;</blockquote>
<p>Usually you would expect any connection string in the web.config to appear in the Web Deploy publish wizard, but this one does not. I have no real idea why, but maybe it is something to do with having to use &lt;clear /&gt; to remove the default?</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_180.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_178.png" title="image"></a></p>
<p>If you use a parameters.xml file to add parameters to the web deploy you would think you could add the block</p>
<blockquote>
&lt;parameter name="LocalSQLServer" description="Please enter the ASP.NET DB path" defaultvalue="__LocalSQLServer__" tags=""&gt;
  &lt;parameterentry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" /&gt;
&lt;/parameter&gt;</blockquote>
<p>However, this does not work; in the setparameters.xml that is generated you find two entries, first yours and then the auto-generated one, and the last one wins, so you don’t get the correct connection string.</p>
<blockquote>
&lt;setParameter name="LocalSQLServer" value="__LocalSQLServer__" /&gt;
&lt;setParameter name="LocalSQLServer-Web.config Connection String" value="Data Source=(LocalDB)\projects; Integrated Security=true; AttachDbFileName=|DataDirectory|\ASPNETDB.mdf" /&gt;</blockquote>
<p>The solution I found was to manually add the parameter to the parameters.xml file as</p>
<blockquote>
&lt;parameter name="LocalSQLServer-Web.config Connection String" description="LocalSQLServer Connection String used in web.config by the application to access the database." defaultValue="__LocalSQLServer__" tags="SqlConnectionString"&gt;
  &lt;parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" /&gt;
&lt;/parameter&gt;</blockquote>
<p>With this form the connection string was correctly modified, as only one entry appears in the generated file.</p>
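<p>A quick sanity check for this kind of clash is to scan the parameter definitions for entries that target the same XPath match, since two parameters writing to the same attribute is what causes the ‘last one wins’ behaviour. A hypothetical sketch (the XML is a cut-down version of the example above):</p>
```python
import xml.etree.ElementTree as ET
from collections import Counter

# The two parameter definitions that end up in the generated file; both
# target the same connection string attribute in web.config.
params_xml = r"""<parameters>
  <parameter name="LocalSQLServer" defaultValue="__LocalSQLServer__">
    <parameterEntry kind="XmlFile" scope="\\web.config$"
        match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
  </parameter>
  <parameter name="LocalSQLServer-Web.config Connection String"
             defaultValue="a real connection string">
    <parameterEntry kind="XmlFile" scope="\\web.config$"
        match="/configuration/connectionStrings/add[@name='LocalSQLServer']/@connectionString" />
  </parameter>
</parameters>"""

root = ET.fromstring(params_xml)
matches = Counter(e.get("match") for e in root.iter("parameterEntry"))
clashes = [m for m, count in matches.items() if count > 1]
print(len(clashes))  # prints: 1 (two parameters write to the same attribute)
```
<p>Once only one parameter targets the attribute, as in the fix above, the list of clashes is empty and the value applied is the one you expect.</p>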
]]></content:encoded>
    </item>
    <item>
      <title>Changing WCF bindings for MSDeploy packages when using Release Management</title>
      <link>https://blog.richardfennell.net/posts/changing-wcf-bindings-for-msdeploy-packages-when-using-release-management/</link>
      <pubDate>Thu, 01 May 2014 12:03:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/changing-wcf-bindings-for-msdeploy-packages-when-using-release-management/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.colinsalmcorner.com/post/webdeploy-and-release-management--the-proper-way&#34;&gt;Colin Dembovsky’s excellent post ‘WebDeploy and Release Management – The Proper Way&lt;/a&gt;’ explains how to pass parameters from Release Management into MSDeploy to update Web.config files. On the system I am working on I also need to do some further web.config translation, basically the WCF section is different on a Lab or Production build as it needs to use Kerberos, whereas local debug builds don’t.&lt;/p&gt;
&lt;p&gt;In the past I dealt with this, and with editing the AppSettings, using MSDeploy web.config transformation. This worked fine, but it meant I built the product three times, exactly what Colin’s post is trying to avoid. The techniques in the post for the AppSettings and connection strings are fine, but don’t apply so well to the large block swap-outs I need for the WCF bindings section.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.colinsalmcorner.com/post/webdeploy-and-release-management--the-proper-way">Colin Dembovsky’s excellent post ‘WebDeploy and Release Management – The Proper Way</a>’ explains how to pass parameters from Release Management into MSDeploy to update Web.config files. On the system I am working on I also need to do some further web.config transformation; basically, the WCF section is different in a Lab or Production build, as it needs to use Kerberos, whereas local debug builds don’t.</p>
<p>In the past I dealt with this, and with editing the AppSettings, using MSDeploy web.config transformation. This worked fine, but it meant I built the product three times, exactly what Colin’s post is trying to avoid. The techniques in the post for the AppSettings and connection strings are fine, but don’t apply so well to the large block swap-outs I need for the WCF bindings section.</p>
<p>I was considering my options when I realised there a simple option.</p>
<ul>
<li>My default web.config has the bindings for local operation, i.e. no Kerberos</li>
<li>The web.debug.config transformation hence does nothing</li>
<li>Both the web.lab.config and web.release.config transformations have the Kerberos bindings swapped in</li>
</ul>
<p>So all I needed to do was build the Release build (as you would for a production release anyway), as this will have the correct bindings in the MSDeploy package for both Lab and Release. You can then use Release Management to set the AppSettings and connection strings as required.</p>
<p>Simple, no extra handling required. I had thought my self into a problem I did not really have.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Release Management components fail to deploy with a timeout if a variable is changed from standard to encrypted</title>
      <link>https://blog.richardfennell.net/posts/release-management-components-fail-to-deploy-with-a-timeout-if-a-variable-is-changed-from-standard-to-encrypted/</link>
      <pubDate>Thu, 01 May 2014 11:47:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/release-management-components-fail-to-deploy-with-a-timeout-if-a-variable-is-changed-from-standard-to-encrypted/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx&#34;&gt;have been using Release Management&lt;/a&gt; to update some of our internal deployment processes. This has included changing the way we roll out MSDeploy packages; I am following &lt;a href=&#34;http://www.colinsalmcorner.com/post/webdeploy-and-release-management--the-proper-way&#34;&gt;Colin Dembovsky’s excellent post of the subject&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I hit an interesting issue today. One of the configuration variable parameters I was passing into a component was a password field. For my initial tests I had just left this as a clear text ‘standard’ string in Release Management. Once I got this all working I thought I had better switch this variable to ‘encrypted’, so I just changed the type on the Configuration Variables tab.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/08/What-I-learnt-getting-Release-Management-running-with-a-network-Isolated-environment.aspx">have been using Release Management</a> to update some of our internal deployment processes. This has included changing the way we roll out MSDeploy packages; I am following <a href="http://www.colinsalmcorner.com/post/webdeploy-and-release-management--the-proper-way">Colin Dembovsky’s excellent post on the subject</a>.</p>
<p>I hit an interesting issue today. One of the configuration variable parameters I was passing into a component was a password field. For my initial tests I had just left this as a clear text ‘standard’ string in Release Management. Once I got this all working I thought I had better switch this variable to ‘encrypted’, so I just changed the type on the Configuration Variables tab.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_178.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_176.png" title="image"></a> </p>
<p>On doing this I was warned that previous deployments would not be re-deployable, but that was OK for me; it was just a trial system and I would not be going back to older versions.</p>
<p>However, when I tried to run this revised release template all the steps up to the edited MSDeploy step were fine, but the MSDeploy step never ran; it just timed out. The component was never deployed to the target machine’s %appdata%\local\temp\releasemanagement folder.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_179.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_177.png" title="image"></a></p>
<p>In the end, after a few reboots to confirm the comms were OK, I just re-added the component to the release template and entered all the variables again. It then deployed without a problem.</p>
<p>I think this is a case of a misleading error message.</p>
]]></content:encoded>
    </item>
    <item>
      <title>‘Windows Phone 8.1 Update’ update</title>
      <link>https://blog.richardfennell.net/posts/windows-phone-8-1-update-update/</link>
      <pubDate>Mon, 28 Apr 2014 09:09:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-phone-8-1-update-update/</guid>
      <description>&lt;p&gt;I have been running &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/15/All-upgraded-to-the-Windows-Phone-81-Update.aspx&#34;&gt;Windows Phone 8.1 Update&lt;/a&gt; for a couple of weeks now and have to say I like. I have not suffered the poor battery life others seem to have suffered. Maybe this is an feature of the Nokia 820 no needing as many firmware updates from Nokia (which aren&amp;rsquo;t available yet) note having such power hungry features as the larger phones.&lt;/p&gt;
&lt;p&gt;The only issue I have had is that I lost an audio channel when using a headset. Initially I was unsure if it was a mechanical fault on the headphone socket, but I checked that the headset was good; it sounded as if the balance was faded to just one side, as you could just hear something faint on the failing side. Anyway, as is often the case in IT, a reboot of the phone fixed the issue.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been running <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/04/15/All-upgraded-to-the-Windows-Phone-81-Update.aspx">Windows Phone 8.1 Update</a> for a couple of weeks now and have to say I like it. I have not suffered the poor battery life others seem to have experienced. Maybe this is a feature of the Nokia 820 not needing as many firmware updates from Nokia (which aren&rsquo;t available yet), nor having such power-hungry features as the larger phones.</p>
<p>The only issue I have had is that I lost an audio channel when using a headset. Initially I was unsure if it was a mechanical fault on the headphone socket, but I checked that the headset was good; it sounded as if the balance was faded to just one side, as you could just hear something faint on the failing side. Anyway, as is often the case in IT, a reboot of the phone fixed the issue.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The return of Visual Studio Setup projects - just because you can use them should you?</title>
      <link>https://blog.richardfennell.net/posts/the-return-of-visual-studio-setup-projects-just-because-you-can-use-them-should-you/</link>
      <pubDate>Wed, 23 Apr 2014 09:18:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-return-of-visual-studio-setup-projects-just-because-you-can-use-them-should-you/</guid>
      <description>&lt;p&gt;A significant blocker for some of my customers moving to Visual Studio 2013 (and 2012 previously) has been the removal of Visual Studio Setup Projects; my experience has been confirmed by &lt;a href=&#34;https://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/3041773-bring-back-the-basic-setup-and-deployment-project&#34;&gt;UserVoice&lt;/a&gt;. Well Microsoft have addressed this pain point by releasing a &lt;a href=&#34;http://blogs.msdn.com/b/visualstudio/archive/2014/04/17/visual-studio-installer-projects-extension.aspx&#34;&gt;Visual Studio Extension to re-add this Visual Studio 2010 functionality to 2013&lt;/a&gt;. This can be downloaded from the &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/9abe329c-9bba-44a1-be59-0fbf6151054d&#34;&gt;Visual Studio Gallery&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Given this release, the question now becomes should you use it? Or should you take the harder road in the short term of moving to &lt;a href=&#34;http://wixtoolset.org/&#34;&gt;Wix&lt;/a&gt;, but with the far greater flexibility this route offers going forward?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A significant blocker for some of my customers moving to Visual Studio 2013 (and 2012 previously) has been the removal of Visual Studio Setup Projects; my experience has been confirmed by <a href="https://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/3041773-bring-back-the-basic-setup-and-deployment-project">UserVoice</a>. Well Microsoft have addressed this pain point by releasing a <a href="http://blogs.msdn.com/b/visualstudio/archive/2014/04/17/visual-studio-installer-projects-extension.aspx">Visual Studio Extension to re-add this Visual Studio 2010 functionality to 2013</a>. This can be downloaded from the <a href="http://visualstudiogallery.msdn.microsoft.com/9abe329c-9bba-44a1-be59-0fbf6151054d">Visual Studio Gallery</a>.</p>
<p>Given this release, the question now becomes should you use it? Or should you take the harder road in the short term of moving to <a href="http://wixtoolset.org/">Wix</a>, but with the far greater flexibility this route offers going forward?</p>
<p>At Black Marble we decided, when Visual Studio Setup projects were dropped, to move all active projects over to Wix. The learning curve can be a pain, but in reality most Visual Studio Setup projects convert to fairly simple Wix projects. The key advantage for us is that you can build a Wix project on a TFS build agent via MSBuild; not something you can do with a Visual Studio Setup Project without jumping through hoops after installing Visual Studio on the build box.</p>
<p>That said, I know that the upgrade cost of moving to Wix is a major blocker for many people, and this extension will remove that cost. However, please consider the extension a tool to allow a more staged transition of installer technology, not an end in itself. Don’t let your installers become a nest of <a href="http://www.technologyandfriends.com/TechnologyAndFriends/SubText/archive/2010/03/08/tf076.aspx">technical debt</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>All upgraded to the Windows Phone 8.1 Update</title>
      <link>https://blog.richardfennell.net/posts/all-upgraded-to-the-windows-phone-8-1-update/</link>
      <pubDate>Tue, 15 Apr 2014 10:03:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/all-upgraded-to-the-windows-phone-8-1-update/</guid>
      <description>&lt;p&gt;My Nokia 820 phone is now &lt;a href=&#34;http://www.wpcentral.com/windows-phone-81-now-available&#34;&gt;updated to 8.1 with the developer preview&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_177.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_175.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The actual upgrade was straightforward; the only issue was that the &lt;a href=&#34;http://www.wpcentral.com/windows-phone-81-now-available&#34;&gt;Store was down last night&lt;/a&gt; so updating apps could not be done until this morning. This was made more of an issue by the fact that I had had to remove all my Nokia Maps and the iPodcast application (and downloaded podcasts) to free up space on the phone to allow the upgrade. Both these apps could only store data on the phone (not the SD card) and thus blocked the upgrade. This lack of space on the actual phone has been a constant issue for me on the Nokia 820.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My Nokia 820 phone is now <a href="http://www.wpcentral.com/windows-phone-81-now-available">updated to 8.1 with the developer preview</a>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_177.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_175.png" title="image"></a></p>
<p>The actual upgrade was straightforward; the only issue was that the <a href="http://www.wpcentral.com/windows-phone-81-now-available">Store was down last night</a> so updating apps could not be done until this morning. This was made more of an issue by the fact that I had had to remove all my Nokia Maps and the iPodcast application (and downloaded podcasts) to free up space on the phone to allow the upgrade. Both these apps could only store data on the phone (not the SD card) and thus blocked the upgrade. This lack of space on the actual phone has been a constant issue for me on the Nokia 820.</p>
<p>So what is new and immediately useful to me?</p>
<ul>
<li>You can now store virtually anything on an SDCard, not just music and images</li>
<li>The notification bar is great, no need for the connectivity shortcuts, but it does so much more</li>
<li>And at last podcasting is built back in; the only issue is I am not sure I want the hassle of re-entering all my subscriptions. iPodcast does such a great job of storing them in the cloud, making re-installation or device swaps so easy – time will tell on that one if I move or not.</li>
</ul>
<p>At this point I decided to leave my phone on UK settings, so did not <a href="http://www.wpcentral.com/comment/849041">get Cortana enabled</a>, letting others in the office map out any issues that may occur when playing with region settings and the Store.</p>
<p>So now to see what WP8.1 is like to live with…</p>
]]></content:encoded>
    </item>
    <item>
      <title>The 2013 editions TFS books</title>
      <link>https://blog.richardfennell.net/posts/the-2013-editions-tfs-books/</link>
      <pubDate>Tue, 15 Apr 2014 09:27:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-2013-editions-tfs-books/</guid>
      <description>&lt;p&gt;The 2013 editions of existing TFS books are now available&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.amazon.co.uk/dp/1118836340?tag=buitwoonmypc-21&amp;amp;camp=2902&amp;amp;creative=19466&amp;amp;linkCode=as4&amp;amp;creativeASIN=1118836340&amp;amp;adid=0GBH270XCWF1JPMBDH4G&amp;amp;&amp;amp;ref-refURL=http%3A%2F%2Fblogs.blackmarble.co.uk%2Fblogs%2Frfennell%2Fpage%2FReading-List.aspx&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://ecx.images-amazon.com/images/I/51xbj0kaNWL._SX385_.jpg&#34;&gt;&lt;/a&gt;     &lt;a href=&#34;http://www.amazon.co.uk/Professional-Application-Lifecycle-Management-Visual-ebook/dp/B00JDJDPJM/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1397549808&amp;amp;sr=1-1&amp;amp;keywords=Professional&amp;#43;Application&amp;#43;Lifecycle&amp;#43;Management&amp;#43;with&amp;#43;Visual&amp;#43;Studio&amp;#43;2013&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_175.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Also well worth a look is ‘Team Foundation Server 2013 Customization’ by Gordon Beeming. A great look at all the extension points in TFS&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.amazon.co.uk/Team-Foundation-Server-2013-Customization-ebook/dp/B00HYTDXJU/ref=sr_1_1?ie=UTF8&amp;amp;qid=1397550212&amp;amp;sr=8-1&amp;amp;keywords=Team&amp;#43;Foundation&amp;#43;Server&amp;#43;2013&amp;#43;Customization&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_176.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A longer list of books can be found in &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/page/Reading-List.aspx&#34;&gt;my reading list&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The 2013 editions of existing TFS books are now available</p>
<p><a href="http://www.amazon.co.uk/dp/1118836340?tag=buitwoonmypc-21&amp;camp=2902&amp;creative=19466&amp;linkCode=as4&amp;creativeASIN=1118836340&amp;adid=0GBH270XCWF1JPMBDH4G&amp;&amp;ref-refURL=http%3A%2F%2Fblogs.blackmarble.co.uk%2Fblogs%2Frfennell%2Fpage%2FReading-List.aspx"><img loading="lazy" src="http://ecx.images-amazon.com/images/I/51xbj0kaNWL._SX385_.jpg"></a>     <a href="http://www.amazon.co.uk/Professional-Application-Lifecycle-Management-Visual-ebook/dp/B00JDJDPJM/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1397549808&amp;sr=1-1&amp;keywords=Professional&#43;Application&#43;Lifecycle&#43;Management&#43;with&#43;Visual&#43;Studio&#43;2013"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_175.png" title="image"></a></p>
<p>Also well worth a look is ‘Team Foundation Server 2013 Customization’ by Gordon Beeming. A great look at all the extension points in TFS</p>
<p><a href="http://www.amazon.co.uk/Team-Foundation-Server-2013-Customization-ebook/dp/B00HYTDXJU/ref=sr_1_1?ie=UTF8&amp;qid=1397550212&amp;sr=8-1&amp;keywords=Team&#43;Foundation&#43;Server&#43;2013&#43;Customization"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_176.png" title="image"></a></p>
<p>A longer list of books can be found in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/page/Reading-List.aspx">my reading list</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Where has my picture password login sign in gone on Windows 8?</title>
      <link>https://blog.richardfennell.net/posts/where-has-my-picture-password-login-sign-in-gone-on-windows-8/</link>
      <pubDate>Thu, 10 Apr 2014 13:45:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-has-my-picture-password-login-sign-in-gone-on-windows-8/</guid>
      <description>&lt;p&gt;I have had a &lt;a href=&#34;http://surface.microsoftstore.com/store/msuk/en_GB/html/pbPage.CATS/categoryID.66611000?tid=tDLg4htd&amp;amp;cid=5366&amp;amp;pcrid=3432721664&amp;amp;pkw=surface%202&amp;amp;pmt=e&amp;amp;WT.srch=1&amp;amp;WT.mc_id=pointitsem_Microsoft&amp;#43;UK_bing_Surface&amp;#43;-&amp;#43;UK&amp;amp;WT.term=surface%202&amp;amp;WT.campaign=Surface&amp;#43;-&amp;#43;UK&amp;amp;WT.content=tDLg4htd&amp;amp;WT.source=bing&amp;amp;WT.medium=cpc&#34;&gt;Surface 2&lt;/a&gt; for about six months. It is great for watching videos on the train, or a bit of browsing, but I don’t like it for note taking in meetings. This is a shame, as this is what I got it for: a light device with good battery life to take to meetings. What I needed was something I could hand write on in OneNote, an electronic pad. The Surface 2 touch screen is just not accurate enough.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have had a <a href="http://surface.microsoftstore.com/store/msuk/en_GB/html/pbPage.CATS/categoryID.66611000?tid=tDLg4htd&amp;cid=5366&amp;pcrid=3432721664&amp;pkw=surface%202&amp;pmt=e&amp;WT.srch=1&amp;WT.mc_id=pointitsem_Microsoft&#43;UK_bing_Surface&#43;-&#43;UK&amp;WT.term=surface%202&amp;WT.campaign=Surface&#43;-&#43;UK&amp;WT.content=tDLg4htd&amp;WT.source=bing&amp;WT.medium=cpc">Surface 2</a> for about six months. It is great for watching videos on the train, or a bit of browsing, but I don’t like it for note taking in meetings. This is a shame, as this is what I got it for: a light device with good battery life to take to meetings. What I needed was something I could hand write on in OneNote, an electronic pad. The Surface 2 touch screen is just not accurate enough.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/02/05/Using-the-Dell-Venue-8-Pro-Stylus.aspx">After Rik’s glowing review</a> I have just got a <a href="http://www.dell.com/uk/p/dell-venue-8-pro/pd">Dell Venue 8 Pro</a> and stylus. I set up the Dell with a picture password and all was OK for a while; I could log in via a typed password or a picture as you would expect. However, the picture password sign-in option disappeared from the lock/login screen at some point during the numerous updates and application installations I needed.</p>
<p>I am not 100% certain, but I think the issue is that when I configured the Windows 8 Mail application to talk to our company Exchange server I was asked to accept some security settings from our domain. I think this blocked picture passwords for non-domain joined devices. I joined the Dell to our domain (you can do this as it is Atom, not ARM, based, assuming you are willing to do a reinstall with Windows 8 Pro) and this seems to have fixed my problem. I have installed all the same patches and apps and I still have the picture password option.</p>
<p>So roll on the next meeting, to see if I can take reasonable handwritten notes on it and whether OneNote desktop manages to convert them to text.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Handling .pubxml files with TFS MSBuild arguments</title>
      <link>https://blog.richardfennell.net/posts/handling-pubxml-files-with-tfs-msbuild-arguments/</link>
      <pubDate>Thu, 10 Apr 2014 13:26:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/handling-pubxml-files-with-tfs-msbuild-arguments/</guid>
      <description>&lt;p&gt;With &lt;a href=&#34;http://www.hanselman.com/blog/TinyHappyFeatures3PublishingImprovementsChainedConfigTransformsAndDeployingASPNETAppsFromTheCommandLine.aspx&#34;&gt;Visual Studio 2012 there were changes in the way Web Publishing worked&lt;/a&gt;; the key fact being that the configuration was moved from the .csproj to a .pubxml in the properties folder. This allows them to be more easily managed under source control by a team. This does have some knock on effects though, especially when you start to consider automated build and deployment.&lt;/p&gt;
&lt;p&gt;Up to now we have not seen issues in this area, most of our active projects that needed web deployment packages had started in the Visual Studio 2010 era so had all the publish details in the project and this is still supported by later versions of Visual Studio. This meant that if we had three configurations debug, lab and release, then there were three different sets of settings stored in different blocks of the project file. So if you used the &lt;strong&gt;/p:DeployOnBuild=True&lt;/strong&gt; MS Build argument for your TFS build, and built all three configurations you got the settings related to the respective configuration in each drop location.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With <a href="http://www.hanselman.com/blog/TinyHappyFeatures3PublishingImprovementsChainedConfigTransformsAndDeployingASPNETAppsFromTheCommandLine.aspx">Visual Studio 2012 there were changes in the way Web Publishing worked</a>; the key fact being that the configuration was moved from the .csproj to a .pubxml in the properties folder. This allows them to be more easily managed under source control by a team. This does have some knock on effects though, especially when you start to consider automated build and deployment.</p>
<p>Up to now we have not seen issues in this area, as most of our active projects that needed web deployment packages started in the Visual Studio 2010 era, so had all the publish details in the project file; this is still supported by later versions of Visual Studio. This meant that if we had three configurations (debug, lab and release) then there were three different sets of settings stored in different blocks of the project file. So if you used the <strong>/p:DeployOnBuild=True</strong> MSBuild argument for your TFS build, and built all three configurations, you got the settings related to the respective configuration in each drop location.</p>
<p>This seems a good system, until you consider that you have built the assemblies three times; in a world of continuous deployment by binary promotion is this what you want? Better to build the assemblies once, but have different (or transformed) configuration files for each environment/stage in the release pipeline. This is where a swap to a .pubxml file helps.</p>
<p>You create a .pubxml file by running the wizard in Visual Studio via right click on a project and selecting Publish</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_174.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_174.png" title="image"></a></p>
<p>To get TFS build to use a .pubxml file you need to pass its name as an MSBuild argument. So in the past we would have used the argument <strong>/p:DeployOnBuild=True</strong>; now we would use <strong>/p:DeployOnBuild=True;PublishProfile=MyProfile</strong>, where there is a .pubxml file in the path</p>
<blockquote>
<p>[Project]/properties/PublishProfiles/MyProfile.pubxml</p></blockquote>
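<p>For reference, the file the wizard generates is just a small MSBuild property file. A minimal illustrative example (the property values here are invented, not from a real project) looks something like:</p>

```xml
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Build a Web Deploy package rather than publishing directly -->
    <WebPublishMethod>Package</WebPublishMethod>
    <PackageAsSingleFile>true</PackageAsSingleFile>
    <!-- Output path used when building from the command line -->
    <DesktopBuildPackageLocation>C:\deploy\MyWebApp.zip</DesktopBuildPackageLocation>
    <DeployIisAppPath>Default Web Site/MyWebApp</DeployIisAppPath>
  </PropertyGroup>
</Project>
```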
<p>Once this is done your package will be built (assuming that this is Web Deployment Package and not some other form of deploy) and available on your drops location. The values you may wish to alter are probably in the <strong>[your package name].SetParameters.xml</strong> file, which you can alter with whichever transform technology you wish to use e.g. <a href="http://www.hanselman.com/blog/SlowCheetahWebconfigTransformationSyntaxNowGeneralizedForAnyXMLConfigurationFile.aspx">SlowCheetah</a> or Release Management workflows.</p>
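<p>The SetParameters.xml edit does not need a heavyweight transform tool; as a sketch of the idea (the file and parameter names below are invented for illustration, not taken from a real package), a few lines of Python using the standard library XML module will do:</p>

```python
import xml.etree.ElementTree as ET

def set_parameters(path, overrides, out_path):
    """Rewrite values in an MSDeploy SetParameters.xml file.

    The file is a flat list of <setParameter name="..." value="..."/>
    elements, so swapping in environment-specific values is just a
    matter of matching on the name attribute.
    """
    tree = ET.parse(path)
    for elem in tree.getroot().iter("setParameter"):
        if elem.get("name") in overrides:
            elem.set("value", overrides[elem.get("name")])
    tree.write(out_path, encoding="utf-8", xml_declaration=True)
```

<p>Run once per environment/stage before the package’s deploy command is called, this gives the binary-promotion model described above: one build, many configurations.</p>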
<blockquote>
<p>One potential gotcha I had whilst testing with MSBuild from the command line is that the .pubxml file contains a value for the property <strong>&lt;DesktopBuildPackageLocation&gt;</strong>. This will be the output path you used when you created the publish profile using the wizard in Visual Studio.</p>
<p>If you are testing your arguments with MSBuild.exe from the command line, this is where the output gets built to. If you want the build to behave more like TFS build (using the obj/bin folders), you can do so by clearing this value, passing the MSBuild argument <strong>/p:DesktopBuildPackageLocation=&quot;&quot;</strong>.</p>
<p>You don’t need to worry about this for the TFS build definitions as it seems to be able to work it out and get the correctly packaged files to the drops location.</p></blockquote>
]]></content:encoded>
    </item>
    <item>
      <title>What I learnt getting Release Management running with a network Isolated environment</title>
      <link>https://blog.richardfennell.net/posts/what-i-learnt-getting-release-management-running-with-a-network-isolated-environment/</link>
      <pubDate>Tue, 08 Apr 2014 13:58:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-i-learnt-getting-release-management-running-with-a-network-isolated-environment/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 20 Oct 2014&lt;/strong&gt; – With notes on using an Action for cross domain authentication&lt;/p&gt;
&lt;p&gt;In &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/03/29/Getting-started-with-Release-Management-with-network-isolated-Lab-Management-environments.aspx&#34;&gt;my previous post I described how to get a network isolated environment up and running with Release Management&lt;/a&gt;, it is all to do with shadow accounts. Well getting it running is one thing, having a useful release process is another.&lt;/p&gt;
&lt;p&gt;For my test environment I needed to get three things deployed and tested&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A SQL DB deployed via a DACPAC&lt;/li&gt;
&lt;li&gt;A WCF web service deployed using MSDeploy&lt;/li&gt;
&lt;li&gt;A web site deployed using MSDeploy&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;My environment was a four VM &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx&#34;&gt;network isolated environment&lt;/a&gt; running on our TFS Lab Management system.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 20 Oct 2014</strong> – With notes on using an Action for cross domain authentication</p>
<p>In <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/03/29/Getting-started-with-Release-Management-with-network-isolated-Lab-Management-environments.aspx">my previous post I described how to get a network isolated environment up and running with Release Management</a>, it is all to do with shadow accounts. Well getting it running is one thing, having a useful release process is another.</p>
<p>For my test environment I needed to get three things deployed and tested</p>
<ul>
<li>A SQL DB deployed via a DACPAC</li>
<li>A WCF web service deployed using MSDeploy</li>
<li>A web site deployed using MSDeploy</li>
</ul>
<p>My environment was a four VM <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx">network isolated environment</a> running on our TFS Lab Management system.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_163.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_163.png" title="image"></a> </p>
<p>The roles of the VMs were</p>
<ul>
<li>A domain controller</li>
<li>A SQL 2008R2 server  (Release Management deployment agent installed)</li>
<li>A VM configured as a generic IIS web server (Release Management deployment agent installed)</li>
<li>A VM configured as an SP2010 server (needed in the future, but its presence caused me issues so I will mention it)</li>
</ul>
<h3 id="accessing-domain-shares">Accessing domain shares</h3>
<p>The first issue we encountered was that we needed the deployment agents on the VMs to be able to access domain shares on our corporate network, not just ones on the local network isolated domain. They need to be able to do this to download the actual deployment media. The easiest way I found to do this was to place a NET USE command at the start of the workflow for each VM I was deploying to. This allowed authentication from the test domain to our corporate domain, and hence access for the agent to get the files it needed. The alternatives would have been using more shadow accounts, or cross domain trusts, both things I did not want the hassle of managing.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_164.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_164.png" title="image"></a></p>
<p>The run command line activity runs the <strong>net</strong> command with the arguments</p>
<pre tabindex="0"><code>net use \\store\dropsshare [password] /user:[corpdomainaccount]
</code></pre><p>I needed to use this command on each VM I was running the deployment agent on, so it appears twice in this workflow: once for the DB server and once for the web server.</p>
<p><strong>Updated 20 Oct 2014</strong>: After using this technique in a few releases I realised it was a far better idea to have an action do the job. The technique mentioned above meant the password was in clear text; a parameterised action allows it to be encrypted.</p>
<p>Creating an action (and it can be an action, not a component, as it does not need to know the build location) needs the following settings</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_212.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_209.png" title="image"></a></p>
<h3 id="version-of-ssdt-sql-tools">Version of SSDT SQL tools</h3>
<p>My SQL instance was SQL 2008R2; when I tried to use the standard Release Management DACPAC Database Deployer tool it failed with assembly load errors. Basically the assemblies downloaded as part of the tool deployment did not match anything on the VM.</p>
<p>My first step was to install the latest SQL 2012 SSDT tools on the SQL VM. This did not fix the problem, as there was still a mismatch between the assemblies. I therefore created a new tool in the Release Management inventory; this was a copy of the existing DACPAC tool command, but using the current version of the tool assemblies from SSDT 2012</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_165.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_165.png" title="image"></a></p>
<p>Using this version of the tools worked; my DB could be deployed/updated.</p>
<h3 id="granting-rights-for-sql">Granting Rights for SQL</h3>
<p>Using SSDT to deploy a DB (especially if you have the package set to drop the DB) does not grant any user access rights.</p>
<p>I found the easiest way to grant the rights the web service AppPool accounts needed was to run a SQL script. I did this by creating a component for my release with a small block of SQL to create DB owners; this is the same technique as used for the standard SQL create/drop activities shipped in the box with Release Management.</p>
<p>The arguments I used for the <strong>sqlcmd</strong> were <strong>-S __ServerName__ -b -Q "use __DBname__; create user [__username__] for login [__username__]; exec sp_addrolemember 'db_owner', '__username__';"</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_166.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_166.png" title="image"></a></p>
<p>Once I had created this component I could pass the parameters needed to add DB owners.</p>
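<p>The double-underscore markers are Release Management parameter placeholders, substituted when the component runs. The mechanics amount to simple token replacement; a minimal Python sketch of the idea (the function name and sample values are mine, not the product’s internals):</p>

```python
import re

def expand_tokens(template, values):
    """Replace Release Management style __Token__ placeholders.

    Unknown tokens are left untouched, so a missing parameter stays
    visible in the generated command line rather than being silently
    blanked.
    """
    return re.sub(
        r"__([A-Za-z0-9]+)__",
        lambda m: values.get(m.group(1), m.group(0)),
        template,
    )

# Illustrative use with the sqlcmd arguments above
cmd = expand_tokens(
    '-S __ServerName__ -b -Q "use __DBname__; '
    "exec sp_addrolemember 'db_owner', '__username__';\"",
    {"ServerName": "sqlbox", "DBname": "TestDB", "username": "LAB\\AppPool"},
)
```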
<h3 id="creating-the-web-sites">Creating the web sites</h3>
<p>This was straightforward; I just used the standard components to create the required AppPools and web sites. It is worth noting that these commands can be run against an existing site; they don’t error if the site/AppPool already exists. This seems to be the standard model with Release Management, as there is no decision (if) branching in the workflow, so all tools have to either work or stop the deployment.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_167.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_167.png" title="image"></a></p>
<p>I then used the <a href="http://www.colinsalmcorner.com/2013_11_01_archive.html">irmsdeploy.exe</a> Release Management component to run the MSDeploy publish on each web site/service</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_168.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_168.png" title="image"></a></p>
<p>A note here: you do need to make sure you set the path to the package to be the actual folder the .ZIP file is in, not the parental drop folder (in my case <strong>Lab_PublishedWebsitesSABSTestHarness_Package</strong>, not <strong>Lab</strong>)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_169.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_169.png" title="image"></a></p>
<h3 id="running-some-integration-tests">Running some integration tests</h3>
<p>We now had a deployment that worked. It pulled the files from our corporate LAN and deployed them into a network isolated lab environment.</p>
<p>I now wanted to run some tests to validate the deployment. I chose to use some SQL based tests that were run via MSTest. These tests had already been <a href="http://msdn.microsoft.com/en-us/library/ff942471.aspx">added to Microsoft Test Manager (MTM) using TCM</a>, so I thought I had all I needed.</p>
<p>I added the Release Management MTM component to my workflow and set the values taken from MTM for test plan and suite etc.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_170.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_170.png" title="image"></a></p>
<p>However I quickly hit cross domain authentication issues again. The Release Management component does all this test management via a PowerShell script that runs TCM. This must communicate with TFS, which in my system was in the other domain, so it fails.</p>
<p>The answer was to modify the PowerShell script to also pass some login credentials</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_171.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_171.png" title="image"></a></p>
<p>The only change in the PowerShell script was that each time the TCM command is called the <strong>/login:$LoginCreds</strong> block is added, where $LoginCreds are the credentials passed in the form <strong>corpdomainuser,password</strong></p>
<blockquote>
<p>$testRunId = &amp; "$tcmExe" run /create /title:"$Title" <strong>/login:$LoginCreds</strong> /planid:$PlanId /suiteid:$SuiteId /configid:$ConfigId /collection:"$Collection" /teamproject:"$TeamProject" $testEnvironmentParameter $buildDirectoryParameter $buildDefinitionParameter $buildNumberParameter $settingsNameParameter $includeParameter</p></blockquote>
<p>An interesting side note is that if you run the TCM command at the command prompt you only need to provide the credentials the first time it is run; they are cached. This does not seem to be the case inside the Release Management script: TCM is run three times, and each time you need to pass the credentials.</p>
<p>Once this was in place, and suitable credentials added to the workflow, I expected my tests to run. They did, but 50% failed – why?</p>
<p>It turns out the issue was that in my Lab Management environment setup I had set the roles of both the IIS server and the SharePoint server to <strong>Web Server</strong>.</p>
<p>My automated test plan in MTM was set to run automated tests on the <strong>Web Server</strong> role, so sent 50% of the tests to each of the available servers. The tests were run by the Lab Agent (not the deployment agent), which was running as the Network Service machine accounts e.g. <strong>ProjProjIIS75$</strong> and <strong>ProjProjSp2010$</strong>. Only the former of these had been granted access to the SQL DB (it was the account being used for the AppPool), hence half the tests failed with DB access issues</p>
<p>I had two options here: grant both machine accounts access, or alter my Lab Environment. I chose the latter and put the two boxes in different roles</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_172.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_172.png" title="image"></a></p>
<p>I then had to load the test plan in MTM so it was updated with the changes</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_173.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_173.png" title="image"></a></p>
<p>Once this was done my tests then ran as expected.</p>
<h3 id="summary">Summary</h3>
<p>So I now have a Release Management deployment plan that works for a network isolated environment. I can run integration tests, and will soon add some Coded UI ones; it should only be a case of editing the test plan.</p>
<p>It is an interesting question how well Release Management, in its current form, works with Lab Management when it is SCVMM/network isolated environment based; it is certainly not its primary use case, but it can be done, as this post shows. It certainly provides more options than the TFS Lab Management build template we used to use, and provides an easy way to extend the process to manage deployment to production.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for ‘Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid’ errors on TFS 2013.2 build</title>
      <link>https://blog.richardfennell.net/posts/fix-for-web-deployment-task-failed-unknown-provideroptiondefiningprojectfullpath-known-provideroptions-areskipinvalid-errors-on-tfs-2013-2-build/</link>
      <pubDate>Mon, 07 Apr 2014 23:36:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-web-deployment-task-failed-unknown-provideroptiondefiningprojectfullpath-known-provideroptions-areskipinvalid-errors-on-tfs-2013-2-build/</guid>
      <description>&lt;p&gt;When working with web applications we tend to use &lt;a href=&#34;http://www.iis.net/downloads/microsoft/web-deploy&#34;&gt;MSDeploy&lt;/a&gt; for distribution. Our TFS build box, as well as producing a &lt;strong&gt;_PublishedWebsite&lt;/strong&gt; copy of the site, produce the ZIP packaged version we use to deploy to test and production servers via PowerShell or IIS Manager&lt;/p&gt;
&lt;p&gt;To create this package we add the MSBuild arguments &lt;strong&gt;/p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_162.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_162.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This had been working fine until I upgraded our TFS build system to &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2014/04/02/tfs-2013-2-update-2-released.aspx&#34;&gt;2013.2&lt;/a&gt;. Any builds queued after this upgrade that build MSDeploy packages give the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When working with web applications we tend to use <a href="http://www.iis.net/downloads/microsoft/web-deploy">MSDeploy</a> for distribution. Our TFS build box, as well as producing a <strong>_PublishedWebsite</strong> copy of the site, produces the ZIP packaged version we use to deploy to test and production servers via PowerShell or IIS Manager.</p>
<p>To create this package we add the MSBuild arguments <strong>/p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_162.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_162.png" title="image"></a></p>
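<p>For reference, the same package can be produced outside the build definition by passing those arguments to MSBuild directly; this is an illustrative command fragment only, and <code>MyWebApp.csproj</code> is a placeholder name, not a project from our solution.</p>

```shell
# Illustrative only: build the MSDeploy ZIP package from a developer command
# prompt; MyWebApp.csproj is a placeholder project file name.
msbuild MyWebApp.csproj /p:CreatePackageOnPublish=True /p:DeployOnBuild=true /p:IsPackaging=True
```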
<p>This had been working fine until I upgraded our TFS build system to <a href="http://blogs.msdn.com/b/bharry/archive/2014/04/02/tfs-2013-2-update-2-released.aspx">2013.2</a>. Any builds queued after this upgrade that build MSDeploy packages give the error</p>
<blockquote>
<p>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web\Microsoft.Web.Publishing.targets (3883): Web deployment task failed. (Unknown ProviderOption:DefiningProjectFullPath. Known ProviderOptions are:skipInvalid.)</p></blockquote>
<p>If I removed the <strong>/p:DeployOnBuild=true</strong> argument, the build was fine, just no ZIP package was created.</p>
<p>After a bit of thought I realised that I had also upgraded my PC to <a href="http://blogs.msdn.com/b/bharry/archive/2014/04/02/tfs-2013-2-update-2-released.aspx">2013.2 RC</a>, where the publish options for a web project are more extensive, giving more options for Azure.</p>
<p>So I assumed the issue was a mismatch between MSBuild and the target files, which were missing these new options. So I replaced the contents of <strong>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v12.0\Web</strong> on my build box with the version from my upgraded development PC, and my build started working again.</p>
<p>It seems there are some extra parameters set in the newer version of the build targets. Let’s see if it changes again when Visual Studio 2013.2 RTMs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading a VSTO project from VS 2008 to 2013</title>
      <link>https://blog.richardfennell.net/posts/upgrading-a-vsto-project-from-vs-2008-to-2013/</link>
      <pubDate>Mon, 07 Apr 2014 13:17:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-a-vsto-project-from-vs-2008-to-2013/</guid>
      <description>&lt;p&gt;To make sure all our Word documents are consistent we use a Word template that includes a VSTO action pane.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_161.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_161.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This allows us to insert standard blocks of text, T&amp;amp;C and the like, and also makes sure document revisions and reviews are correctly logged. We have used this for years without any issues, but I recently needed to make some changes to the underlying Word .dotx template, and I had to jump through a couple of hoops to get it rebuilding in Visual Studio 2013 for Office 2013 (previously it had been built against Visual Studio 2008 generation tools).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>To make sure all our Word documents are consistent we use a Word template that includes a VSTO action pane.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_161.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_161.png" title="image"></a></p>
<p>This allows us to insert standard blocks of text, T&amp;C and the like, and also makes sure document revisions and reviews are correctly logged. We have used this for years without any issues, but I recently needed to make some changes to the underlying Word .dotx template, and I had to jump through a couple of hoops to get it rebuilding in Visual Studio 2013 for Office 2013 (previously it had been built against Visual Studio 2008 generation tools).</p>
<p>The old VSTO project opened in Visual Studio 2013 without a problem, doing the one way upgrade. However, when I tried to build the project (which also signs it) I got the error</p>
<blockquote>
<pre tabindex="0"><code>The &#34;FindRibbons&#34; task failed unexpectedly. System.IO.FileNotFoundException:   Could not load file or assembly &#39;BMAddIn, Version=1.0.0.0, Culture=neutral,    PublicKeyToken=null&#39; or one of its dependencies.   The system cannot find the file specified.
</code></pre></blockquote>
<p>The issue was that you need to remove the <strong>SecurityTransparent</strong> attribute from the end of the <strong>AssemblyInfo.cs</strong> file <a href="http://msdn.microsoft.com/en-us/library/ee207231%28VS.100%29.aspx#upgrade">as detailed in MSDN</a>.</p>
<p>Once this error was clear, I also got a problem when I tried to sign the assembly</p>
<blockquote>
<p>error CS1548: Cryptographic failure while signing assembly. Unknown error (8013141c)</p></blockquote>
<p>This was fixed by sorting out the rights on my PC, as I run Visual Studio under a non-admin account. You need to give your current user ‘Full Access’ to C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys, or run Visual Studio as admin.</p>
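<p>As a sketch, the rights can be granted with <code>icacls</code> from an elevated command prompt; note the path used below is the Vista-and-later location, an assumption on my part, so substitute the older path quoted above on earlier versions of Windows.</p>

```shell
# Illustrative only: grant the current (non-admin) user full access to the
# machine key store so assembly signing works; adjust the path for your OS.
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys" /grant "%USERNAME%":F
```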
<p>So now that it rebuilds and can be deployed, I can make my modifications and enhance our VSTO solution, a much underused technology.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2013.2 has RTM’d</title>
      <link>https://blog.richardfennell.net/posts/tfs-2013-2-has-rtmd/</link>
      <pubDate>Thu, 03 Apr 2014 16:24:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2013-2-has-rtmd/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2014/04/02/tfs-2013-2-update-2-released.aspx&#34;&gt;TFS 2013.2 got RTM’d&lt;/a&gt; last night and is available on MSDN; interestingly Visual Studio 2013.2 is still only an RC, we have to wait for that to RTM.&lt;/p&gt;
&lt;p&gt;As we had a good proportion of our team at &lt;a href=&#34;http://www.buildwindows.com/&#34;&gt;Build 2014&lt;/a&gt; I took the chance to do the upgrade today. It went smoothly, no surprises, though the installation phase (the middle bit after the copy and before the config wizard) took a while. Our build agents all seemed to want a reboot (or two) at this point; the TFS server did not, but took a good few minutes with no progress bar movement while, I assume, it was updating libraries.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.msdn.com/b/bharry/archive/2014/04/02/tfs-2013-2-update-2-released.aspx">TFS 2013.2 got RTM’d</a> last night and is available on MSDN; interestingly, Visual Studio 2013.2 is still only an RC, so we have to wait for that to RTM.</p>
<p>As we had a good proportion of our team at <a href="http://www.buildwindows.com/">Build 2014</a> I took the chance to do the upgrade today. It went smoothly, no surprises, though the installation phase (the middle bit after the copy and before the config wizard) took a while. Our build agents all seemed to want a reboot (or two) at this point; the TFS server did not, but took a good few minutes with no progress bar movement while, I assume, it was updating libraries.</p>
<p>So what do we get in 2013.2?</p>
<ul>
<li>Can query on Work Item Tagging</li>
<li>Backlog management improvements</li>
<li>Work item charting improvements (can pin charts to the homepage)</li>
<li>Export test plan to HTML</li>
<li>Release Management “Tags”</li>
<li>An assortment of Git improvements</li>
</ul>
<p>I bet the charts on the home page and querying tags will be popular.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Looking forward to speaking at Techorama in May</title>
      <link>https://blog.richardfennell.net/posts/looking-forward-to-speaking-at-techorama-in-may/</link>
      <pubDate>Sun, 30 Mar 2014 13:56:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/looking-forward-to-speaking-at-techorama-in-may/</guid>
      <description>&lt;p&gt;Looking forward to speaking at &lt;a href=&#34;http://www.techorama.be/&#34;&gt;Techorama in May&lt;/a&gt;, great range of &lt;a href=&#34;http://www.techorama.be/speakers/&#34;&gt;speakers&lt;/a&gt; and &lt;a href=&#34;http://www.techorama.be/agenda/&#34;&gt;subjects&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.techorama.be/tickets&#34;&gt;&lt;img alt=&#34;speaking-at[1]&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/speaking-at%5B1%5D.jpg&#34; title=&#34;speaking-at[1]&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Hope to see you there, for ticket details see &lt;a href=&#34;http://www.techorama.be/tickets/&#34; title=&#34;http://www.techorama.be/tickets/&#34;&gt;http://www.techorama.be/tickets/&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Looking forward to speaking at <a href="http://www.techorama.be/">Techorama in May</a>, great range of <a href="http://www.techorama.be/speakers/">speakers</a> and <a href="http://www.techorama.be/agenda/">subjects</a>.</p>
<p><a href="http://www.techorama.be/tickets"><img alt="speaking-at[1]" loading="lazy" src="/wp-content/uploads/sites/2/historic/speaking-at%5B1%5D.jpg" title="speaking-at[1]"></a></p>
<p>Hope to see you there; for ticket details see <a href="http://www.techorama.be/tickets/" title="http://www.techorama.be/tickets/">http://www.techorama.be/tickets/</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting started with Release Management with network isolated Lab Management environments</title>
      <link>https://blog.richardfennell.net/posts/getting-started-with-release-management-with-network-isolated-lab-management-environments/</link>
      <pubDate>Sat, 29 Mar 2014 15:23:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-started-with-release-management-with-network-isolated-lab-management-environments/</guid>
      <description>&lt;p&gt;Our testing environments are based on TFS Lab Management, historically we have managed deployment into them manually (or at least via a PowerShell script run manually) or using TFS Build. I thought it time I at least tried to move over to &lt;a href=&#34;http://www.visualstudio.com/en-us/explore/release-management-vs.aspx&#34;&gt;Release Management&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The process to install the components of Release Management is fairly straightforward; there are wizards that ask little other than which account to run as&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Install the deployment server, pointing at a SQL instance&lt;/li&gt;
&lt;li&gt;Install the management client, pointing at the deployment server&lt;/li&gt;
&lt;li&gt;Install the deployment agent on each box you wish to deploy to, again pointing it at the deployment server&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I hit a problem with the third step. Our lab environments are usually &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx&#34;&gt;network isolated&lt;/a&gt;, hence each can potentially be running its own copy of the same domain. This means the connection from the deployment agent to the deployment server is cross domain. We don’t want to set up cross domain trusts as&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Our testing environments are based on TFS Lab Management, historically we have managed deployment into them manually (or at least via a PowerShell script run manually) or using TFS Build. I thought it time I at least tried to move over to <a href="http://www.visualstudio.com/en-us/explore/release-management-vs.aspx">Release Management</a>.</p>
<p>The process to install the components of Release Management is fairly straightforward; there are wizards that ask little other than which account to run as</p>
<ul>
<li>Install the deployment server, pointing at a SQL instance</li>
<li>Install the management client, pointing at the deployment server</li>
<li>Install the deployment agent on each box you wish to deploy to, again pointing it at the deployment server</li>
</ul>
<p>I hit a problem with the third step. Our lab environments are usually <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx">network isolated</a>, hence each can potentially be running its own copy of the same domain. This means the connection from the deployment agent to the deployment server is cross domain. We don’t want to set up cross domain trusts as</p>
<ol>
<li>cross domain trusts are a pain to manage</li>
<li>as we have multiple copies of environments, there can be more than one copy of some domains – all very confusing for cross domain trusts</li>
</ol>
<p>So this means you have to use <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/20/Getting-TFS-2012-Agents-to-communicate-cross-domain.aspx">shadow accounts</a>, as detailed <a href="http://support.microsoft.com/kb/2905742">in MSDN for Release Management</a>. The key with this process is to make sure you manually add the accounts in Release Management (step 2) – I missed this at first, as it differs from what you usually need to do.</p>
<p>To resolve this issue, add the Service User accounts in Release Management. To do this, follow these steps:</p>
<ol>
<li>
<p>Create a Service User account for each deployment agent in Release Management. For example, create the following:</p>
<p>_Server1__LocalAccount1<br>
__Server2__LocalAccount1<br>
_<em>Server3__LocalAccount1</em></p>
</li>
<li>
<p>Create an account in Release Management, and then assign to that account the Service User and Release Manager user rights. For example, create Release_Management_server\LocalAccount1.</p>
</li>
<li>
<p>Run the deployment agent configuration on each deployment computer.</p>
</li>
</ol>
<p>However, I still had a problem: I entered the correct details in the deployment configuration client, but got the error</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_159.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_159.png" title="image"></a></p>
<p>The logs showed</p>
<blockquote>
<p>Received Exception : Microsoft.TeamFoundation.Release.CommonConfiguration.ConfigurationException: Failed to validate Release Management Server for Team Foundation Server 2013.<br>
   at Microsoft.TeamFoundation.Release.CommonConfiguration.DeployerConfigurationManager.ValidateServerUrl()<br>
   at Microsoft.TeamFoundation.Release.CommonConfiguration.DeployerConfigurationManager.ValidateAndConfigure(DeployerConfigUpdatePack updatePack, DelegateStatusUpdate statusListener)<br>
   at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)</p></blockquote>
<p>A quick look using <a href="http://www.wireshark.org/">WireShark</a> showed it was trying to access <a href="http://releasemgmt.mydomain.com:1000/configurationservice.asmx">http://releasemgmt.mydomain.com:1000/configurationservice.asmx</a>. If I tried to access this in a browser it showed</p>
<blockquote>
<p>Server Error in &lsquo;/&rsquo; Application.</p>
<p>Request format is unrecognized.<br>
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.<br>
Exception Details: System.Web.HttpException: Request format is unrecognized.</p></blockquote>
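<p>Without WireShark, a quick way to repeat that check is to probe the configuration service endpoint directly and look at the HTTP status code; the hostname below is the anonymised example from this post, so substitute your own server (a 5xx status would match the server error page above).</p>

```shell
# Illustrative only: print the HTTP status code returned by the Release
# Management configuration service endpoint.
curl -s -o /dev/null -w '%{http_code}\n' "http://releasemgmt.mydomain.com:1000/configurationservice.asmx"
```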
<p>Turns out the issue was I needed to run the deployment client configuration tool as the shadow user account, not as any other local administrator.</p>
<p>Once I did this the configuration worked and the management console could scan for the client. So now I can really start to play…</p>
]]></content:encoded>
    </item>
    <item>
      <title>A better way of using TFS Community Build Extensions StyleCop activity so it can use multiple rulesets</title>
      <link>https://blog.richardfennell.net/posts/a-better-way-of-using-tfs-community-build-extensions-stylecop-activity-so-it-can-use-multiple-rulesets/</link>
      <pubDate>Tue, 25 Mar 2014 09:26:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-better-way-of-using-tfs-community-build-extensions-stylecop-activity-so-it-can-use-multiple-rulesets/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&#34;https://tfsbuildextensions.codeplex.com/&#34;&gt;TFS Community Build Extensions&lt;/a&gt; provide many activities to enhance your build. One we use a lot is the one for &lt;a href=&#34;http://stylecop.codeplex.com/&#34;&gt;StyleCop&lt;/a&gt; to enforce code consistency in projects as part of our check in &amp;amp; build process.&lt;/p&gt;
&lt;p&gt;In most projects you will not want a single set of StyleCop rules to be applied across the whole solution. Most teams will require a higher level of ‘rule adherence’ for production code as opposed to unit test code. By this I don’t mean the test code is ‘lower quality’, just that rules will differ e.g. we don’t require XML documentation blocks on unit test methods as the unit test method names should be documentation enough.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>The <a href="https://tfsbuildextensions.codeplex.com/">TFS Community Build Extensions</a> provide many activities to enhance your build. One we use a lot is the one for <a href="http://stylecop.codeplex.com/">StyleCop</a> to enforce code consistency in projects as part of our check in &amp; build process.</p>
<p>In most projects you will not want a single set of StyleCop rules to be applied across the whole solution. Most teams will require a higher level of ‘rule adherence’ for production code as opposed to unit test code. By this I don’t mean the test code is ‘lower quality’, just that rules will differ e.g. we don’t require XML documentation blocks on unit test methods as the unit test method names should be documentation enough.</p>
<p>This means each of our projects in a solution may have their own StyleCop settings file. With Visual Studio these are found and used by the StyleCop runner without an issue.</p>
<p>However, on our TFS build boxes we found that when we told it to build a solution, the StyleCop settings file in the same folder as the solution file was used for the whole solution. This meant we saw a lot of false violations, such as unit tests with no documentation headers.</p>
<p>The workaround we have used is to not tell the TFS build to build a solution, but to build each project individually (in the correct order). By doing this the StyleCop settings file in the project folder is picked up. This is an OK solution, but it does mean you need to remember to add new projects and remove old ones as the solution matures. Easy to forget.</p>
<h3 id="why-is-it-like-this">Why is it like this?</h3>
<p>Because of this, on our engineering backlog we have had a task to update the StyleCop activity so it did not use the settings file from the root solution/project folder (or any single named settings file you specified).</p>
<p>I eventually got around to this, mostly due to new solutions being started that I knew would contain many projects and potentially had a more complex structure than I wanted to manage by hand within the build process.</p>
<p>The issue is that in the activity a StyleCop console application object is created and run. This takes a single settings file and a list of .cs files as parameters. So if you want multiple settings files, you need to create multiple StyleCop console application objects.</p>
<p>Not a problem, I thought; nothing adding a couple of activity arguments and a foreach loop can’t fix. I even got as far as testing the logic in a unit test harness, far easier than debugging in a TFS build itself.</p>
<p>It was then I realised the real problem: it was the <a href="https://tfsbuildextensions.codeplex.com/documentation">StyleCop build activity documentation</a>, and I only have myself to blame here as I wrote it!</p>
<p>The documentation suggests a way to use the activity</p>
<ol>
<li>Find the .sln or .csproj folder</li>
<li>From this folder load a settings.stylecop file</li>
<li>Find all the .CS files under this location</li>
<li>Run StyleCop</li>
</ol>
<p>It does not have to be this way: you don’t need to edit the StyleCop activity, just put in a different workflow.</p>
<h3 id="a-better-workflow">A better workflow?</h3>
<p>The key is finding the settings files, not the solution or project files. So if we assume we are building a single solution, we can use the following workflow</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_158.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_158.png" title="image"></a></p>
<ol>
<li>Using the base path of the .sln file, do a recursive search for all *.stylecop files</li>
<li>Loop over this set of .stylecop files</li>
<li>For each one, do a recursive search for .cs files under its location</li>
<li>Run StyleCop for this settings file against the source files below it</li>
</ol>
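<p>The search-and-group logic of those steps can be sketched in shell; this is a minimal illustration only, where <code>run_stylecop</code> is a stand-in for creating and running a StyleCop console application object (an assumption, not the real activity API).</p>

```shell
#!/bin/sh
# Stand-in for invoking StyleCop with one settings file plus the list of
# .cs files found below it; here it just reports what it was given.
run_stylecop() {
  settings="$1"; shift
  echo "$settings -> $# file(s)"
}

# Recursively find every *.stylecop settings file under the solution base
# path, then run StyleCop on the .cs files beneath each one.
scan_solution() {
  find "$1" -name '*.stylecop' | sort | while read -r settings; do
    dir=$(dirname "$settings")
    run_stylecop "$settings" $(find "$dir" -name '*.cs')
  done
}
```

Note that, exactly as discussed below, nested settings files would cause the same .cs file to be passed to run_stylecop more than once.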
<p>This solution seems to work. You might get some files scanned twice if you have nested settings files, but that is not an issue for us, as we place a StyleCop settings file with each project. We alter the rules in each of these files as needed, from full rulesets to empty rulesets if we want StyleCop to skip the project.</p>
<p>So now I have it working internally, it is time to go and update the <a href="https://tfsbuildextensions.codeplex.com/">TFS Community Build Extensions Documentation</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Migrating to SCVMM 2012 R2 in a TFS Lab Scenario</title>
      <link>https://blog.richardfennell.net/posts/migrating-to-scvmm-2012-r2-in-a-tfs-lab-scenario/</link>
      <pubDate>Thu, 20 Mar 2014 16:03:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/migrating-to-scvmm-2012-r2-in-a-tfs-lab-scenario/</guid>
      <description>&lt;p&gt;Rik has just done a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/03/20/Migrating-to-SCVMM-2012-R2-in-a-TFS-Lab-Scenario.aspx&#34;&gt;post on upgrading our SCVMM 2012 instance we use with Lab Management to 2012 R2&lt;/a&gt;, a good few gotchas in there as you might expect.&lt;/p&gt;
&lt;p&gt;Well worth a read&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Rik has just done a <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/03/20/Migrating-to-SCVMM-2012-R2-in-a-TFS-Lab-Scenario.aspx">post on upgrading our SCVMM 2012 instance we use with Lab Management to 2012 R2</a>, a good few gotchas in there as you might expect.</p>
<p>Well worth a read</p>
]]></content:encoded>
    </item>
    <item>
      <title>Migrating a TFS TFVC based team project to a Git team project retaining as much source and work item history as possible</title>
      <link>https://blog.richardfennell.net/posts/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible/</link>
      <pubDate>Fri, 14 Mar 2014 11:02:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/migrating-a-tfs-tfvc-based-team-project-to-a-git-team-project-retaining-as-much-source-and-work-item-history-as-possible/</guid>
      <description>&lt;p&gt;I have just had a guest blog post published on the Microsoft UK Developers site ‘&lt;a href=&#34;http://bit.ly/RFennellBlog&#34;&gt;Migrating a TFS TFVC based team project to a Git team project retaining as much source and work item history as possible&lt;/a&gt;’. It discusses alternatives to using TFS Integration Platform to migrate source code and associated work items.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just had a guest blog post published on the Microsoft UK Developers site ‘<a href="http://bit.ly/RFennellBlog">Migrating a TFS TFVC based team project to a Git team project retaining as much source and work item history as possible</a>’. It discusses alternatives to using TFS Integration Platform to migrate source code and associated work items.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Guest post on Microsoft ALM Blog on Lab Management</title>
      <link>https://blog.richardfennell.net/posts/guest-post-on-microsoft-alm-blog-on-lab-management/</link>
      <pubDate>Tue, 25 Feb 2014 22:24:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/guest-post-on-microsoft-alm-blog-on-lab-management/</guid>
      <description>&lt;p&gt;A &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2014/02/25/tfs-lab-management-with-multiple-scvmm-servers.aspx&#34;&gt;clarification blog post&lt;/a&gt; I wrote on what to do if you have multiple SCVMM servers in your network and want to use TFS Lab Management has just been published on the &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/&#34;&gt;Microsoft Application Lifecycle Management&lt;/a&gt; blog.&lt;/p&gt;
&lt;p&gt;For more information on best practices with TFS Lab Management  have a look at the &lt;a href=&#34;http://vsarlabman.codeplex.com/&#34;&gt;ALM Rangers guide&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://vsarlabman.codeplex.com/&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-45-92-metablogapi/6114.image_5F00_4B7E16BC.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2014/02/25/tfs-lab-management-with-multiple-scvmm-servers.aspx">clarification blog post</a> I wrote on what to do if you have multiple SCVMM servers in your network and want to use TFS Lab Management has just been published on the <a href="http://blogs.msdn.com/b/visualstudioalm/">Microsoft Application Lifecycle Management</a> blog.</p>
<p>For more information on best practices with TFS Lab Management have a look at the <a href="http://vsarlabman.codeplex.com/">ALM Rangers guide</a></p>
<p><a href="http://vsarlabman.codeplex.com/"><img alt="image" loading="lazy" src="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-00-45-92-metablogapi/6114.image_5F00_4B7E16BC.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD South west</title>
      <link>https://blog.richardfennell.net/posts/ddd-south-west/</link>
      <pubDate>Tue, 25 Feb 2014 18:57:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-south-west/</guid>
      <description>&lt;p&gt;There is a call for speakers for DDD South West on the 17th of May in Bristol. I have submitted a proposal, are you going to?&lt;/p&gt;
&lt;p&gt;For more details see &lt;a href=&#34;http://www.dddsouthwest.com/&#34; title=&#34;http://www.dddsouthwest.com/&#34;&gt;http://www.dddsouthwest.com/&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;dddsw_medium.jpg&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddsouthwest.com/SiteAssets/badge/dddsw_medium.jpg&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a call for speakers for DDD South West on the 17th of May in Bristol. I have submitted a proposal, are you going to?</p>
<p>For more details see <a href="http://www.dddsouthwest.com/" title="http://www.dddsouthwest.com/">http://www.dddsouthwest.com/</a> </p>
<p><img alt="dddsw_medium.jpg" loading="lazy" src="http://www.dddsouthwest.com/SiteAssets/badge/dddsw_medium.jpg"></p>
]]></content:encoded>
    </item>
    <item>
      <title>High CPU utilisation on the data tier after a TFS 2010 to 2013 upgrade</title>
      <link>https://blog.richardfennell.net/posts/high-cpu-utilisation-on-the-data-tier-after-a-tfs-2010-to-2013-upgrade/</link>
      <pubDate>Tue, 25 Feb 2014 18:52:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/high-cpu-utilisation-on-the-data-tier-after-a-tfs-2010-to-2013-upgrade/</guid>
      <description>&lt;p&gt;There have been significant changes in the DB schema between TFS 2010 and 2013. This means that as part of an in-place upgrade process a good deal of data needs to be moved around. Some of this is done as part of the actual upgrade process, but to get you up and running quicker, some is done post upgrade using SQL SPROCs. Depending how much data there is to move this can take a while, maybe many hours. This is the cause of the SQL load.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There have been significant changes in the DB schema between TFS 2010 and 2013. This means that as part of an in-place upgrade process a good deal of data needs to be moved around. Some of this is done as part of the actual upgrade process, but to get you up and running quicker, some is done post upgrade using SQL SPROCs. Depending how much data there is to move this can take a while, maybe many hours. This is the cause of the SQL load.</p>
<p>A key factor in how long this takes is the size of your pre-upgrade tbl_attachmentContent table; this is where, amongst other things, test attachments are stored. So if you have a lot of test attachments it will take a while as these are moved to their new home in tbl_content.</p>
<p>If you want to minimise the time this takes, it can be a good idea to remove any unwanted test attachments prior to doing the upgrade. This is done with the test attachment cleaner from the appropriate version of <a href="http://visualstudiogallery.msdn.microsoft.com/f017b10c-02b4-4d6d-9845-58a06545627f">TFS Power Tools</a> for your TFS server. However, beware that if you don’t have a suitably patched SQL server there can be issues with ghost files (see <a href="http://www.be-init.nl/blog/14063/guide-to-reduce-tfs-database-growth-using-the-test-attachment-cleaner">Terje’s post</a>).</p>
<p>If you cannot patch your SQL to a suitable version to avoid this problem, then it is best to clean out old test attachments only after the whole TFS migration has completed, i.e. wait until the high SQL CPU utilisation caused by the SPROC-based migration has completed. You don’t want to be trying to clean out old test attachments at the same time TFS is trying to migrate them.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New ALM Rangers releases</title>
      <link>https://blog.richardfennell.net/posts/new-alm-rangers-releases/</link>
      <pubDate>Tue, 18 Feb 2014 12:47:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-alm-rangers-releases/</guid>
      <description>&lt;p&gt;While I have been on holiday there have been a few &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2014/02/12/new-readiness-content-released-or-spotted-by-the-alm-rangers.aspx&#34;&gt;ALM Rangers releases&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I am particularly happy to see the &lt;a href=&#34;http://blogs.msdn.com/b/willy-peter_schaub/archive/2014/02/06/lab-management-guide-v3-update-is-available.aspx&#34;&gt;Lab Management Guide … v3 update&lt;/a&gt; guidance is available, as this was a project I was working on. The big change from previous editions is that it covers setting up Lab Management to make use of Azure IaaS resources.&lt;/p&gt;
&lt;p&gt;So if you use TFS have a look at &lt;a href=&#34;http://aka.ms/vsarsolutions&#34;&gt;http://aka.ms/vsarsolutions&lt;/a&gt; for a full list of resources&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>While I have been on holiday there have been a few <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2014/02/12/new-readiness-content-released-or-spotted-by-the-alm-rangers.aspx">ALM Rangers releases</a>.</p>
<p>I am particularly happy to see the <a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2014/02/06/lab-management-guide-v3-update-is-available.aspx">Lab Management Guide … v3 update</a> guidance is available, as this was a project I was working on. The big change from previous editions is that it covers setting up Lab Management to make use of Azure IaaS resources.</p>
<p>So if you use TFS have a look at <a href="http://aka.ms/vsarsolutions">http://aka.ms/vsarsolutions</a> for a full list of resources</p>
<p><a href="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-01-18-52-metablogapi/7840.VS2012.ALMRangers.Logo.NoTrademark.T_5F00_4CDF67E4.png"><img alt="VS2012.ALMRangers.Logo.NoTrademark.T" loading="lazy" src="http://blogs.msdn.com/cfs-file.ashx/__key/communityserver-blogs-components-weblogfiles/00-00-01-18-52-metablogapi/1884.VS2012.ALMRangers.Logo.NoTrademark.T_5F00_thumb_5F00_4C7334EF.png" title="VS2012.ALMRangers.Logo.NoTrademark.T"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>A walkthrough of getting Kerberos working with a Webpart inside SharePoint accessing a WCF service</title>
      <link>https://blog.richardfennell.net/posts/a-walkthrough-of-getting-kerberos-working-with-a-webpart-inside-sharepoint-accessing-a-wcf-service/</link>
      <pubDate>Thu, 06 Feb 2014 21:47:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-walkthrough-of-getting-kerberos-working-with-a-webpart-inside-sharepoint-accessing-a-wcf-service/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Update 2/4/2014&lt;/strong&gt; – Added notes about using service accounts as opposed to machine accounts for the AppPool running the web service&lt;/p&gt;
&lt;p&gt;In the past I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/29/fun-with-wcf-sharepoint-and-kerberos-well-it-looks-like-fun-with-hindsight.aspx&#34;&gt;posted on how to get Kerberos running for multi tier applications&lt;/a&gt;. Well as usual when I had to redeploy the application onto new hardware I found my notes were not as clear as I would have hoped. So here is what is meant to be a walkthrough for getting our application working in our TFS lab environment.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Update 2/4/2014</strong> – Added notes about using service accounts as opposed to machine accounts for the AppPool running the web service</p>
<p>In the past I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/29/fun-with-wcf-sharepoint-and-kerberos-well-it-looks-like-fun-with-hindsight.aspx">posted on how to get Kerberos running for multi tier applications</a>. Well as usual when I had to redeploy the application onto new hardware I found my notes were not as clear as I would have hoped. So here is what is meant to be a walkthrough for getting our application working in our TFS lab environment.</p>
<p><strong>What we are building</strong></p>
<p>Our lab is a four box system, running in a test domain <strong>proj.local</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_153.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_153.png" title="image"></a></p>
<ul>
<li>ProjDC – the domain controller for the proj.local domain</li>
<li>ProjIIS75 – a web server hosting our WCF web service</li>
<li>ProjSQL2008R2 – the SQL box for the applications in the domain</li>
<li>ProjSP2010 –  a SharePoint server</li>
</ul>
<p>The logical system we are trying to build is a SharePoint site with a webpart that calls a WCF service, which in turn makes calls to a SQL database. We need the identity that the user logs into the SharePoint server with to be passed to the WCF service via impersonation.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_154.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_154.png" title="image"></a></p>
<p>Though not important to this story, all this was running on a TFS Lab Management infrastructure as a network isolated environment</p>
<p><strong>Application Deployment</strong></p>
<p>We have to deploy a number of layers for our application</p>
<p><strong>DB</strong></p>
<ol>
<li>Using a SSDT DACPAC deployment we created a new DB for our application on <strong>ProjSQL2008R2</strong></li>
<li>We grant the account (in this case the machine account <strong>proj\ProjIIS75$</strong>) owner access to this DB (the WCF service will run as this account)</li>
</ol>
<p><strong>WCF Service</strong></p>
<ol>
<li>Using MSDeploy we deploy a new copy of our WCF web site onto <strong>ProjIIS75.</strong></li>
<li>We bound this to port 8081</li>
<li>We set the AppPool to run as Network Service (the <strong>proj\ProjIIS75$</strong> account we just granted DB access to)<br>
Updated note: You can use a domain <strong>service account</strong> here e.g. <strong>proj\appserviceaccount</strong>, but if you do this the Kerberos settings should be applied to the service account not the machine account</li>
<li>We made sure the web site authentication is enabled for <strong>anonymous authentication</strong>, <strong>ASP.NET impersonation</strong> and <strong>Windows authentication</strong></li>
<li>Set the DB connection string to point to the new DB on <strong>ProjSql2008R2</strong>, and other server specific AppSettings, in the <strong>web.config</strong></li>
<li>Made sure port 8081 was open on the firewall</li>
</ol>
<p><strong>SharePoint</strong></p>
<ol>
<li>Add the WSP solution containing our front end to the SharePoint farm (you can use STSADM or PowerShell commands to do this)</li>
<li>Using SharePoint Central Admin we deployed this solution to the web application</li>
<li>Activated the feature on the site the solution has been deployed to.</li>
<li>Create a new web page to host the webpart e.g. <a href="http://share2010.proj.local/sitepages/mypage.aspx">http://share2010.proj.local/sitepages/mypage.aspx</a> (Note here the name we use to access this SharePoint site is <strong>share2010</strong> not <strong>ProjSp2010.</strong> This host name is resolved via the DNS on <strong>ProjDC</strong> of our lab environment. This lab setup has a fully configured SharePoint 2010 with a number of web applications each with their own name and associated service accounts, this is important later on)</li>
<li>We added our webpart to the page and set the webpart properties to</li>
</ol>
<ul>
<li>The Url for the WCF web service <strong><a href="http://ProjIIS75.proj.local:8081/callservice.svc">http://ProjIIS75.proj.local:8081/callservice.svc</a></strong></li>
<li>The SPN for the WCF web service <strong>http/ProjIIS75.proj.local:8081</strong></li>
</ul>
<p>Note: we provide the URL and SPN as parameters, as we build the WCF connection programmatically within the webpart. This is because it would be awkward to put this information in a web.config file on a multi server SharePoint farm and we don’t want to hard code them.</p>
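<p>As an aside, the WSP add/deploy/activate steps above can be scripted; a minimal sketch using the SharePoint 2010 PowerShell cmdlets, where the WSP path, solution name and feature name are made-up placeholders:</p>

```powershell
# Add the solution to the farm, deploy it to the web application,
# then activate the feature on the target site (all names illustrative).
Add-SPSolution -LiteralPath "C:\deploy\MyWebParts.wsp"
Install-SPSolution -Identity "MyWebParts.wsp" `
    -WebApplication "http://share2010.proj.local" -GACDeployment
Enable-SPFeature -Identity "MyWebPartsFeature" `
    -Url "http://share2010.proj.local"
```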
<p><strong>Our Code</strong></p>
<p>The WCF service is configured via its <strong>web.config</strong></p>
<blockquote>
<p>&lt;system.serviceModel&gt;<br>
    &lt;bindings&gt;<br>
      &lt;wsHttpBinding&gt;<br>
        &lt;binding name="MyBinding"&gt;<br>
          &lt;security mode="Message"&gt;<br>
            &lt;message clientCredentialType="Windows" negotiateServiceCredential="false" establishSecurityContext="false" /&gt;<br>
          &lt;/security&gt;<br>
        &lt;/binding&gt;<br>
      &lt;/wsHttpBinding&gt;<br>
    &lt;/bindings&gt;<br>
    &lt;services&gt;<br>
      &lt;service behaviorConfiguration="BlackMarble.Sabs.WcfService.CallsServiceBehavior" name="BlackMarble.Sabs.WcfService.CallsService"&gt;<br>
        &lt;endpoint address="" binding="wsHttpBinding" contract="BlackMarble.Sabs.WcfService.ICallsService" bindingConfiguration="MyBinding"&gt;&lt;/endpoint&gt;<br>
        &lt;endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /&gt;<br>
      &lt;/service&gt;<br>
    &lt;/services&gt;<br>
    &lt;behaviors&gt;<br>
      &lt;serviceBehaviors&gt;<br>
        &lt;behavior name="BlackMarble.Sabs.WcfService.CallsServiceBehavior"&gt;<br>
          &lt;serviceMetadata httpGetEnabled="true" /&gt;<br>
          &lt;serviceDebug includeExceptionDetailInFaults="true" /&gt;<br>
          &lt;serviceAuthorization impersonateCallerForAllOperations="true" /&gt;<br>
        &lt;/behavior&gt;<br>
      &lt;/serviceBehaviors&gt;<br>
    &lt;/behaviors&gt;<br>
  &lt;/system.serviceModel&gt;</p></blockquote>
<p>The webpart does the same programmatically</p>
<blockquote>
<p>log.Trace(String.Format("Using URL: {0} SPN: {1}", this.callServiceUrl, this.callServiceSpn));<br>
var callServiceBinding = new WSHttpBinding();<br>
callServiceBinding.Security.Mode = SecurityMode.Message;<br>
callServiceBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;<br>
callServiceBinding.Security.Message.NegotiateServiceCredential = false;<br>
callServiceBinding.Security.Message.EstablishSecurityContext = false;<br>
var  ea = new EndpointAddress(new Uri(this.callServiceUrl),  EndpointIdentity.CreateSpnIdentity(this.callServiceSpn));<br>
callServiceBinding.MaxReceivedMessageSize = 2000000;<br>
callServiceBinding.ReaderQuotas.MaxArrayLength = 2000000;</p>
<p>this.callServiceClient = new BlackMarble.Sabs.WcfWebParts.CallService.CallsServiceClient(callServiceBinding, ea);<br>
this.callServiceClient.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;<br>
this.callServiceClient.Open();</p></blockquote>
<p><strong>Getting the Kerberos bits running</strong></p>
<p>First remember that this is a preconfigured test lab where the whole domain, including the SP2010 instance, is already set up for Kerberos authentication. These notes just detail the bits we needed to alter or check.</p>
<p>To make sure our new WCF service works in this environment we needed to do the following. All this editing can be done on the domain controller.</p>
<ol>
<li>
<p>Using ADSIEDIT, make sure the computer running the WCF web service, <strong>ProjIIS75</strong>, has an entry in its <strong>servicePrincipalName</strong> attribute for the correct protocol and port i.e. <strong>HTTP/projiis75.proj.local:8081</strong><br>
Update note: If using a <strong>service account</strong> as opposed to the machine account, Network Service, you make the same <strong>servicePrincipalName</strong> edits but to the service account <strong>proj\appserviceaccount</strong>.<br>
You should only add an SPN entry in one place; if you enter it in two places nothing will work. So make sure the SPN is applied to the account the AppPool will run as, whether that be the machine account if you are using Network Service, or the service account if a domain account is being used.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_155.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_155.png" title="image"></a></p>
</li>
<li>
<p>Using the Active Directory Users and Computers tool, make sure the <strong>computer</strong> running the WCF web service, <strong>ProjIIS75</strong>, is set to allow delegation<br>
Update note: If using a <strong>service account</strong> as opposed to the machine account, Network Service, you make the same edits but to the service account <strong>proj\appserviceaccount</strong>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_156.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_156.png" title="image"></a></p>
</li>
<li>
<p>Using the Active Directory Users and Computers tool, make sure the <strong>service account</strong> running the SharePoint web application, in our case <strong>proj\sp2010_share</strong>, is set to allow Kerberos delegation to the SPN set in step 1, <strong>HTTP/projiis75.proj.local:8081</strong>. To do this you press the add button, select the correct server, then pick the SPN from the list.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_157.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_157.png" title="image"></a></p>
</li>
</ol>
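<p>If you prefer the command line to ADSIEDIT, the SPN in step 1 can be registered and checked with the setspn tool (run as a domain admin; substitute the service account for the machine account if that is what the AppPool runs as):</p>

```powershell
# Register the SPN on the machine account; -S checks for duplicates first.
setspn -S HTTP/projiis75.proj.local:8081 proj\ProjIIS75$

# List the SPNs now registered on the account to verify.
setspn -L proj\ProjIIS75$
```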
<p><strong>IMPORTANT</strong> Now you would expect that you could just set the ‘Trust this user for delegation to any service’ option; however, we were unable to get this to work. This might just be something we set wrong, but if so I don’t know what it was.</p>
<p>Once this was all set we did an IIS reset on <strong>ProjSP2010</strong> and reloaded the SharePoint page and it all leapt into life.</p>
<p><strong>How to try to debug when it does not work</strong></p>
<p>There is no simple answer to how to debug this type of system; if it fails it just seems to not work and you are left scratching your head. The best option is plenty of in-product logging, which I tend to surface using <a href="http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx">DebugView</a>; also <a href="http://www.wcfstorm.com/wcf/home.aspx">WCFStorm</a> can be useful to check the WCF service is up.</p>
<p>So I hope I find this post useful when I next need to rebuild this system. Maybe someone else will find it useful too.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Gravitas’s Tech Talk #3 -</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-gravitass-tech-talk-3/</link>
      <pubDate>Thu, 06 Feb 2014 17:15:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-gravitass-tech-talk-3/</guid>
      <description>&lt;p&gt;I am speaking at &lt;a href=&#34;https://www.eventbrite.co.uk/e/tech-talk-3-tfs-visual-studio-2013-vs-the-rest-tickets-10523581315&#34;&gt;Gravitas’s Tech Talk #3 - &amp;ldquo;TFS &amp;amp; Visual Studio 2013 vs The Rest&amp;rdquo;&lt;/a&gt; on Tuesday March the 4th about&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;Microsoft&amp;rsquo;s Application Lifecycle product has had a lot of changes in the past couple of years. In this session we will look at how it can be used to provide a complete solution from project inception through development, testing and deployment, for projects using both Microsoft and other vendor technologies&amp;rdquo;&lt;/strong&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at <a href="https://www.eventbrite.co.uk/e/tech-talk-3-tfs-visual-studio-2013-vs-the-rest-tickets-10523581315">Gravitas’s Tech Talk #3 - &ldquo;TFS &amp; Visual Studio 2013 vs The Rest&rdquo;</a> on Tuesday March the 4th about</p>
<blockquote>
<p><strong>&ldquo;Microsoft&rsquo;s Application Lifecycle product has had a lot of changes in the past couple of years. In this session we will look at how it can be used to provide a complete solution from project inception through development, testing and deployment, for projects using both Microsoft and other vendor technologies&rdquo;</strong></p></blockquote>
<p>Hope to see you there</p>
]]></content:encoded>
    </item>
    <item>
      <title>GDR3 update for my Nokia 820</title>
      <link>https://blog.richardfennell.net/posts/gdr3-update-for-my-nokia-820/</link>
      <pubDate>Sun, 02 Feb 2014 09:59:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/gdr3-update-for-my-nokia-820/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.wpcentral.com/windows-phone-8-update-3&#34;&gt;GDR3 update for my Nokia 820&lt;/a&gt; has at last arrived. As my phone is a developer unit it is at the end of the automated update list. I suppose I could have pulled it down by hand, but I was not in a major rush as I am not doing WP8 development at present.&lt;/p&gt;
&lt;p&gt;The update seems to have gone on OK. The only strange thing was that I was getting low space warnings prior to the upgrade. I suspect this was because the new patch had been downloaded onto the phone’s main storage, but this was a bit full of content from &lt;a href=&#34;http://ipodcast.dotarrowsite.com/&#34;&gt;iPodcast&lt;/a&gt; (the otherwise excellent app I use for podcasts can’t store to my SD card). Just to be on the safe side I uninstalled iPodcast, did the GDR3 update and then reinstalled iPodcast. As I have the premium version I could easily restore my podcast lists, including what I had and had not listened to, from the cloud.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.wpcentral.com/windows-phone-8-update-3">GDR3 update for my Nokia 820</a> has at last arrived. As my phone is a developer unit it is at the end of the automated update list. I suppose I could have pulled it down by hand, but I was not in a major rush as I am not doing WP8 development at present.</p>
<p>The update seems to have gone on OK. The only strange thing was that I was getting low space warnings prior to the upgrade. I suspect this was because the new patch had been downloaded onto the phone’s main storage, but this was a bit full of content from <a href="http://ipodcast.dotarrowsite.com/">iPodcast</a> (the otherwise excellent app I use for podcasts can’t store to my SD card). Just to be on the safe side I uninstalled iPodcast, did the GDR3 update and then reinstalled iPodcast. As I have the premium version I could easily restore my podcast lists, including what I had and had not listened to, from the cloud.</p>
<p>Updated 2nd Feb: just checked and it has not appeared as yet for my son&rsquo;s 520</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgraded older Build and Test Controllers to TFS 2013</title>
      <link>https://blog.richardfennell.net/posts/upgraded-older-build-and-test-controllers-to-tfs-2013/</link>
      <pubDate>Sat, 01 Feb 2014 14:40:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgraded-older-build-and-test-controllers-to-tfs-2013/</guid>
      <description>&lt;p&gt;All has been going well since our upgrade from &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx&#34;&gt;TFS 2012 to 2013&lt;/a&gt;, no nasty surprises.&lt;/p&gt;
&lt;p&gt;As I had a bit of time I thought it a good idea to start the updates of our build and lab/test systems. We had only upgraded our TFS 2012.3 server to 2013. We had not touched our build system (one 2012 controller and 7 agents on various VMs) and our Lab Management/test controller. Our plan, after a bit of thought, was to do a slow migration putting in new 2013 generation build and test controllers in addition to our 2012 ones. We would then decide on an individual build agent VM basis what to do, probably upgrading the build agents and connecting them to the new controller. There seems to be no good reason to rebuild the whole build agent VMs with the specific SDKs and tools they each need.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>All has been going well since our upgrade from <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/21/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take.aspx">TFS 2012 to 2013</a>, no nasty surprises.</p>
<p>As I had a bit of time I thought it a good idea to start the updates of our build and lab/test systems. We had only upgraded our TFS 2012.3 server to 2013. We had not touched our build system (one 2012 controller and 7 agents on various VMs) and our Lab Management/test controller. Our plan, after a bit of thought, was to do a slow migration putting in new 2013 generation build and test controllers in addition to our 2012 ones. We would then decide on an individual build agent VM basis what to do, probably upgrading the build agents and connecting them to the new controller. There seems to be no good reason to rebuild the whole build agent VMs with the specific SDKs and tools they each need.</p>
<p>So we created a new pair of Windows 2012R2 domain joined server VMs; on one we installed a test controller and on the other the build controller and a single build agent.</p>
<blockquote>
<p>Note: I always tend to favour a single build agent per VM, usually using a single core VM. I tend to find most builds are IO locked not CPU locked so having more smaller VMs, I think, tends to be easier to manage at the VM hosting resource level.</p></blockquote>
<p><strong>Test Controller</strong></p>
<p>Most of the use of our Test Controller is as part of our TFS Lab Management environments. If you load MTM 2013 you will see that it cannot manage 2012 Test Controllers; they appear as offline. Lab Management is meant to keep the test agents upgraded, so it should upgrade an agent from one point release to another e.g. 2012.3 to 2012.4. However, this upgrade feature does not extend to major release upgrades such as 2012 to 2013. Also I have always found the automated deployment/upgrade of test agents as part of an environment deployment to be problematic at best; you often seem to suffer DNS and timeout issues. Easily the most reliable method is to make sure the correct (or at least compatible) test agents are installed on all the environment VMs prior to their configuration at the deployment/restart stage.</p>
<p>Given this the system that seems to work for getting environment’s test agents talking to the new 2013 Test Controller is:</p>
<ol>
<li>
<p>In MTM stop the environment</p>
</li>
<li>
<p>Open the environment settings and change the controller to the new one</p>
</li>
<li>
<p>Restart the environment; you will see the VMs show as not ready. The test agents won’t configure.</p>
</li>
<li>
<p>Connect to each VM</p>
</li>
<li>
<p>Uninstall the 2012 Test Agent</p>
</li>
<li>
<p>Install the 2013 Test Agent</p>
</li>
<li>
<p>Stop and restart the environment and all should work – with luck the VMs will configure properly and show as ready</p>
</li>
<li>
<p>If they don’t:</p>
</li>
<li>
<p>Try a second restart; this sometimes sorts it</p>
</li>
<li>
<p>You can try a repair, re-entering the various passwords.</p>
</li>
<li>
<p><strong>Updated 5 Feb 2014</strong> Found I always need to do this. If problems really persist, try running the Test Agent Configuration tool on each VM; press Next, Next, Next etc. and it will try to configure. It will probably fail, but hopefully it will have done enough port opening etc. to allow the next environment restart to work correctly</p>
</li>
<li>
<p>If it still fails you need to check the logs, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/03/28/What-machine-name-is-being-used-when-you-compose-an-environment-from-running-VMs-in-Lab-Management.aspx">but suspect a DNS issue</a>.</p>
</li>
</ol>
<p>Obviously you could move step 4 to the start if you make the fair assumption that manual intervention is going to be needed</p>
<p><strong>Build Controller</strong></p>
<p>Swapping your build over from 2012 to 2013 will have site specific issues. It all depends what build activities you are using. If they are bound to the TFS 2012 API they may not work unless you rebuild them. However, from my first tests I have found my 2012 build process templates seem to work whether I set my build controller’s ‘custom assemblies path’ to my 2012 DLL versions or their 2013 equivalents. So .NET is managing to resolve usable DLLs to get the build working.</p>
<p>Obviously there is still more to do here: checking all my custom build assemblies, maybe revising the build scripts to make use of 2013 features, but that can wait.</p>
<p>What I have now allows me to upgrade our Windows 8.1 build agent VM so it can connect to our 2013 Build Controller, thus allowing us to run fully automated builds and tests of Windows 8.1 applications. Up to now with TFS 2012 we had only been able to get the basic build working, due to having to hack the build process, as you need Visual Studio 2013 generation tools to fully build and test Windows 8.1 applications.</p>
<p>So we are going to have 2012 build and test controllers around for a while, but we have proved the migration is not going to be too bad. Maybe it just needs a bit of thought over some custom build assemblies.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Testing new social plugin for BlogEngine</title>
      <link>https://blog.richardfennell.net/posts/testing-new-social-plugin-for-blogengine/</link>
      <pubDate>Mon, 27 Jan 2014 13:19:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/testing-new-social-plugin-for-blogengine/</guid>
      <description>&lt;p&gt;We are testing the ‘social publish’  plugin for BlogEngine. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/01/27/BlogEngineNet-automatically-tweeting-on-new-posts.aspx&#34;&gt;Rik’s post&lt;/a&gt; has the details on what needs to be done to get it working with the current Twitter APIs&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We are testing the ‘social publish’  plugin for BlogEngine. <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2014/01/27/BlogEngineNet-automatically-tweeting-on-new-posts.aspx">Rik’s post</a> has the details on what needs to be done to get it working with the current Twitter APIs</p>
]]></content:encoded>
    </item>
    <item>
      <title>How long is my TFS 2010 to 2013 upgrade going to take?</title>
      <link>https://blog.richardfennell.net/posts/how-long-is-my-tfs-2010-to-2013-upgrade-going-to-take/</link>
      <pubDate>Tue, 21 Jan 2014 19:44:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-long-is-my-tfs-2010-to-2013-upgrade-going-to-take/</guid>
      <description>&lt;p&gt;Update 27 Jun 2014 &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/06/27/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take-Part-2.aspx&#34;&gt;See updated version of this post with more data&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I seem to be involved with a number of TFS 2010 to 2013 upgrades at present. I suppose people are looking at TFS 2013 in the same way as they have historically looked at the first service pack for a product i.e: the time to upgrade when most of the main issues are addressed. That said TFS 2013 is not TFS 2012 SP1!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Update 27 Jun 2014 <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/06/27/How-long-is-my-TFS-2010-to-2013-upgrade-going-to-take-Part-2.aspx">See updated version of this post with more data</a></p>
<p>I seem to be involved with a number of TFS 2010 to 2013 upgrades at present. I suppose people are looking at TFS 2013 in the same way as they have historically looked at the first service pack for a product, i.e. the time to upgrade, when most of the main issues are addressed. That said, TFS 2013 is not TFS 2012 SP1!</p>
<p>A common question is how long will the process take to upgrade each Team Project Collection? The answer is that it depends; a good consultant&rsquo;s answer. Factors include the number of work items, size of the code base, number of changesets, volume of test results, and the list goes on. The best I have been able to come up with is to record some timings of previous upgrades and use this data to make an educated guess.</p>
<p>In an upgrade of a TPC from TFS 2010 to 2013 there are 793 steps to be taken. Not all of these take the same length of time; some are very slow, as can be seen in the chart. I have plotted the points where the upgrade seems to pause the longest. These are mostly towards the start of the process, where I assume the main DB schema changes are being made.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_160.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_160.png" title="image"></a></p>
<p>To give some more context</p>
<ul>
<li>Client C was a production quality multi tier setup and took about 3 hours to complete.</li>
<li>Client L, though with a similar sized DB to Server A, was much slower to upgrade, around 9 hours. However, it was on a slower single tier test VM and also had a lot of historic test data attachments (70%+ of the DB contents)</li>
<li>Demo VM was my demo/test TFS 2010 VM; this had 4 TPCs, and the timings are for the largest, of 600Mb. In reality this server had little ‘real’ data. It is also interesting to note that though there were four TPCs, the upgrade did three in parallel and when the first finished started the fourth. Worth remembering if you are planning an upgrade of many TPCs.</li>
</ul>
<p>Given this chart, if you know how long it takes to get to Step 30 of 793 you can get an idea of which of these lines most closely matches your system.</p>
<p>I will continue to update this post as I get more sample data. I hope it will be of use to others to gauge how long upgrades may take, but remember your mileage may vary.</p>
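<p>To make the ‘which line matches’ guess concrete, here is a rough scaling sketch; the reference numbers in it are hypothetical placeholders, not values read from the chart:</p>

```python
# Estimate total upgrade time by scaling a reference server's total
# by the ratio of your observed time-to-step-30 to the reference's.
# All reference numbers below are illustrative, not measured values.

REFERENCE_SERVERS = {
    # name: (minutes to reach step 30, total minutes)
    "Client C": (40, 180),   # a roughly 3 hour upgrade
    "Client L": (120, 540),  # a roughly 9 hour upgrade
}

def estimate_total_minutes(minutes_to_step_30: float, reference: str) -> float:
    """Scale the reference server's total by the observed ratio at step 30."""
    ref_step_30, ref_total = REFERENCE_SERVERS[reference]
    return ref_total * (minutes_to_step_30 / ref_step_30)

# A server taking 60 minutes to reach step 30, profile similar to Client C:
print(estimate_total_minutes(60, "Client C"))  # -> 270.0
```

<p>It is only a back-of-envelope figure, but it beats guessing blind.</p>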
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading our TFS 2012 server to 2013</title>
      <link>https://blog.richardfennell.net/posts/upgrading-our-tfs-2012-server-to-2013/</link>
      <pubDate>Tue, 21 Jan 2014 19:09:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-our-tfs-2012-server-to-2013/</guid>
      <description>&lt;p&gt;We have eventually got around to the 2013 upgrade of our production TFS server. It had been put off due to some tight delivery deadlines around Christmas.&lt;/p&gt;
&lt;p&gt;The upgrade went fine, unlike some &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/02/07/More-in-rights-being-stripped-for-the-team-project-contributors-group-in-TFS-2012-when-QU1-applied-and-how-to-sort-it.aspx&#34;&gt;previous ones&lt;/a&gt; we have had.&lt;/p&gt;
&lt;p&gt;The upgrading of our team process templates, to add the new features, was greatly eased by using the &lt;a href=&#34;http://features4tfs.codeplex.com/&#34;&gt;Feature4Tfs tool on CodePlex&lt;/a&gt;. This meant one command line call and all the projects were done (we had no significant process customisation), as opposed to visiting each team project in the admin console.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have eventually got around to the 2013 upgrade of our production TFS server. It had been put off due to some tight delivery deadlines around Christmas.</p>
<p>The upgrade went fine, unlike some <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/02/07/More-in-rights-being-stripped-for-the-team-project-contributors-group-in-TFS-2012-when-QU1-applied-and-how-to-sort-it.aspx">previous ones</a> we have had.</p>
<p>The upgrading of our team process templates, to add the new features, was greatly eased by using the <a href="http://features4tfs.codeplex.com/">Feature4Tfs tool on CodePlex</a>. This meant one command line call and all the projects were done (we had no significant process customisation), as opposed to visiting each team project in the admin console.</p>
<p>For now we are continuing to run with our TFS 2012 generation build and test controllers. These are working fine with 2013, so we can upgrade these when it is convenient, not all in a rush.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Changing targeted .NET version for a project means web.config changes for EF</title>
      <link>https://blog.richardfennell.net/posts/changing-targeted-net-version-for-a-project-means-web-config-changes-for-ef/</link>
      <pubDate>Wed, 15 Jan 2014 12:43:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/changing-targeted-net-version-for-a-project-means-web-config-changes-for-ef/</guid>
      <description>&lt;p&gt;I am upgrading an internal system from .NET 4.0 to 4.5 so that I can use the &lt;a href=&#34;http://blog.johnsworkshop.net/tfs11-api-reading-the-team-configuration-iterations-and-areas/#more-527&#34;&gt;Team API features in TFS&lt;/a&gt;. The system is based around a WCF web service that links our customer help desk system to TFS to keep bug reports in sync. It uses Entity Framework to access our help desk SQL DB.&lt;/p&gt;
&lt;p&gt;When I changed the targeted .NET framework for the WCF project, I started to get warnings to update the NuGet managed references for EF, which I did.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am upgrading an internal system from .NET 4.0 to 4.5 so that I can use the <a href="http://blog.johnsworkshop.net/tfs11-api-reading-the-team-configuration-iterations-and-areas/#more-527">Team API features in TFS</a>. The system is based around a WCF web service that links our customer help desk system to TFS to keep bug reports in sync. It uses Entity Framework to access our help desk SQL DB.</p>
<p>When I changed the targeted .NET framework for the WCF project, I started to get warnings to update the NuGet managed references for EF, which I did.</p>
<p>Once this was done, all my unit tests passed; however, when I tried to load my test system I got the following error (when it tried to create the EF DbContext):</p>
<blockquote>
<p>An exception of type &lsquo;System.TypeInitializationException&rsquo; occurred in EntityFramework.dll but was not handled in user code</p>
<p>Additional information: The type initializer for &lsquo;System.Data.Entity.Internal.AppConfig&rsquo; threw an exception.</p></blockquote>
<p>Turns out the issue was a reference to EF in the WCF project web.config</p>
<blockquote>
<pre><code>&lt;configSections&gt;
  &lt;section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=4.4.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" /&gt;
  &lt;!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 --&gt;
&lt;/configSections&gt;</code></pre></blockquote>
<p>should have been</p>
<blockquote>
<pre><code>&lt;configSections&gt;
  &lt;section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" /&gt;
  &lt;!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 --&gt;
&lt;/configSections&gt;</code></pre></blockquote>
<p>A misleading error message, don’t you think?</p>
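<p>As a sketch of an alternative, rather than hand-editing the version number in each config entry, an assembly binding redirect in the same web.config can forward all older EntityFramework references to the new assembly (the version numbers here assume the 4.4 to 5.0 move described above; adjust to match the EF package you installed):</p>
<blockquote>
<pre><code>&lt;runtime&gt;
  &lt;assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"&gt;
    &lt;dependentAssembly&gt;
      &lt;!-- Redirect any older EntityFramework reference to the 5.0 assembly --&gt;
      &lt;assemblyIdentity name="EntityFramework" publicKeyToken="b77a5c561934e089" culture="neutral" /&gt;
      &lt;bindingRedirect oldVersion="0.0.0.0-5.0.0.0" newVersion="5.0.0.0" /&gt;
    &lt;/dependentAssembly&gt;
  &lt;/assemblyBinding&gt;
&lt;/runtime&gt;</code></pre></blockquote>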
]]></content:encoded>
    </item>
    <item>
      <title>Fix for intermittent connection problem in lab management – restart the test controller</title>
      <link>https://blog.richardfennell.net/posts/fix-for-intermittent-connection-problem-in-lab-management-restart-the-test-controller/</link>
      <pubDate>Mon, 13 Jan 2014 14:50:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-intermittent-connection-problem-in-lab-management-restart-the-test-controller/</guid>
      <description>&lt;p&gt;Just had a problem with a TFS 2012 Lab Management deployment build. It was working this morning, deploying two web sites via MSDeploy and a DB via a DacPac, then running some CodedUI tests. However, when I tried a new deployment this afternoon it kept failing with the error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The deployment task was aborted because there was a connection failure between the test controller and the test agent.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_151.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_151.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;If you watched the build deployment via MTM you could see it start OK, then the agent went off line after a few seconds.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just had a problem with a TFS 2012 Lab Management deployment build. It was working this morning, deploying two web sites via MSDeploy and a DB via a DacPac, then running some CodedUI tests. However, when I tried a new deployment this afternoon it kept failing with the error:</p>
<blockquote>
<p><em>The deployment task was aborted because there was a connection failure between the test controller and the test agent.</em></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_151.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_151.png" title="image"></a></p></blockquote>
<p>If you watched the build deployment via MTM you could see it start OK, then the agent went off line after a few seconds.</p>
<p>Turns out the solution was the old favourite: a reboot of the Test Controller. I would like to know why it was giving this intermittent problem though.</p>
<p><strong>Update 14th Jan</strong> <em>An alternative solution to rebooting is to add a hosts file entry on the VM running the test agent for the IP address of the test controller. Seems the problem is name resolution, but not sure why it occurs</em></p>
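<p>As a sketch, that hosts file entry on the test agent VM would look like this (the IP address and machine name here are placeholders for your own test controller):</p>
<blockquote>
<pre><code># C:\Windows\System32\drivers\etc\hosts on the test agent VM
192.168.10.20    mytestcontroller</code></pre></blockquote>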
]]></content:encoded>
    </item>
    <item>
      <title>Great support from BlogEngine.NET</title>
      <link>https://blog.richardfennell.net/posts/great-support-from-blogengine-net/</link>
      <pubDate>Tue, 07 Jan 2014 10:27:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/great-support-from-blogengine-net/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/06/Upgrading-from-BlogEngine-28-to-29.aspx&#34;&gt;posted yesterday&lt;/a&gt; that we had upgraded to the current version of BE 2.9. We, as you might expect, had a few minor issues, but I must say the support on the &lt;a href=&#34;http://blogengine.codeplex.com/discussions&#34;&gt;discussion forums&lt;/a&gt; has been excellent.&lt;/p&gt;
&lt;p&gt;This included the problems I had that were down to missing files and web.config issues (basically my copy errors when moving content from our BE 2.8 instance to the new BE 2.9 one) and a genuine bug in the new CustomFields code (&lt;a href=&#34;http://blogengine.codeplex.com/discussions/479432&#34;&gt;fixed in 2.9.0.3&lt;/a&gt;). All discussion posts were responded to, and in the case of the bug fix, within a very short period of time.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2014/01/06/Upgrading-from-BlogEngine-28-to-29.aspx">posted yesterday</a> that we had upgraded to the current version of BE 2.9. We, as you might expect, had a few minor issues, but I must say the support on the <a href="http://blogengine.codeplex.com/discussions">discussion forums</a> has been excellent.</p>
<p>This included the problems I had that were down to missing files and web.config issues (basically my copy errors when moving content from our BE 2.8 instance to the new BE 2.9 one) and a genuine bug in the new CustomFields code (<a href="http://blogengine.codeplex.com/discussions/479432">fixed in 2.9.0.3</a>). All discussion posts were responded to, and in the case of the bug fix, within a very short period of time.</p>
<p>If you need a blog server and have not looked at <a href="http://dotnetblogengine.net/">BlogEngine.NET</a>, I think it will be well worth your time taking a peek.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading from BlogEngine 2.8 to 2.9</title>
      <link>https://blog.richardfennell.net/posts/upgrading-from-blogengine-2-8-to-2-9/</link>
      <pubDate>Mon, 06 Jan 2014 13:57:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-from-blogengine-2-8-to-2-9/</guid>
      <description>&lt;p&gt;I have just upgraded our blog server from &lt;a href=&#34;http://www.dotnetblogengine.net/&#34;&gt;BlogEngine.NET 2.8 to 2.9&lt;/a&gt;. All seems to have gone well; as before, the upgrade basically just copies files and adds a table to the DB schema, so…&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;backup your blogs folder and SQL DB in case of problems&lt;/li&gt;
&lt;li&gt;delete the contents of the blogs folder&lt;/li&gt;
&lt;li&gt;copy in the &lt;a href=&#34;http://blogengine.codeplex.com/releases/view/116747&#34;&gt;new release&lt;/a&gt; from the zip&lt;/li&gt;
&lt;li&gt;run the SQL upgrade script on your DB&lt;/li&gt;
&lt;li&gt;fix the SQL connection string in the web.config&lt;/li&gt;
&lt;li&gt;copy in the theme and extension files you were using, as detailed in the &lt;a href=&#34;http://blogengine.codeplex.com/documentation&#34;&gt;release notes&lt;/a&gt;; I found the &lt;a href=&#34;http://www.rtur.net/blog/post/2013/12/01/updater-utility-for-blogengine-29&#34;&gt;updater utility&lt;/a&gt; did a great job for me&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;As many of our sites were using the standard theme, they picked up the new bootstrap-based version. Anyone with other themes just sees what they saw before.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just upgraded our blog server from <a href="http://www.dotnetblogengine.net/">BlogEngine.NET 2.8 to 2.9</a>. All seems to have gone well; as before, the upgrade basically just copies files and adds a table to the DB schema, so…</p>
<ol>
<li>backup your blogs folder and SQL DB in case of problems</li>
<li>delete the contents of the blogs folder</li>
<li>copy in the <a href="http://blogengine.codeplex.com/releases/view/116747">new release</a> from the zip</li>
<li>run the SQL upgrade script on your DB</li>
<li>fix the SQL connection string in the web.config</li>
<li>copy in the theme and extension files you were using, as detailed in the <a href="http://blogengine.codeplex.com/documentation">release notes</a>; I found the <a href="http://www.rtur.net/blog/post/2013/12/01/updater-utility-for-blogengine-29">updater utility</a> did a great job for me</li>
</ol>
<p>As many of our sites were using the standard theme, they picked up the new bootstrap-based version. Anyone with other themes just sees what they saw before.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for Media Center library issue after Christmas tree lights incident</title>
      <link>https://blog.richardfennell.net/posts/fix-for-media-center-library-issue-after-christmas-tree-lights-incident/</link>
      <pubDate>Tue, 24 Dec 2013 13:43:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-media-center-library-issue-after-christmas-tree-lights-incident/</guid>
      <description>&lt;p&gt;&lt;em&gt;Twas the night before Christmas and….&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To cut a long story short, the PC that runs my &lt;a href=&#34;http://windows.microsoft.com/en-gb/windows7/products/features/windows-media-center&#34;&gt;Windows Media Center&lt;/a&gt; (MCE) got switched on and off at the wall twice whilst Christmas tree lights were being put up.&lt;/p&gt;
&lt;p&gt;Now the PC is running Windows 8.1 on modern hardware, so it should have been OK, and mostly was. However I found a problem that MCE was not showing any music, video or pictures in its libraries, but the recorded TV library was fine. I suspected the issue was that my media is on an external USB3 RAID unit, so there was a chance that on one of the unintended reboots the drives had not spun up in time and MCE had ‘forgotten’ about the external drive.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>Twas the night before Christmas and….</em></p>
<p>To cut a long story short, the PC that runs my <a href="http://windows.microsoft.com/en-gb/windows7/products/features/windows-media-center">Windows Media Center</a> (MCE) got switched on and off at the wall twice whilst Christmas tree lights were being put up.</p>
<p>Now the PC is running Windows 8.1 on modern hardware, so it should have been OK, and mostly was. However I found a problem that MCE was not showing any music, video or pictures in its libraries, but the recorded TV library was fine. I suspected the issue was that my media is on an external USB3 RAID unit, so there was a chance that on one of the unintended reboots the drives had not spun up in time and MCE had ‘forgotten’ about the external drive.</p>
<p>So I tried to re-add the missing libraries via <strong>MCE &gt; Tasks &gt; Settings &gt; Media Libraries</strong>. The wizard ran OK allowing me to select the folders on the external disk, but when I got to the end the final dialog closed virtually instantly. I would normally have expected it to count up all the media files as they were found. Also if I went back into the wizard I could not see the folder I had just added.</p>
<p>A bit of searching on the web told me that MCE shares its libraries with Windows Media Player, and there was a good chance they were corrupted. In fact running the <a href="http://support.microsoft.com/mats/windows_media_player_diagnostic?wa=wsignin1.0">Windows Media Player trouble-shooter</a> told me as much. So I deleted the contents of the <strong>%LOCALAPPDATA%\Microsoft\Media Player</strong> folder as suggested. It had no useful effect on the problem. The only change was the final dialog in the wizard did appear to count the media files it found now, taking a few minutes before it closed. But the results of the scan were not saved.</p>
<p>So I switched my focus to Media Player (WMP). I quickly saw this was showing the same problems. If I selected <strong>WMP &gt; Organise &gt; Manage libraries</strong> no dialog was shown for music, video or pictures. However the dialog did appear for Recorded TV which we know was working in MCE.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_149.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_149.png" title="image"></a></p>
<p>Also if I selected <strong>WMP &gt; Organise &gt; Options… &gt; Rip Music</strong>, there was no rip location set, and you could not set it if you pressed the Change button.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_150.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_150.png" title="image"></a></p>
<p>The web quickly showed me I was not alone in this problem, <a href="http://answers.microsoft.com/en-us/windows/forum/windows_7-pictures/cannot-rip-cd-cannot-change-album-info/07fb8c99-9a7b-4456-9633-1f487d5f7a42">as shown in this post and others on the Microsoft forums</a>. It is worth noting that this thread, and the others, do seem to focus on Windows 7 or Vista. Remember I was on a PC that was a new install of Windows 8, in-place upgraded to 8.1 via the Windows Store, but I don’t think that was the issue.</p>
<p>Anyway I tried everything I could find in the posts:</p>
<ul>
<li>Restarted services</li>
<li>Deleted the WMP databases (again)</li>
<li>Uninstalled and re-installed WMP via <strong>Windows Control Panel &gt; Install Products &gt; Windows Features</strong></li>
<li>Checked the permissions on folder containing the media</li>
</ul>
<p>Everything seemed to point to a missing folder. The threads talked about WMP being set to use a Rip folder that it could not find. As my data was on an external RAID this seemed reasonable. However, on checking <strong>[HKEY_CURRENT_USER\Software\Microsoft\MediaPlayer\Preferences\HME\LastSharedFolders]</strong> there were no paths that could not be resolved.</p>
<p>So I decided to have a good look at what was going on under the covers with <a href="http://technet.microsoft.com/en-gb/sysinternals/bb896645.aspx">Sysinternals Procmon</a>, but could see nothing obvious: no missing folders, no missed registry key calls.</p>
<p>In the end the pointer to the actual fix was on <a href="http://answers.microsoft.com/en-us/windows/forum/windows_7-pictures/cannot-rip-cd-cannot-change-album-info/07fb8c99-9a7b-4456-9633-1f487d5f7a42?page=8">page 8 of the thread by Tim de Baets</a>. Turns out the issue was with the media libraries in <strong>C:\Users\&lt;your username&gt;\AppData\Roaming\Microsoft\Windows\Libraries</strong>. If I tried to open any of these in Windows Explorer I got an error dialog in the form <strong>&lsquo;Music.library-ms&rsquo; is no longer working.</strong> So I deleted the Pictures, Music and Video libraries in that folder, which was not a problem as they were all empty.</p>
<p>When I reloaded WMP I could now open the <strong>WMP &gt; Organise &gt; Manage libraries</strong> dialogs and re-add the folders on my RAID disk, also I could set the Rip folder.</p>
<p>As these settings were shared with MCE my problem was fixed, ready for a Christmas of recording TV, looking at family photos and playing music.</p>
<p>Whether it was the power outages that caused the problem I have my doubts, as power cuts have not been an issue in the past. Maybe it is some strange permission hangover from the Windows 8 &gt; 8.1 upgrade; I doubt I will ever find out.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting the domain\user when using versionControl.GetPermissions() in the TFS API</title>
      <link>https://blog.richardfennell.net/posts/getting-the-domainuser-when-using-versioncontrol-getpermissions-in-the-tfs-api/</link>
      <pubDate>Fri, 20 Dec 2013 15:40:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-the-domainuser-when-using-versioncontrol-getpermissions-in-the-tfs-api/</guid>
      <description>&lt;p&gt;If you are using the TFS API to get a list of users who have rights in a given version control folder, you need to be careful as you don’t get back the domain\user name you might expect from the GetPermissions(..) call. You actually get the display name. Now that might be fine for you, but I needed the domain\user format as I was trying to populate a people picker control.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are using the TFS API to get a list of users who have rights in a given version control folder, you need to be careful as you don’t get back the domain\user name you might expect from the GetPermissions(..) call. You actually get the display name. Now that might be fine for you, but I needed the domain\user format as I was trying to populate a people picker control.</p>
<p>The answer is you need to make a second call to the TFS IIdentityManagementService to get the name in the form you want.</p>
<p>This might not be the best code, but it shows the steps required:</p>
<blockquote>
<pre><code>private List&lt;string&gt; GetUserWithAccessToFolder(IIdentityManagementService ims, VersionControlServer versionControl, string path)
{
    var users = new List&lt;string&gt;();
    var perms = versionControl.GetPermissions(new string[] { path }, RecursionType.None);
    foreach (var perm in perms)
    {
        foreach (var entry in perm.Entries)
        {
            // Look the identity up by display name to get back the domain\user form
            var userIdentity = ims.ReadIdentity(IdentitySearchFactor.DisplayName,
                                                entry.IdentityName,
                                                MembershipQuery.None,
                                                ReadIdentityOptions.IncludeReadFromSource);
            users.Add(userIdentity.UniqueName);
        }
    }

    return users;
}</code></pre></blockquote>
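<p>As a usage sketch (the collection URL and version control path here are placeholders), the helper can be called after resolving the two services from the team project collection:</p>
<blockquote>
<pre><code>// Connect to the collection and resolve the services used by the helper
var tpc = new TfsTeamProjectCollection(new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
var versionControl = tpc.GetService&lt;VersionControlServer&gt;();
var ims = tpc.GetService&lt;IIdentityManagementService&gt;();

var users = GetUserWithAccessToFolder(ims, versionControl, "$/MyTeamProject/Main");</code></pre></blockquote>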
]]></content:encoded>
    </item>
    <item>
      <title>A hair in the gate</title>
      <link>https://blog.richardfennell.net/posts/a-hair-in-the-gate/</link>
      <pubDate>Thu, 12 Dec 2013 12:39:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-hair-in-the-gate/</guid>
      <description>&lt;p&gt;My &lt;a href=&#34;http://www.microsoft.com/hardware/en-us/p/arc-touch-mouse/RVF-00052&#34;&gt;Arc mouse&lt;/a&gt; started behaving strangely today, very jumpy. Felt like the cursor was being pulled left. Turns out the problem was a tiny hair caught in the led sensor slot&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_148.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_148.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You could see there was a problem as the led was flashing a lot, when it is normally solidly on if turn over the mouse you look into the slot.&lt;/p&gt;
&lt;p&gt;Once I got it out all was fine again&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My <a href="http://www.microsoft.com/hardware/en-us/p/arc-touch-mouse/RVF-00052">Arc mouse</a> started behaving strangely today, very jumpy. Felt like the cursor was being pulled left. Turns out the problem was a tiny hair caught in the led sensor slot</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_148.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_148.png" title="image"></a></p>
<p>You could see there was a problem as the led was flashing a lot, when it is normally solidly on if turn over the mouse you look into the slot.</p>
<p>Once I got it out all was fine again</p>
]]></content:encoded>
    </item>
    <item>
      <title>Notes from my session of the Visual Studio 2013 launch at NDC London</title>
      <link>https://blog.richardfennell.net/posts/notes-from-my-session-of-the-visual-studio-2013-launch-at-ndc-london/</link>
      <pubDate>Wed, 04 Dec 2013 21:14:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/notes-from-my-session-of-the-visual-studio-2013-launch-at-ndc-london/</guid>
      <description>&lt;p&gt;Thanks to anyone who came to my session ‘TFS is not just for Visual Studio users’ at the Visual Studio 2013 launch at NDC London yesterday. Hope you found it useful and are now thinking of TFS as a tool for heterogeneous teams, not just developers using Visual Studio. As I discussed there are many options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Developers can work within their IDEs&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Visual Studio 2008, 2010, 2012, 2013&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Any IDE based on Eclipse&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to anyone who came to my session ‘TFS is not just for Visual Studio users’ at the Visual Studio 2013 launch at NDC London yesterday. Hope you found it useful and are now thinking of TFS as a tool for heterogeneous teams, not just developers using Visual Studio. As I discussed there are many options:</p>
<ul>
<li>
<p>Developers can work within their IDEs:</p>
<ul>
<li>Visual Studio 2008, 2010, 2012, 2013</li>
<li>Any IDE based on Eclipse</li>
<li>Any IDE using MSSCCI (VB6, VS2003, VS2005, MATLAB, Enterprise Architect)</li>
</ul>
</li>
<li>
<p>If not using an IDE, you can check code in and out from:</p>
<ul>
<li>The command line (.NET and Java)</li>
<li>The API (.NET and Java) and REST for your own third-party tools (or from within PowerShell by loading the .NET API assemblies)</li>
<li>Windows Explorer integration, which allows check-in and check-out from Windows Explorer, great for graphics designers’ tools or IDEs with no source control integration</li>
</ul>
</li>
<li>
<p>You can manage work items from:</p>
<ul>
<li>Within Microsoft Office, Excel and Project (2010-2013), good for batch operations and general project manager activities</li>
<li>But probably a web browser will be the primary tool for most people, whether on a PC, Mac or tablet</li>
</ul>
</li>
<li>
<p>Also, if you are using a Git repository in your Team Project, there is a whole range of Git clients for various platforms, all of which will work</p>
</li>
</ul>
<p>The links from my last slide of suggestions were</p>
<ul>
<li>Create an account on Visual Studio Online (free for teams of up to 5) <a href="http://tfs.visualstudio.com/">http://tfs.visualstudio.com</a></li>
<li>Read the post on the Visual Studio UK Blog ‘But I’m not a .NET developer’ <a href="http://tinyurl.com/VSUKBlogTEE">http://tinyurl.com/VSUKBlogTEE</a></li>
<li>Download the Brian Keller demo VM and HOL <a href="http://aka.ms/vs13almvm">http://aka.ms/vs13almvm</a></li>
<li>Download TEE to try it on Java platforms <a href="http://tinyurl.com/tfstee">http://tinyurl.com/tfstee</a></li>
<li>Download the TFS Power Tools to enable Windows integration <a href="http://tinyurl.com/tfspowertools">http://tinyurl.com/tfspowertools</a></li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>The TFS 2013 Brian Keller demo VM is available</title>
      <link>https://blog.richardfennell.net/posts/the-tfs-2013-brian-keller-demo-vm-is-available/</link>
      <pubDate>Thu, 28 Nov 2013 17:08:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-tfs-2013-brian-keller-demo-vm-is-available/</guid>
      <description>&lt;p&gt;The TFS 2013 Brian Keller demo VM is available at &lt;a href=&#34;http://aka.ms/vs13almvm&#34;&gt;http://aka.ms/vs13almvm&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The TFS 2013 Brian Keller demo VM is available at <a href="http://aka.ms/vs13almvm">http://aka.ms/vs13almvm</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for 0xc00d36b4 error when play MP4 videos on a Surface 2</title>
      <link>https://blog.richardfennell.net/posts/fix-for-0xc00d36b4-error-when-play-mp4-videos-on-a-surface-2/</link>
      <pubDate>Wed, 27 Nov 2013 14:51:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-0xc00d36b4-error-when-play-mp4-videos-on-a-surface-2/</guid>
      <description>&lt;p&gt;Whilst in the USA last week I bought a Surface 2 tablet. Upon boot it ran around 20 updates, as you expect, but unfortunately one of these seemed to remove its ability to play MP4 videos, giving a 0xc00d36b4 error whenever you try. A bit of a pain as one of the main reasons I wanted a tablet was for watching training videos and &lt;a href=&#34;http://www.pluralsight.com/training&#34;&gt;PluralSight&lt;/a&gt; on the move.&lt;/p&gt;
&lt;p&gt;After a bit of fiddling and hunting on the web I found &lt;a href=&#34;http://answers.microsoft.com/en-us/musicandvideo/forum/xboxvideo-xvideoapp/0xc00d36b4-error/4ae0c298-1a56-4845-9792-392205a533c9?page=1&#34;&gt;I was not alone&lt;/a&gt;, so I added my voice to the thread, and eventually an answer appeared. It seems the Nvidia Audio Enhancements were the problem. I guess they got updated within the first wave of updates.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst in the USA last week I bought a Surface 2 tablet. Upon boot it ran around 20 updates, as you expect, but unfortunately one of these seemed to remove its ability to play MP4 videos, giving a 0xc00d36b4 error whenever you try. A bit of a pain as one of the main reasons I wanted a tablet was for watching training videos and <a href="http://www.pluralsight.com/training">PluralSight</a> on the move.</p>
<p>After a bit of fiddling and hunting on the web I found <a href="http://answers.microsoft.com/en-us/musicandvideo/forum/xboxvideo-xvideoapp/0xc00d36b4-error/4ae0c298-1a56-4845-9792-392205a533c9?page=1">I was not alone</a>, so I added my voice to the thread, and eventually an answer appeared. It seems the Nvidia Audio Enhancements were the problem. I guess they got updated within the first wave of updates.</p>
<p>So the fix, according to the thread, is as follows:</p>
<ol>
<li>Go to the desktop view on your Surface</li>
<li>Tap and hold the volume icon. </li>
<li>Select sounds from the pop up menu - I only had to go this far as a dialog appeared asking if I wished to disable audio enhancements (maybe it found it was corrupt)</li>
<li>Go to the playback tab</li>
<li>Highlight the speakers option</li>
<li>Select properties</li>
<li>Go to the enhancements tab</li>
<li>Check the &ldquo;Disable all enhancements&rdquo; box</li>
<li>Tap OK.</li>
</ol>
<p>And videos should now play</p>
<p><strong>Updated 2 Dec 2013</strong> <em>Seems you have to make this change for each audio device; this means speakers AND headphones.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft Patterns &amp;amp; Practice publication ‘Building a Release Pipeline with Team Foundation Server 2012’</title>
      <link>https://blog.richardfennell.net/posts/microsoft-patterns-practice-publication-building-a-release-pipeline-with-team-foundation-server-2012/</link>
      <pubDate>Fri, 22 Nov 2013 01:47:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-patterns-practice-publication-building-a-release-pipeline-with-team-foundation-server-2012/</guid>
      <description>&lt;p&gt;Are you interested in building your own release pipeline using only the tools within TFS 2012 (or 2013)?&lt;/p&gt;
&lt;p&gt;If so, why not have a look at the Microsoft Patterns &amp;amp; Practices publication &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/dn449957.aspx&#34;&gt;Building a Release Pipeline with Team Foundation Server 2012&lt;/a&gt;; it provides great background and a full walkthrough via its &lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=40295&#34;&gt;hands-on lab&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Are you interested in building your own release pipeline using only the tools within TFS 2012 (or 2013)?</p>
<p>If so, why not have a look at the Microsoft Patterns &amp; Practices publication <a href="http://msdn.microsoft.com/en-us/library/dn449957.aspx">Building a Release Pipeline with Team Foundation Server 2012</a>; it provides great background and a full walkthrough via its <a href="http://www.microsoft.com/en-us/download/details.aspx?id=40295">hands-on lab</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My guest post on the UK MSDN blog on ‘Using MSDN Azure benefits with TFS Lab Management’</title>
      <link>https://blog.richardfennell.net/posts/my-guest-post-on-the-uk-msdn-blog-on-using-msdn-azure-benefits-with-tfs-lab-management/</link>
      <pubDate>Wed, 20 Nov 2013 19:12:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-guest-post-on-the-uk-msdn-blog-on-using-msdn-azure-benefits-with-tfs-lab-management/</guid>
      <description>&lt;p&gt;A guest post I wrote has just been published on the  &lt;a href=&#34;http://blogs.msdn.com/b/ukmsdn/archive/2013/11/19/how-vpns-can-help-you-use-your-msdn-azure-benefits-with-tfs-lab-management.aspx&#34;&gt;UK MSDN blog titled ‘How VPNs can help you use your MSDN Azure benefits with TFS Lab Management’&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A guest post I wrote has just been published on the  <a href="http://blogs.msdn.com/b/ukmsdn/archive/2013/11/19/how-vpns-can-help-you-use-your-msdn-azure-benefits-with-tfs-lab-management.aspx">UK MSDN blog titled ‘How VPNs can help you use your MSDN Azure benefits with TFS Lab Management’</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Announcing a CodePlex project that provides an IronPython DSL that can be used to define the handling of TFS Alerts</title>
      <link>https://blog.richardfennell.net/posts/announcing-a-codeplex-project-that-provides-an-ironpython-dsl-that-can-be-used-to-define-the-handling-of-tfs-alerts/</link>
      <pubDate>Sun, 17 Nov 2013 21:03:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcing-a-codeplex-project-that-provides-an-ironpython-dsl-that-can-be-used-to-define-the-handling-of-tfs-alerts/</guid>
      <description>&lt;p&gt;I have just published a new project to CodePlex &lt;a href=&#34;http://tfsalertsdsl.codeplex.com/&#34; title=&#34;http://tfsalertsdsl.codeplex.com/&#34;&gt;http://tfsalertsdsl.codeplex.com/&lt;/a&gt;.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Microsoft Team Foundation Server (TFS) provides an alerting model where given a certain condition, such as a check-in, work item edit or build completion, an email can be sent to an interested party or a call made to a SOAP based web service. Using this SOAP model it is possible to provide any bespoke operations you wish that are triggered by a change on the TFS server.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just published a new project to CodePlex <a href="http://tfsalertsdsl.codeplex.com/" title="http://tfsalertsdsl.codeplex.com/">http://tfsalertsdsl.codeplex.com/</a>.</p>
<blockquote>
<p><em>Microsoft Team Foundation Server (TFS) provides an alerting model where given a certain condition, such as a check-in, work item edit or build completion, an email can be sent to an interested party or a call made to a SOAP based web service. Using this SOAP model it is possible to provide any bespoke operations you wish that are triggered by a change on the TFS server.</em></p></blockquote>
<blockquote>
<p><em>This framework is designed to ease the development of these bespoke SOAP web services by providing helper methods for common XML processing steps and API operations such as calling back to the TFS server or accessing SMTP services.</em></p></blockquote>
<blockquote>
<p><em>The main differentiator of this project is that it also provides a Python based DSL that allows the actual operation performed when the endpoint is called to be edited without the need to rebuild and redeploy the bespoke service. Operations are defined by a script such as the one shown below</em></p></blockquote>
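<p>By way of illustration only (the helper function names here are made up; the real ones are documented on the project site), such a script has the general shape:</p>
<blockquote>
<p># illustrative sketch only - react to a build completion alert<br>
if GetEventType() == &quot;BuildEvent&quot;:<br>
&nbsp;&nbsp;&nbsp;&nbsp;SendEmail(&quot;team@example.com&quot;, &quot;Build completed&quot;, GetBuildDetails())</p></blockquote>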
<p>For more details have a look at the <a href="http://tfsalertsdsl.codeplex.com/">project site</a>; I hope you find it useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fixing a WCF authentication schemes configured on the host (&#39;IntegratedWindowsAuthentication&#39;) do not allow those configured on the binding &#39;BasicHttpBinding&#39; (&#39;Anonymous&#39;) error</title>
      <link>https://blog.richardfennell.net/posts/fixing-a-wcf-authentication-schemes-configured-on-the-host-integratedwindowsauthentication-do-not-allow-those-configured-on-the-binding-basichttpbinding-anonymous-error/</link>
      <pubDate>Wed, 13 Nov 2013 12:54:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-a-wcf-authentication-schemes-configured-on-the-host-integratedwindowsauthentication-do-not-allow-those-configured-on-the-binding-basichttpbinding-anonymous-error/</guid>
      <description>&lt;p&gt;Whilst testing a WCF web service I got the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;h4 id=&#34;the-authentication-schemes-configured-on-the-host-&#34;&gt;&lt;em&gt;The authentication schemes configured on the host (&amp;lsquo;IntegratedWindowsAuthentication&amp;rsquo;) do not allow those configured on the binding &amp;lsquo;BasicHttpBinding&amp;rsquo; (&amp;lsquo;Anonymous&amp;rsquo;). Please ensure that the SecurityMode is set to Transport or TransportCredentialOnly. Additionally, this may be resolved by changing the authentication schemes for this application through the IIS management tool, through the ServiceHost.Authentication.AuthenticationSchemes property, in the application configuration file at the &lt;serviceAuthenticationManager&gt; element, by updating the ClientCredentialType property on the binding, or by adjusting the AuthenticationScheme property on the HttpTransportBindingElement.&lt;/em&gt;&lt;/h4&gt;&lt;/blockquote&gt;
&lt;p&gt;Now this sort of made sense, as the web service was meant to be secured using Windows Authentication, so the IIS setting was correct: anonymous authentication was off&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst testing a WCF web service I got the error</p>
<blockquote>
<h4 id="the-authentication-schemes-configured-on-the-host-"><em>The authentication schemes configured on the host (&lsquo;IntegratedWindowsAuthentication&rsquo;) do not allow those configured on the binding &lsquo;BasicHttpBinding&rsquo; (&lsquo;Anonymous&rsquo;). Please ensure that the SecurityMode is set to Transport or TransportCredentialOnly. Additionally, this may be resolved by changing the authentication schemes for this application through the IIS management tool, through the ServiceHost.Authentication.AuthenticationSchemes property, in the application configuration file at the <serviceAuthenticationManager> element, by updating the ClientCredentialType property on the binding, or by adjusting the AuthenticationScheme property on the HttpTransportBindingElement.</em></h4></blockquote>
<p>Now this sort of made sense, as the web service was meant to be secured using Windows Authentication, so the IIS setting was correct: anonymous authentication was off</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_147.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_147.png" title="image"></a></p>
<p>Turns out the issue was, as you might expect, an incorrect web.config entry</p>
<blockquote>
<p>&lt;system.serviceModel&gt;<br>
  &lt;bindings&gt;<br>
    &lt;basicHttpBinding&gt;<br>
      &lt;binding name=&quot;windowsSecured&quot;&gt; &lt;!-- this was the problem --&gt;<br>
        &lt;security mode=&quot;TransportCredentialOnly&quot;&gt;<br>
          &lt;transport clientCredentialType=&quot;Windows&quot; /&gt;<br>
        &lt;/security&gt;<br>
      &lt;/binding&gt;<br>
    &lt;/basicHttpBinding&gt;<br>
  &lt;/bindings&gt;<br>
  &lt;services&gt;<br>
    &lt;service behaviorConfiguration=&quot;CTAppBox.WebService.Service1Behavior&quot; name=&quot;CTAppBox.WebService.TfsService&quot;&gt;<br>
      &lt;endpoint address=&quot;&quot; binding=&quot;basicHttpBinding&quot; contract=&quot;CTAppBox.WebService.ITfsService&quot;&gt;<br>
        &lt;identity&gt;<br>
          &lt;dns value=&quot;localhost&quot;/&gt;<br>
        &lt;/identity&gt;<br>
      &lt;/endpoint&gt;<br>
      &lt;endpoint address=&quot;mex&quot; binding=&quot;mexHttpBinding&quot; contract=&quot;IMetadataExchange&quot;/&gt;<br>
    &lt;/service&gt;<br>
  &lt;/services&gt;<br>
  &lt;behaviors&gt;<br>
    &lt;serviceBehaviors&gt;<br>
      &lt;behavior name=&quot;CTAppBox.WebService.Service1Behavior&quot;&gt;<br>
        &lt;!-- To avoid disclosing metadata information, set the value below to false before deployment --&gt;<br>
        &lt;serviceMetadata httpGetEnabled=&quot;true&quot;/&gt;<br>
        &lt;!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --&gt;<br>
        &lt;serviceDebug includeExceptionDetailInFaults=&quot;true&quot;/&gt;<br>
      &lt;/behavior&gt;<br>
    &lt;/serviceBehaviors&gt;<br>
  &lt;/behaviors&gt;<br>
&lt;/system.serviceModel&gt;</p></blockquote>
<p>The problem was that the <strong>basicHttpBinding</strong> had a named binding, <strong>windowsSecured</strong>, and no unnamed default. When the service endpoint was bound to the binding it did not use the named binding, just the defaults (which are not shown in the config file).</p>
<p>The solution was to remove the name=&quot;windowsSecured&quot; entry, or we could have referenced the named binding from the endpoint instead.</p>
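<p>For completeness, the other fix is to have the endpoint opt into the named binding explicitly via the bindingConfiguration attribute; a sketch using the names from the config above:</p>
<blockquote>
<p>&lt;endpoint address=&quot;&quot; binding=&quot;basicHttpBinding&quot; bindingConfiguration=&quot;windowsSecured&quot; contract=&quot;CTAppBox.WebService.ITfsService&quot;&gt;</p></blockquote>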
]]></content:encoded>
    </item>
    <item>
      <title>When your TFS Lab test agents can’t start check the DNS</title>
      <link>https://blog.richardfennell.net/posts/when-your-tfs-lab-test-agents-cant-start-check-the-dns/</link>
      <pubDate>Fri, 08 Nov 2013 16:11:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/when-your-tfs-lab-test-agents-cant-start-check-the-dns/</guid>
      <description>&lt;p&gt;Lab Management has a lot of moving parts, especially if you are using SCVMM based environments. All the parts have to communicate if the system is to work.&lt;/p&gt;
&lt;p&gt;One of the most common problems I have seen is DNS issues. A slowly propagating DNS can cause chaos, as the test controller will not be able to resolve the names of the dynamically registered lab VMs.&lt;/p&gt;
&lt;p&gt;The best fix is to sort out your DNS issues, but that is not always possible (some things just take the time they take, especially on large WANs).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Lab Management has a lot of moving parts, especially if you are using SCVMM based environments. All the parts have to communicate if the system is to work.</p>
<p>One of the most common problems I have seen is DNS issues. A slowly propagating DNS can cause chaos, as the test controller will not be able to resolve the names of the dynamically registered lab VMs.</p>
<p>The best fix is to sort out your DNS issues, but that is not always possible (some things just take the time they take, especially on large WANs).</p>
<p>An immediate fix is to use the local hosts file on the test controller to define IP addresses for the <strong>lab[guid].corp.domain</strong> names created when using network isolation. Once this is done the handshake between the controller and agent is usually possible.</p>
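<p>As a sketch, the entry added to the controller&rsquo;s hosts file (%SystemRoot%\System32\drivers\etc\hosts) looks something like this; the IP address here is invented, use your own lab VM&rsquo;s address and the generated lab name:</p>
<blockquote>
<p>192.168.23.10&nbsp;&nbsp;&nbsp;&nbsp;lab[guid].corp.domain</p></blockquote>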
<p>If it isn’t then you are back to all the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/05/More-fun-with-creating-TFS-2012-SC-VMM-environments.aspx">usual diagnostics tools</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Building VMs for use in TFS Lab Management environments</title>
      <link>https://blog.richardfennell.net/posts/building-vms-for-use-in-tfs-lab-management-environments/</link>
      <pubDate>Thu, 07 Nov 2013 18:27:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/building-vms-for-use-in-tfs-lab-management-environments/</guid>
      <description>&lt;p&gt;We have recently gone through an exercise to provide a consistent set of prebuilt configured VMs for use in our TFS Lab Management environments.&lt;/p&gt;
&lt;p&gt;This is not an insignificant piece of work, as &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2013/09/29/Building-environments-for-Lab-Manager-Why-bare-metal-scripting-fails.aspx&#34;&gt;this post by Rik Hepworth&lt;/a&gt; discusses, detailing all the IT pro work he had to do to create them. This is all before we even think about the work required to create deployment TFS builds and the like.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have recently gone through an exercise to provide a consistent set of prebuilt configured VMs for use in our TFS Lab Management environments.</p>
<p>This is not an insignificant piece of work, as <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2013/09/29/Building-environments-for-Lab-Manager-Why-bare-metal-scripting-fails.aspx">this post by Rik Hepworth</a> discusses, detailing all the IT pro work he had to do to create them. This is all before we even think about the work required to create deployment TFS builds and the like.</p>
<p>It is well worth a read if you are planning to provision a library of VMs for Lab Management, as it has some really useful tips and tricks.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More on TF215106: Access denied from the TFS API after upgrade from 2012 to 2013</title>
      <link>https://blog.richardfennell.net/posts/more-on-tf215106-access-denied-from-the-tfs-api-after-upgrade-from-2012-to-2013/</link>
      <pubDate>Wed, 06 Nov 2013 15:23:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-tf215106-access-denied-from-the-tfs-api-after-upgrade-from-2012-to-2013/</guid>
      <description>&lt;p&gt;In my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/10/21/TF215106-Access-denied-from-the-TFS-API-after-upgrade-from-2012-to-2013.aspx&#34;&gt;previous post&lt;/a&gt; I thought I had fixed my problems with TF215106 errors&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;TF215106: Access denied. TYPHOONTFS\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator.&amp;rdquo;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Turns out I had not; actually, I have no idea why it worked for a while! There could well be an API version issue, but I had also missed that I needed to do what the error message said!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/10/21/TF215106-Access-denied-from-the-TFS-API-after-upgrade-from-2012-to-2013.aspx">previous post</a> I thought I had fixed my problems with TF215106 errors</p>
<blockquote>
<p>&ldquo;TF215106: Access denied. TYPHOONTFS\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator.&rdquo;</p></blockquote>
<p>Turns out I had not; actually, I have no idea why it worked for a while! There could well be an API version issue, but I had also missed that I needed to do what the error message said!</p>
<p>If you check <a href="http://msdn.microsoft.com/en-us/library/vstudio/ms252587.aspx#Tables">MSDN</a>, it tells you how to check the permissions for a given build; on checking I saw that the update build information permission was not set for the build in question.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_146.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_146.png" title="image"></a></p>
<p>Once I set it for the domain account my service was running as, everything worked as expected.</p>
<p>All I can assume is that there is a change from TFS 2012 to 2013 over defaulting this permission, as I have not needed to set it explicitly in the past.</p>
]]></content:encoded>
    </item>
    <item>
      <title>‘Upgrading’ my Lenovo W520 to Windows 8.1</title>
      <link>https://blog.richardfennell.net/posts/upgrading-my-lenovo-w520-to-windows-8-1/</link>
      <pubDate>Tue, 05 Nov 2013 16:35:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-my-lenovo-w520-to-windows-8-1/</guid>
      <description>&lt;p&gt;The upgrade experience for Windows 8 Enterprise is basically a reinstall, as you lose some many desktop apps and settings. So I decided to do a clean install, my PC was in need of a repave anyway.&lt;/p&gt;
&lt;p&gt;My PC has 3 disks, a bitlockered SSD for the OS, another bitlockered drive used for data and a non bitlockered drive I use for ISOs, downloads etc. Just to be on the safe side I removed the bitlocker encryption on my second drive, then I deleted the contents of my SSD and re-installed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The upgrade experience for Windows 8 Enterprise is basically a reinstall, as you lose some many desktop apps and settings. So I decided to do a clean install, my PC was in need of a repave anyway.</p>
<p>My PC has 3 disks, a bitlockered SSD for the OS, another bitlockered drive used for data and a non bitlockered drive I use for ISOs, downloads etc. Just to be on the safe side I removed the bitlocker encryption on my second drive, then I deleted the contents of my SSD and re-installed.</p>
<p>All seemed to go OK; the OS installed and then rebooted, but then hung. I then remembered the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/06/01/Windows-8-RP-and-my-Lenovo-W520.aspx">problems I had had installing Windows 8 RP if I did not have the Intel video card enabled</a> (I usually have it disabled as I find having both it and the Nvidia confuses projectors). Once I enabled both video cards the setup completed and I was able to join the PC to the domain and associate my LiveID with the domain account.</p>
<p>On checking device manager I saw a few items were missing drivers</p>
<ul>
<li>Microsoft basic display adapter – Just used Window update and it found the Nvidia driver</li>
<li>Base system device -  This needed the Ricoh Media Card Reader driver from the Lenovo site</li>
<li>PCI serial port -  This took a while to find but in the end I needed the Intel Management Engine Interface 7.1 and Serial Over LAN (SOL) Driver from the Lenovo site</li>
</ul>
<p>I also found, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/31/Audio-problem-on-Windows-8-RP-and-Lenovo-W520-with-Lync-2013.aspx">as before</a>, I needed to use Windows update to update the soundcard drivers; the default ones were working on the laptop, but not via the base station outputs.</p>
<p>I then set to installing my desktop applications. Other than Office and Visual Studio I tried to do this all with <a href="http://chocolatey.org/" title="http://chocolatey.org/">Chocolatey</a>, thus building up a script to make this easier in the future, e.g.:</p>
<blockquote>
<p>cinst poshgit<br>
cinst 7zip<br>
cinst notepadplusplus<br>
cinst WindowsLiveWriter<br>
cinst sysinternals<br>
cinst skype<br>
cinst git-tf<br>
cinst eclipse-standard-kepler</p></blockquote>
<p>This left me with one issue: I could not work out how to set up fingerprint login. In Windows 8 there was a Biometric icon in the Control Panel, but not in Windows 8.1. So I installed the Lenovo provided fingerprint tools, as I had to do in Windows 7. This allowed me to scan a fingerprint and associate it with an account. However, when I tried to login or unlock the screen with the fingerprint I got a message that ‘cannot use a fingerprint for a domain user’. Now in previous versions of the OS there was a checkbox on the Biometric control panel that set whether fingerprints could be used for domain accounts, but as I said before, this tool is not present in 8.1.</p>
<p>After much searching, using Google as Bing found nothing, <a href="http://technet.microsoft.com/en-us/library/dn344916.aspx">I found that you now have to enable fingerprint usage via Windows 8.1 Settings, not the Control Panel</a>. Once this was done my fingerprint scanner worked as expected.</p>
<p>So just need to re-bitlocker my PC and we are good to go.</p>
<p><strong>Update 6th Nov 2013 –</strong> I did have to disable the Intel graphics card again when I had done the install. When I tried to use external projectors with it enabled I got all the same problems I had seen on Win 7 and Win 8. The BIOS handling of the Nvidia and the Intel graphics in tandem just does not work with most projectors. It allows you to extend to another screen but not duplicate. Switching to just the discrete Nvidia adaptor is the only solution I have found. Remember to suspend bitlocker on your boot drive if it is enabled before you make the BIOS change, else you will be typing in long unlock codes.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Changes in Skydrive access on Windows 8.1</title>
      <link>https://blog.richardfennell.net/posts/changes-in-skydrive-access-on-windows-8-1/</link>
      <pubDate>Tue, 22 Oct 2013 17:38:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/changes-in-skydrive-access-on-windows-8-1/</guid>
      <description>&lt;p&gt;After &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/10/19/Upgrading-Windows-8-Media-Center-to-81.aspx&#34;&gt;upgrading to Windows 8.1 on my Media Center PC&lt;/a&gt; I have noticed a change in SkyDrive. The ‘upgrade’ process from 8 to 8.1 is really a reinstall of the OS and reapplication of Windows 8 applications. Some Windows desktop applications are removed. In the case of my Media Center PC the only desktop app installed was Windows Desktop Skydrive that I used to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/26/Thoughts-on-the-new-Skydrive.aspx&#34;&gt;sync photos from my MediaPC to the cloud&lt;/a&gt;. This is no longer needed as Windows 8.1 exposes the Skydrive files linked to the logged in LiveID as folders under the c:\user\[userid]\documents folder, just like the &lt;a href=&#34;http://windows.microsoft.com/en-GB/skydrive/download&#34;&gt;Windows Desktop client&lt;/a&gt; used to do.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/10/19/Upgrading-Windows-8-Media-Center-to-81.aspx">upgrading to Windows 8.1 on my Media Center PC</a> I have noticed a change in SkyDrive. The ‘upgrade’ process from 8 to 8.1 is really a reinstall of the OS and reapplication of Windows 8 applications. Some Windows desktop applications are removed. In the case of my Media Center PC the only desktop app installed was Windows Desktop Skydrive that I used to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/26/Thoughts-on-the-new-Skydrive.aspx">sync photos from my MediaPC to the cloud</a>. This is no longer needed as Windows 8.1 exposes the Skydrive files linked to the logged in LiveID as folders under the c:\user\[userid]\documents folder, just like the <a href="http://windows.microsoft.com/en-GB/skydrive/download">Windows Desktop client</a> used to do.</p>
<p>This means that though the old desktop Skydrive client has been removed, my existing timer-based jobs that back up files to the cloud by copying from a RAID5 box to the local Skydrive folder still work.</p>
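<p>(The jobs themselves need be nothing clever; a scheduled robocopy along these lines would do the same thing. The paths here are made up for illustration:)</p>
<blockquote>
<p>robocopy \\raidbox\photos &quot;C:\Users\[userid]\SkyDrive\photos&quot; /MIR /R:2 /W:5</p></blockquote>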
<p>A word of warning here though: don’t rely on this model as your only backup. There is <a href="http://arstechnica.com/security/2013/10/youre-infected-if-you-want-to-see-your-data-again-pay-us-300-in-bitcoins/">a lot of ransomware around at the moment</a> and if you aren&rsquo;t careful an infected PC can infect your automated cloud backup too. Make sure your cloud backup is versioned so you can roll back to a pre-infected file, and/or you have a more traditional offline backup too.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF215106: Access denied from the TFS API after upgrade from 2012 to 2013</title>
      <link>https://blog.richardfennell.net/posts/tf215106-access-denied-from-the-tfs-api-after-upgrade-from-2012-to-2013/</link>
      <pubDate>Mon, 21 Oct 2013 20:19:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf215106-access-denied-from-the-tfs-api-after-upgrade-from-2012-to-2013/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 6th Nov 2013&lt;/strong&gt; - Also see this updated post; the API mentioned here may be an issue, but the rights change in that other post is probably the real issue&lt;/p&gt;
&lt;p&gt;After upgrading a test server from TFS 2012 to 2013 I started getting the following exception when trying to set the retention for a build policy via the TFS API&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;TF215106: Access denied. TYPHOONTFS\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator.&amp;rdquo;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 6th Nov 2013</strong> - Also see this updated post; the API mentioned here may be an issue, but the rights change in that other post is probably the real issue</p>
<p>After upgrading a test server from TFS 2012 to 2013 I started getting the following exception when trying to set the retention for a build policy via the TFS API</p>
<blockquote>
<p>&ldquo;TF215106: Access denied. TYPHOONTFS\TFSService needs Update build information permissions for build definition ClassLibrary1.Main.Manual in team project Scrum to perform the action. For more information, contact the Team Foundation Server administrator.&rdquo;</p></blockquote>
<p>This was a surprising error; the code had been working OK and the TFSService account is, well, the service account, so has full rights.</p>
<p>The issue was I also needed to rebuild my application with the TFS 2013 API; once I rebuilt with the 2013 DLLs it all worked fine.</p>
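<p>For reference, the TFS 2012 object model assemblies are versioned 11.0.0.0 and the TFS 2013 ones 12.0.0.0, so the rebuild largely amounts to repointing assembly references along these lines (the exact reference entries will depend on your project):</p>
<blockquote>
<p>&lt;Reference Include=&quot;Microsoft.TeamFoundation.Client, Version=12.0.0.0&quot; /&gt;<br>
&lt;Reference Include=&quot;Microsoft.TeamFoundation.Build.Client, Version=12.0.0.0&quot; /&gt;</p></blockquote>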
]]></content:encoded>
    </item>
    <item>
      <title>A fix for power saving stopping my slow application installation</title>
      <link>https://blog.richardfennell.net/posts/a-fix-for-power-saving-stopping-my-slow-application-installation/</link>
      <pubDate>Mon, 21 Oct 2013 13:14:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-fix-for-power-saving-stopping-my-slow-application-installation/</guid>
      <description>&lt;p&gt;I am getting sick of the fact that the &lt;a href=&#34;http://www.samsung.com/us/computer/pcs/XE500T1C-A01US&#34;&gt;Samsung 500T tablet&lt;/a&gt; running Windows 8.1 I am installing applications on keeps going into sleep mode to save power. I start the install, leave it to run, and look back later to find it has been saving power and has started and stopped Wifi, and I have one very confused install. It is not as if it is anything weird, just Office 2013.&lt;/p&gt;
&lt;p&gt;So the workaround (as I admit I forgot to pick up its PSU this morning) is to pop it into Presenter Mode (via the Windows key + X and Mobility Center). This means it ignores power saving and just runs for me. The install finished fine and I am good to go&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am getting sick of the fact that the <a href="http://www.samsung.com/us/computer/pcs/XE500T1C-A01US">Samsung 500T tablet</a> running Windows 8.1 I am installing applications on keeps going into sleep mode to save power. I start the install, leave it to run, and look back later to find it has been saving power and has started and stopped Wifi, and I have one very confused install. It is not as if it is anything weird, just Office 2013.</p>
<p>So the workaround (as I admit I forgot to pick up its PSU this morning) is to pop it into Presenter Mode (via the Windows key + X and Mobility Center). This means it ignores power saving and just runs for me. The install finished fine and I am good to go</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can I use the HP ALM Synchronizer with TF Service?</title>
      <link>https://blog.richardfennell.net/posts/can-i-use-the-hp-alm-synchronizer-with-tf-service/</link>
      <pubDate>Mon, 21 Oct 2013 11:39:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/can-i-use-the-hp-alm-synchronizer-with-tf-service/</guid>
      <description>&lt;p&gt;I recently tried to get the free &lt;a href=&#34;https://hpln.hp.com/group/hp-alm-synchronizer&#34;&gt;HP ALM Synchronizer&lt;/a&gt; to link to Microsoft’s &lt;a href=&#34;http://tfs.visualstudio.com&#34;&gt;TF Service&lt;/a&gt;, the summary is it does not work. However, it took me a while to realise this.&lt;/p&gt;
&lt;p&gt;The HP ALM Synchronizer was designed for TFS 2008/2010 so the first issue you hit is that TF Services is today basically TFS 2013 (and is a moving goal post as it is updated so often). This means when you try to configure the TFS connection in HP ALM Synchronizer  it fails because it cannot see any TFS client it supports. This is fairly simple to address, just &lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=29082&#34;&gt;install Visual Studio Team Explorer 2010 and patch it up to date&lt;/a&gt; so that it can connect to TF Service (&lt;a href=&#34;http://tfs.visualstudio.com/learn/get-started&#34;&gt;you could go back to the 2008 and achieve the same if you really needed to&lt;/a&gt;)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently tried to get the free <a href="https://hpln.hp.com/group/hp-alm-synchronizer">HP ALM Synchronizer</a> to link to Microsoft’s <a href="http://tfs.visualstudio.com">TF Service</a>, the summary is it does not work. However, it took me a while to realise this.</p>
<p>The HP ALM Synchronizer was designed for TFS 2008/2010 so the first issue you hit is that TF Services is today basically TFS 2013 (and is a moving goal post as it is updated so often). This means when you try to configure the TFS connection in HP ALM Synchronizer  it fails because it cannot see any TFS client it supports. This is fairly simple to address, just <a href="http://www.microsoft.com/en-us/download/details.aspx?id=29082">install Visual Studio Team Explorer 2010 and patch it up to date</a> so that it can connect to TF Service (<a href="http://tfs.visualstudio.com/learn/get-started">you could go back to the 2008 and achieve the same if you really needed to</a>)</p>
<p>Once you have a suitably old client you can progress to the point that it asks you for your TFS login credentials. HP ALM Synchronizer validates that they are in the form DOMAIN\USER; this is a problem.</p>
<p>On TF Service you usually log in with a LiveID, which is a non-starter in this case. However, <a href="http://blogs.msdn.com/b/buckh/archive/2013/01/07/how-to-connect-to-tf-service-without-a-prompt-for-liveid-credentials.aspx">you can configure alternative credentials</a>, but these are in the form USER and PASSWORD. The string pattern verification on the credentials entry form in HP ALM Synchronizer does not accept them; it must have a domain and slash. I could not find any pattern that satisfies both TF Service and the HP ALM Synchronizer setup tool. So basically you are stuck.</p>
<p>So for my client we ended up moving to a TFS 2013 Basic Install on premises and it all worked fine; they could sync HP QC defects into TFS using the HP ALM Synchronizer, so they were happy.</p>
<p>However, is there a better solution? One might be to use a commercial product such as <a href="http://tasktop.com/sync">Tasktop Sync</a>, which is designed to provide synchronisation services between a whole range of ALM-like products. I need to find out whether it supports TF Service yet.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading Windows 8 Media Center to 8.1</title>
      <link>https://blog.richardfennell.net/posts/upgrading-windows-8-media-center-to-8-1/</link>
      <pubDate>Sat, 19 Oct 2013 00:19:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-windows-8-media-center-to-8-1/</guid>
      <description>&lt;p&gt;Just done the update of my Windows 8 Media Center to 8.1. The first issue was that I looked in the store and could not see the update. I was looking for an ‘update’ in the top right, not a huge application entry in the middle of the screen. Strange how you can miss the obvious.&lt;/p&gt;
&lt;p&gt;The download took about 30 minutes, which worked in the background whilst I was using Media Center. It then did a reboot, and the main install of files took about 30 minutes too. It seemed to sit at 85% for a long time, then another reboot. A few ‘preparing devices’ messages and ‘getting ready’ for a good 10 minutes, then ‘Applying PC settings’, another reboot, then ‘setting up a few more things’. I then had to accept some licenses and log in with a LiveID, and eventually it started and everything was working.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just done the update of my Windows 8 Media Center to 8.1. The first issue was that I looked in the store and could not see the update. I was looking for an ‘update’ in the top right, not a huge application entry in the middle of the screen. Strange how you can miss the obvious.</p>
<p>The download took about 30 minutes, which worked in the background whilst I was using Media Center. It then did a reboot, and the main install of files took about 30 minutes too. It seemed to sit at 85% for a long time, then another reboot. A few ‘preparing devices’ messages and ‘getting ready’ for a good 10 minutes, then ‘Applying PC settings’, another reboot, then ‘setting up a few more things’. I then had to accept some licenses and log in with a LiveID, and eventually it started and everything was working.</p>
<p>I did not time it exactly but I think about 90 minutes all told.</p>
<p><strong>Update 4th Nov 2013</strong> – I have been away on holiday and came back to find not much recorded by my Media Center. The problem was that the EPG was not updating, showing ‘No data’ for about 70% of my channels (all the BBC ones plus some others); it turns out the upgrade corrupts/loses the EPG mappings. <a href="http://www.microsoft.com/en-us/windows/compatibility/CompatCenter/ProductDetailsViewer?Name=Microsoft%20Windows%20Media%20Center&amp;vendor=Microsoft&amp;ModelOrVersion=6&amp;Type=Software&amp;tempOsid=Windows&#43;8.1">The fix is to do a full channel rescan</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Problems with Microsoft Fake Stubs and IronPython</title>
      <link>https://blog.richardfennell.net/posts/problems-with-microsoft-fake-stubs-and-ironpython/</link>
      <pubDate>Tue, 15 Oct 2013 14:45:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-with-microsoft-fake-stubs-and-ironpython/</guid>
      <description>&lt;p&gt;I have been changing the mocking framework used on a project I am planning to open source. Previously it had been using &lt;a href=&#34;http://www.typemock.com&#34;&gt;Typemock&lt;/a&gt; to mock out items in the TFS API. This had been working well but used features of the toolset that are only available on the licensed product (not the free version). As I don’t like to publish tests people cannot run I thought it best to swap to &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/jj159340.aspx&#34;&gt;Microsoft Fakes&lt;/a&gt; as there is a better chance any user will have a version of Visual Studio that provides this toolset.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been changing the mocking framework used on a project I am planning to open source. Previously it had been using <a href="http://www.typemock.com">Typemock</a> to mock out items in the TFS API. This had been working well but used features of the toolset that are only available on the licensed product (not the free version). As I don’t like to publish tests people cannot run I thought it best to swap to <a href="http://msdn.microsoft.com/en-us/library/jj159340.aspx">Microsoft Fakes</a> as there is a better chance any user will have a version of Visual Studio that provides this toolset.</p>
<p>Most of the changes were straightforward but I hit a problem when I tried to run a test that returned a TFS IBuildDetails object for use inside an IronPython DSL.</p>
<p>My working Typemock-based test was as follows:</p>
<pre><code>[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();

    var tfsProvider = Isolate.Fake.Instance&lt;ITfsProvider&gt;();
    var emailProvider = Isolate.Fake.Instance&lt;IEmailProvider&gt;();
    var build = Isolate.Fake.Instance&lt;IBuildDetail&gt;();

    var testUri = new Uri("vstfs:///Build/Build/123");
    Isolate.WhenCalled(() =&gt; build.Uri).WillReturn(testUri);
    Isolate.WhenCalled(() =&gt; build.Quality).WillReturn("Test Quality");
    Isolate.WhenCalled(() =&gt; tfsProvider.GetBuildDetails(null)).WillReturn(build);

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsltfsloadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}
</code></pre>
<p>Swapping to Fakes, I got:</p>
<pre><code>[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new StubIBuildDetail()
                    {
                        UriGet = () =&gt; testUri,
                        QualityGet = () =&gt; "Test Quality",
                    };
    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) =&gt; (IBuildDetail)build
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsltfsloadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}
</code></pre>
<p>But this gave me the error</p>
<pre><code>Test Name:    Can_use_Dsl_to_get_build_details
Test FullName:    TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details
Test Source:    c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs : line 121
Test Outcome:    Failed
Test Duration:    0:00:01.619

Result Message:    System.MissingMemberException : 'StubIBuildDetail' object has no attribute 'Uri'
Result StackTrace:
at IronPython.Runtime.Binding.PythonGetMemberBinder.FastErrorGet`1.GetError(CallSite site, TSelfType target, CodeContext context)
at System.Dynamic.UpdateDelegates.UpdateAndExecute2[T0,T1,TRet](CallSite site, T0 arg0, T1 arg1)
at Microsoft.Scripting.Interpreter.DynamicInstruction`3.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.Interpreter.Run(InterpretedFrame frame)
at Microsoft.Scripting.Interpreter.LightLambda.Run2[T0,T1,TRet](T0 arg0, T1 arg1)
at IronPython.Compiler.PythonScriptCode.RunWorker(CodeContext ctx)
at IronPython.Compiler.PythonScriptCode.Run(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.InvokeTarget(Scope scope)
at IronPython.Compiler.RuntimeScriptCode.Run(Scope scope)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope, ErrorSink errorSink)
at Microsoft.Scripting.SourceUnit.Execute(Scope scope)
at Microsoft.Scripting.Hosting.ScriptSource.Execute(ScriptScope scope)
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, Dictionary`2 args, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 78
at TFSEventsProcessor.Dsl.DslProcessor.RunScript(String scriptname, ITfsProvider iTfsProvider, IEmailProvider iEmailProvider) in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor\Dsl\DslProcessor.cs:line 31
at TFSEventsProcessor.Tests.Dsl.DslTfsProcessingTests.Can_use_Dsl_to_get_build_details() in c:\Projects\tfs2012\TFS\TFSEventsProcessor\Main\Src\WorkItemEventProcessor.Tests\Dsl\DslTfsProcessingTests.cs:line 141
</code></pre>
<p>If I altered my test to call the C# DSL library directly, rather than via the IronPython DSL, the error went away. So the issue lay in the dynamic IronPython engine – not something I am even going to think of trying to fix.</p>
<p>So I swapped the definition of the mock IBuildDetail to use <a href="http://code.google.com/p/moq/">Moq</a> (I could have used the free version of Typemock or any other framework) instead of a Microsoft Fakes stub, and the problem went away.</p>
<p>So I had</p>
<pre><code>[Test]
public void Can_use_Dsl_to_get_build_details()
{
    // arrange
    var consoleOut = Helpers.Logging.RedirectConsoleOut();
    var testUri = new Uri("vstfs:///Build/Build/123");

    var emailProvider = new Providers.Fakes.StubIEmailProvider();
    var build = new Moq.Mock&lt;IBuildDetail&gt;();
    build.Setup(b =&gt; b.Uri).Returns(testUri);
    build.Setup(b =&gt; b.Quality).Returns("Test Quality");

    var tfsProvider = new Providers.Fakes.StubITfsProvider()
    {
        GetBuildDetailsUri = (uri) =&gt; build.Object
    };

    // act
    TFSEventsProcessor.Dsl.DslProcessor.RunScript(@"dsltfsloadbuild.py", tfsProvider, emailProvider);

    // assert
    Assert.AreEqual("Build 'vstfs:///Build/Build/123' has the quality 'Test Quality'" + Environment.NewLine, consoleOut.ToString());
}
</code></pre>
<p>So I have a working solution, but it is a bit of a mess: I am using Fakes stubs and Moq in the same test. There is no good reason not to swap all the mocking of interfaces to Moq, so going forward on this project I will only use Microsoft Fakes for shims, to mock out items such as TFS WorkItem objects which have no public constructor.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Links from my DDDNorth session ‘Automated Build Is Not The End Of The Story’</title>
      <link>https://blog.richardfennell.net/posts/links-from-my-dddnorth-session-automated-build-is-not-the-end-of-the-story/</link>
      <pubDate>Sun, 13 Oct 2013 20:14:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/links-from-my-dddnorth-session-automated-build-is-not-the-end-of-the-story/</guid>
      <description>&lt;p&gt;Thanks to everyone who came to my DDDNorth session ‘Automated Build Is Not The End Of The Story’, the links to the tools I discussed are as follows&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Visual Studio Lab Management&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;MSDN &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/dd997438.aspx&#34;&gt;http://msdn.microsoft.com/en-us/library/dd997438.aspx&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ALM Rangers best practice guidance &lt;a href=&#34;http://vsarlabman.codeplex.com/&#34;&gt;http://vsarlabman.codeplex.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Building Environments&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SC-VMM &lt;a href=&#34;http://technet.microsoft.com/en-us/library/gg610610.aspx&#34;&gt;http://technet.microsoft.com/en-us/library/gg610610.aspx&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ALM Rangers VM Factory &lt;a href=&#34;http://vsarvmfactory.codeplex.com/&#34;&gt;http://vsarvmfactory.codeplex.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Chef &lt;a href=&#34;http://www.opscode.com/chef/&#34;&gt;http://www.opscode.com/chef/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Puppet &lt;a href=&#34;http://puppetlabs.com/&#34;&gt;http://puppetlabs.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deployment&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ALM Rangers Test Infrastructure Guidance &lt;a href=&#34;http://tig.codeplex.com/&#34;&gt;http://tig.codeplex.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ALM Rangers DevOps Workbench Express Edition &lt;a href=&#34;http://vsardevops.codeplex.com/releases/view/111294&#34;&gt;http://vsardevops.codeplex.com/releases/view/111294&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Octopus Deploy &lt;a href=&#34;http://octopusdeploy.com/&#34;&gt;http://octopusdeploy.com/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Buildmaster &lt;a href=&#34;http://inedo.com/buildmaster/overview&#34;&gt;http://inedo.com/buildmaster/overview&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;InRelease &lt;a href=&#34;http://www.microsoft.com/visualstudio/inrelease/&#34;&gt;http://www.microsoft.com/visualstudio/inrelease/&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who came to my DDDNorth session ‘Automated Build Is Not The End Of The Story’, the links to the tools I discussed are as follows</p>
<ul>
<li>
<p>Visual Studio Lab Management</p>
</li>
<li>
<p>MSDN <a href="http://msdn.microsoft.com/en-us/library/dd997438.aspx">http://msdn.microsoft.com/en-us/library/dd997438.aspx</a></p>
</li>
<li>
<p>ALM Rangers best practice guidance <a href="http://vsarlabman.codeplex.com/">http://vsarlabman.codeplex.com/</a></p>
</li>
<li>
<p>Building Environments</p>
</li>
<li>
<p>SC-VMM <a href="http://technet.microsoft.com/en-us/library/gg610610.aspx">http://technet.microsoft.com/en-us/library/gg610610.aspx</a></p>
</li>
<li>
<p>ALM Rangers VM Factory <a href="http://vsarvmfactory.codeplex.com/">http://vsarvmfactory.codeplex.com/</a></p>
</li>
<li>
<p>Chef <a href="http://www.opscode.com/chef/">http://www.opscode.com/chef/</a></p>
</li>
<li>
<p>Puppet <a href="http://puppetlabs.com/">http://puppetlabs.com/</a></p>
</li>
<li>
<p>Deployment</p>
</li>
<li>
<p>ALM Rangers Test Infrastructure Guidance <a href="http://tig.codeplex.com/">http://tig.codeplex.com/</a></p>
</li>
<li>
<p>ALM Rangers DevOps Workbench Express Edition <a href="http://vsardevops.codeplex.com/releases/view/111294">http://vsardevops.codeplex.com/releases/view/111294</a></p>
</li>
<li>
<p>Octopus Deploy <a href="http://octopusdeploy.com/">http://octopusdeploy.com/</a></p>
</li>
<li>
<p>Buildmaster <a href="http://inedo.com/buildmaster/overview">http://inedo.com/buildmaster/overview</a></p>
</li>
<li>
<p>InRelease <a href="http://www.microsoft.com/visualstudio/inrelease/">http://www.microsoft.com/visualstudio/inrelease/</a></p>
</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Get rid of that zombie build</title>
      <link>https://blog.richardfennell.net/posts/get-rid-of-that-that-zombie-build/</link>
      <pubDate>Thu, 10 Oct 2013 22:03:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/get-rid-of-that-that-zombie-build/</guid>
      <description>&lt;p&gt;Whilst upgrading a TFS 2010 server to 2012 I had a problem: a build was showing in the queue as active after the upgrade. This build had been queued in January, 10 months earlier, so should have finished a long time ago. It blocked any newly queued builds, yet did not appear to be running on any agent – a zombie build.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst upgrading a TFS 2010 server to 2012 I had a problem: a build was showing in the queue as active after the upgrade. This build had been queued in January, 10 months earlier, so should have finished a long time ago. It blocked any newly queued builds, yet did not appear to be running on any agent – a zombie build.</p>
<p>I tried to stop it, delete it, everything I could think of, all to no effect. It would not go away.</p>
<p>In the end I had to resort to the brute-force solution of deleting the build&rsquo;s rows from the TPC&rsquo;s SQL database, in both the tbl_BuildQueue (using the QueueID) and tbl_Build (using the BuildID) tables.</p>
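<p>For the record, the cleanup amounted to two deletes against the collection database. A minimal sketch only – the server, database name and IDs below are placeholders, exact column names may differ between TFS versions, and directly editing the operational store is unsupported, so back up the TPC database first:</p>
<pre><code># Placeholders throughout; requires the SQL Server PowerShell module for Invoke-Sqlcmd
$db = @{ ServerInstance = 'SQLSERVER'; Database = 'Tfs_DefaultCollection' }

# Remove the zombie entry from the queue, then the matching build record
Invoke-Sqlcmd @db -Query 'DELETE FROM tbl_BuildQueue WHERE QueueId = 42'
Invoke-Sqlcmd @db -Query 'DELETE FROM tbl_Build WHERE BuildId = 123'
</code></pre>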
]]></content:encoded>
    </item>
    <item>
      <title>Fix for - Could not load file or assembly &#39;Microsoft.VisualStudio.Shell’ on TFS 2010 Build Controller</title>
      <link>https://blog.richardfennell.net/posts/fix-for-could-not-load-file-or-assembly-microsoft-visualstudio-shell-on-tfs-2010-build-controller/</link>
      <pubDate>Thu, 10 Oct 2013 21:47:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-could-not-load-file-or-assembly-microsoft-visualstudio-shell-on-tfs-2010-build-controller/</guid>
      <description>&lt;p&gt;I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/17/what-to-do-when-your-tfs-build-agent-says-it-is-ready-but-the-icon-says-it-is-not.aspx&#34;&gt;previously posted about when TFS build controllers don’t start properly&lt;/a&gt;. Well I saw the same problem today whilst upgrading a TFS 2010 server to TFS 2012.3. The client did not want to immediately upgrade their build processes and decided to keep their 2010 build VMs just pointing them at the updated server (remember TFS 2012.2 and later servers can support either 2012 or 2010 build controllers).&lt;/p&gt;
&lt;p&gt;The problem was that when we restarted the build service the controller and agents appeared to start, but then we got a burst of errors in the event log and we saw the controller say it was ready, but have the stopped icon.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/17/what-to-do-when-your-tfs-build-agent-says-it-is-ready-but-the-icon-says-it-is-not.aspx">previously posted about when TFS build controllers don’t start properly</a>. Well I saw the same problem today whilst upgrading a TFS 2010 server to TFS 2012.3. The client did not want to immediately upgrade their build processes and decided to keep their 2010 build VMs just pointing them at the updated server (remember TFS 2012.2 and later servers can support either 2012 or 2010 build controllers).</p>
<p>The problem was that when we restarted the build service the controller and agents appeared to start, but then we got a burst of errors in the event log and we saw the controller say it was ready, but have the stopped icon.</p>
<p>On checking the Windows error log we saw the issue was it could not load the assembly</p>
<blockquote>
<p><em>Exception Message: Problem with loading custom assemblies: Could not load file or assembly &lsquo;Microsoft.VisualStudio.Shell, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a&rsquo; or one of its dependencies. The system cannot find the file specified. (type Exception)</em></p></blockquote>
<p>It turned out this was because the <strong><em>StyleCop.VSPackage.dll</em></strong> had been checked into the build controller&rsquo;s <strong>CustomAssemblies</strong> folder (how and why we never found out; why it had not failed before was also unclear, as it was checked in about 6 weeks ago!). Anyway, as soon as the DLL was removed from the custom assemblies folder, leaving just the StyleCop files from the <strong>C:\Program Files (x86)\StyleCop 4.7</strong> folder, all was OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>And the most ridiculous packaging award of the day goes to…</title>
      <link>https://blog.richardfennell.net/posts/and-the-most-ridiculous-packaging-award-of-the-day-goes-to/</link>
      <pubDate>Mon, 30 Sep 2013 12:32:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-the-most-ridiculous-packaging-award-of-the-day-goes-to/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://cpc.farnell.com/&#34;&gt;CPC&lt;/a&gt;, who managed to send two resistor-capacitor balances for LED lights, which are about 1cm in size&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/adawson/image.axd?picture=Capacitor_Resistor_Package.jpg&#34;&gt;&lt;/p&gt;
&lt;p&gt;in a box as shown here&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_144.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_144.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Other than loads of that inflatable packing material there was a huge CPC catalogue, with the interesting sticker&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_145.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_145.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;So an online electronics company, that provides free shipping (a really good thing when the components were only a couple of £s), chose to also send a very heavy catalogue that they know is out of date, when I have already used their quick and easy web site.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://cpc.farnell.com/">CPC</a>, who managed to send two resistor-capacitor balances for LED lights, which are about 1cm in size</p>
<p><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/adawson/image.axd?picture=Capacitor_Resistor_Package.jpg"></p>
<p>in a box as shown here</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_144.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_144.png" title="image"></a></p>
<p>Other than loads of that inflatable packing material there was a huge CPC catalogue, with the interesting sticker</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_145.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_145.png" title="image"></a></p>
<p>So an online electronics company, that provides free shipping (a really good thing when the components were only a couple of £s), chose to also send a very heavy catalogue that they know is out of date, when I have already used their quick and easy web site.</p>
<p>You have to wonder why?</p>
]]></content:encoded>
    </item>
    <item>
      <title>‘TF400499: You have not set your team field’  when trying to update Team Settings via the TFS API</title>
      <link>https://blog.richardfennell.net/posts/tf400499-you-have-not-set-your-team-field-when-trying-to-update-team-settings-via-the-tfs-api/</link>
      <pubDate>Mon, 16 Sep 2013 12:50:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf400499-you-have-not-set-your-team-field-when-trying-to-update-team-settings-via-the-tfs-api/</guid>
      <description>&lt;p&gt;I have recently been automating TFS admin processes such as creating a new team within an existing team project. The TFS team is now our primary means we use to segment work so we need to create new teams fairly often, so automation makes good sense.&lt;/p&gt;
&lt;p&gt;As far as I can see, there are no command line tools, like TF.EXE or WITADMIN.EXE, to do most of the team operations. They are usually browser based. So I have been using PowerShell and the TFS API for the automation.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been automating TFS admin processes such as creating a new team within an existing team project. The TFS team is now our primary means we use to segment work so we need to create new teams fairly often, so automation makes good sense.</p>
<p>As far as I can see, there are no command line tools, like TF.EXE or WITADMIN.EXE, to do most of the team operations. They are usually browser based. So I have been using PowerShell and the TFS API for the automation.</p>
<p>I hit a problem when trying to save the team’s backlog iteration, iteration paths etc. for a newly created team using the SetTeamSettings method. I was seeing:</p>
<pre><code>Exception calling "SetTeamSettings" with "2" argument(s): "TF400499: You have not set your team field."
At C:\Projects\tfs2012\TFS\PowerShell\Set-TfsTeamConfig.ps1:37 char:1
+ $configSvc.SetTeamSettings($configs[0].TeamId , $configs[0].TeamSettings)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : SoapException
</code></pre>
<p>This was strange, as I had previously tested my PowerShell script with hard-coded values to update an existing team without error. When I inspected the parameters being passed, all looked OK: I had set a value for the team field, and the read-only property defining where to store the team data (the area path) was correct.</p>
<p>After far too long I realised the problem was that I had set the team field to the correct value, e.g. ‘My project\My team’, but I had not created this area path before trying to reference it. Once I had created the required area path my script worked as expected.</p>
<p>So the TF400499 error is a little misleading: it does not mean ‘not set’ but ‘not set to a valid value’.</p>
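<p>In script terms, the fix was simply to create the area path before saving the team settings that reference it. A rough sketch of the order of operations – the server URL, project and team names are placeholders, $configSvc and $configs are the variables from the error above, and the area-path calls use the ICommonStructureService4 interface from the TFS 2012 API:</p>
<pre><code># Assumes the TFS 2012 client assemblies are already loaded
$tpc = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection(
    "http://tfsserver:8080/tfs/DefaultCollection")
$css = $tpc.GetService([Microsoft.TeamFoundation.Server.ICommonStructureService4])

# 1. Create the area path the team field will point at
$project = $css.GetProjectFromName("My project")
$areaRoot = $css.ListStructures($project.Uri) |
    Where-Object { $_.StructureType -eq "ProjectModelHierarchy" }
$css.CreateNode("My team", $areaRoot.Uri) | Out-Null

# 2. Only now save the team settings that reference 'My project\My team'
$configSvc.SetTeamSettings($configs[0].TeamId, $configs[0].TeamSettings)
</code></pre>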
]]></content:encoded>
    </item>
    <item>
      <title>Moving from Ubuntu to Mint for my TEE demos</title>
      <link>https://blog.richardfennell.net/posts/moving-from-ubuntu-to-mint-for-my-tee-demos/</link>
      <pubDate>Tue, 10 Sep 2013 21:50:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-from-ubuntu-to-mint-for-my-tee-demos/</guid>
      <description>&lt;p&gt;I posted a while ago about the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/07/DHCP-does-not-seem-to-work-on-Ubuntu-for-wireless-based-Hyper-V-virtual-switches.aspx&#34;&gt;problems with DHCP using a Hyper-V virtual switch with WIFI and an Ubuntu VM&lt;/a&gt;, well I never found a good solution without hard coding IP addresses.&lt;/p&gt;
&lt;p&gt;I recently tried using &lt;a href=&#34;http://www.linuxmint.com/download.php&#34;&gt;Mint 15&lt;/a&gt; and was pleased to see it did not suffer the same problems; it seems happy with DHCP over Hyper-V virtual switches. I think I will give it a go for any cross-platform &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/gg413285.aspx&#34;&gt;TEE&lt;/a&gt; demos I need for a while.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted a while ago about the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/07/DHCP-does-not-seem-to-work-on-Ubuntu-for-wireless-based-Hyper-V-virtual-switches.aspx">problems with DHCP using a Hyper-V virtual switch with WIFI and an Ubuntu VM</a>, well I never found a good solution without hard coding IP addresses.</p>
<p>I recently tried using <a href="http://www.linuxmint.com/download.php">Mint 15</a> and was pleased to see it did not suffer the same problems; it seems happy with DHCP over Hyper-V virtual switches. I think I will give it a go for any cross-platform <a href="http://msdn.microsoft.com/en-us/library/gg413285.aspx">TEE</a> demos I need for a while.</p>
<p><strong>Update</strong>: Just noticed that I still get a DHCP problem with Mint when I connect to my dual-band Netgear N600 router via 2.4GHz, but not when I use the same router via 5GHz. I just know I am not at the bottom of this problem yet!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where did that email go?</title>
      <link>https://blog.richardfennell.net/posts/where-did-that-email-go/</link>
      <pubDate>Tue, 10 Sep 2013 12:05:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-did-that-email-go/</guid>
      <description>&lt;p&gt;We use the TFS Alerts system to signal to our teams what state project builds are at. So when a developer changes a build quality to ‘ready for test’ an email is sent to everyone in the team and we make sure the build retention policy is set to keep. Now this is not the standard behaviour of the TFS build alerts system, so we do all this by calling a SOAP-based web service which in turn uses the TFS API.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We use the TFS Alerts system to signal to our teams what state project builds are at. So when a developer changes a build quality to ‘ready for test’ an email is sent to everyone in the team and we make sure the build retention policy is set to keep. Now this is not the standard behaviour of the TFS build alerts system, so we do all this by calling a SOAP-based web service which in turn uses the TFS API.</p>
<p>This had all been working well until we did some tidying up and patching on our Exchange server. The new behaviour was:</p>
<ul>
<li>Email sent directly via SMTP by the TFS Alert system  worked</li>
<li>Email sent via our web service called by the TFS Alert system disappeared, but no errors were shown</li>
</ul>
<p>As far as we could see, emails were leaving our web service (which was running as the same domain service account as TFS, but in its own AppPool) and dying inside our email system – we presumed due to some spam filter rule.</p>
<p>After a bit of digging we spotted the real problem.</p>
<p>If you look at the advanced settings of the TFS Alerts email configuration, it points out that if you don’t supply credentials for the SMTP server it passes those of the TFS service process</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_143.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_143.png" title="image"></a></p>
<p>Historically our internal SMTP server had allowed anonymous posting, so this was not an issue, but after our tidy-up it required authentication, so this setting became important.</p>
<p>We thought this should not be an issue, as the TFS service account was correctly registered in Exchange and it was working for the TFS-generated alert emails. However, on checking the code of the web service we noticed a vital missing line: we were not setting the credentials on the message, leaving it as anonymous, so the email was being blocked.</p>
<pre><code>using (var msg = new MailMessage())
{
    msg.To.Add(to);
    msg.From = new MailAddress(this.fromAddress);
    msg.Subject = subject;
    msg.IsBodyHtml = true;
    msg.Body = body;
    using (var client = new SmtpClient(this.smptServer))
    {
        client.Credentials = CredentialCache.DefaultNetworkCredentials;
        client.Send(msg);
    }
}
</code></pre>
<p>Once this line was added and the web service redeployed, it worked as expected again</p>
]]></content:encoded>
    </item>
    <item>
      <title>Maybe I should have got the Nokia 925</title>
      <link>https://blog.richardfennell.net/posts/maybe-i-should-have-got-the-nokia-925/</link>
      <pubDate>Sun, 01 Sep 2013 12:13:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/maybe-i-should-have-got-the-nokia-925/</guid>
      <description>&lt;p&gt;I was at &lt;a href=&#34;http://www.livefromjodrellbank.com/news/sigur-ros-friday-30th-august/&#34;&gt;Jodrell Bank to see Sigur Ros&lt;/a&gt; last Friday. I could really have done with the low-light features of the Nokia 925 as opposed to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/08/21/A-week-with-a-Nokia-820.aspx&#34;&gt;my 820&lt;/a&gt;. The shots I took of the projected light show on the radio telescope dish were not as good as I hoped.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20130830_016.jpg&#34;&gt;&lt;img alt=&#34;WP_20130830_016&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20130830_016_thumb.jpg&#34; title=&#34;WP_20130830_016&#34;&gt;&lt;/a&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20130830_015.jpg&#34;&gt;&lt;img alt=&#34;WP_20130830_015&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20130830_015_thumb.jpg&#34; title=&#34;WP_20130830_015&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;But the panorama earlier in the evening worked well&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20130830_19_47_33_Panorama.jpg&#34;&gt;&lt;img alt=&#34;WP_20130830_19_47_33_Panorama&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_20130830_19_47_33_Panorama_thumb.jpg&#34; title=&#34;WP_20130830_19_47_33_Panorama&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was at <a href="http://www.livefromjodrellbank.com/news/sigur-ros-friday-30th-august/">Jodrell Bank to see Sigur Ros</a> last Friday. I could have really done with the low-light features of the Nokia 925 as opposed to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/08/21/A-week-with-a-Nokia-820.aspx">my 820</a>. The shots I took of the projected light show onto the radio telescope disk were not as good as I had hoped.</p>
<p><a href="/wp-content/uploads/sites/2/historic/WP_20130830_016.jpg"><img alt="WP_20130830_016" loading="lazy" src="/wp-content/uploads/sites/2/historic/WP_20130830_016_thumb.jpg" title="WP_20130830_016"></a><a href="/wp-content/uploads/sites/2/historic/WP_20130830_015.jpg"><img alt="WP_20130830_015" loading="lazy" src="/wp-content/uploads/sites/2/historic/WP_20130830_015_thumb.jpg" title="WP_20130830_015"></a></p>
<p>But the panorama earlier in the evening worked well</p>
<p><a href="/wp-content/uploads/sites/2/historic/WP_20130830_19_47_33_Panorama.jpg"><img alt="WP_20130830_19_47_33_Panorama" loading="lazy" src="/wp-content/uploads/sites/2/historic/WP_20130830_19_47_33_Panorama_thumb.jpg" title="WP_20130830_19_47_33_Panorama"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>DDDNorth registration is now open</title>
      <link>https://blog.richardfennell.net/posts/dddnorth-registration-is-now-open/</link>
      <pubDate>Thu, 29 Aug 2013 16:21:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dddnorth-registration-is-now-open/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.dddnorth.co.uk/Home/Register&#34;&gt;DDDNorth registration&lt;/a&gt; is now open, and I am sure it will soon close and move to a wait list. So if you want to attend, I would get in early.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.dddnorth.co.uk/Home/Register">DDDNorth registration</a> is now open, and I am sure it will soon close and move to a wait list. So if you want to attend, I would get in early.</p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Review of ‘Software Testing using Visual Studio 2012’ from Packt Publishing</title>
      <link>https://blog.richardfennell.net/posts/review-of-software-testing-using-visual-studio-2012-from-packt-publishing/</link>
      <pubDate>Wed, 28 Aug 2013 10:01:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/review-of-software-testing-using-visual-studio-2012-from-packt-publishing/</guid>
      <description>&lt;p&gt;I have just been reading &lt;a href=&#34;http://www.packtpub.com/software-testing-using-visual-studio-2012/book&#34;&gt;Software Testing using Visual Studio 2012&lt;/a&gt; by &lt;a href=&#34;http://www.packtpub.com/authors/profiles/subashni-s&#34;&gt;Subashni. S&lt;/a&gt; and &lt;a href=&#34;http://www.packtpub.com/authors/profiles/n-satheesh-kumar-0&#34;&gt;Satheesh Kumar. N&lt;/a&gt; from Packt Publishing&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/9540EN_cov-stwvs2012.jpg&#34;&gt;&lt;img alt=&#34;9540EN_cov-stwvs2012&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/9540EN_cov-stwvs2012_thumb.jpg&#34; title=&#34;9540EN_cov-stwvs2012&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This book does what it says on the cover: it is a general introduction to the testing tools within the Visual Studio 2012 family. My comment is not about how well it is done, as it is a clear enough introduction, but why produce a book that really just covers what is in &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/aa718325.aspx&#34;&gt;MSDN&lt;/a&gt;, &lt;a href=&#34;http://channel9.msdn.com/&#34;&gt;Channel9&lt;/a&gt;, numerous podcasts, blogs and &lt;a href=&#34;http://blogs.msdn.com/b/willy-peter_schaub/archive/2013/05/16/visual-studio-alm-ranger-solutions-catalog.aspx&#34;&gt;ALM Rangers documentation&lt;/a&gt;?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just been reading <a href="http://www.packtpub.com/software-testing-using-visual-studio-2012/book">Software Testing using Visual Studio 2012</a> by <a href="http://www.packtpub.com/authors/profiles/subashni-s">Subashni. S</a> and <a href="http://www.packtpub.com/authors/profiles/n-satheesh-kumar-0">Satheesh Kumar. N</a> from Packt Publishing</p>
<p><a href="/wp-content/uploads/sites/2/historic/9540EN_cov-stwvs2012.jpg"><img alt="9540EN_cov-stwvs2012" loading="lazy" src="/wp-content/uploads/sites/2/historic/9540EN_cov-stwvs2012_thumb.jpg" title="9540EN_cov-stwvs2012"></a></p>
<p>This book does what it says on the cover: it is a general introduction to the testing tools within the Visual Studio 2012 family. My comment is not about how well it is done, as it is a clear enough introduction, but why produce a book that really just covers what is in <a href="http://msdn.microsoft.com/en-us/vstudio/aa718325.aspx">MSDN</a>, <a href="http://channel9.msdn.com/">Channel9</a>, numerous podcasts, blogs and <a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2013/05/16/visual-studio-alm-ranger-solutions-catalog.aspx">ALM Rangers documentation</a>?</p>
<p>I suppose this is a question of target audience; some people like to browse a physical book for ‘new’ technology, and I can see that (though I tried it on a Kindle, more of that later). This book certainly does cover the core areas, but sits strangely between a technology briefing for a manager/person who just needs an overview (it is all a bit long-winded, listing all the features and flags of the tools) and not enough detail for the practitioner (the exercises do not go deep enough, unlike those provided by <a href="http://blogs.msdn.com/b/briankel/archive/2013/08/02/visual-studio-2013-application-lifecycle-management-virtual-machine-and-hands-on-labs-demo-scripts.aspx">Microsoft in the Brian Keller VS/TFS demo VM series</a>)</p>
<p>Given this concern, I wonder who the target audience really is.</p>
<p>A real issue here is that Microsoft have gone to quarterly updates, so the product is always advancing faster than any print book can manage (Microsoft’s own MSDN documentation has enough problems keeping up, and frequently is playing catch-up). For a book on testing this is a major problem, as ‘test’ has been a key focus for the updates. This means that when the book’s content is compared to Visual Studio/TFS 2012.3 (the current shipping version at the time of this review) there are major features missing, such as:</p>
<ul>
<li>The improvements in Test Explorer to support non-Microsoft test frameworks, playlists etc.</li>
<li>SKU changes in licensing, MTM dropping down to Premium from Ultimate</li>
<li>Azure based load testing</li>
<li>The test experience in the web browser (as opposed to MTM)</li>
</ul>
<p>The list will always grow while Microsoft stick to their newer, faster release cycle. This was not too much of a problem when Microsoft shipped every couple of years (each release a new book opportunity), but now how can any book try to keep up with a 12-week cycle?</p>
<p>One option you would think is Kindle, or eBooks in general, as at least the book can be updated. However, there is still the issue of the extra effort required of the authors and editors, so in general I find these updates are not that common. The authors will usually have moved onto their next project and not be focused on yet another unpaid update to a book they published last quarter.</p>
<p>As to my experience on the Kindle, this was the first technical book I have read on one. I have used the Kindle app on a phone for a couple of years for my novel reading, but always felt the screen was too small for anything that might have a diagram in it. I recently bought a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/07/20/Experiences-with-a-Kindle-Paperwhite.aspx">Kindle Paperwhite</a> so thought I would give this book a go on it. I initially tried to email the book from the Packt site straight to my Kindle, but this failed (a file size issue, I am told by Packt customer support); a local copy over USB was fine.</p>
<p>So how was the Kindle experience? OK, it did the job, everything was clear enough; it was not a super engaging reading experience, but it is a technical book, what do you expect? It was good enough that I certainly don’t see myself getting too many paper books going forward, whether they be novels or technical books.</p>
<p>So in summary, was the book worth the effort to read? I always gauge this question on ‘did I learn something?’ and I did. There is always a nugget or two in books on subjects you think you know. However, ‘would I say it is a really useful/essential read for anyone who already has a working knowledge in this subject?’, probably not. I would say their time is better spent doing a hands-on lab or watching conference recordings on Channel9.</p>
<p>Leave this book to anyone who wants a general written introduction to the subject of Microsoft specific testing tooling.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A week with a Nokia 820</title>
      <link>https://blog.richardfennell.net/posts/a-week-with-a-nokia-820/</link>
      <pubDate>Wed, 21 Aug 2013 20:42:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-week-with-a-nokia-820/</guid>
      <description>&lt;p&gt;I have been using a &lt;a href=&#34;http://www.nokia.com/gb-en/phones/phone/lumia800/&#34;&gt;Nokia 800&lt;/a&gt; (Windows Phone 7.8) for a year or so and been happy with it. It does what I needed, i.e. phone, email and media player, mainly for podcasts. So given all the reported issues with WP8 and podcasts (no Zune client to manage the subscription/sync, and you can only subscribe through the store if you are in the USA) I was not too keen to ‘upgrade’.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been using a <a href="http://www.nokia.com/gb-en/phones/phone/lumia800/">Nokia 800</a> (Windows Phone 7.8) for a year or so and been happy with it. It does what I needed, i.e. phone, email and media player, mainly for podcasts. So given all the reported issues with WP8 and podcasts (no Zune client to manage the subscription/sync, and you can only subscribe through the store if you are in the USA) I was not too keen to ‘upgrade’.</p>
<p>Anyway, last week I was persuaded to give a <a href="http://www.nokia.com/gb-en/phones/phone/lumia820/?cid=ncomprod-fw-src-na-uklfdevicerangelumia820201304-na-bing-gb-en-1todtmx2e7d7f">Nokia 820</a> a try. I did not want to try the 920/925 as I don’t like too large a phone, and given the 820 is bigger than my 800 I thought that might be an issue.</p>
<p><img loading="lazy" src="http://i-cdn.phonearena.com/images/phones/37937-xlarge/Nokia-Lumia-820.jpg"></p>
<p>So how did it go?</p>
<p>The first couple of days were horrible. However, it turned out many of the problems I had were due to poor-quality USB cables. Once I used the short one that came with the phone, as opposed to one I had used for months connected to my laptop base station, all the sync issues I had went away.</p>
<p>The 820 only had 7GB of usable memory as opposed to 13GB on the 800, so I had to put in a MicroSD card.</p>
<p>The 820 did not come with a rubber bumper case in the box, unlike the 800. I think these are essential to deal with the inevitable drops the phone will suffer, and the couple of millimetre bezel lifts the screen to avoid scratches, so I bought one.</p>
<p>I had to install the language pack before all the speech based functions worked. Now I really can’t remember if I had to do this on the 800, but I don’t recall it, I thought they were preinstalled.</p>
<p>But all these are niggles; the real issue was the podcasting, which was awful. Now I know podcasts are either a feature you use or not, there is little middle ground, what we in the UK call a <a href="http://en.wikipedia.org/wiki/Marmite">Marmite</a> feature. If you listen to podcasts it is probably the primary use of your phone. What was in Microsoft’s head when they cut the Zune functionality and suggested using iTunes for the sync I do not know. The current release of the <a href="http://www.windowsphone.com/en-US/how-to/wp8/windows-phone-app-for-desktop">Windows Phone Desktop</a> app is meant to address this issue, allowing podcasts to be sync’d from iTunes or folders (which in turn can be sync’d via Zune). The problem is it just does not do the job; it does not honour the played flag, just syncing what it finds in the folder whether it has been played or not. This becomes a real problem when you have about half the free space I had on my 800 (until I put in a MicroSD card).</p>
<p>I did try with the <a href="http://www.windowsphone.com/en-US/how-to/wp8/windows-phone-app-for-desktop">Windows Phone Desktop</a> for a day or two and gave up. So I moved to an app. The best I found was <a href="http://www.windowsphone.com/en-us/store/app/i-podcast/c0e9110d-5d75-4970-9937-1854393b3f0e">i Podcast</a>; it just works. It is a shame it cannot drop the files into the phone media hub or make use of the MicroSD card, but these are minor issues. I did have a problem where it crashed and wiped out my subscriptions, but I am told by the developers that this exception-related bug was fixed in 2.1, which was released this week. Their email response to my query was excellent. Other than that it has been great. I would recommend purchasing the premium features so that it can store your subscriptions/playlists and sync them between devices – very nice.</p>
<p>I await with interest to see if the <a href="http://nokiatheone.com/2013/07/28/wp8-gdr2lumia-amber-update-releasing-mid-aug-13/">GDR2 update</a> addresses the horror story that is WP8 out-of-the-box podcasting.</p>
<p>So one week down, will I go back to my 800? I think not; actually the 800 screen feels a little small now. However, I will still say I am not using any of the new WP8 features, so I would not pay a premium to upgrade. I would hold off, at least until the free upgrade point on my phone contract.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDDNorth session voting is now open</title>
      <link>https://blog.richardfennell.net/posts/dddnorth-session-voting-is-now-open/</link>
      <pubDate>Thu, 15 Aug 2013 17:43:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dddnorth-session-voting-is-now-open/</guid>
      <description>&lt;p&gt;You can now &lt;a href=&#34;http://www.dddnorth.co.uk/Sessions&#34;&gt;vote for the sessions&lt;/a&gt; you wish to see at DDDNorth 2&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;
&lt;p&gt;Hope to see you there on Saturday 12th October 2013 in Sunderland&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can now <a href="http://www.dddnorth.co.uk/Sessions">vote for the sessions</a> you wish to see at DDDNorth 2</p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
<p>Hope to see you there on Saturday 12th October 2013 in Sunderland</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why is my TFS Lab build picking a really old build to deploy?</title>
      <link>https://blog.richardfennell.net/posts/why-is-my-tfs-lab-build-picking-a-really-old-build-to-deploy/</link>
      <pubDate>Thu, 01 Aug 2013 14:51:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-is-my-tfs-lab-build-picking-a-really-old-build-to-deploy/</guid>
      <description>&lt;p&gt;I was working on a TFS lab deployment today and to speed up testing I set it to pick the &lt;Latest&gt; build of the TFS build that actually builds the code, as opposed to queuing a new one each time. It took me ages to remember/realise why it kept trying to deploy some ancient build that had long since been deleted from my test system.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_142.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_142.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The reason was that the &lt;Latest&gt; build means the last successful build, not the last partially successful build. I had a problem with my code build that meant a test was failing (a custom build activity on my build agent was the root cause of the issue). Once I fixed my build box so the code build did not fail, the lab build was able to pick up the files I expected and deploy them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was working on a TFS lab deployment today and to speed up testing I set it to pick the &lt;Latest&gt; build of the TFS build that actually builds the code, as opposed to queuing a new one each time. It took me ages to remember/realise why it kept trying to deploy some ancient build that had long since been deleted from my test system.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_142.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_142.png" title="image"></a></p>
<p>The reason was that the &lt;Latest&gt; build means the last successful build, not the last partially successful build. I had a problem with my code build that meant a test was failing (a custom build activity on my build agent was the root cause of the issue). Once I fixed my build box so the code build did not fail, the lab build was able to pick up the files I expected and deploy them.</p>
<p>Note that if you tell the lab deploy build to queue its own build, it will attempt to deploy this even if it is only partially successful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for ‘Cannot install test agent on these machines because another environment is being created using the same machines’</title>
      <link>https://blog.richardfennell.net/posts/fix-for-cannot-install-test-agent-on-these-machines-because-another-environment-is-being-created-using-the-same-machines/</link>
      <pubDate>Mon, 29 Jul 2013 13:56:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-cannot-install-test-agent-on-these-machines-because-another-environment-is-being-created-using-the-same-machines/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/07/23/Adding-another-VM-to-a-running-Lab-Management-environment.aspx&#34;&gt;recently posted&lt;/a&gt; about adding extra VMs to existing environments in Lab Management. Whilst following this process I hit a problem: I could not create my new environment as there was a problem at the verify stage. It was fine adding the new VMs, but the one I was reusing gave the error ‘Microsoft test manager cannot install test agent on these machines because another environment is being created using the same machines’&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/07/23/Adding-another-VM-to-a-running-Lab-Management-environment.aspx">recently posted</a> about adding extra VMs to existing environments in Lab Management. Whilst following this process I hit a problem: I could not create my new environment as there was a problem at the verify stage. It was fine adding the new VMs, but the one I was reusing gave the error ‘Microsoft test manager cannot install test agent on these machines because another environment is being created using the same machines’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_141.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_141.png" title="image"></a></p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/02/26/Fix-for-Cannot-install-test-agent-on-these-machines-because-another-environment-is-being-created-using-the-same-machines.aspx">I had seen this issue before</a> and so I tried a variety of things that had sorted it in the past: removing the TFS agent on the VM, manually installing and trying to configure it, <a href="http://blogs.msdn.com/b/aseemb/archive/2009/11/28/how-to-enable-test-controller-logs.aspx">reading through the Test Controller logs</a>, all to no effect. I eventually got a solution today with the help of Microsoft.</p>
<p>The answer was to do the following on the VM showing the problem</p>
<ol>
<li>Kill <strong>TestAgentInstaller.exe</strong> process, if running on failing machine</li>
<li>Delete the “TestAgentInstaller” service using the <strong>sc delete testagentinstaller</strong> command (gotcha here: use a DOS-style command prompt, not PowerShell, as sc has a different default meaning in PowerShell; it is an alias for Set-Content. If using PowerShell you need the full path to sc.exe)</li>
<li>Delete the <strong>C:\Windows\VSTLM_RES</strong> folder</li>
<li>Restart machine and then try Lab Environment creation again and all should be OK</li>
<li>As usual once the environment is created you might need to restart it to get all the test agents to link up to the controller OK</li>
</ol>
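<p>The steps above can be sketched as commands run from an elevated DOS-style command prompt on the failing VM. This is only a sketch based on the list above, not something I would run blindly; check the paths and service name on your own machine first:</p>

```bat
REM 1. Kill the installer process if it is running
taskkill /F /IM TestAgentInstaller.exe

REM 2. Delete the service. Call sc.exe explicitly so the
REM    PowerShell Set-Content alias cannot get in the way.
sc.exe delete TestAgentInstaller

REM 3. Remove the leftover Lab Management folder
rmdir /S /Q C:\Windows\VSTLM_RES

REM 4. Restart the machine
shutdown /r /t 0
```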
<p>So it seems that the removal of the VM from its old environment left some debris that was confusing the verify step. It seems this only happens rarely, but it can be a bit of a show stopper if you can’t get around it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Making the drops location for a TFS build match the assembly version number</title>
      <link>https://blog.richardfennell.net/posts/making-the-drops-location-for-a-tfs-build-match-the-assembly-version-number/</link>
      <pubDate>Sat, 27 Jul 2013 16:20:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/making-the-drops-location-for-a-tfs-build-match-the-assembly-version-number/</guid>
      <description>&lt;p&gt;A couple of years ago &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/26/syncing-the-build-number-and-assembly-version-numbers-in-a-tfs-build-when-using-the-tfsversion-activity.aspx&#34;&gt;I wrote about using the TFSVersion build activity to try to sync the assembly and build number&lt;/a&gt;. I did not want to see build names/drop location in the format &amp;lsquo;BuildCustomisation_20110927.17’, I wanted to see the version number in the build something like  &amp;lsquo;BuildCustomisation 4.5.269.17&amp;rsquo;. The problem as I outlined in that post was that by fiddling with the &lt;strong&gt;BuildNumberFormat&lt;/strong&gt; you could easily cause an error where duplicated drop folder names were generated, such as&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A couple of years ago <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/26/syncing-the-build-number-and-assembly-version-numbers-in-a-tfs-build-when-using-the-tfsversion-activity.aspx">I wrote about using the TFSVersion build activity to try to sync the assembly and build number</a>. I did not want to see build names/drop location in the format &lsquo;BuildCustomisation_20110927.17’, I wanted to see the version number in the build something like  &lsquo;BuildCustomisation 4.5.269.17&rsquo;. The problem as I outlined in that post was that by fiddling with the <strong>BuildNumberFormat</strong> you could easily cause an error where duplicated drop folder names were generated, such as</p>
<blockquote>
<p><em>TF42064: The build number &lsquo;BuildCustomisation_20110927.17 (4.5.269.17)&rsquo; already exists for build definition &lsquo;MSF Agile\BuildCustomisation&rsquo;.</em></p></blockquote>
<p>I had put this problem aside, thinking there was no way around the issue, until I was recently reviewing the new <a href="http://tig.codeplex.com/">ALM Rangers ‘Test Infrastructure Guidance’</a>. This had a solution to the problem included in the first hands-on lab. The trick is that you need to use the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20TfsVersion%20build%20activity&amp;referringTitle=Documentation">TFSVersion community extension</a> twice in your build.</p>
<ul>
<li>You use it as normal to set the version of your assemblies after you have got the files into the build workspace, just as the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20TfsVersion%20build%20activity&amp;referringTitle=Documentation">wiki documentation</a> shows</li>
<li>But you also call it in ‘get mode’ at the start of the build process, prior to calling the ‘Update Build Number’ activity. The core issue is that you cannot call ‘Update Build Number’ more than once, else you tend to see the TF42064 issues. By using it in this manner you will set the <strong>BuildNumberFormat</strong> to the actual version number you want, which will be used for the drops folder and any assembly versioning.</li>
</ul>
<p>So what do you need to do?</p>
<ol>
<li>
<p>Open your process template for editing (<a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;referringTitle=Documentation">see the custom build activities documentation if you don’t know how to do this</a>)</p>
</li>
<li>
<p>Find the sequence ‘Update Build Number for Triggered Builds’ at the top of the process template</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_136.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_136.png" title="image"></a></p>
<ul>
<li>Add <strong>TFSVersion</strong> activity – I called mine ‘Generate Version number for drop’</li>
<li>Add an <strong>Assign</strong> activity – I called mine ‘Set new BuildNumberFormat’</li>
<li>Add a <strong>WriteBuildMessage</strong> activity – this is optional, but I do like to see what was generated</li>
</ul>
</li>
<li>
<p>Add a string variable <strong>GeneratedBuildNumber</strong> with the scope of ‘Update Build Number for Triggered Builds’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_137.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_137.png" title="image"></a></p>
</li>
<li>
<p>The properties for the <strong>TFSVersion</strong> activity should be set as shown below</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_138.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_138.png" title="image"></a></p>
</li>
</ol>
<ul>
<li>The <strong>Action</strong> is the key setting; this needs to be set to <strong>GetVersion</strong>, as we only need to generate a version number, not set any file versions</li>
<li>You need to set the <strong>Major</strong>, <strong>Minor</strong> and <strong>StartDate</strong> settings to match the other copy of the activity in your build process. A good tip is to just cut and paste from the other instance to create this one, so that the bulk of the properties are correct</li>
<li>The <strong>Version</strong> needs to be set to your variable <strong>GeneratedBuildNumber</strong>; this is the outputted version value</li>
</ul>
<ol start="6">
<li>
<p>The properties for the <strong>Assign</strong> activities are as follows</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_139.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_139.png" title="image"></a></p>
</li>
</ol>
<ul>
<li>Set <strong>To</strong> to <strong>BuildNumberFormat</strong></li>
<li>Set <strong>Value</strong> to <strong>String.Format(&quot;$(BuildDefinitionName)_{0}&quot;, GeneratedBuildNumber)</strong>; you can vary this format to meet your own needs (updated 31 Jul 13: better to use an _ rather than a space, as this will be used in the drop path)</li>
</ul>
<ol start="8">
<li>I also added a <strong>WriteBuildMessage</strong> activity that outputs the generated build value, but that is optional</li>
</ol>
<p>Once all this was done and saved back to TFS it could be used for a build. You now see that the build name and drops location are in the form</p>
<blockquote>
<p>[Build name] [Major].[Minor].[Days since start date].[TFS build number]</p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_140.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_140.png" title="image"></a></p>
<p>This is a slight change from what I previously attempted, where the 4th block was the count of builds of a given type on a day; now it is the unique TFS-generated build number, the number shown before the build name is generated. I am happy with that. My key aim is reached: the drops location contains the product version number, so it is easy to relate a build to a given version without digging into the build reports.</p>
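<p>To make the numbering scheme concrete, here is a small sketch of how the two activities combine the Major, Minor and StartDate settings with the TFS build number. Python is used purely as illustrative pseudocode for the workflow; the function names are mine, and the StartDate shown is just an example value, not one taken from the build above:</p>

```python
from datetime import date

def generated_version(major, minor, start_date, build_id, today=None):
    # Mimics the TFSVersion activity's GetVersion action:
    # [Major].[Minor].[days since StartDate].[TFS build number]
    today = today or date.today()
    return "{0}.{1}.{2}.{3}".format(major, minor, (today - start_date).days, build_id)

def build_number_format(definition_name, version):
    # Mirrors the Assign activity's
    # String.Format("$(BuildDefinitionName)_{0}", GeneratedBuildNumber)
    return "{0}_{1}".format(definition_name, version)

version = generated_version(4, 5, date(2012, 11, 1), 17, today=date(2013, 7, 27))
print(build_number_format("BuildCustomisation", version))  # BuildCustomisation_4.5.268.17
```

<p>The resulting string is exactly what ends up in <strong>BuildNumberFormat</strong>, and hence in the build name and drop folder path.</p>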
]]></content:encoded>
    </item>
    <item>
      <title>I can never remember the command line to add users to the TFS Service Accounts group</title>
      <link>https://blog.richardfennell.net/posts/i-can-never-remember-the-command-line-to-add-use-to-the-tfs-service-accounts-group/</link>
      <pubDate>Fri, 26 Jul 2013 10:57:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-can-never-remember-the-command-line-to-add-use-to-the-tfs-service-accounts-group/</guid>
      <description>&lt;p&gt;I keep forgetting when you use &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/eb77e739-c98c-4e36-9ead-fa115b27fefe&#34;&gt;TFS Integration Platform&lt;/a&gt; that the user the tool is running as (or the service account, if running as a service) has to be in the “Team Foundation Service Accounts” group on the TFS servers involved. If they are not you get a runtime conflict something like&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Microsoft.TeamFoundation.Migration.Tfs2010WitAdapter.PermissionException: TFS WIT bypass-rule submission is enabled. However, the migration service account &amp;lsquo;Richard Fennell&amp;rsquo; is not in the Service Accounts Group on server &amp;lsquo;http://tfsserver:8080/tfs&amp;rsquo;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I keep forgetting when you use <a href="http://visualstudiogallery.msdn.microsoft.com/eb77e739-c98c-4e36-9ead-fa115b27fefe">TFS Integration Platform</a> that the user the tool is running as (or the service account, if running as a service) has to be in the “Team Foundation Service Accounts” group on the TFS servers involved. If they are not you get a runtime conflict something like</p>
<blockquote>
<p>Microsoft.TeamFoundation.Migration.Tfs2010WitAdapter.PermissionException: TFS WIT bypass-rule submission is enabled. However, the migration service account &lsquo;Richard Fennell&rsquo; is not in the Service Accounts Group on server &lsquo;http://tfsserver:8080/tfs&rsquo;.</p></blockquote>
<p>The easiest way to do this is to use the TFSSecurity command line tool on the TFS server. Now you will find some older blog posts about setting the user as a TFS admin console user to get the same effect, but this only seems to work on TFS 2010. This command is good for all versions</p>
<blockquote>
<p>C:\Program Files\Microsoft Team Foundation Server 12.0\tools&gt; .\TFSSecurity.exe /g+ &quot;Team Foundation Service Accounts&quot; n:mydomain\richard /server:http://localhost:8080/tfs</p></blockquote>
<p>and expect to see</p>
<blockquote>
<p><em>Microsoft (R) TFSSecurity - Team Foundation Server Security Tool<br>
Copyright (c) Microsoft Corporation.  All rights reserved.</em></p>
<p><em>The target Team Foundation Server is http://localhost:8080/tfs.<br>
Resolving identity &ldquo;Team Foundation Service Accounts&rdquo;&hellip;<br>
s [A] [TEAM FOUNDATION]Team Foundation Service Accounts<br>
Resolving identity &ldquo;n:mydomain\richard&rdquo;&hellip;<br>
  [U] mydomain\Richard<br>
Adding Richard to [TEAM FOUNDATION]Team Foundation Service Accounts&hellip;<br>
Verifying&hellip;</em></p>
<p><em>SID: S-1-9-1551374245-1204400969-2333986413-2179408616-0-0-0-0-2</em></p>
<p><em>DN:</em></p>
<p><em>Identity type: Team Foundation Server application group<br>
   Group type: ServiceApplicationGroup<br>
Project scope: Server scope<br>
Display name: [TEAM FOUNDATION]Team Foundation Service Accounts<br>
  Description: Members of this group have service-level permissions for the Team Foundation Application Instance. For se<br>
rvice accounts only.</em></p>
<p><em>1 member(s):<br>
  [U] mydomainRichard</em></p>
<p><em>Member of 2 group(s):<br>
e [A] [TEAM FOUNDATION]Team Foundation Valid Users<br>
s [A] [DefaultCollection]Project Collection Service Accounts</em></p>
<p><em>Done.</em></p></blockquote>
<p>Once this is done and the integration platform run is restarted, all should be OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>An attempted return for ‘Brian the build bunny’</title>
      <link>https://blog.richardfennell.net/posts/an-attempted-return-for-brian-the-build-bunny/</link>
      <pubDate>Tue, 23 Jul 2013 22:51:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/an-attempted-return-for-brian-the-build-bunny/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;Back in 2008 Martin Woodward did a post on using a &lt;a href=&#34;http://en.wikipedia.org/wiki/Nabaztag&#34;&gt;Nabaztag&lt;/a&gt; as a build monitor for TFS, &lt;a href=&#34;http://www.woodwardweb.com/gadgets/000434.html&#34;&gt;‘Brian the build bunny’&lt;/a&gt;. I did a bit more work on this idea and wired it into our internal build monitoring system. We ended up with a system where a build definition could be tagged so that its success or failure caused the Nabaztag to say a message.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_134.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_134.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This all worked well until the company that made Nabaztag went out of business, the problem being all communication with your rabbit was via their web servers. At the time we did nothing about this, so just stopped using this feature of our build monitors.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>Back in 2008 Martin Woodward did a post on using a <a href="http://en.wikipedia.org/wiki/Nabaztag">Nabaztag</a> as a build monitor for TFS, <a href="http://www.woodwardweb.com/gadgets/000434.html">‘Brian the build bunny’</a>. I did a bit more work on this idea and wired it into our internal build monitoring system. We ended up with a system where a build definition could be tagged so that its success or failure caused the Nabaztag to say a message.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_134.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_134.png" title="image"></a></p>
<p>This all worked well until the company that made Nabaztag went out of business, the problem being all communication with your rabbit was via their web servers. At the time we did nothing about this, so just stopped using this feature of our build monitors.</p>
<h2 id="getting-it-going-again">Getting it going again</h2>
<p>When the company that made Nabaztag went out of business, a <a href="http://en.wikipedia.org/wiki/Nabaztag">few replacements for their servers appeared</a>. I chose to look at the PHP based one, <a href="http://sourceforge.net/projects/opennab/">OpenNab</a>, my longer-term plan being to use a <a href="http://www.raspberrypi.org/">Raspberry Pi</a> as a ‘backpack’ server for the Nabaztag.</p>
<h3 id="setting-up-your-apachephp-server">Setting up your Apache/PHP server</h3>
<p>I decided to start with an Ubuntu 12.04 LTS VM to check out the PHP based server; a VM was easier to fiddle with whilst travelling, as I did not want to carry around all the hardware.</p>
<p>Firstly I installed Apache 2 and PHP 5, using the commands</p>
<blockquote>
<p>sudo apt-get install apache2<br>
sudo apt-get install php5<br>
sudo apt-get install libapache2-mod-php5<br>
sudo /etc/init.d/apache2 restart</p></blockquote>
<p>I then <a href="http://sourceforge.net/projects/opennab/files/?source=navbar">downloaded the OpenNab files</a> and unzipped them into <strong>/var/www/vl</strong></p>
<p>Next I started to work through the instructions on <a href="http://localhost/vl/check_install.html">http://localhost/vl/check_install.html</a> and instantly hit problems.</p>
<p>The first test checks that asking for a page that does not exist (a 404 error) redirects to the <strong>bc.php</strong> page. This is needed because the Nabaztag makes a call to <strong>bc.jsp</strong>; this cannot be altered, so we need to redirect the call. The redirect is meant to be handled by a <strong>.htaccess</strong> file in the <strong>/var/www/vl</strong> folder that contains</p>
<blockquote>
<p>ErrorDocument 404 /vl/bc.php</p></blockquote>
<p>I could not get this to work. In the end I edited the Apache <strong>/etc/apache2/httpd.conf</strong> file and put the same text in there. I am no expert on Apache, but the notes I read seemed to imply that <strong>httpd.conf</strong> was being favoured over <strong>.htaccess</strong>, so it might be a version or configuration issue.</p>
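A likely explanation, as a sketch: Ubuntu's default Apache site configuration ships with <strong>AllowOverride None</strong> for <strong>/var/www</strong>, which makes Apache ignore <strong>.htaccess</strong> files entirely. The directory path below is from this post; the exact stanza and its location (typically <strong>/etc/apache2/sites-available/default</strong>) are assumptions about your setup. Enabling overrides for the OpenNab folder, then restarting Apache, should let the original <strong>.htaccess</strong> approach work:

```apache
# Assumed stanza for the Apache site config: lets .htaccess files
# (such as OpenNab's ErrorDocument redirect) take effect in /var/www/vl.
<Directory /var/www/vl>
    AllowOverride All
</Directory>
```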
<p>Once this change was made I got the expected redirections: asking for an invalid folder or page caused the <strong>bc.php</strong> file to be loaded (it shows a special 404 message – watch out for this, as the text of the message is important; I had thought mine was working earlier because I saw a 404, but it was Apache reporting the error, not the <strong>bc.php</strong> page)</p>
<p>Next I opened <a href="http://localhost/vl/tests">http://localhost/vl/tests</a> to run all the PHP tests. Most passed, but I did see a couple of failures and loads of exceptions. The fixes were</p>
<ul>
<li>Failure of &lsquo;testFileGetContents&rsquo; – this is down to whether Apache returns compressed content or not. You need to disable this feature by running the command</li>
</ul>
<blockquote>
<p>sudo a2dismod deflate</p></blockquote>
<ul>
<li>All the exceptions are because deprecated calls are being made (OpenNab is a few years old). I edited the <strong>/etc/php5/apache2/php.ini</strong> file and set the error reporting to not show deprecation warnings. Once this was done the PHP tests all passed</li>
</ul>
<blockquote>
<p>error_reporting = E_ALL &amp; ~E_NOTICE &amp; ~E_DEPRECATED</p></blockquote>
<p>Next I could try a call to a dummy address and see files appear in the ‘burrows’ folder, and I got a gibberish message returned. This proved the redirect worked and I had all the tools wired up</p>
<p><strong>Note:</strong> Some people have had permission problems; you might need to grant write permissions to the folder as temporary files are created, but this was not something I needed to alter.</p>
<p>It is a good idea to make sure you have no firewall issues by accessing the test pages from another PC/VM</p>
<h3 id="getting-the-rabbit-on-the-lan">Getting the Rabbit on the LAN</h3>
<p><strong>Note:</strong> I needed to know the IP address of my PHP server. Usually you would use DNS and maybe DHCP leases to manage this; for my test I just hard coded it. The main reason was that <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/06/07/DHCP-does-not-seem-to-work-on-Ubuntu-for-wireless-based-Hyper-V-virtual-switches.aspx">Ubuntu cannot use WIFI-based DHCP on Hyper-V wireless-based virtual switches</a></p>
<p>The first thing to note is that the Nabaztag Version 2 I am using does not support WPA2 for WIFI security. It only supports WPA, so I had to dig out a base station for it to use as I did not want to downgrade my WIFI security.</p>
<p><strong>Note:</strong> The Nabaztag Version 1 only does WEP; if you have one of them you need to set your security appropriately.</p>
<p>To setup the Nabaztag</p>
<ul>
<li>Hold down the button on the top and switch it on, the nose should go purple</li>
<li>On a PC look for new WIFI base stations, connect to the one with a name like Nabaztag1D</li>
<li>In a browser connect to 192.168.0.1 and set the Nabaztag to connect to your WIFI base station</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_135.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_135.png" title="image"></a></p>
<ul>
<li>In the advanced options, you also need to set the IP address or DNS name of your new OpenNab server</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image9_thumb.png" title="image"></a></p>
<ul>
<li>When you save, the unit should reboot</li>
<li>Look to see that a new entry appears in the <strong>/vl/burrows</strong> folder on your OpenNab server</li>
</ul>
<p>So at this point I thought it was working, but the Nabaztag kept rebooting: I saw three tummy LEDs go green, but the nose flashed orange/green and then it rebooted.</p>
<p>After much fiddling I think I worked out the problem. The <a href="http://sourceforge.net/p/opennab/wiki/Home/">OpenNab software is a proxy</a>; by default it still calls the old Nabaztag site. Part of the boot process is to pull a <strong>bootcode.bin</strong> file down from the server to allow the unit to boot. This was failing.</p>
<p>To fix this I did the following</p>
<ul>
<li>Edited the <strong>/vl/opennab.ini</strong> file
<ul>
<li>Set the <strong>LogLevel = 4</strong> so I got as much logging as possible in the <strong>/vl/logs</strong> folder</li>
<li>Set the <strong>ServerMode = standalone</strong> so that it does not try to talk to the original Nabaztag site</li>
<li>I saw that the entry <strong>BootCode = /vl/plugin/saveboot/files/bootcode.bin</strong> referenced a file I did not have. The only place I could find a copy was on the <a href="http://code.google.com/p/volk/source/browse/trunk/volk/bootcode.bin?r=69">volk Nabaztag tools site</a></li>
</ul>
</li>
<li>Once all these changes were made my Nabaztag booted OK: I got four green LEDs and the ears rotated</li>
</ul>
<blockquote>
<p>When you power up the Nabaztag, it runs through a start-up sequence with orange and green lights. Use this to check where there&rsquo;s a problem:</p>
<ul>
<li>First belly light is the connection to your network - green is good</li>
<li>Second belly light is that the bunny has got an IP address on your network - green is good</li>
<li>Third belly light means that the bunny can resolve the server web address - green is good</li>
<li>The nose light confirms whether the server is responding to the rabbit&rsquo;s requests - green is good</li>
</ul>
<p>A pulsing purple light underneath the rabbit means that the Nabaztag is connected and working OK.</p></blockquote>
<h3 id="sending-messages-to-the-rabbit-on-the-lan">Sending Messages to the Rabbit on the LAN</h3>
<p>Now I could try sending messages via the API demo pages. The messages seemed to be sent OK, but nothing happened on the Rabbit. I was unsure if it had booted OK, or even if the <strong>bootcode.bin</strong> file was correct.</p>
<p>At this point I got to thinking: the main reason I wanted this working again was the text-to-speech (TTS) system. This is not part of the OpenNab server; the function is passed off to the original Nabaztag service. So was all this work going to get me what I wanted?</p>
<p>I had learnt loads about getting Apache, PHP and OpenNab going, but frankly I was no nearer what I was after.</p>
<h2 id="a-practical-solution">A Practical Solution</h2>
<p>At this point I went back to look at the <a href="http://en.wikipedia.org/wiki/Nabaztag#Cessation_of_service">other replacement servers</a>. I decided to give <a href="http://www.nabaztaglives.com/">Nabaztaglives.com</a> a go, and it just worked. Just follow their <a href="http://www.nabaztaglives.com/newRabbit.php">setup page</a>. They provide TTS using the Google API, so just what I needed.</p>
<p>OK, it is not a Raspberry Pi backpack and not a completely standalone solution, but I do have the option to use the Nabaztag in the same manner as I used to, as a means to signal build problems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Adding another VM to a running Lab Management environment</title>
      <link>https://blog.richardfennell.net/posts/adding-another-vm-to-a-running-lab-management-environment/</link>
      <pubDate>Tue, 23 Jul 2013 14:29:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/adding-another-vm-to-a-running-lab-management-environment/</guid>
      <description>&lt;p&gt;If you are using network isolated environment in TFS Lab management there is no way to add another VM unless you rebuild and redeploy the environment. However, if you are not network isolated you can at least avoid the redeploy issues to a degree.&lt;/p&gt;
&lt;p&gt;I had a SCVMM based environment that was a not network isolated environment that contained a single non-domain joined server. This was used to host a backend simulation service for a project. In the next phase of the project we need to test accessing this service via RDP/Terminal Server so I wanted to add a VM to act in this role to the environment.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are using a network isolated environment in TFS Lab Management there is no way to add another VM unless you rebuild and redeploy the environment. However, if you are not network isolated you can at least avoid the redeploy issues to a degree.</p>
<p>I had an SCVMM based environment that was not network isolated and contained a single non-domain joined server. This was used to host a backend simulation service for a project. In the next phase of the project we needed to test accessing this service via RDP/Terminal Server, so I wanted to add a VM to the environment to act in this role.</p>
<p>So first I deleted the environment in MTM; as the VMs in the environment are not network isolated they are not removed. The only change is that the XML metadata is removed from the properties description.</p>
<p>I now needed to create my new VM. I had thought I could create a new environment by adding the existing deployed and running VM as well as a new one from the SCVMM library. However, you get the error ‘cannot create an environment consisting of both running and stored VMs’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_133.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_133.png" title="image"></a></p>
<p>So here you have two options.</p>
<ol>
<li>Store the running VM in the library and redeploy</li>
<li>Deploy out, via SCVMM, a new VM from some template or stored VM</li>
</ol>
<p>Once this is done you can create the new environment using the running VMs or stored images depending on the option chosen in the previous step.</p>
<p>So no huge saving in time or effort. I just wish there was a way to edit deployed environments.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences with a Kindle Paperwhite</title>
      <link>https://blog.richardfennell.net/posts/experiences-with-a-kindle-paperwhite/</link>
      <pubDate>Sat, 20 Jul 2013 11:18:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-with-a-kindle-paperwhite/</guid>
      <description>&lt;p&gt;I wrote a post a while ago about ‘&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/01/06/kindle-on-the-phone-7.aspx&#34;&gt;should I buy a Kindle&lt;/a&gt;’, well I put if off for over a year using the Kindle app on my WP7 phone, reading best part of 50 books and been happy enough without buying an actual Kindle. The key issue being poor battery life, but that’s phones for you.&lt;/p&gt;
&lt;p&gt;However, I have eventually got around to getting a Kindle device. They key was I had been waiting for something that used touch, had no keyboard,  but most importantly worked in the dark without an external light. This is because I found one of the most useful features of the phone app was reading in bed without the need for a light.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I wrote a post a while ago about ‘<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/01/06/kindle-on-the-phone-7.aspx">should I buy a Kindle</a>’. Well, I put it off for over a year, using the Kindle app on my WP7 phone, reading the best part of 50 books and being happy enough without buying an actual Kindle. The key issue being poor battery life, but that’s phones for you.</p>
<p>However, I have eventually got around to getting a Kindle device. The key was I had been waiting for something that used touch, had no keyboard, but most importantly worked in the dark without an external light. This is because I found one of the most useful features of the phone app was reading in bed without the need for a light.</p>
<p>This is basically the spec of the <a href="http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;bc1=000000&amp;IS2=1&amp;bg1=FFFFFF&amp;fc1=000000&amp;lc1=0000FF&amp;t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as4&amp;m=amazon&amp;f=ifr&amp;ref=ss_til&amp;asins=B007OZO03M">Kindle Paperwhite</a>, so I had no excuse to delay any longer.</p>
<p><a href="http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;bc1=000000&amp;IS2=1&amp;bg1=FFFFFF&amp;fc1=000000&amp;lc1=0000FF&amp;t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as4&amp;m=amazon&amp;f=ifr&amp;ref=ss_til&amp;asins=B007OZO03M"><img alt="Kindle Paperwhite e-reader" loading="lazy" src="http://g-ecx.images-amazon.com/images/G/02/kindle/dp/2012/KC/KC-slate-03-lg._V401625903_.jpg"></a></p>
<p>This week was my first trip away with it and it was interesting to see my usage pattern. On the train and in the hotel I used the Kindle, but standing on the railway station or generally waiting around I still pulled out my phone to read. This meant I had to put my phone into WIFI hotspot mode so the Kindle could sync up my last read point via Whispersync when I wanted to switch back to it. This was because I had not bought the <a href="http://rcm-eu.amazon-adsystem.com/e/cm?lt1=_blank&amp;bc1=000000&amp;IS2=1&amp;bg1=FFFFFF&amp;fc1=000000&amp;lc1=0000FF&amp;t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as4&amp;m=amazon&amp;f=ifr&amp;ref=ss_til&amp;asins=B007OZNWRC">3G version of the Paperwhite</a>, and I still don’t think I would bother to get one, as firing up a hotspot is easy if I am on the road and the Kindle uses my home and work WIFI most of the time.</p>
<p>So I have had it for a few weeks now and must say I am very happy with it; I can heartily recommend it. I still have reservations over having to carry another device, but it is so much more pleasant to read on the Kindle screen. So most of the time it is worth carrying, and when it is not I just use my phone.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Washing your phone headset</title>
      <link>https://blog.richardfennell.net/posts/washing-your-phone-headset/</link>
      <pubDate>Sun, 14 Jul 2013 14:23:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/washing-your-phone-headset/</guid>
      <description>&lt;p&gt;As part of my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2007/05/22/correct-cleaning-method-for-corsair-flash-voyager-usb-stick.aspx&#34;&gt;on-going experiments&lt;/a&gt; to see what pieces of personal electronics can be put through a washing machine on 40C wash, I can confirm the headset off my Nokia phone has survived.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As part of my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2007/05/22/correct-cleaning-method-for-corsair-flash-voyager-usb-stick.aspx">on-going experiments</a> to see what pieces of personal electronics can be put through a washing machine on 40C wash, I can confirm the headset off my Nokia phone has survived.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Minor issue on TFS 2012.3 upgrade if you are using host headers in bindings</title>
      <link>https://blog.richardfennell.net/posts/minor-issue-on-tfs-2012-3-upgrade-if-you-are-using-host-headers-in-bindings/</link>
      <pubDate>Thu, 11 Jul 2013 10:03:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/minor-issue-on-tfs-2012-3-upgrade-if-you-are-using-host-headers-in-bindings/</guid>
      <description>&lt;p&gt;Yesterday I upgraded our production &lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=38185&#34;&gt;2012.2 TFS server to update 3&lt;/a&gt;. All seemed to go OK and it completed with no errors, it was so much easier now that the update supports the use of &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/ff877884.aspx&#34;&gt;SQL 2012 Availability Groups&lt;/a&gt; within the update process, no need to remove the DBs from the availability group prior to the update.&lt;/p&gt;
&lt;p&gt;However, though there were no errors it did report a warning, and on a quick check users could not connect to the upgraded server on our usual HTTPS URL.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Yesterday I upgraded our production <a href="http://www.microsoft.com/en-us/download/details.aspx?id=38185">2012.2 TFS server to update 3</a>. All seemed to go OK and it completed with no errors, it was so much easier now that the update supports the use of <a href="http://msdn.microsoft.com/en-us/library/ff877884.aspx">SQL 2012 Availability Groups</a> within the update process, no need to remove the DBs from the availability group prior to the update.</p>
<p>However, though there were no errors it did report a warning, and on a quick check users could not connect to the upgraded server on our usual HTTPS URL.</p>
<p>On checking the update log I saw</p>
<blockquote>
<p>[Warning@09:06:13.578] TF401145: The Team Foundation Server web application was previously configured with one or more bindings that have ports that are currently unavailable.  See the log for detailed information.<br>
[Info   @09:06:13.578]<br>
[Info   @09:06:13.578] +-+-+-+-+-| The following previously configured ports are not currently available&hellip; |+-+-+-+-+-<br>
[Info   @09:06:13.584]<br>
[Info   @09:06:13.584] 1          - Protocol          : https<br>
[Info   @09:06:13.584]            - Host              : tfs.blackmarble.co.uk<br>
[Info   @09:06:13.584]            - Port              : 443<br>
[Info   @09:06:13.584] port: 443<br>
[Info   @09:06:13.585] authMode: Windows<br>
[Info   @09:06:13.585] authenticationProvider: Ntlm</p></blockquote>
<p>The issue appears if you use host headers, as we do for our HTTPS bindings. The TFS configuration tool does not understand these, so it sees more than one binding on port 443 (our TFS server VM also hosts a NuGet server on HTTPS 443; we use host headers to separate the traffic). As the tool does not know what to do with host headers, it just deletes the bindings it does not understand.</p>
<p>Anyway, the fix was to manually reconfigure the HTTPS bindings in IIS, and all was OK.</p>
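If you prefer the command line to the IIS Manager UI, the missing binding can be put back with appcmd. A sketch only: the site name ‘Team Foundation Server’ is an assumption (check your own IIS site list first); the host name and port come from the log above.

```
rem Re-add the deleted HTTPS binding with its host header. The site name
rem "Team Foundation Server" is assumed; adjust to match your IIS config.
%windir%\system32\inetsrv\appcmd.exe set site "Team Foundation Server" /+bindings.[protocol='https',bindingInformation='*:443:tfs.blackmarble.co.uk']
```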
<p>On checking with Microsoft, it seems this is a known issue, and it is on their radar to sort out in future.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Setting SkyDrive as a trusted location in Office 2013</title>
      <link>https://blog.richardfennell.net/posts/setting-skydrive-as-a-trusted-location-in-office-2013/</link>
      <pubDate>Tue, 09 Jul 2013 14:03:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-skydrive-as-a-trusted-location-in-office-2013/</guid>
      <description>&lt;p&gt;We use a &lt;a href=&#34;http://en.wikipedia.org/wiki/Visual_Studio_Tools_for_Office&#34;&gt;VSTO&lt;/a&gt; based Word template to make sure all our documents have the same styling and are suitably reformatted for shipping to clients e.g revision comments removed, contents pages up to date etc. Normally we will create a new document using this template from our SharePoint server and all is OK. However sometimes you are on the road when you started a document so you just create it locally using a locally installed copy of the template. In the past this has not caused me problems. I have my local ‘My documents’ set in Word as a trusted location and it just works fine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We use a <a href="http://en.wikipedia.org/wiki/Visual_Studio_Tools_for_Office">VSTO</a> based Word template to make sure all our documents have the same styling and are suitably reformatted for shipping to clients, e.g. revision comments removed, contents pages up to date etc. Normally we will create a new document using this template from our SharePoint server and all is OK. However, sometimes you are on the road when you start a document, so you just create it locally using a locally installed copy of the template. In the past this has not caused me problems. I have my local ‘My documents’ set in Word as a trusted location and it just works fine.</p>
<p>However, of late, due to some SSD problems, I have taken to using the <a href="http://windows.microsoft.com/en-US/skydrive/download">SkyDrive desktop application</a>. So I now save to C:\Users\[username]\SkyDrive and this syncs up to my SkyDrive space whenever it gets a chance. It has certainly saved me a few times already (see older posts on my SSD failure adventures)</p>
<p>However, the problem is that I can create a new document OK, VSTO runs, and I save it to my local SkyDrive folder, but when I come back to open it for editing I get the error</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_131.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_131.png" title="image"></a></p>
<p>The problem is my SkyDrive folder is not in my trusted locations list (Word &gt; Options &gt; Trust Center &gt; Trust Center Settings (button lower right) &gt; Trusted Locations)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_132.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_132.png" title="image"></a></p>
<p>So I tried adding C:\Users\[username]\SkyDrive – it did not work.</p>
<p>I then noticed that when I load or save from SkyDrive the dialogs say it is copying to ‘https://d.docs.live.net/[a unique id]’. So I entered <a href="https://d.docs.live.net">https://d.docs.live.net</a> (and its sub folders) as a trusted location and it worked.</p>
<p>Now I don’t really want to trust the whole of SkyDrive, so I needed to find my SkyDrive ID. I am sure there is an easy way to do this, but I don’t know it.</p>
<p>The solution I used was to</p>
<ol>
<li>Go to the browser version of SkyDrive.</li>
<li>Pick a file</li>
<li>Use the menu option ‘Embed’ to generate the HTML to embed the file</li>
<li>From this URL I extracted the CID</li>
<li>Added this to the base URL so you get <a href="https://d.docs.live.net/12345678/">https://d.docs.live.net/12345678/</a></li>
<li>Add this new URL to the trusted locations (with sub folders) and my VSTO application works</li>
</ol>
<p>Simple, wasn’t it?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Renewed as an MVP for Visual Studio ALM</title>
      <link>https://blog.richardfennell.net/posts/renewed-as-an-mvp-for-visual-studio-alm/</link>
      <pubDate>Mon, 01 Jul 2013 17:09:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/renewed-as-an-mvp-for-visual-studio-alm/</guid>
      <description>&lt;p&gt;Really pleased to say I have been renewed as a Microsoft MVP for Visual Studio ALM. It great to be involved with such as activity community as the ALM crowd, can’t wait for the next summit in November.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;MVP Logo&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/images/mvp.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Really pleased to say I have been renewed as a Microsoft MVP for Visual Studio ALM. It is great to be involved with such an active community as the ALM crowd; I can’t wait for the next summit in November.</p>
<p><img alt="MVP Logo" loading="lazy" src="http://blogs.blackmarble.co.uk/images/mvp.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>DDDNorth session submission has opened</title>
      <link>https://blog.richardfennell.net/posts/dddnorth-session-submission-has-opened/</link>
      <pubDate>Mon, 01 Jul 2013 10:19:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dddnorth-session-submission-has-opened/</guid>
      <description>&lt;p&gt;The next &lt;a href=&#34;http://www.dddnorth.co.uk&#34;&gt;DDDNorth event is to be held at Sunderland University on the 12th October&lt;/a&gt;. Session submission has now opened so why not get your &lt;a href=&#34;http://www.dddnorth.co.uk/Sessions&#34;&gt;session proposal&lt;/a&gt; in?&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD North Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddnorth.co.uk/Content/images/logo.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The next <a href="http://www.dddnorth.co.uk">DDDNorth event is to be held at Sunderland University on the 12th October</a>. Session submission has now opened so why not get your <a href="http://www.dddnorth.co.uk/Sessions">session proposal</a> in?</p>
<p><img alt="DDD North Logo" loading="lazy" src="http://www.dddnorth.co.uk/Content/images/logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>The best way to enjoy Build on the road.</title>
      <link>https://blog.richardfennell.net/posts/the-best-way-to-enjoy-build-on-the-road/</link>
      <pubDate>Fri, 28 Jun 2013 09:49:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-best-way-to-enjoy-build-on-the-road/</guid>
      <description>&lt;p&gt;The way most big conferences manage to virtually live stream everything is very impressive. I started watching the &lt;a href=&#34;http://channel9.msdn.com/Events/Build/2013&#34;&gt;stream of yesterdays Microsoft Build keynote&lt;/a&gt; on the office’s big projection screen with everyone else at Black Marble. I have always said the best way to enjoy a keynote is on the comfy sofa with a beer at the end of the day. So much better than an early queue then a usually over air conditioned hall with 10,000 close friends.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The way most big conferences manage to virtually live stream everything is very impressive. I started watching the <a href="http://channel9.msdn.com/Events/Build/2013">stream of yesterday’s Microsoft Build keynote</a> on the office’s big projection screen with everyone else at Black Marble. I have always said the best way to enjoy a keynote is on the comfy sofa with a beer at the end of the day. So much better than an early queue then a usually over air conditioned hall with 10,000 close friends.</p>
<p>Unfortunately I had to leave about an hour into the keynote, so fired up my Lumia 800 Windows Phone 7.8 phone (yes, an older one, but I like the size) and hit the Channel 9 site. This picked up the stream, just in the browser, and I was able to listen to the rest of the session via 3G whilst travelling home; I of course made sure the screen was off as I was driving. It was seamless.</p>
<p>Clever what this technology stuff can do now, isn’t it?</p>
]]></content:encoded>
    </item>
    <item>
      <title>A day of TFS upgrades</title>
      <link>https://blog.richardfennell.net/posts/a-day-of-tfs-upgrades/</link>
      <pubDate>Thu, 27 Jun 2013 13:36:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-day-of-tfs-upgrades/</guid>
      <description>&lt;p&gt;After last night’s release of new TFS and Visual Studio bits at the &lt;a href=&#34;http://www.buildwindows.com/&#34;&gt;Build&lt;/a&gt; conference I spent this morning upgrading my demo VMs. Firstly I upgraded to &lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=38185&#34;&gt;TFS 2012.3&lt;/a&gt; and took a snapshot before going on to the &lt;a href=&#34;http://www.microsoft.com/visualstudio/eng#2013-preview&#34;&gt;2013 Preview&lt;/a&gt;. So by changing snapshot I can now demo either version. In both cases the upgrade process was as expected, basically a rerun of the configuration wizard with all the fields bar the password prefilled. &lt;a href=&#34;http://nakedalm.com/about&#34;&gt;Martin Hinshelwood&lt;/a&gt; has done a nice post if you want more &lt;a href=&#34;http://nakedalm.com/upgrading-to-team-foundation-server-2013/&#34;&gt;details on the process&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After last night’s release of new TFS and Visual Studio bits at the <a href="http://www.buildwindows.com/">Build</a> conference I spent this morning upgrading my demo VMs. Firstly I upgraded to <a href="http://www.microsoft.com/en-us/download/details.aspx?id=38185">TFS 2012.3</a> and took a snapshot before going on to the <a href="http://www.microsoft.com/visualstudio/eng#2013-preview">2013 Preview</a>. So by changing snapshot I can now demo either version. In both cases the upgrade process was as expected, basically a rerun of the configuration wizard with all the fields bar the password prefilled. <a href="http://nakedalm.com/about">Martin Hinshelwood</a> has done a nice post if you want more <a href="http://nakedalm.com/upgrading-to-team-foundation-server-2013/">details on the process</a></p>
<p>Looking at the <a href="http://channel9.msdn.com/Events/Build/2013">sessions at Build on Channel9</a> there are not too many on TFS; to find out more about the new features you are probably better off checking out the <a href="http://channel9.msdn.com/Events/teched#fbid=pL72HOujDF_">TechEd USA or TechEd Europe streams</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why can’t I find my build settings on a Git based project on TFS Service?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-find-my-build-settings-on-a-git-based-project-on-tfs-service/</link>
      <pubDate>Wed, 26 Jun 2013 11:05:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-find-my-build-settings-on-a-git-based-project-on-tfs-service/</guid>
      <description>&lt;p&gt;Just wasted a bit of time trying to find the build tab on a TFS Team Project hosted on &lt;a href=&#34;http://tfs.visualstudio.com&#34;&gt;http://tfs.visualstudio.com&lt;/a&gt; using a Git repository. I was looking in Team Explorer expecting to see something like&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_129.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_129.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;But all I was seeing was the Visual Studio Git Changes option (just the top bit on the left panel above).&lt;/p&gt;
&lt;p&gt;It took me ages to realise that the issue was I had cloned the Git repository to my local PC using the &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/abafc7d6-dcaa-40f4-8a5e-d6724bdb980c/view/Reviews/0?showReviewForm=True&#34;&gt;Visual Studio Tools for Git&lt;/a&gt;. So I was using just Git tools, not TFS tools. As far as Visual Studio was concerned this was just some Git repository; it could have been local, GitHub, TFS Service or anything that hosts Git.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just wasted a bit of time trying to find the build tab on a TFS Team Project hosted on <a href="http://tfs.visualstudio.com">http://tfs.visualstudio.com</a> using a Git repository. I was looking in Team Explorer expecting to see something like</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_129.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_129.png" title="image"></a></p>
<p>But all I was seeing was the Visual Studio Git Changes option (just the top bit on the left panel above).</p>
<p>It took me ages to realise that the issue was I had cloned the Git repository to my local PC using the <a href="http://visualstudiogallery.msdn.microsoft.com/abafc7d6-dcaa-40f4-8a5e-d6724bdb980c/view/Reviews/0?showReviewForm=True">Visual Studio Tools for Git</a>. So I was using just Git tools, not TFS tools. As far as Visual Studio was concerned this was just some Git repository; it could have been local, GitHub, TFS Service or anything that hosts Git.</p>
<p>To see the full features of TFS Service you need to connect to the service using Team Explorer (the green bits), not just as a Git client (the red bits).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_130.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_130.png" title="image"></a></p>
<p>Of course if you only need Git based source code management tools, just clone the repository and use the Git tooling, whether inside or outside Visual Studio. The Git repository in TFS is just a standard Git repo so all tools should work. From the server end TFS does not care what client you use; in fact it will still associate your commits, irrespective of client, with TFS work items if you use the #1234 syntax for work item IDs in your comments.</p>
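<p>As a quick sketch of that syntax (the repository, file and work item number below are all made up for illustration), a commit from any plain Git client might look like this; the association itself happens server-side when TFS sees the #&lt;id&gt; token in a pushed commit comment:</p>

```shell
# Create a throwaway repository to demonstrate the commit-message syntax.
# The repo path, file name and work item number are hypothetical.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config user.email "demo@example.com"
git -C "$repo" config user.name "Demo User"
echo "hello" > "$repo/readme.txt"
git -C "$repo" add readme.txt
# The '#1234' token is what TFS matches against a work item ID
git -C "$repo" commit -q -m "Fix null reference in login page #1234"
msg=$(git -C "$repo" log -1 --pretty=%s)
echo "$msg"
```

<p>No TFS-specific tooling is needed on the client for this; the comment travels with the commit like any other.</p>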
<p>However if you are using hosted TFS from Visual Studio, it probably makes more sense to use a Team Explorer connection so all the other TFS features light up, such as build. The best bit is that all the Git tools are still there as Visual Studio knows it is still just a Git repository. Maybe doing this will be less confusing when I come to try to use a TFS feature!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Error adding a new widget to our BlogEngine.NET 2.8.0.0 server</title>
      <link>https://blog.richardfennell.net/posts/error-adding-a-new-widget-to-our-blogengine-net-2-8-0-0-server/</link>
      <pubDate>Tue, 25 Jun 2013 14:07:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/error-adding-a-new-widget-to-our-blogengine-net-2-8-0-0-server/</guid>
      <description>&lt;h3 id=&#34;background&#34;&gt;Background&lt;/h3&gt;
&lt;p&gt;If you use Twitter on any web site you will probably have noticed that they have switched off the 1.0 API; you have to use the 1.1 version, which is stricter over OAuth. This meant the Twitter feeds into our blog server stopped working on the 10th of June. The old call of&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble&#34; title=&#34;http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble&#34;&gt;http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;no longer worked, and just changing the 1 to 1.1 did not fix it.&lt;/p&gt;
&lt;p&gt;So I decided to pull down a different widget for BlogEngine.NET to do the job, choosing &lt;a href=&#34;http://dnbegallery.org/cms/List/Widgets/RecentTweets&#34;&gt;Recent Tweets&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="background">Background</h3>
<p>If you use Twitter on any web site you will probably have noticed that they have switched off the 1.0 API; you have to use the 1.1 version, which is stricter over OAuth. This meant the Twitter feeds into our blog server stopped working on the 10th of June. The old call of</p>
<p><a href="http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble" title="http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble">http://api.twitter.com/1/statuses/user_timeline.rss?screen_name=blackmarble</a></p>
<p>no longer worked, and just changing the 1 to 1.1 did not fix it.</p>
<p>So I decided to pull down a different widget for BlogEngine.NET to do the job, choosing <a href="http://dnbegallery.org/cms/List/Widgets/RecentTweets">Recent Tweets</a>.</p>
<h3 id="the-problem">The Problem</h3>
<p>However, when I tried to access our root/parent blog site and go to the customisation page to add the new widget I got</p>
<blockquote>
<h3 id="ooops-an-unexpected-error-has-occurred">Ooops! An unexpected error has occurred.</h3>
<p>This one&rsquo;s down to me! Please accept my apologies for this - I&rsquo;ll see to it that the developer responsible for this happening is given 20 lashes (but only after he or she has fixed this problem).</p>
<h4 id="error-details">Error Details:</h4>
<p>Url : <a href="http://blogs.blackmarble.co.uk/blogs/admin/Extensions/default.cshtml">http://blogs.blackmarble.co.uk/blogs/admin/Extensions/default.cshtml</a><br>
Raw Url : /blogs/admin/Extensions/default.cshtml<br>
Message : Exception of type &lsquo;System.Web.HttpUnhandledException&rsquo; was thrown.<br>
Source : System.Web.WebPages<br>
StackTrace : at System.Web.WebPages.WebPageHttpHandler.HandleError(Exception e)<br>
at System.Web.WebPages.WebPageHttpHandler.ProcessRequestInternal(HttpContextBase httpContext)<br>
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()<br>
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean&amp; completedSynchronously)<br>
TargetSite : Boolean HandleError(System.Exception)<br>
Message : Item has already been added. Key in dictionary: &lsquo;displayname&rsquo; Key being added: &lsquo;displayname&rsquo;</p></blockquote>
<p>Looking at the <a href="http://blogengine.codeplex.com/discussions">discussion forums</a> it seemed we had some DB issues.</p>
<h3 id="the-fix">The Fix</h3>
<p>I could see nobody with the same problems, so I pulled down the source code from Codeplex and had a look at the DBBlogProvider.cs (line 2350) where the error was reported. I think the issue is that when a blog site is set ‘Is for site aggregation’, as our root site where I needed to install the new widget is, the SQL query that generates the user profile list was not filtered by blog, so it saw duplicates.</p>
<p>I disabled ‘Is for site aggregation’ for our root blog and was then able to load the customisation page and add my widget.</p>
<p>Interestingly, I then switched back on ‘Is for site aggregation’ and all was still OK. I assume the act of opening the customisation page once fixes the problem.</p>
<p>Update: Turns out this is not the case; after a reboot of my client PC the error returned, so it must have been some caching that made it appear to work.</p>
<h3 id="also-worth-noting-">Also worth noting ….</h3>
<p>In case you had not seen it (I hadn’t), there is a <a href="http://www.dotnetblogengine.net/post/Patch-for-BlogEngineNET-28.aspx">patch for 2.8.0.0</a> that fixes a problem where the slug (the URL generated for a post) was not being created correctly, so multiple posts on the same day got grouped as one. This caused search and navigation issues. Worth installing this if you are likely to write more than one post on a blog a day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using SYSPREP’d VM images as opposed to Templates in a new TFS 2012 Lab Management Environment</title>
      <link>https://blog.richardfennell.net/posts/using-sysprepd-vm-images-as-opposed-to-templates-in-a-new-tfs-2012-lab-management-environment/</link>
      <pubDate>Mon, 24 Jun 2013 13:49:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-sysprepd-vm-images-as-opposed-to-templates-in-a-new-tfs-2012-lab-management-environment/</guid>
      <description>&lt;p&gt;An interesting change with Lab Management 2012 and SCVMM 2012 is that templates become a lot less useful. In the SCVMM 2008 versions you had a choice when you stored VMs in the SCVMM library. …&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You could store a fully configured VM&lt;/li&gt;
&lt;li&gt;or a generalised template.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When you added the template to a new environment you could enter details such as the machine name, domain to join and product key etc. If you try this with SCVMM 2012 you just see the message ‘These properties cannot be edited from Microsoft Test Manager’&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>An interesting change with Lab Management 2012 and SCVMM 2012 is that templates become a lot less useful. In the SCVMM 2008 versions you had a choice when you stored VMs in the SCVMM library. …</p>
<ul>
<li>You could store a fully configured VM</li>
<li>or a generalised template.</li>
</ul>
<p>When you added the template to a new environment you could enter details such as the machine name, domain to join and product key etc. If you try this with SCVMM 2012 you just see the message ‘These properties cannot be edited from Microsoft Test Manager’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_127.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_127.png" title="image"></a></p>
<p>So you are meant to use SCVMM to manage everything about the templates, which is not great if you want to do everything from MTM. However, is that the only solution?</p>
<p>An alternative is to store a SYSPREP’d VM as a Virtual Machine in the SCVMM library. This VM can be added as many times as is required to an environment (though if added more than once you are asked if you are sure)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_128.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_128.png" title="image"></a></p>
<p>This method does however bring problems of its own. When the environment is started, assuming it is network isolated, the second network adaptor is added as expected. However, as there is no agent on the VM it cannot be configured; usually for a template Lab Management would sort all this out, but because the VM is SYSPREP’d it is left sitting at the mini setup ‘Pick your region’ screen.</p>
<p>You need to manually configure the VM. The best process I have found is:</p>
<ol>
<li>Create the environment with your standard VMs and the SYSPREP’d one</li>
<li>Boot the environment, the standard ready to use VMs get configured OK</li>
<li>Manually connect to the SYSPREP’d VM and complete the mini setup. You will now have a PC on a workgroup</li>
<li>The PC will have two network adapters, neither connected to your corporate network; both are connected to the network isolated virtual LAN. You have a choice:</li>
</ol>
<ul>
<li>Connect the legacy adaptor to your corporate LAN, to get at a network share via SCVMM</li>
<li>Mount the TFS Test Agent ISO</li>
</ul>
<ol start="5">
<li>Either way you need to manually install the Test Agent and run the configuration (just select the defaults; it should know where the test controller is). This will configure the network isolated adaptor onto the 192.168.23.x network</li>
<li>Now you can manually join the isolated domain</li>
<li>Reboot the VM (or the environment) and all should be OK</li>
</ol>
<p>All a bit long winded, but it does mean it is easier to build generalised VMs from MTM without having to play around in SCVMM too much.</p>
<p>I think this would all be a good deal easier if the VM had the agents on it before the SYSPREP. I have not tried this yet, but that is true in my opinion of all VMs used for Lab Management: get the agents on as early as you can, it just speeds everything up.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Great experience moving my DotNetNuke site to PowerDNN</title>
      <link>https://blog.richardfennell.net/posts/great-experience-moving-my-dotnetnuke-site-to-powerdnn/</link>
      <pubDate>Wed, 19 Jun 2013 16:29:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/great-experience-moving-my-dotnetnuke-site-to-powerdnn/</guid>
      <description>&lt;p&gt;I posted recently about my experiences in &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/05/17/.aspx&#34;&gt;upgrading DotNetNuke 5 to 7&lt;/a&gt;, what fun that was! Well I have now had to do the move for real. I expected to follow the same process, but had problems. Turns out the key was to go 5 &amp;gt; 6 &amp;gt; 7. Once I did this the upgrade worked; it turns out this is the &lt;a href=&#34;http://www.dotnetnuke.com/Resources/Wiki/Page/Suggested_Upgrade_Path.aspx&#34;&gt;recommended route&lt;/a&gt;. Why my previous trial worked I don’t know.&lt;/p&gt;
&lt;p&gt;Anyway I ended up with a local DNN 7 site running against SQL 2012. It still was using DNN 5 based skin (which has problems with IE 10) which I needed to alter, but was functional. So it was time to move my ISP.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted recently about my experiences in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/05/17/.aspx">upgrading DotNetNuke 5 to 7</a>, what fun that was! Well I have now had to do the move for real. I expected to follow the same process, but had problems. Turns out the key was to go 5 &gt; 6 &gt; 7. Once I did this the upgrade worked; it turns out this is the <a href="http://www.dotnetnuke.com/Resources/Wiki/Page/Suggested_Upgrade_Path.aspx">recommended route</a>. Why my previous trial worked I don’t know.</p>
<p>Anyway I ended up with a local DNN 7 site running against SQL 2012. It still was using DNN 5 based skin (which has problems with IE 10) which I needed to alter, but was functional. So it was time to move my ISP.</p>
<p>Historically I had the site running on <a href="http://www.zen.co.uk/">Zen Internet</a>, but their Windows hosting is showing its age: they do not offer .NET 4 and, when I last asked, appeared to have no plans to change this. Also there is no means to do a scripted/scheduled backup on their servers.</p>
<p>The lack of .NET 4 meant I could not use Zen for DNN 7. So I chose to move to <a href="http://www.powerdnn.com/">PowerDNN</a>, which is a DNN specialist, offers the latest Microsoft hosting and was cheaper.</p>
<p>I had expected the migration/setup to be awkward, but far from it. I uploaded my backups to PowerDNN’s FTP site and the site was live within 10 minutes. I had a good few questions over backup options, virtual directories for other .NET applications etc.; all were answered via email virtually instantly. Thus far the service has been excellent; PowerDNN are looking a good choice.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using git tf to migrate code between TFS servers retaining history</title>
      <link>https://blog.richardfennell.net/posts/using-git-tf-to-migrate-code-between-tfs-servers-retaining-history/</link>
      <pubDate>Sun, 16 Jun 2013 17:48:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-git-tf-to-migrate-code-between-tfs-servers-retaining-history/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://nakedalm.com/migrating-source-code-with-history-to-tfs-2012-with-git-tf/&#34;&gt;Martin Hinshelwood did a recent post&lt;/a&gt; on moving source code between TFS servers using  &lt;strong&gt;git tf&lt;/strong&gt;. He mentioned that you could use the &lt;strong&gt;--deep&lt;/strong&gt; option to get the whole changeset check-in history.&lt;/p&gt;
&lt;p&gt;Being fairly new to using Git, in anything other than the simplest scenarios, it took me a while to get the commands right. This is what I used in the end (using the Brian Keller VM for sample data) …&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://nakedalm.com/migrating-source-code-with-history-to-tfs-2012-with-git-tf/">Martin Hinshelwood did a recent post</a> on moving source code between TFS servers using  <strong>git tf</strong>. He mentioned that you could use the <strong>--deep</strong> option to get the whole changeset check-in history.</p>
<p>Being fairly new to using Git, in anything other than the simplest scenarios, it took me a while to get the commands right. This is what I used in the end (using the Brian Keller VM for sample data) …</p>
<blockquote>
<p>C:\tmp\git&gt; <strong>git tf clone http://vsalm:8080/tfs/fabrikamfibercollection $/fabrikamfiber/Main oldserver --deep</strong></p>
<p>Connecting to TFS&hellip;</p>
<p>Cloning $/fabrikamfiber/Main into C:\tmp\git\oldserver: 100%, done.</p>
<p>Cloned 5 changesets. Cloned last changeset 24 as 8b00d7d</p>
<p>C:\tmp\git&gt; <strong>git init newserver</strong></p>
<p>Initialized empty Git repository in C:/tmp/git/newserver/.git/</p>
<p>C:\tmp\git&gt; <strong>cd newserver</strong></p>
<p>C:\tmp\git\newserver [master]&gt; <strong>git pull ..\oldserver --depth=100000000</strong></p>
<p>remote: Counting objects: 372, done.</p>
<p>remote: Compressing objects: 100% (350/350), done.</p>
<p>Receiving objects: 96% (358/372), 2.09 MiB | 4.14 MiB/s</p>
<p>Receiving objects: 100% (372/372), 2.19 MiB | 4.14 MiB/s, done.</p>
<p>Resolving deltas: 100% (110/110), done.</p>
<p>From ..\oldserver</p>
<p>* branch HEAD -&gt; FETCH_HEAD</p>
<p>C:\tmp\git\newserver [master]&gt; <strong>git tf configure http://vsalm:8080/tfs/fabrikamfibercollection $/fabrikamfiber/NewLocation</strong></p>
<p>Configuring repository</p>
<p>C:\tmp\git\newserver [master]&gt; <strong>git tf checkin --deep --autosquash</strong></p>
<p>Connecting to TFS&hellip;</p>
<p>Checking in to $/fabrikamfiber/NewLocation: 100%, done.</p>
<p>Checked in 5 changesets, HEAD is changeset 30</p></blockquote>
<p>The key was I had missed the <strong>--autosquash</strong> option on the final checkin.</p>
<p>Once this was run I could see my check-in history. The process is quick and, once you have the right command line, straightforward. However, just like the <a href="http://tfsintegration.codeplex.com/">TFS Integration Platform</a> time is compressed, and unlike the TFS Integration Platform you also lose the ownership of the original edits.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_126.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_126.png" title="image"></a></p>
<p>This all said, another useful tool in the migration arsenal.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Beta release of the ALM Rangers Unit Test Generate VS Extension</title>
      <link>https://blog.richardfennell.net/posts/beta-release-of-the-alm-rangers-unit-test-generate-vs-extension/</link>
      <pubDate>Sun, 16 Jun 2013 17:09:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/beta-release-of-the-alm-rangers-unit-test-generate-vs-extension/</guid>
      <description>&lt;p&gt;With Visual Studio 2012 have you missed the automated unit test generation tools that were present in Visual Studio 2010?&lt;/p&gt;
&lt;p&gt;If you have then the ALM Rangers have produced the ‘Unit Test Generate VS Extension’. The first beta of this is now available on the &lt;a href=&#34;http://aka.ms/Y1ff33&#34;&gt;VSGallery&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/clip_image001.jpg&#34;&gt;&lt;img alt=&#34;clip_image001&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/clip_image001_thumb.jpg&#34; title=&#34;clip_image001&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Why not download it and try it out so you can provide feedback to the team?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With Visual Studio 2012 have you missed the automated unit test generation tools that were present in Visual Studio 2010?</p>
<p>If you have then the ALM Rangers have produced the ‘Unit Test Generate VS Extension’. The first beta of this is now available on the <a href="http://aka.ms/Y1ff33">VSGallery</a>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image001.jpg"><img alt="clip_image001" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image001_thumb.jpg" title="clip_image001"></a></p>
<p>Why not download it and try it out so you can provide feedback to the team?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where did my parameters go when I edited that standard TFS report?</title>
      <link>https://blog.richardfennell.net/posts/where-did-my-parameters-go-when-i-edited-that-standard-tfs-report/</link>
      <pubDate>Mon, 10 Jun 2013 13:56:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-did-my-parameters-go-when-i-edited-that-standard-tfs-report/</guid>
      <description>&lt;p&gt;I have been doing some editing of the standard scrum TFS 2012 Sprint Burndown report in SQL 2012 Report Builder. When I ran the report after editing the MDX query in the dsBurndown DataSet to return an extra column I got an error:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;on a remote PC it just said error with dsBurndown dataset&lt;/li&gt;
&lt;li&gt;on the server hosting reporting services, or in Report Builder, I got a bit more information, it said the TaskName parameter was not defined.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On checking the state of the dataset parameters before and after my edit I could see that the TaskName parameter had been lost&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been doing some editing of the standard scrum TFS 2012 Sprint Burndown report in SQL 2012 Report Builder. When I ran the report after editing the MDX query in the dsBurndown DataSet to return an extra column I got an error:</p>
<ul>
<li>on a remote PC it just said error with dsBurndown dataset</li>
<li>on the server hosting reporting services, or in Report Builder, I got a bit more information, it said the TaskName parameter was not defined.</li>
</ul>
<p>On checking the state of the dataset parameters before and after my edit I could see that the TaskName parameter had been lost</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_125.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_125.png" title="image"></a></p>
<p>Manually re-adding it fixed the problem.</p>
<p>Interestingly, which parameters were lost seemed to depend on the MDX query edit I made; I assume something is inferring the parameters from the MDX query.</p>
<p>So certainly one to keep an eye on. I suspect this is a feature of Report Builder; maybe I am better off just using trusty Notepad to edit the .RDL file. Oh how I love to edit XML in Notepad!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Nice discussion on real world issues with software projects</title>
      <link>https://blog.richardfennell.net/posts/nice-discussion-on-real-world-issues-with-software-projects/</link>
      <pubDate>Fri, 07 Jun 2013 15:37:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/nice-discussion-on-real-world-issues-with-software-projects/</guid>
      <description>&lt;p&gt;Just watched a good session from TechEd USA 2013. It was billed as &lt;a href=&#34;http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/DEV-B212#fbid=vK8lgJC8uYE&#34;&gt;Agile Software Development with Microsoft Visual Studio ALM&lt;/a&gt; but has little that was specifically TFS based; no demos, just war stories from &lt;a href=&#34;http://channel9.msdn.com/Events/Speakers/aaron-bjork&#34;&gt;Aaron Bjork&lt;/a&gt; and &lt;a href=&#34;http://channel9.msdn.com/Events/Speakers/Peter-Provost&#34;&gt;Peter Provost&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It is a good discussion of the problems, experiences and solutions the Microsoft Visual Studio team went through when trying to move to agile development, including&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Sprint lengths, be consistent across teams&lt;/li&gt;
&lt;li&gt;Retrospectives, do you actually act on them?&lt;/li&gt;
&lt;li&gt;Technical debt, do you write features or bugs?&lt;/li&gt;
&lt;li&gt;Measure what you do&lt;/li&gt;
&lt;li&gt;What is the role of the product owner/manager?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Well worth a watch&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just watched a good session from TechEd USA 2013. It was billed as <a href="http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/DEV-B212#fbid=vK8lgJC8uYE">Agile Software Development with Microsoft Visual Studio ALM</a> but has little that was specifically TFS based; no demos, just war stories from <a href="http://channel9.msdn.com/Events/Speakers/aaron-bjork">Aaron Bjork</a> and <a href="http://channel9.msdn.com/Events/Speakers/Peter-Provost">Peter Provost</a>.</p>
<p>It is a good discussion of the problems, experiences and solutions the Microsoft Visual Studio team went through when trying to move to agile development, including</p>
<ul>
<li>Sprint lengths, be consistent across teams</li>
<li>Retrospectives, do you actually act on them?</li>
<li>Technical debt, do you write features or bugs?</li>
<li>Measure what you do</li>
<li>What is the role of the product owner/manager?</li>
</ul>
<p>Well worth a watch</p>
]]></content:encoded>
    </item>
    <item>
      <title>DHCP does not seem to work on Ubuntu for wireless based Hyper-V virtual switches</title>
      <link>https://blog.richardfennell.net/posts/dhcp-does-not-seem-to-work-on-ubuntu-for-wireless-based-hyper-v-virtual-switches/</link>
      <pubDate>Fri, 07 Jun 2013 10:02:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dhcp-does-not-seem-to-work-on-ubuntu-for-wireless-based-hyper-v-virtual-switches/</guid>
      <description>&lt;p&gt;If running an Ubuntu guest VM on Windows 8 Hyper-V you have a problem if you want to make use of a wireless network on the host machine. DHCP does not seem to work.&lt;/p&gt;
&lt;p&gt;Firstly you have to create a virtual switch in Hyper-V&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_121.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_121.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;and connect it to your wireless card&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_122.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_122.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;you can then connect a Network Adaptor on the Ubuntu guest VM to the new switch.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_123.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_123.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If running an Ubuntu guest VM on Windows 8 Hyper-V you have a problem if you want to make use of a wireless network on the host machine. DHCP does not seem to work.</p>
<p>Firstly you have to create a virtual switch in Hyper-V</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_121.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_121.png" title="image"></a></p>
<p>and connect it to your wireless card</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_122.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_122.png" title="image"></a></p>
<p>you can then connect a Network Adaptor on the Ubuntu guest VM to the new switch.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_123.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_123.png" title="image"></a></p>
<p>Now for most operating systems this is all you need to do. The guest VM would use DHCP to pickup an IP address and all is good. However on Ubuntu 12.04 (and other versions judging from other posts), if you are using a virtual switch connected to a wireless card, DHCP does not work. The problem appears to lie in the way Windows/Hyper-V does the bridging to the Wifi.</p>
<p>You have to set the networking settings manually. You need to track down the correct network using the MAC address. Remember that, as the system is having network problems, you might need to enable the connection (with the slider at the top right of the dialog if using the UI) before you can set the options.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_124.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_124.png" title="image"></a></p>
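<p>For reference, the manual settings can also be made directly in <code>/etc/network/interfaces</code> on Ubuntu 12.04 rather than through the UI. A sketch, assuming a typical home network &ndash; the interface name, addresses, gateway and DNS server below are all examples, so substitute the values for your own network:</p>
<pre><code># Static configuration because DHCP fails over the Hyper-V Wifi bridge
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
</code></pre>
<p>Restart networking (or the VM) after saving for the settings to take effect.</p>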
<p>Once this is all set you should have a working network.</p>
<p>Now some posts suggest that you can avoid this problem if you use a ‘legacy network adaptor’ when you create the VM in Hyper-V, but this did not work for me. In fact even manually setting the IP address did not help on the legacy adaptor.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2013 announcement at TechEd USA</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2013-announcement-at-teched-usa/</link>
      <pubDate>Mon, 03 Jun 2013 19:25:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2013-announcement-at-teched-usa/</guid>
      <description>&lt;p&gt;Today at TechEd USA Brian Harry announced Visual Studio 2013, &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/06/03/visual-studio-2013.aspx&#34;&gt;have a look at his blog for details of the new ALM features&lt;/a&gt;. These include…&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Agile Portfolio Management&lt;/li&gt;
&lt;li&gt;Git source control on premises&lt;/li&gt;
&lt;li&gt;Revised team explorer including pop out windows&lt;/li&gt;
&lt;li&gt;Improvements in code editing and annotation&lt;/li&gt;
&lt;li&gt;Improvement in web based test management&lt;/li&gt;
&lt;li&gt;Team Room – chat like collaboration&lt;/li&gt;
&lt;li&gt;Cloud based web load testing&lt;/li&gt;
&lt;li&gt;The start of the addition of release management to TFS via the purchase of &lt;a href=&#34;http://www.incyclesoftware.com/inrelease/&#34;&gt;InRelease&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more info see the various sessions up on &lt;a href=&#34;http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013#fbid=vK8lgJC8uYE&#34;&gt;Channel 9&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today at TechEd USA Brian Harry announced Visual Studio 2013, <a href="http://blogs.msdn.com/b/bharry/archive/2013/06/03/visual-studio-2013.aspx">have a look at his blog for details of the new ALM features</a>. These include…</p>
<ul>
<li>Agile Portfolio Management</li>
<li>Git source control on premises</li>
<li>Revised team explorer including pop out windows</li>
<li>Improvements in code editing and annotation</li>
<li>Improvement in web based test management</li>
<li>Team Room – chat like collaboration</li>
<li>Cloud based web load testing</li>
<li>The start of the addition of release management to TFS via the purchase of <a href="http://www.incyclesoftware.com/inrelease/">InRelease</a></li>
</ul>
<p>For more info see the various sessions up on <a href="http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013#fbid=vK8lgJC8uYE">Channel 9</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Lenovo W520 problems with Wifi and Windows 8</title>
      <link>https://blog.richardfennell.net/posts/lenovo-w520-problems-with-wifi-and-windows-8/</link>
      <pubDate>Mon, 03 Jun 2013 19:12:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lenovo-w520-problems-with-wifi-and-windows-8/</guid>
      <description>&lt;p&gt;My Windows 8 based Lenovo W520 has an Intel Centrino Ultimate-N 6300 Wifi chipset, it has been giving me problems with this for a while.&lt;/p&gt;
&lt;p&gt;The most usual problem is that if I sleep or hibernate the PC when I restart it, in a different location, there is a chance I cannot connect to Wifi networks. I can see them, get a limited connection but no IP address as DHCP fails. Sometimes using the hardware Wifi switch on the front left of the PC helps, sometimes switching into Windows 8 airplane mode and back out does, but not always. Often I need to restart the PC.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>My Windows 8 based Lenovo W520 has an Intel Centrino Ultimate-N 6300 Wifi chipset, which has been giving me problems for a while.</p>
<p>The most usual problem is that if I sleep or hibernate the PC, when I restart it in a different location there is a chance I cannot connect to Wifi networks. I can see them and get a limited connection, but no IP address as DHCP fails. Sometimes using the hardware Wifi switch on the front left of the PC helps, sometimes switching into Windows 8 airplane mode and back out does, but not always. Often I need to restart the PC.</p>
<p>I have also had problems in Lync video calls. For example, on Friday I was having all sorts of problems with a call; it would work for a few minutes then lock up. When we swapped to a colleague’s supposedly identical W520 all was OK.</p>
<p>So I think I have a problem. Time for some digging.</p>
<p>So I thought it was worth trying newer drivers; I had the default 15.1.x drivers provided by Windows Update, which are about 12 months old. I got the latest <a href="http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS032424">15.6.x from Lenovo</a>.</p>
<p>After reading around the subject I also set my Wifi adaptor to not be controlled by power management (right click the network tools tray icon, open the Network and Sharing Centre, change adaptor settings).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_120.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_120.png" title="image"></a></p>
<p>At first I thought this was helping, but I found that if my PC screen locked due to inactivity I got a blue screen of death when I logged in again – with a watchdog timer error. So I switched the power save management back on and this seems to have fixed that problem.</p>
<p>So with the new drivers I now get a new set of behaviours.</p>
<ul>
<li>The PC seems to come out of sleep OK</li>
<li>However, whilst streaming BBC iPlayer over the weekend to watch the triathlon (well done <a href="http://www.bbc.co.uk/sport/0/wales/22748210">Non</a> and <a href="http://www.bbc.co.uk/sport/0/triathlon/22746827">Jonny</a>) my Wifi link kept dropping, only getting a limited connection as it reconnected. This happened with two different systems: my Netgear N600 based BT ADSL, and another on Virgin Cable with one of their Superhubs (also Netgear based) at another house. Interestingly, once I swapped from the 5GHz to a 2.4GHz Wifi on my home N600 ADSL system I had no further problems with disconnects. Maybe a router problem here as opposed to the PC? But I did check there were no router firmware updates and no errors reported.</li>
</ul>
<p>It then occurred to me to ask what was different between my PC and my colleague’s. I had a Hyper-V virtual switch configured to use Wifi. I tried deleting this, but it appeared to have no effect on the 5GHz problem. So maybe a red herring.</p>
<p>So now I think my system is more stable, but only time will tell if it is working well enough. The biggest test will be the <a href="http://www.blackmarble.com/events.aspx?event=DevOps">Lync based webinar I am doing in a couple of weeks on DevOps with Visual Studio Team Foundation Server (and using tooling from PreEmptive)</a>.</p>
<p><strong>Updated 4 Nov 2013:</strong> I have had an offline discussion about this issue, the summary of which is that the problem appears not to be Lenovo&rsquo;s but Intel&rsquo;s and/or Netgear&rsquo;s. It seems the laptop is not listening properly to the instruction it gets, or the router isn&rsquo;t sending it correctly, and this happens on many brands of kit. In some cases the solution is to right click on the wireless connection in use, choose properties, and change the encryption from AES to TKIP; wait a few minutes and you are back on the internet. Or just restart the Wifi stack.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My session on TFS at the ‘Building Applications for the Future’</title>
      <link>https://blog.richardfennell.net/posts/my-session-on-tfs-at-the-building-applications-for-the-future/</link>
      <pubDate>Wed, 22 May 2013 22:21:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-session-on-tfs-at-the-building-applications-for-the-future/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session on ‘TFS for Developers’ at the &lt;a href=&#34;https://www.greymatter.com/information/events&#34;&gt;Grey Matter’s ‘Building Applications for the Future’ event&lt;/a&gt; today. As you will have noticed my session was basically slide free, so not much to share there.&lt;/p&gt;
&lt;p&gt;As I said at the end of my session, to find out more have a look at:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://aka.ms/VS11ALMVM&#34;&gt;Brian Keller’s TFS 2012 VM&lt;/a&gt; – ready to run demo VM with plenty of hands on labs&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://tfs.visualstudio.com&#34;&gt;Team Foundation Service&lt;/a&gt; – the free hosted version of TFS – go on give it a try.&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/&#34;&gt;Brian Harry’s Blog&lt;/a&gt; – all announcements on TFS can be found here&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also a couple of people asked me about TFS and Eclipse, which I only mentioned briefly at the end. For more on &lt;a href=&#34;http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone&#34;&gt;Team Explorer Everywhere look at the video&lt;/a&gt; I did last year on that very subject.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session on ‘TFS for Developers’ at the <a href="https://www.greymatter.com/information/events">Grey Matter’s ‘Building Applications for the Future’ event</a> today. As you will have noticed my session was basically slide free, so not much to share there.</p>
<p>As I said at the end of my session, to find out more have a look at:</p>
<ul>
<li><a href="http://aka.ms/VS11ALMVM">Brian Keller’s TFS 2012 VM</a> – ready to run demo VM with plenty of hands on labs</li>
<li><a href="http://tfs.visualstudio.com">Team Foundation Service</a> – the free hosted version of TFS – go on give it a try.</li>
<li><a href="http://blogs.msdn.com/b/bharry/">Brian Harry’s Blog</a> – all announcements on TFS can be found here</li>
</ul>
<p>Also a couple of people asked me about TFS and Eclipse, which I only mentioned briefly at the end. For more on <a href="http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone">Team Explorer Everywhere look at the video</a> I did last year on that very subject.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Video on Nuget for C&#43;&#43; on Channel 9</title>
      <link>https://blog.richardfennell.net/posts/video-on-nuget-for-c-on-channel-9/</link>
      <pubDate>Wed, 22 May 2013 22:04:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/video-on-nuget-for-c-on-channel-9/</guid>
      <description>&lt;p&gt;I have been out to a number of sites recently where there are C++ developers. We often get talking about package management and general best practices for shared libraries. The common refrain is ‘I wish we had something like Nuget for C++’.&lt;/p&gt;
&lt;p&gt;Well it was released in &lt;a href=&#34;http://blogs.msdn.com/b/vcblog/archive/2013/04/26/nuget-for-c.aspx&#34;&gt;Nuget 2.5&lt;/a&gt; and there is a video on &lt;a href=&#34;http://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-16-Garrett-Serak-Inside-NuGet-for-C&#34;&gt;Channel9&lt;/a&gt; all about it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been out to a number of sites recently where there are C++ developers. We often get talking about package management and general best practices for shared libraries. The common refrain is ‘I wish we had something like Nuget for C++’.</p>
<p>Well it was released in <a href="http://blogs.msdn.com/b/vcblog/archive/2013/04/26/nuget-for-c.aspx">Nuget 2.5</a> and there is a video on <a href="http://channel9.msdn.com/Shows/C9-GoingNative/GoingNative-16-Garrett-Serak-Inside-NuGet-for-C">Channel9</a> all about it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Webinar on PreEmptive Analytics tools on the 28th of May</title>
      <link>https://blog.richardfennell.net/posts/webinar-on-preemptive-analytics-tools-on-the-28th-of-may/</link>
      <pubDate>Mon, 20 May 2013 16:47:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/webinar-on-preemptive-analytics-tools-on-the-28th-of-may/</guid>
      <description>&lt;p&gt;A key requirement for any DevOps strategy is the reporting on how your solution is behaving in the wild. PreEmptive Analytics™ for Team Foundation Server (TFS) can provide a great insight in this area, and there is a good chance you are already licensed for it as part of MSDN.&lt;/p&gt;
&lt;p&gt;So why not have a look on the &lt;a href=&#34;http://blogs.msdn.com/b/ukmsdn/archive/2013/05/13/event-improve-software-quality-user-experience-and-developer-productivity-with-real-time-analytics-webinar.aspx&#34;&gt;UK MSDN&lt;/a&gt; site for more details of the free Microsoft hosted event.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://www3.gotomeeting.com/register/985709814&#34;&gt;MSDN Webinar Improve Software Quality, User Experience and Developer Productivity with Real Time Analytics&lt;/a&gt;&lt;br&gt;
Tuesday, May 28 2013: 4:00 – 5:00 pm (UK Time)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A key requirement for any DevOps strategy is the reporting on how your solution is behaving in the wild. PreEmptive Analytics™ for Team Foundation Server (TFS) can provide a great insight in this area, and there is a good chance you are already licensed for it as part of MSDN.</p>
<p>So why not have a look on the <a href="http://blogs.msdn.com/b/ukmsdn/archive/2013/05/13/event-improve-software-quality-user-experience-and-developer-productivity-with-real-time-analytics-webinar.aspx">UK MSDN</a> site for more details of the free Microsoft hosted event.</p>
<blockquote>
<p><a href="https://www3.gotomeeting.com/register/985709814">MSDN Webinar Improve Software Quality, User Experience and Developer Productivity with Real Time Analytics</a><br>
Tuesday, May 28 2013: 4:00 – 5:00 pm (UK Time)</p></blockquote>
<p>Also why not sign up for <a href="http://www.blackmarble.com/events.aspx?event=DevOps">Black Marble’s webinar event in June</a> on DevOps process and tools in the Microsoft space.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting Wix 3.6 MajorUpgrade working</title>
      <link>https://blog.richardfennell.net/posts/getting-wix-3-6-majorupgrade-working/</link>
      <pubDate>Fri, 17 May 2013 11:07:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-wix-3-6-majorupgrade-working/</guid>
      <description>&lt;p&gt;Why is everything so complex to get going with Wix, then so easy in the end when you get the syntax correct?&lt;/p&gt;
&lt;p&gt;If you want to allow your MSI installer to upgrade a previous version then there are some things you have to have correct if you don’t want the ‘Another version of this product is already installed’ dialog appearing.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The Product Id should be set to * so that a new Guid is generated each time the product is rebuilt&lt;/li&gt;
&lt;li&gt;The Product UpgradeCode should be set to a fixed Guid for all releases&lt;/li&gt;
&lt;li&gt;The Product Version should increment one of the first three numbers; by default the final build number is ignored for update checking&lt;/li&gt;
&lt;li&gt;The Package block should not have an Id set – this will allow it to be auto generated&lt;/li&gt;
&lt;li&gt;You need to add the MajorUpgrade block to your installer&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So you end up with&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Why is everything so complex to get going with Wix, then so easy in the end when you get the syntax correct?</p>
<p>If you want to allow your MSI installer to upgrade a previous version then there are some things you have to have correct if you don’t want the ‘Another version of this product is already installed’ dialog appearing.</p>
<ul>
<li>The Product Id should be set to * so that a new Guid is generated each time the product is rebuilt</li>
<li>The Product UpgradeCode should be set to a fixed Guid for all releases</li>
<li>The Product Version should increment one of the first three numbers; by default the final build number is ignored for update checking</li>
<li>The Package block should not have an Id set – this will allow it to be auto generated</li>
<li>You need to add the MajorUpgrade block to your installer</li>
</ul>
<p>So you end up with</p>
<pre><code>&lt;Wix xmlns=&quot;http://schemas.microsoft.com/wix/2006/wi&quot; xmlns:netfx=&quot;http://schemas.microsoft.com/wix/NetFxExtension&quot; xmlns:util=&quot;http://schemas.microsoft.com/wix/UtilExtension&quot; xmlns:iis=&quot;http://schemas.microsoft.com/wix/IIsExtension&quot;&gt;
  &lt;Product Id=&quot;*&quot; Name=&quot;My App v!(bind.FileVersion.MyExe)&quot; Language=&quot;1033&quot; Version=&quot;!(bind.FileVersion.MyExe)&quot; Manufacturer=&quot;My Company&quot; UpgradeCode=&quot;6842fffa-603c-40e9-bedd-91f6990c43ed&quot;&gt;
    &lt;Package InstallerVersion=&quot;405&quot; Compressed=&quot;yes&quot; InstallScope=&quot;perMachine&quot; InstallPrivileges=&quot;elevated&quot; /&gt;

    &lt;MajorUpgrade DowngradeErrorMessage=&quot;A later version of [ProductName] is already installed. Setup will now exit.&quot; /&gt;

    ……
</code></pre>
<p>So, simpler than pre-Wix 3.5, but there are still places to trip up.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading DotNetNuke from V5 to V7</title>
      <link>https://blog.richardfennell.net/posts/upgrading-dotnetnuke-from-v5-to-v7/</link>
      <pubDate>Fri, 17 May 2013 08:51:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-dotnetnuke-from-v5-to-v7/</guid>
      <description>&lt;p&gt;I recently needed to upgrade a DNN V5 site  to V7 (yes I know I had neglected it, but I was forced to consider a change due to an ISP change). Now this is a documented process, but I had a few problems. There are some subtleties the release notes miss out. This is what I found I had to do to test the process on a replica web site …&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I recently needed to upgrade a DNN V5 site to V7 (yes, I know I had neglected it, but I was forced to consider a change due to an ISP change). Now this is a documented process, but I had a few problems. There are some subtleties the release notes miss out. This is what I found I had to do to test the process on a replica web site …</p>
<h3 id="setup-the-site">Setup the Site</h3>
<ul>
<li>I restored a backup of the site SQL DB onto a SQL 2008R2 instance running on Windows 2008R2.</li>
<li>On the same box I created a new IIS 7 web site and copied in a backup of the site structure and setup the virtual application required for my DNN configuration.</li>
<li>I made sure the AppPool associated with the site and application was set to .NET 2.0 in classic mode.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_112.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_112.png" title="image"></a></p></blockquote>
<ul>
<li>I fixed the connection strings to the restored DB in the web.config (remember there are two DB entries, one in connectionStrings and one in appSettings).</li>
<li>At this point I thought it was a good idea to test the DNN 5 site.</li>
</ul>
<h3 id="the-upgrade">The Upgrade</h3>
<ul>
<li>I copied over the contents of the <a href="http://dotnetnuke.codeplex.com/downloads/get/671131">DNN V7 upgrade package</a></li>
<li>I changed the AppPool from .NET 2.0 Classic mode to .NET 4.0 in Integrated pipeline mode</li>
<li>I tried to load the website – at this point I got an ASP.NET 500 Internal error</li>
</ul>
<blockquote>
<p>An aside – if I used a Windows 8 PC (using IIS 7.5) all I got was the 500 internal error message, no more details. I am sure you can reconfigure IIS 7.5 to give more detailed messages. However, I chose to use Windows 2008R2 and IIS 7, which gave me a nice set of 500.19 web.config errors.</p></blockquote>
<ul>
<li>The issue was I needed to edit the web.config to remove the duplicate entries it found; they are all of the form shown below. Just remove the offending line, save the file and refresh the site.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_113.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_113.png" title="image"></a></p>
<p><code>&lt;section name=&quot;scriptResourceHandler&quot; type=&quot;System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35&quot; requirePermission=&quot;false&quot; allowDefinition=&quot;MachineToApplication&quot; /&gt;</code></p></blockquote>
<ul>
<li>I then got the unhelpful DotNetNuke Error. Turns out this was down to the wrong version of the Telerik DLLs. They are not shipped in the DNN 7 upgrade package, so I just copied the bin folder from the <a href="http://dotnetnuke.codeplex.com/downloads/get/671130">DNN 7 Install package</a>, which contains the right versions.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_114.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_114.png" title="image"></a></p></blockquote>
<ul>
<li>The DNN upgrade wizard should now load. I entered my login details and let it run; it took about a minute.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_115.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_115.png" title="image"></a></p></blockquote>
<blockquote>
<p>The site might still not load (showing the error below). This is because the DB stores the sites based on the full domain name, so trying to load using something like <a href="http://localhost/dnn">http://localhost/dnn</a> may not work (unless you configured it as the address). I had to edit the hosts file on my PC so my full domain name, e.g. <a href="http://www.mydomain.com/dnn">http://www.mydomain.com/dnn</a>, resolved to 127.0.0.1. The alternative (if you can connect) is to go to Host &gt; site management &gt; edit the site &gt; site aliases and enable ‘auto add site alias’. If this is done you can connect with any address.</p></blockquote>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_116.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_116.png" title="image"></a></p></blockquote>
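<p>For reference, the hosts file change mentioned above is a single line (on Windows the file is <code>C:\Windows\System32\drivers\etc\hosts</code>); the domain shown is an example, so use your own site&rsquo;s full domain name:</p>
<pre><code>127.0.0.1    www.mydomain.com
</code></pre>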
<h3 id="503-problems">503 Problems</h3>
<p>Now that should be the whole story, but I still had problems. I kept seeing 503 errors</p>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_117.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_117.png" title="image"></a></p></blockquote>
<p>On checking I found the AppPool kept stopping. The event log showed</p>
<blockquote>
<p>Application: w3wp.exe<br>
Framework Version: v4.0.30319<br>
Description: The process was terminated due to an internal error in the .NET Runtime at IP 000007FEF8E21550 (000007FEF8E20000) with exit code 80131506.</p></blockquote>
<p>and</p>
<blockquote>
<p>Application pool &lsquo;DDN&rsquo; is being automatically disabled due to a series of failures in the process(es) serving that application pool.</p></blockquote>
<p>I tried a clean install of DNN 7 to its own DB (on the same server) using the same AppPool. This all worked fine. So I had to assume the problem lay with either</p>
<ol>
<li>My web.config</li>
<li>My upgraded DB</li>
<li>Some assembly in my installation</li>
</ol>
<p>Reading around hinted that AppPools stopping could be due to having mixed .NET 2/3.5 and 4.0 assemblies. So I was favouring option 1 or 3.</p>
<p>In the end I chose to use the web.config from the DNN V7 installation package. I just copied this over the upgraded one and edited the connection strings. I also had to replace the machine key entry.</p>
<blockquote>
<p><code>&lt;machineKey validationKey=&quot;&lt;and string&gt;&quot; decryptionKey=&quot;&lt;and string&gt;&quot; decryption=&quot;3DES&quot; validation=&quot;SHA1&quot; /&gt;</code></p></blockquote>
<p>This has to be swapped in as it is used to decrypt data such as passwords from the DB; if you don’t do this you can’t log in.</p>
<p>Once this new web.config was in place the site loaded without any errors. I never tracked down the actual line in the web.config that caused the problem.</p>
<h3 id="dnn-log-errors">DNN Log Errors</h3>
<p>I repeated this upgrade process a few times before I got it right. On one test I saw errors in my ./DNN/Portals/_Default/Logs in the form</p>
<blockquote>
<p>2013-05-13 21:46:50,505 [TyphoonTFS][Thread:14][ERROR] DotNetNuke.Services.Exceptions.Exceptions - System.Data.SqlClient.SqlException (0x80131904): The INSERT statement conflicted with the FOREIGN KEY constraint &ldquo;FK_ScheduleHistory_Schedule&rdquo;. The conflict occurred in database &ldquo;dnn&rdquo;, table &ldquo;dbo.Schedule&rdquo;, column &lsquo;ScheduleID&rsquo;.</p></blockquote>
<p>I fixed this by deleting the contents of the dbo.ScheduleHistory table. However, I think this is a red herring, as when I got the rest of the process right this error was not shown.</p>
<h3 id="content-update">Content Update</h3>
<p>Finally I could upgrade the skin being used by the site to make it look like a DNN 7 based site.</p>
<ul>
<li>Most importantly for me was to get the new admin look by changing the way the DNN admin menus are shown. This is done by changing Host Settings &gt; Other Settings &gt; Control Panel to CONTROLBAR. This gets you the new menu banner at the top of the page (this took me ages to find!)</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_118.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_118.png" title="image"></a></p></blockquote>
<ul>
<li>I updated my extension modules (Host &gt; extensions). This page shows which modules have updates. Click on the green update link to download the package, then use the ‘Install Extension Wizard’ button at the top of the page to install them.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_119.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_119.png" title="image"></a></p></blockquote>
<ul>
<li>Finally, I started to change from a V5 skin to one of the DNN V7 ones; I was using a customised version of the old default MinimalExtropy, and I swapped to a new one based on the new default, Gravity.</li>
</ul>
<p>So this is still a work in progress as with any CMS solution. Now I need to repeat the process by moving my installation from the old ISP to the new one.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why do all my TFS labels appear to be associated with the same changeset?</title>
      <link>https://blog.richardfennell.net/posts/why-do-all-my-tfs-labels-appear-to-be-associated-with-the-same-changeset/</link>
      <pubDate>Thu, 16 May 2013 08:41:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-do-all-my-tfs-labels-appear-to-be-associated-with-the-same-changeset/</guid>
      <description>&lt;p&gt;If you look at the labels tab in the source control history in Visual Studio 2012 you could be confused by the changeset numbers. How can all the labels added by my different builds, done over many days, be associated with the same changeset?&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_109.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_109.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you look at the same view in VS 2010 the problem is not so obvious, but that is basically due to the column not being shown.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you look at the labels tab in the source control history in Visual Studio 2012 you could be confused by the changeset numbers. How can all the labels added by my different builds, done over many days, be associated with the same changeset?</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_109.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_109.png" title="image"></a></p>
<p>If you look at the same view in VS 2010 the problem is not so obvious, but that is basically due to the column not being shown.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_110.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_110.png" title="image"></a></p>
<p>The answer is that the value shown in the first screen shot is for the root element associated with the label. If you drill into the label you can see all the labelled folders and files with their changeset values when the label was created</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_111.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_111.png" title="image"></a></p>
<p>So it is just that the initial screen is confusing; drilling in makes it all clearer.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problem with CollectionView.CurrentChanged event not being fired in a WPF application</title>
      <link>https://blog.richardfennell.net/posts/problem-with-collectionview-currentchanged-event-not-being-fired-in-a-wpf-application/</link>
      <pubDate>Tue, 14 May 2013 16:59:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problem-with-collectionview-currentchanged-event-not-being-fired-in-a-wpf-application/</guid>
      <description>&lt;p&gt;Had an interesting issue on one of our WPF applications that is using &lt;a href=&#34;http://www.galasoft.ch/mvvm/&#34;&gt;MVVM Lite&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_108.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_108.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This application is a front end to upload and download folder structures to TFS. On my development PC all was working fine, i.e. when we upload a new folder structure to the TFS backend the various combos on the download tab are also updated. However, on another test system they were not updated.&lt;/p&gt;
&lt;p&gt;After a bit of tracing we could see that in both cases the &lt;strong&gt;RefreshData&lt;/strong&gt; method was being called OK, and the CollectionViews recreated and bound without errors.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Had an interesting issue on one of our WPF applications that is using <a href="http://www.galasoft.ch/mvvm/">MVVM Lite</a>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_108.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_108.png" title="image"></a></p>
<p>This application is a front end to upload and download folder structures to TFS. On my development PC all was working fine, i.e. when we uploaded a new folder structure to the TFS backend the various combos on the download tab were also updated. However, on another test system they were not.</p>
<p>After a bit of tracing we could see that in both cases the <strong>RefreshData</strong> method was being called OK, and the CollectionViews were being recreated and bound without errors.</p>
<blockquote>
<p>private void RefreshData()<br>
        {<br>
            this.dataService.GetData(<br>
                (item, error) =&gt;<br>
                {<br>
                    if (error != null)<br>
                    {<br>
                        logger.Error("MainViewModel: Cannot find dataservice");<br>
                        return;<br>
                    }</p>
<p>                    this.ServerUrl = item.ServerUrl.ToString();<br>
                    this.TeamProjects = new CollectionView(item.TeamProjects);<br>
                    this.Projects = new CollectionView(item.Projects);<br>
                });</p>
<p>            this.TeamProjects.CurrentChanged += new EventHandler(this.TeamProjects_CurrentChanged);</p>
<p>            this.TeamProjects.MoveCurrentToFirst();<br>
          }</p></blockquote>
<p>So what was the problem? To me it was not obvious.</p>
<p>It turned out it was that on my development system the TeamProject CollectionView contained 2 items, but on the test system only 1.</p>
<p>This means that even though we had recreated the CollectionView and rebound that data and events the calling of <strong>MoveCurrentToFirst</strong> (or any other move to for that matter) had no effect as there was no place to move to in a collection of one item. Hence the changed event never got called and this in turn stopped the calling of the methods that repopulated the other combos on the download tab.</p>
<p>The solution was to add the following line at the end of the method, and all was OK</p>
<blockquote>
<p>            this.TeamProjects.Refresh();</p></blockquote>
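<p>The failure mode can be modelled without WPF at all. The toy sketch below (Java, to match the other snippets in this feed; the class is ours, not the WPF API) shows why asking a freshly created single-item view to move to its first item raises no change notification, while an explicit refresh does re-raise one.</p>

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntConsumer;

// Toy model of view "currency": the changed callback only fires when the
// current position actually moves, mirroring the behaviour described above.
class CurrencyTracker {
    private final List<String> items;
    private int current = 0; // like a CollectionView, starts on the first item
    private final List<IntConsumer> listeners = new ArrayList<>();

    CurrencyTracker(List<String> items) { this.items = items; }

    void onCurrentChanged(IntConsumer listener) { listeners.add(listener); }

    // Returns whether a change notification was raised.
    boolean moveCurrentToFirst() {
        if (items.isEmpty() || current == 0) {
            return false; // already current: nothing moves, no event
        }
        current = 0;
        for (IntConsumer l : listeners) l.accept(current);
        return true;
    }

    // A refresh re-raises currency notifications unconditionally.
    void refresh() {
        for (IntConsumer l : listeners) l.accept(current);
    }
}
```

<p>With a single-item collection the move is a no-op, so handlers wired only to the changed event never run until a refresh forces the notification.</p>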
]]></content:encoded>
    </item>
    <item>
      <title>Robert&#39;s new arrival</title>
      <link>https://blog.richardfennell.net/posts/roberts-new-arrival/</link>
      <pubDate>Tue, 14 May 2013 11:03:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/roberts-new-arrival/</guid>
      <description>&lt;p&gt;Look what the stork delivered to Robert!&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_000521.jpg&#34;&gt;&lt;img alt=&#34;WP_000521&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_000521_thumb.jpg&#34; title=&#34;WP_000521&#34;&gt;&lt;/a&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_000520.jpg&#34;&gt;&lt;img alt=&#34;WP_000520&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/WP_000520_thumb.jpg&#34; title=&#34;WP_000520&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;He has bought himself a &lt;a href=&#34;http://www.aldebaran-robotics.com/en/&#34;&gt;robot&lt;/a&gt;, and brought it into the office to show everyone today.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Look what the stork delivered to Robert!</p>
<p><a href="/wp-content/uploads/sites/2/historic/WP_000521.jpg"><img alt="WP_000521" loading="lazy" src="/wp-content/uploads/sites/2/historic/WP_000521_thumb.jpg" title="WP_000521"></a><a href="/wp-content/uploads/sites/2/historic/WP_000520.jpg"><img alt="WP_000520" loading="lazy" src="/wp-content/uploads/sites/2/historic/WP_000520_thumb.jpg" title="WP_000520"></a></p>
<p>He has bought himself a <a href="http://www.aldebaran-robotics.com/en/">robot</a>, and brought it into the office to show everyone today.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Setting up a TFS 2012 proxy in a cross domain system</title>
      <link>https://blog.richardfennell.net/posts/setting-up-a-tfs-2012-proxy-in-a-cross-domain-system/</link>
      <pubDate>Fri, 10 May 2013 21:16:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-up-a-tfs-2012-proxy-in-a-cross-domain-system/</guid>
      <description>&lt;p&gt;Today I have been setting up a cross domain TFS proxy. The developers are in one domain and the TFS server in another. Given there is no trust between these domains &lt;a href=&#34;http://blogs.msdn.com/b/briankel/archive/2011/09/16/visual-studio-11-application-lifecycle-management-virtual-machine-and-hands-on-labs-demo-scripts.aspx&#34;&gt;you have to use a trick&lt;/a&gt; to get it to work.&lt;/p&gt;
&lt;p&gt;So I created a local user tfsproxy.local on both the TFS server and proxy with the same password on each. At the proxy end I made this local user a local admin.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today I have been setting up a cross domain TFS proxy. The developers are in one domain and the TFS server in another. Given there is no trust between these domains <a href="http://blogs.msdn.com/b/briankel/archive/2011/09/16/visual-studio-11-application-lifecycle-management-virtual-machine-and-hands-on-labs-demo-scripts.aspx">you have to use a trick</a> to get it to work.</p>
<p>So I created a local user tfsproxy.local on both the TFS server and proxy with the same password on each. At the proxy end I made this local user a local admin.</p>
<p>Next I ran the TFS 2012.2 wizard, setting the proxy account to the tfsproxy.local user. It all passed verification, but then I got an error</p>
<blockquote>
<p>TF400371: Failed to add the service account &lsquo;TFSPROXY\TFSProxy.local&rsquo; to Proxy Service Accounts Group. Details: TF14045: The identity with type &lsquo;System.Security.Principal.WindowsIdentity&rsquo; and identifier &lsquo;S-1-5-21-4198714966-1643845615-1961851592-1024&rsquo; could not be found..</p></blockquote>
<p>It seems this is a <a href="http://blogs.msdn.com/b/tfsao/archive/2013/04/15/tf400371-tf14045-configuring-proxy-in-an-untrusted-domain.aspx">known issue with TFS2012</a>. It is meant to be fixed in TFS2012.3, so I pulled down the <a href="http://blogs.msdn.com/b/bharry/archive/2013/05/07/visual-studio-2012-3-update-3-go-live-ctp-is-now-available.aspx">&lsquo;go live&rsquo; CTP</a> and installed this on the proxy. It made no difference; I assume it actually needs to be installed on the server end and not just the proxy, as this is where the user lookup occurs. However, I did not have access to do that upgrade today.</p>
<p>I was about to <a href="http://blogs.msdn.com/b/tfsao/archive/2013/04/15/tf400371-tf14045-configuring-proxy-in-an-untrusted-domain.aspx">follow the workaround</a> of removing the proxy from the domain, configuring it and then putting it back. But I then had an idea; the step it was failing on was granting rights, so I did it manually. On the TFS server end I added the tfsproxy.local user to the ‘Proxy Service Accounts Group’. Once this was done the configuration completed without error.</p>
<p>A quick test showed the proxy was working as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Interested in some tutor delivered training on TFS 2012 in the UK?</title>
      <link>https://blog.richardfennell.net/posts/interested-in-some-tutor-delivered-training-on-tfs-2012-in-the-uk/</link>
      <pubDate>Fri, 10 May 2013 09:23:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/interested-in-some-tutor-delivered-training-on-tfs-2012-in-the-uk/</guid>
      <description>&lt;p&gt;Interested in some tutor delivered training on TFS 2012? Well &lt;a href=&#34;http://myalmblog.com/&#34;&gt;Anthony Borton&lt;/a&gt;, the TFS trainer and ALM MVP, is over in the UK running his excellent set of the TFS/VS 2012 focused courses.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;TFS 2012 Configuration and Administration&lt;/strong&gt;&lt;br&gt;
10th – 12th June 2013 | &lt;a href=&#34;http://www.quicklearn.com/datasheets/ALMA12.pdf&#34;&gt;Course Outline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Software testing with Visual Studio 2012&lt;/strong&gt;&lt;br&gt;
13th – 14th June 2013 | &lt;a href=&#34;http://www.quicklearn.com/datasheets/ALMT12.pdf&#34;&gt;Course Outline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;TFS 2012 Developer Fundamentals&lt;/strong&gt;&lt;br&gt;
17th – 18th June 2013 | &lt;a href=&#34;http://www.quicklearn.com/datasheets/ALMD12.pdf&#34;&gt;Course Outline&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Interestingly these courses can be taken in the room with the trainer or via &lt;a href=&#34;http://www.quicklearn.com/rci.aspx&#34;&gt;Quicklearn’s Remote Classroom Instruction&lt;/a&gt; (RCI), so you can take the course in real time with the other students from the comfort of your own home/desk, but with all the benefits of a tutor classroom environment.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Interested in some tutor delivered training on TFS 2012? Well <a href="http://myalmblog.com/">Anthony Borton</a>, the TFS trainer and ALM MVP, is over in the UK running his excellent set of the TFS/VS 2012 focused courses.</p>
<ul>
<li><strong>TFS 2012 Configuration and Administration</strong><br>
10th – 12th June 2013 | <a href="http://www.quicklearn.com/datasheets/ALMA12.pdf">Course Outline</a></li>
<li><strong>Software testing with Visual Studio 2012</strong><br>
13th – 14th June 2013 | <a href="http://www.quicklearn.com/datasheets/ALMT12.pdf">Course Outline</a></li>
<li><strong>TFS 2012 Developer Fundamentals</strong><br>
17th – 18th June 2013 | <a href="http://www.quicklearn.com/datasheets/ALMD12.pdf">Course Outline</a></li>
</ul>
<p>Interestingly these courses can be taken in the room with the trainer or via <a href="http://www.quicklearn.com/rci.aspx">Quicklearn’s Remote Classroom Instruction</a> (RCI), so you can take the course in real time with the other students from the comfort of your own home/desk, but with all the benefits of a tutor classroom environment.</p>
<p>If you are interested in these courses and need UK billing, get in touch with <a href="mailto:lisa@blackmarble.co.uk">lisa@blackmarble.co.uk</a> for more details.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How healthy is my TFS server?</title>
      <link>https://blog.richardfennell.net/posts/how-healthy-is-my-tfs-server/</link>
      <pubDate>Wed, 08 May 2013 15:31:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-healthy-is-my-tfs-server/</guid>
      <description>&lt;p&gt;If you want to know the health of the TFS server there are a number of options from a full &lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=35773&#34;&gt;System Center MOM pack&lt;/a&gt; downwards. Good starting points are the &lt;a href=&#34;http://blogs.msdn.com/b/granth/archive/2009/02/03/announcing-tfs-performance-report-pack.aspx&#34;&gt;performance reports&lt;/a&gt; and &lt;a href=&#34;http://blogs.msdn.com/b/granth/archive/2010/07/12/administrative-report-pack-for-team-foundation-server-2010.aspx&#34;&gt;administrative report pack&lt;/a&gt; produced by &lt;a href=&#34;http://blogs.msdn.com/b/granth/&#34;&gt;Grant Holiday&lt;/a&gt;. Though the performance pack is designed for TFS 2008, the reports work on 2010 and 2012, but you do need to do a bit of editing.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;As the installation notes state, create a new shared data source called “TfsActivityReportDS”
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Set the connection string to: &lt;strong&gt;Data Source=[your SQL server];Initial Catalog=Tfs_[your TPC name]&lt;/strong&gt; – this is the big change, as this used to point to the tfs_ActivityLogging DB, which (as of TFS 2010) is rolled into your Team Project Collection DB, so you need to alter the connection string to match your TPC. Also note that if you use multiple TPCs you will need multiple data sources and reports.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you want to know the health of the TFS server there are a number of options from a full <a href="http://www.microsoft.com/en-us/download/details.aspx?id=35773">System Center MOM pack</a> downwards. Good starting points are the <a href="http://blogs.msdn.com/b/granth/archive/2009/02/03/announcing-tfs-performance-report-pack.aspx">performance reports</a> and <a href="http://blogs.msdn.com/b/granth/archive/2010/07/12/administrative-report-pack-for-team-foundation-server-2010.aspx">administrative report pack</a> produced by <a href="http://blogs.msdn.com/b/granth/">Grant Holiday</a>. Though the performance pack is designed for TFS 2008, the reports work on 2010 and 2012, but you do need to do a bit of editing.</p>
<ol>
<li>As the installation notes state, create a new shared data source called “TfsActivityReportDS”
<ol>
<li>
<p>Set the connection string to: <strong>Data Source=[your SQL server];Initial Catalog=Tfs_[your TPC name]</strong> – this is the big change, as this used to point to the tfs_ActivityLogging DB, which (as of TFS 2010) is rolled into your Team Project Collection DB, so you need to alter the connection string to match your TPC. Also note that if you use multiple TPCs you will need multiple data sources and reports.</p>
</li>
<li>
<p>Credentials: domain\user that has access to the Tfs_[TPC Name] database</p>
</li>
<li>
<p>Use as windows credentials when connecting to the data source</p>
</li>
<li>
<p>Once uploaded, each report needs to be edited via the web manage option to change its Data Source to match the newly created source</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_106.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_106.png" title="image"></a></p>
</li>
<li>
<p>You also need to edit each report in the pack via Report Builder, as the SQL queries all contain the full path. For each dataset (and each report can have a few) you need to edit the query to contain only the table name, not the whole SQL path</p>
<p>i.e. from <strong>TfsActivityLogging.dbo.tbl_Command</strong> to <strong>tbl_Command</strong></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_107.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_107.png" title="image"></a></p>
</li>
</ol>
</li>
</ol>
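<p>Since (as of TFS 2010) each Team Project Collection has its own database, a server with several TPCs needs one data source per collection, differing only in the Initial Catalog. A minimal sketch of deriving those connection strings (the helper name is ours, purely illustrative; the snippet is Java to match the other code in this feed):</p>

```java
// Hypothetical helper: builds the shared-data-source connection string
// for a given Team Project Collection, one per TPC you report on.
class TfsReportDataSource {
    static String connectionString(String sqlServer, String tpcName) {
        // Mirrors the pattern above: Data Source=[server];Initial Catalog=Tfs_[TPC]
        return "Data Source=" + sqlServer + ";Initial Catalog=Tfs_" + tpcName;
    }
}
```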
<p>Once this is done most of the reports should be working, giving a good insight into the performance of your server.</p>
<p>Some reports, such as Source Control Requests and Top Users Bypassing Proxies, take a bit more SQL query fiddling.</p>
<ul>
<li>
<p>Server Status - Top Users Bypassing Proxies – you need to alter the Users part of the query to something like the following (note the hard-coded table path; I am sure we could do better, but I don’t usually need this report as I have few proxies, so I have not put much effort into it)</p>
<blockquote>
<p>Users(<br>
        ID,<br>
        UserName,<br>
        FullyQualifiedAlias,<br>
        eMail<br>
    ) AS<br>
    (</p>
<p>        SELECT [personSK]<br>
              ,[Name]<br>
        ,[Domain] + '\' + [Alias] as FullyQualifiedAlias<br>
              ,[Email]<br>
          FROM [Tfs_2012_Warehouse].[dbo].[DimPerson] with (nolock)<br>
    )</p></blockquote>
</li>
<li>
<p>Source Control Requests – runs from a straight web service endpoint, so you need to edit the Url it targets to something like</p>
</li>
</ul>
<blockquote>
<p><a href="http://localhost:8080/versioncontrol/v3.0/administration.asmx">http://localhost:8080/versioncontrol/v3.0/administration.asmx</a></p></blockquote>
<p>Unlike the performance reports, the admin report pack is designed for TFS 2010/2012, so it works once you make sure the reports are connected to the correct shared data sources.</p>
<p>However, remember the new <a href="http://blogs.msdn.com/b/granth/archive/2013/02/13/tfs2012-new-tools-for-tfs-administrators.aspx">web based Admin Tools on TFS 2012</a> actually address many of these areas out of the box.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting going with the TFS Java API</title>
      <link>https://blog.richardfennell.net/posts/getting-going-with-the-tfs-java-api/</link>
      <pubDate>Sat, 04 May 2013 08:53:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-going-with-the-tfs-java-api/</guid>
      <description>&lt;p&gt;If you are using the &lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=22616&#34;&gt;TFS 2012 Java API&lt;/a&gt; it is important you read the release notes. It is not enough to just reference the &lt;strong&gt;com.microsoft.tfs.sdk-11.0.0.jar&lt;/strong&gt; file in your &lt;strong&gt;classpath&lt;/strong&gt; as you might expect. You also have to pass a Java system property that associates &lt;strong&gt;com.microsoft.tfs.jni.native.base-directory&lt;/strong&gt; with the location of the native library files that provide platform-specific implementations for method calls. The command line for this takes the form&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are using the <a href="http://www.microsoft.com/en-us/download/details.aspx?id=22616">TFS 2012 Java API</a> it is important you read the release notes. It is not enough to just reference the <strong>com.microsoft.tfs.sdk-11.0.0.jar</strong> file in your <strong>classpath</strong> as you might expect. You also have to pass a Java system property that associates <strong>com.microsoft.tfs.jni.native.base-directory</strong> with the location of the native library files that provide platform-specific implementations for method calls. The command line for this takes the form</p>
<blockquote>
<p>java.exe -D&quot;com.microsoft.tfs.jni.native.base-directory=C:\Users\Username\YourApplication\native&quot;</p></blockquote>
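<p>As a sketch of an alternative to the -D flag, you should be able to set the same system property from code, assuming (and this is our assumption, not something the release notes prescribe) that it runs before any SDK class loads native code. The class and path below are placeholders:</p>

```java
// Hypothetical bootstrap: set the native-library property programmatically
// before any TFS SDK type is touched.
class TfsSdkBootstrap {
    static final String NATIVE_DIR_PROPERTY =
            "com.microsoft.tfs.jni.native.base-directory";

    static void configureNativeDir(String nativeDir) {
        // Must run before the SDK's native loader reads the property.
        System.setProperty(NATIVE_DIR_PROPERTY, nativeDir);
    }

    public static void main(String[] args) {
        configureNativeDir("C:\\Users\\Username\\YourApplication\\native");
        // ... now create your TFSTeamProjectCollection etc.
    }
}
```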
<p>If you don’t set this property you get an exception similar to</p>
<blockquote>
<p>Exception in thread &ldquo;main&rdquo; java.lang.UnsatisfiedLinkError: com.microsoft.tfs.jni.internal.platformmisc.NativePlatformMisc.nativeGetEnvironmentVariable(Ljava/lang/String;)Ljava/lang/String;</p></blockquote>
<p>Now setting this property on the command line is all well and good, but how do you do this if you are working in Eclipse?</p>
<p>The answer is you set the argument via the Run &gt; Run Configuration. Select your configuration and enter the VM argument as shown below.</p>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_105.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_105.png" title="image"></a></p></blockquote>
<p>Once this is set you can run and debug your application inside Eclipse.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Accessing TFS work item tags via the API</title>
      <link>https://blog.richardfennell.net/posts/accessing-tfs-work-item-tags-via-the-api/</link>
      <pubDate>Fri, 03 May 2013 22:35:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/accessing-tfs-work-item-tags-via-the-api/</guid>
      <description>&lt;p&gt;With &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2013/02/03/tfs2012-qu2-new-feature-work-item-tagging.aspx&#34;&gt;TFS 2012.2 Microsoft have added tags to work items&lt;/a&gt;. These provide a great way to add custom information to work items without the need to customise the process template to add custom fields. This is important for users of the hosted &lt;a href=&#34;http://tfs.visualstudio.com&#34;&gt;http://tfs.visualstudio.com&lt;/a&gt; as this does not, at this time, allow any process customisation.&lt;/p&gt;
&lt;p&gt;It is easy to add tags to any work item via the TFS web client, just press the Add.. button and either select an existing tag or add a new one. In the following PBI work item example I have added two tags, Tag1 and Tag2.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2013/02/03/tfs2012-qu2-new-feature-work-item-tagging.aspx">TFS 2012.2 Microsoft have added tags to work items</a>. These provide a great way to add custom information to work items without the need to customise the process template to add custom fields. This is important for users of the hosted <a href="http://tfs.visualstudio.com">http://tfs.visualstudio.com</a> as this does not, at this time, allow any process customisation.</p>
<p>It is easy to add tags to any work item via the TFS web client, just press the Add.. button and either select an existing tag or add a new one. In the following PBI work item example I have added two tags, Tag1 and Tag2.</p>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_103.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_103.png" title="image"></a></p></blockquote>
<p>However, the problem with tags, at present, is that they can only be used as filters within the result of a work item query in the web client, as shown below.</p>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_104.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_104.png" title="image"></a></p></blockquote>
<p>They are not available inside work item queries and are not published to the TFS warehouse/cube for reporting purposes. Hopefully these limitations will be addressed in the future, but not today.</p>
<p>Given all this, I was recently asked by a client if they could use tags to mark PBI work items scheduled for a given release with a view to using this information to produce release notes. Obviously given the current limitations this cannot be done via work item queries or reporting, but you can use the TFS 2012.2 API to do this easily in .NET or Java.</p>
<p>The tags are stored as a semicolon-separated list in a string field on the work item. In C# there is a property in the API to get the tags …</p>
<blockquote>
<p>using System;<br>
using Microsoft.TeamFoundation.Client;<br>
using Microsoft.TeamFoundation.WorkItemTracking.Client;<br>
using System.Linq;</p>
<p>namespace BlackMarble<br>
{<br>
    public class TFSDemo<br>
    {<br>
        public static string[] GetTagsForWorkItem(Uri tfsUri, int workItemId)<br>
        {<br>
            // get a reference to the team project collection<br>
            using (var projectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(tfsUri))<br>
            {<br>
                // get a reference to the work item tracking service<br>
                var workItemStore = projectCollection.GetService<WorkItemStore>();</p>
<p>                // and get the work item<br>
                var wi = workItemStore.GetWorkItem(workItemId);<br>
                return wi.Tags.Split(';');<br>
            }<br>
        }<br>
    }<br>
}</p></blockquote>
<p>but in Java you have to get the field yourself …</p>
<blockquote>
<p>import java.net.URI;<br>
import java.net.URISyntaxException;</p>
<p>import com.microsoft.tfs.core.TFSTeamProjectCollection;<br>
import com.microsoft.tfs.core.clients.workitem.WorkItem;<br>
import com.microsoft.tfs.core.clients.workitem.WorkItemClient;<br>
import com.microsoft.tfs.core.httpclient.Credentials;<br>
import com.microsoft.tfs.core.httpclient.DefaultNTCredentials;</p>
<p>public class TFSDemo {<br>
    <br>
      public static String[] GetTagsForWorkItem(URI tfsUri, int workItemId) <br>
      {<br>
          // get a reference to the team project collection<br>
          Credentials credentials = new DefaultNTCredentials();<br>
         <br>
          TFSTeamProjectCollection projectCollection = new TFSTeamProjectCollection(tfsUri, credentials);<br>
         <br>
          // get a reference to the work item tracking service<br>
          WorkItemClient wic = projectCollection.getWorkItemClient();<br>
         <br>
          // get the work item and return the tags<br>
          WorkItem wi = wic.getWorkItemByID(workItemId);<br>
         <br>
          // there is no method for the tags, but can pull it out of the fields<br>
          return wi.getFields().getField("Tags").getValue().toString().split(";");<br>
      }</p>
<p>}<br>
        </p></blockquote>
<p>Given these methods it is possible to write a tool that selects matching work items, thus allowing you to generate any output you require.</p>
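<p>As a sketch of such a tool's core selection step (the helper name and shape are ours, purely illustrative; the raw tag strings are assumed to come from API calls like those above):</p>

```java
import java.util.*;

// Given each work item's raw Tags value (a ";" separated string),
// keep the IDs of the items carrying the release marker tag.
class TagFilter {
    static List<Integer> itemsWithTag(Map<Integer, String> tagsById, String wanted) {
        List<Integer> matches = new ArrayList<>();
        for (Map.Entry<Integer, String> e : tagsById.entrySet()) {
            for (String tag : e.getValue().split(";")) {
                if (tag.trim().equalsIgnoreCase(wanted)) {
                    matches.add(e.getKey());
                    break; // one hit per work item is enough
                }
            }
        }
        Collections.sort(matches); // stable order for the release notes
        return matches;
    }
}
```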
<p><strong>Update 14 May 2013</strong></p>
<p>I have just had it confirmed that, at present, there is no API to write tags. I had not tried, as I only needed a read-only solution. Keep an eye open for future releases of the SDKs for a write method.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Workaround for TF900546: An unexpected error occurred while running the RunTests activity</title>
      <link>https://blog.richardfennell.net/posts/workaround-for-tf900546-an-unexpected-error-occurred-while-running-the-runtests-activity/</link>
      <pubDate>Wed, 01 May 2013 15:01:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/workaround-for-tf900546-an-unexpected-error-occurred-while-running-the-runtests-activity/</guid>
      <description>&lt;h3 id=&#34;the-problem&#34;&gt;The problem&lt;/h3&gt;
&lt;p&gt;I have been working on a project that contains SharePoint 2010 WSP packages and a MSI distributed WPF application. These projects are all in a single solution with their unit tests. I have been getting a problem with our TFS 2012.2 build system, all the projects compile but at the test point I get the unhelpful&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;TF900546: An unexpected error occurred while running the RunTests activity: &amp;lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&amp;rsquo;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h3 id="the-problem">The problem</h3>
<p>I have been working on a project that contains SharePoint 2010 WSP packages and a MSI distributed WPF application. These projects are all in a single solution with their unit tests. I have been getting a problem with our TFS 2012.2 build system, all the projects compile but at the test point I get the unhelpful</p>
<blockquote>
<p>TF900546: An unexpected error occurred while running the RunTests activity: &lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&rsquo;.</p></blockquote>
<p>When I remoted onto the build box, loaded the solution in Visual Studio (which was luckily installed on the build box) and tried to run the tests in the test explorer I got</p>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_101.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_101.png" title="image"></a></p></blockquote>
<p>and the event log showed</p>
<blockquote>
<p>Application: vstest.executionengine.x86.exe<br>
Framework Version: v4.0.30319<br>
Description: The process was terminated due to an unhandled exception.<br>
Exception Info: System.InvalidProgramException<br>
Stack:<br>
   at System.ServiceModel.ServiceHost..ctor(System.Object, System.Uri[])<br>
   at Microsoft.VisualStudio.TestPlatform.TestExecutor.TestExecutorMain.Run(System.String[])<br>
   at Microsoft.VisualStudio.TestPlatform.TestExecutor.ServiceMain.Main(System.String[])</p></blockquote>
<p>and</p>
<blockquote>
<p>Faulting application name: vstest.executionengine.x86.exe, version: 11.0.60315.1, time stamp: 0x5142b4b6<br>
Faulting module name: KERNELBASE.dll, version: 6.1.7601.18015, time stamp: 0x50b83c8a<br>
Exception code: 0xe0434352<br>
Fault offset: 0x0000c41f<br>
Faulting process id: 0x700<br>
Faulting application start time: 0x01ce4663824905bd<br>
Faulting application path: C:\PROGRAM FILES (X86)\MICROSOFT VISUAL STUDIO 11.0\COMMON7\IDE\COMMONEXTENSIONS\MICROSOFT\TESTWINDOW\vstest.executionengine.x86.exe<br>
Faulting module path: C:\Windows\syswow64\KERNELBASE.dll<br>
Report Id: c2689d15-b256-11e2-80aa-00155d0a5201</p></blockquote>
<p>Next I tried creating a simple new test project with one unit test; this failed with the same error.</p>
<p>As all the tests work locally on my development PC it was all pointing to a corrupted install of Visual Studio (and/or the components installed as part of TFS build) on the build box. It should be noted that this was a build box with a good number of additional packages installed to support SharePoint, so patching order could be an issue.</p>
<h3 id="the-workaround">The workaround</h3>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rhancock/">Robert</a>, another of our ALM consultants, said he had seen a similar problem at a client and suggested changing the test runner.</p>
<p>So in the build definition &gt; Process &gt; Basic &gt; Automated Tests, I edited the test run settings and changed to the MSTest VS2010 runner from the default.</p>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_102.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_102.png" title="image"></a></p></blockquote>
<p>Once this was done my tests then ran. However, I then got a publishing error</p>
<blockquote>
<p>API restriction: The assembly &lsquo;file:///C:\Builds\2\BMCT\AppBox.Main.CI\Binaries\_PublishedWebsites\WebServiceTestClient\bin\WebServiceTestClient.dll&rsquo; has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.</p></blockquote>
<p>The problem was the build was set to the default test search criteria of *test*. This meant it picked a project it should not have, due to a poor naming convention. As soon as I changed the filter to *.tests all was OK.</p>
<p>I did retry the VS2012 test runner after fixing this naming issue, it had no effect.</p>
<p>I know I need to sort out (rebuild) this build box, but now is not the time; I have a working solution that will do for now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Follow the Yorkshire Global Windows Azure Event</title>
      <link>https://blog.richardfennell.net/posts/follow-the-yorkshire-global-windows-azure-event/</link>
      <pubDate>Sat, 27 Apr 2013 10:48:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-the-yorkshire-global-windows-azure-event/</guid>
      <description>&lt;p&gt;Today Black Marble is hosting the Yorkshire &lt;a href=&#34;http://bit.ly/BMGWAB&#34;&gt;Global Windows Azure Bootcamp&lt;/a&gt; event.&lt;/p&gt;
&lt;p&gt;To see what is happening check out the photos on our &lt;a href=&#34;http://bit.ly/BM_GWAB&#34;&gt;Facebook gallery&lt;/a&gt; and the #GlobalWindowsAzure Twitter hashtag&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Global Azure Bootcamp Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.blackmarble.co.uk/images/events/AzBootcamp2013.png&#34;&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_100.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_100.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today Black Marble is hosting the Yorkshire <a href="http://bit.ly/BMGWAB">Global Windows Azure Bootcamp</a> event.</p>
<p>To see what is happening check out the photos on our <a href="http://bit.ly/BM_GWAB">Facebook gallery</a> and the #GlobalWindowsAzure Twitter hashtag</p>
<p><img alt="Global Azure Bootcamp Logo" loading="lazy" src="http://www.blackmarble.co.uk/images/events/AzBootcamp2013.png"><a href="/wp-content/uploads/sites/2/historic/image_100.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_100.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting SQL 2012 SSIS packages built on TFS 2012.2</title>
      <link>https://blog.richardfennell.net/posts/getting-sql-2012-ssis-packages-built-on-tfs-2012-2/</link>
      <pubDate>Wed, 24 Apr 2013 16:36:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-sql-2012-ssis-packages-built-on-tfs-2012-2/</guid>
      <description>&lt;p&gt;I have been trying to get SQL 2012 SSIS packages built on a TFS 2012.2 build system. As many people have pointed out, the problem is that you cannot build SQL SSIS packages with MSBuild. This means you have to resort to calling Visual Studio DevEnv.exe from within your build.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://geekswithblogs.net/jakob/archive/2010/05/14/building-visual-studio-setup-projects-with-tfs-2010-team-build.aspx&#34;&gt;Jakob Ehn did a great post on this subject&lt;/a&gt;, but it is a little dated now due to the release of VS 2012 and SQL 2012&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to get SQL 2012 SSIS packages built on a TFS 2012.2 build system. As many people have pointed out, the problem is that you cannot build SQL SSIS packages with MSBuild. This means you have to resort to calling Visual Studio DevEnv.exe from within your build.</p>
<p><a href="http://geekswithblogs.net/jakob/archive/2010/05/14/building-visual-studio-setup-projects-with-tfs-2010-team-build.aspx">Jakob Ehn did a great post on this subject</a>, but it is a little dated now due to the release of VS 2012 and SQL 2012</p>
<h3 id="the-command-line">The command line</h3>
<p>But before we get to TFS, let us sort out the actual command line we need to run. Assuming VS2012 is in use, the basic command-line build for a solution will be</p>
<blockquote>
<p><em>“C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv” “MySolution.sln” /build &ldquo;Release|Any CPU&rdquo;</em></p></blockquote>
<p>If your solution only contains SSIS packages then this command line might be OK. However, you might want to build just a single SSIS project within a larger solution. In this case you might use</p>
<blockquote>
<p><em>“C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv” “MySolution.sln” /build &ldquo;Release|Any CPU&rdquo; /project “SSISBits\SSISBis.dtproj”</em></p></blockquote>
<p>To work out the command line you need, first make sure VS2012 and the Business Intelligence tools are installed on your build box. Once this is done you can try the command line. I decided for my project that I would create a second solution file in the root of the source code that contained just my two SSIS projects, making the command line easier (basically one solution for the SSIS packages and another for everything else).</p>
<p>So I ran the command line</p>
<blockquote>
<p><em>“C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv” “MySolution.sln” /build &ldquo;Release|Any CPU&rdquo;</em></p></blockquote>
<p>and got</p>
<blockquote>
<p><em>Microsoft (R) Microsoft Visual Studio 2012 Version 11.0.60315.1.<br>
Copyright (C) Microsoft Corp. All rights reserved.<br>
Error: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED))<br>
Error: Catastrophic failure (Exception from HRESULT: 0x8000FFFF (E_UNEXPECTED))<br>
========== Build: 0 succeeded or up-to-date, 2 failed, 0 skipped ==========</em></p></blockquote>
<p>Not good. So I checked whether the same solution built OK when loaded into the same copy of Visual Studio 2012.2, and it did.</p>
<p>So it seems there is an issue with the command-line build of SSIS packages in VS2012. A quick search showed it was a <a href="http://connect.microsoft.com/SQLServer/feedback/details/781872/fail-to-build-ssis-ssas-projects-via-vs2012-command-line-devenv-exe-with-ssdt-bi-tool-installed">logged issue on Microsoft Connect</a>. Luckily a workaround was mentioned: use the VS2010 version of the tools. So my command line became</p>
<blockquote>
<p><em>“C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv” “MySolution.sln” /build &ldquo;Release|Any CPU&rdquo;</em></p></blockquote>
<p>To try this I had to install the SQL Data Tools from my SQL 2012 ISO (not the <a href="http://blogs.msdn.com/b/ssdt/archive/2012/12/13/available-today-ssdt-december-2012.aspx">SSDT tools from the web</a>, as these free ones don’t have the BI features). Once this was installed I could issue my command line and it all built OK.</p>
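<p>If you want to drive this call from a script of your own rather than typing it by hand, the command line can be assembled programmatically. This is a minimal Python sketch, not part of the build template; the function name is purely illustrative and the install path is the default VS2010 location used above, so adjust it for your box:</p>

```python
# Assemble the DevEnv command line as an argument list, so it can be
# passed to subprocess.run() without worrying about quoting spaces.
# The install path is the default VS2010 location; adjust for your box.
import subprocess

def devenv_args(solution, config="Release|Any CPU", project=None):
    devenv = r"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv"
    args = [devenv, solution, "/build", config]
    if project:
        # Optionally build a single project within the solution
        args += ["/project", project]
    return args

# On the build box you would then run, for example:
# subprocess.run(devenv_args("MySolution.sln"), check=True)
```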
<p>So I knew I had a working command line. I put the same version of the VS2010 SSDT tools on my TFS build box and moved on to the build process.</p>
<h3 id="the-tfs-build-process">The TFS Build Process</h3>
<p>So now I had the command line, I could apply this knowledge to the <a href="http://geekswithblogs.net/jakob/archive/2010/05/14/building-visual-studio-setup-projects-with-tfs-2010-team-build.aspx">process Jakob outlined</a>. There are two basic steps:</p>
<ol>
<li>Run the command line build – this was basically the same</li>
<li>Find the files created and copy them to the drops location – the change here is that the old post mentions .MSI files; now we are looking for .ISPAC files</li>
</ol>
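<p>The second step can be sketched as a script too. This is a minimal, hypothetical Python sketch of ‘find the .ISPAC files and copy them to the drops location’; in the real build this is done by the workflow activities, and the folder names here are made up:</p>

```python
# Find every .ispac file under the build output folder and copy it to
# the drop location, preserving the relative folder structure.
import shutil
from pathlib import Path

def copy_ispac_files(binaries_dir, drop_dir):
    src, dst = Path(binaries_dir), Path(drop_dir)
    copied = []
    for ispac in src.rglob("*.ispac"):
        target = dst / ispac.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(ispac, target)
        copied.append(str(target))
    return copied
```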
<p>As I had decided to have two solutions within my build, I used an if block (based on a solution-name convention) to choose whether to do an MSBuild or a DevEnv build. So my process flow for the build phase was:</p>
<blockquote>
<p> <a href="/wp-content/uploads/sites/2/historic/image_98.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_98.png" title="image"></a></p></blockquote>
<p>Also I had to edit the xcopy block to look for the .ISPAC file extension i.e.</p>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_99.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_99.png" title="image"></a></p></blockquote>
<p>Other than these changes the template was exactly as <a href="http://geekswithblogs.net/jakob/archive/2010/05/14/building-visual-studio-setup-projects-with-tfs-2010-team-build.aspx">Jakob described</a> – even down to using VS2010!</p>
<h3 id="summary">Summary</h3>
<p>So once all this was done I had a build that created my SSIS packages.</p>
<p>This all seems a lot of work; life would be so much easier if SSDT would either</p>
<ul>
<li>Work properly under VS2012</li>
<li>Or even better support MSBuild!</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Upgraded to BlogEngine.NET 2.8</title>
      <link>https://blog.richardfennell.net/posts/upgraded-to-blogengine-net-2-8/</link>
      <pubDate>Wed, 24 Apr 2013 10:16:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgraded-to-blogengine-net-2-8/</guid>
      <description>&lt;p&gt;I have just upgraded our Blog Server from &lt;a href=&#34;http://www.dotnetblogengine.net/post/BlogEngineNET-28-Released.aspx&#34;&gt;BlogEngine.NET 2.7 to 2.8&lt;/a&gt;. All seems to have gone well; basically it is just file copies as there is no DB schema change, so…&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;backup your blogs folder&lt;/li&gt;
&lt;li&gt;delete its contents&lt;/li&gt;
&lt;li&gt;copy in the &lt;a href=&#34;https://blogengine.codeplex.com/releases/view/105425&#34;&gt;new release&lt;/a&gt; from the zip&lt;/li&gt;
&lt;li&gt;fix the SQL connection string&lt;/li&gt;
&lt;li&gt;copy around a few files as detailed in the &lt;a href=&#34;http://blogengine.codeplex.com/wikipage?title=Upgrading%20to%20BlogEngine.NET%202.8&amp;amp;referringTitle=Installation&#34;&gt;release notes&lt;/a&gt;, basically any customisation you have done,&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That seems to be it; it is now just a case of swapping themes to get the new look, if you want it&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just upgraded our Blog Server from <a href="http://www.dotnetblogengine.net/post/BlogEngineNET-28-Released.aspx">BlogEngine.NET 2.7 to 2.8</a>. All seems to have gone well; basically it is just file copies as there is no DB schema change, so…</p>
<ol>
<li>backup your blogs folder</li>
<li>delete its contents</li>
<li>copy in the <a href="https://blogengine.codeplex.com/releases/view/105425">new release</a> from the zip</li>
<li>fix the SQL connection string</li>
<li>copy around a few files as detailed in the <a href="http://blogengine.codeplex.com/wikipage?title=Upgrading%20to%20BlogEngine.NET%202.8&amp;referringTitle=Installation">release notes</a>, basically any customisation you have done,</li>
</ol>
<p>That seems to be it; it is now just a case of swapping themes to get the new look, if you want it</p>
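<p>For what it is worth, the file-copy part of those steps could be scripted. This is a minimal Python sketch with made-up folder names; you would still need to fix the connection string and re-apply any customisations by hand:</p>

```python
# Sketch of the upgrade-by-file-copy steps: back up the blog folder,
# empty it, then copy in the contents of the extracted new release.
# All folder names passed in are hypothetical examples.
import shutil
from pathlib import Path

def upgrade_blog(blog_dir, new_release_dir, backup_dir):
    blog = Path(blog_dir)
    # 1. back up the existing blog folder
    shutil.copytree(blog, backup_dir)
    # 2. delete its contents
    for item in blog.iterdir():
        if item.is_dir():
            shutil.rmtree(item)
        else:
            item.unlink()
    # 3. copy in the new release
    shutil.copytree(new_release_dir, blog, dirs_exist_ok=True)
```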
]]></content:encoded>
    </item>
    <item>
      <title>Thanks for attending my webinar on lab management</title>
      <link>https://blog.richardfennell.net/posts/thanks-for-attending-my-webinar-on-lab-management/</link>
      <pubDate>Tue, 23 Apr 2013 11:58:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thanks-for-attending-my-webinar-on-lab-management/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my webinar session today on TFS Lab Management. I hope the audio issues were not too much of a problem and you found the session useful. We are looking into the causes of the audio problem so hopefully the &lt;a href=&#34;http://blackmarble.com/SectionDisplay.aspx?name=Events&#34;&gt;next webinar&lt;/a&gt; will not need people dialling in via the phone unless they want to.&lt;/p&gt;
&lt;p&gt;A number of people asked for the slides, &lt;a href=&#34;https://github.com/rfennell/Presentations/blob/main/Tips%20for%20Visual%20Studio%20Lab%20Management.pptx&#34;&gt;you can find a copy of them here&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my webinar session today on TFS Lab Management. I hope the audio issues were not too much of a problem and you found the session useful. We are looking into the causes of the audio problem so hopefully the <a href="http://blackmarble.com/SectionDisplay.aspx?name=Events">next webinar</a> will not need people dialling in via the phone unless they want to.</p>
<p>A number of people asked for the slides, <a href="https://github.com/rfennell/Presentations/blob/main/Tips%20for%20Visual%20Studio%20Lab%20Management.pptx">you can find a copy of them here</a>.</p>
<p>As I mentioned in the session if you want a look at Lab Management you can have a go yourself using the <a href="http://aka.ms/VS11ALMVM">HOL in Brian Keller’s TFS 2012 VM</a>. Or <a href="http://tinyurl.com/BMTFSLab">watch the video I did at Techday 2010, an end to end demo of Lab Management</a>.</p>
<p>I also mentioned a couple of Microsoft case studies that might be of interest</p>
<ul>
<li><a href="http://www.microsoft.com/casestudies/Windows-Server-2008-R2/ING-DIRECT-Australia/ING-DIRECT-accelerates-innovation-with-one-click-provisioning-copies-of-the-bank/710000000278">Ing Direct (Australia) Bank in a Box</a></li>
<li><a href="http://blackmarble.com/casestudypdf/LabManagementCase.pdf">Implementing Lab Management with SC-VMM 2012</a></li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Lab Management webinar next week</title>
      <link>https://blog.richardfennell.net/posts/lab-management-webinar-next-week/</link>
      <pubDate>Thu, 18 Apr 2013 16:53:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lab-management-webinar-next-week/</guid>
      <description>&lt;p&gt;If you are interested in finding out more about TFS Lab Management why not come to the &lt;a href=&#34;http://blackmarble.com/events.aspx?event=Tips%20for%20Visual%20Studio%20Lab%20Management%20(Black%20Marble%20Byte%20Size%20Webinar)&#34;&gt;Black Marble Bite Size webinar&lt;/a&gt; next Tuesday (23rd April). I will be giving a basic overview of the product and discussing some of our experiences implementing it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are interested in finding out more about TFS Lab Management why not come to the <a href="http://blackmarble.com/events.aspx?event=Tips%20for%20Visual%20Studio%20Lab%20Management%20(Black%20Marble%20Byte%20Size%20Webinar)">Black Marble Bite Size webinar</a> next Tuesday (23rd April). I will be giving a basic overview of the product and discussing some of our experiences implementing it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Our upgrade to TFS 2012.2 has worked OK</title>
      <link>https://blog.richardfennell.net/posts/our-upgrade-to-tfs-2012-2-has-worked-ok/</link>
      <pubDate>Thu, 18 Apr 2013 14:57:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/our-upgrade-to-tfs-2012-2-has-worked-ok/</guid>
      <description>&lt;p&gt;I have mentioned in &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/02/07/More-in-rights-being-stripped-for-the-team-project-contributors-group-in-TFS-2012-when-QU1-applied-and-how-to-sort-it.aspx&#34;&gt;past posts the issues we had doing our first quarterly update for TFS 2012&lt;/a&gt;. Well today we had scheduled our upgrade to &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/04/04/vs-tfs-2012-2-update-2-released-today.aspx&#34;&gt;2012.2&lt;/a&gt; and I am pleased to say it all seems to have worked.&lt;/p&gt;
&lt;p&gt;Unlike the last upgrade, this time we were doing nothing complex such as moving DB tier SQL instances; so it was a straight upgrade of a dual tier TFS 2012.1 instance with the DB being stored on a SQL2012 Availability Group (in previous updates you had to remove the DBs from the availability group for the update, with update 2 this is no longer required).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have mentioned in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/02/07/More-in-rights-being-stripped-for-the-team-project-contributors-group-in-TFS-2012-when-QU1-applied-and-how-to-sort-it.aspx">past posts the issues we had doing our first quarterly update for TFS 2012</a>. Well today we had scheduled our upgrade to <a href="http://blogs.msdn.com/b/bharry/archive/2013/04/04/vs-tfs-2012-2-update-2-released-today.aspx">2012.2</a> and I am pleased to say it all seems to have worked.</p>
<p>Unlike the last upgrade, this time we were doing nothing complex such as moving DB tier SQL instances; so it was a straight upgrade of a dual tier TFS 2012.1 instance with the DB being stored on a SQL2012 Availability Group (in previous updates you had to remove the DBs from the availability group for the update, with update 2 this is no longer required).</p>
<p>So we ran the EXE and all the files were copied over OK. When we got to the verify stage of the wizard we had expected no issues, but the tool reported problems with the server’s HTTPS URL. A quick check showed the issue was that the server had the TFS OData service bound to HTTPS on port 443, but using a different IP address to that used by TFS itself. As soon as this web site was stopped the wizard passed verification and the upgrade proceeded without any errors.</p>
<p>So it would seem that the verification does a rather basic check to see if port 443 is used on any IP address on the server, not just the ones being used by TFS as identified via either IP address or host header bindings.</p>
<p>The only other thing we have had to do is upgrade <a href="http://pascoal.net/2013/04/team-foundation-task-board-enhancer-version-2-6-2-released-for-update-2-only/">Tiago’s Team Foundation Task Board Enhancer</a>, without the upgrade the previous version of this extension did not work.</p>
<p>So not too bad an experience.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Great event in Antwerp</title>
      <link>https://blog.richardfennell.net/posts/great-event-in-antwerp/</link>
      <pubDate>Thu, 18 Apr 2013 12:46:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/great-event-in-antwerp/</guid>
      <description>&lt;p&gt;I had a great time yesterday at the &lt;a href=&#34;http://www.visugday.be/&#34;&gt;VISUG Conference in Antwerp&lt;/a&gt;; thanks to everyone involved in the event.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_97.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_97.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The ALM track organiser &lt;a href=&#34;http://intovsts.net/&#34;&gt;Pieter&lt;/a&gt; and all the speakers, myself, &lt;a href=&#34;http://woodwardweb.com&#34;&gt;Martin&lt;/a&gt; and &lt;a href=&#34;http://www.teamsystempro.com&#34;&gt;Neno&lt;/a&gt;, were really pleased to see how full our room was all day. There certainly seems to be more interest in ALM than ever before.&lt;/p&gt;
&lt;p&gt;If you want to find out more on the areas of TFS I was talking about, i.e. connecting from environments other than Visual Studio, why not look at a very similar session I recorded last year; &lt;a href=&#34;http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone&#34;&gt;it can be found on Channel9&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had a great time yesterday at the <a href="http://www.visugday.be/">VISUG Conference in Antwerp</a>; thanks to everyone involved in the event.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_97.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_97.png" title="image"></a></p>
<p>The ALM track organiser <a href="http://intovsts.net/">Pieter</a> and all the speakers, myself, <a href="http://woodwardweb.com">Martin</a> and <a href="http://www.teamsystempro.com">Neno</a>, were really pleased to see how full our room was all day. There certainly seems to be more interest in ALM than ever before.</p>
<p>If you want to find out more on the areas of TFS I was talking about, i.e. connecting from environments other than Visual Studio, why not look at a very similar session I recorded last year; <a href="http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone">it can be found on Channel9</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Error TF400129: Verifying that the team project collection has space for new system fields when upgrading TFS to 2012.2</title>
      <link>https://blog.richardfennell.net/posts/error-tf400129-verifying-that-the-team-project-collection-has-space-for-new-system-fields-when-upgrading-tfs-to-2012-2/</link>
      <pubDate>Thu, 11 Apr 2013 15:02:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/error-tf400129-verifying-that-the-team-project-collection-has-space-for-new-system-fields-when-upgrading-tfs-to-2012-2/</guid>
      <description>&lt;p&gt;Whilst testing an upgrade of TFS 2010 to TFS 2012.2 I was getting a number of verification errors in the TFS configuration upgrade wizard. They were all TF400129 based such as&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TF400129: Verifying that the team project collection has space for new system fields&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;but also mentioning models and schemas.&lt;/p&gt;
&lt;p&gt;A quick search threw up &lt;a href=&#34;http://social.msdn.microsoft.com/Forums/en-US/tfsgeneral/thread/12f56b27-eb6f-4c7f-8b96-22112cec89ce&#34;&gt;this thread on the subject&lt;/a&gt;, but on checking the DB tables I could see my problem was altogether more basic. The thread talked of TPCs in incorrect states; in my case I had been provided with an empty DB, so TFS could find no tables at all. So I suppose the error message was a bit too specific; it should really have been a ‘DB is empty!!!!’ error. Once I got a valid file backup restored for the TPC in question all was OK.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst testing an upgrade of TFS 2010 to TFS 2012.2 I was getting a number of verification errors in the TFS configuration upgrade wizard. They were all TF400129 based such as</p>
<blockquote>
<p><em>TF400129: Verifying that the team project collection has space for new system fields</em></p></blockquote>
<p>but also mentioning models and schemas.</p>
<p>A quick search threw up <a href="http://social.msdn.microsoft.com/Forums/en-US/tfsgeneral/thread/12f56b27-eb6f-4c7f-8b96-22112cec89ce">this thread on the subject</a>, but on checking the DB tables I could see my problem was altogether more basic. The thread talked of TPCs in incorrect states; in my case I had been provided with an empty DB, so TFS could find no tables at all. So I suppose the error message was a bit too specific; it should really have been a ‘DB is empty!!!!’ error. Once I got a valid file backup restored for the TPC in question all was OK.</p>
<p>A bit more digging showed that I could also see an error if I issued the command</p>
<blockquote>
<p><em>tfsconfig remapdbs /sqlinstances:TFS1 /databaseName:TFS1;Tfs_Configuration</em></p></blockquote>
<p>As this too reported it could not find a DB it was expecting.</p>
<p>So the tip is: make sure you really have the DBs restored that you think you have.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can TFS work within your company’s processes?</title>
      <link>https://blog.richardfennell.net/posts/can-tfs-work-within-your-companys-processes/</link>
      <pubDate>Tue, 02 Apr 2013 20:07:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/can-tfs-work-within-your-companys-processes/</guid>
      <description>&lt;p&gt;I am often asked how well TFS can be integrated within a company’s larger management processes. This is an especially common question in companies where developing software is not the primary business, just a means to support the core business.&lt;/p&gt;
&lt;p&gt;A really nice discussion on this question can be found on a recent &lt;a href=&#34;http://www.dotnetrocks.com/default.aspx?showNum=856&#34;&gt;.Net Rocks about Columbia Sportswear&lt;/a&gt;, well worth a listen.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am often asked how well TFS can be integrated within a company’s larger management processes. This is an especially common question in companies where developing software is not the primary business, just a means to support the core business.</p>
<p>A really nice discussion on this question can be found on a recent <a href="http://www.dotnetrocks.com/default.aspx?showNum=856">.Net Rocks about Columbia Sportswear</a>, well worth a listen.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What machine name is being used when you compose an environment from running VMs in Lab Management?</title>
      <link>https://blog.richardfennell.net/posts/what-machine-name-is-being-used-when-you-compose-an-environment-from-running-vms-in-lab-management/</link>
      <pubDate>Thu, 28 Mar 2013 13:00:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-machine-name-is-being-used-when-you-compose-an-environment-from-running-vms-in-lab-management/</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a follow up to my&lt;/em&gt; &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/05/More-fun-with-creating-TFS-2012-SC-VMM-environments.aspx&#34;&gt;&lt;em&gt;older post&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on a similar subject&lt;/em&gt; &lt;/p&gt;
&lt;p&gt;When composing a new Lab Environment from running VMs the PC you are running MTM on needs to be able to connect to the running VMs. It does this using IP so at the most basic level you need to be able to resolve the name of the VM to an IP address.&lt;/p&gt;
&lt;p&gt;If your VM is connected to the same LAN as your PC, but not in the same domain the chances are that DNS name resolution will not work. I find the best option is to put a temporary entry in your local hosts file, keeping it for just as long as the creation process takes.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>This is a follow up to my</em> <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/05/More-fun-with-creating-TFS-2012-SC-VMM-environments.aspx"><em>older post</em></a> <em>on a similar subject</em> </p>
<p>When composing a new Lab Environment from running VMs the PC you are running MTM on needs to be able to connect to the running VMs. It does this using IP so at the most basic level you need to be able to resolve the name of the VM to an IP address.</p>
<p>If your VM is connected to the same LAN as your PC, but not in the same domain the chances are that DNS name resolution will not work. I find the best option is to put a temporary entry in your local hosts file, keeping it for just as long as the creation process takes.</p>
<p>But what should this entry be? Should it be the name of the VM as it appears in the MTM new environment wizard?</p>
<p>Turns out the answer is no, it needs to be the name as it appears in the SC-VMM console</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_96.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_96.png" title="image"></a></p>
<p>So the hosts table needs to contain the correct entries for the FQDNs (watch out for typos here; a mistyped IP address only adds to the confusion) e.g.</p>
<blockquote>
<p>10.10.10.100 wyfrswin7.wyfrs.local<br>
10.10.10.45 shamrockbay.wyfrs.local</p></blockquote>
<p>Once all this is set then just follow the process in my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/05/More-fun-with-creating-TFS-2012-SC-VMM-environments.aspx">older post</a> to enable the connection so the new environment wizard can verify OK.</p>
<p>Remember the firewall on the VMs may also be an issue. Just for the period of the environment creation I often disable this.</p>
<p>Also <a href="http://www.wireshark.org/download.html">Wireshark</a> is your friend, it will show if the machine you think is responding is the one you really want.</p>
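<p>Before reaching for Wireshark, a scripted sanity check of those temporary hosts entries can catch the typos mentioned above. This is a minimal Python sketch; the function name is purely illustrative:</p>

```python
# Parse "IP FQDN" hosts-file lines and flag malformed IP addresses or
# duplicate names, the sort of typo that makes environment creation
# verification confusing.
import ipaddress

def check_hosts_entries(text):
    problems = []
    seen = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        ip, _, name = line.partition(" ")
        name = name.strip()
        try:
            ipaddress.ip_address(ip)
        except ValueError:
            problems.append("bad IP for %s: %s" % (name or "?", ip))
        if name in seen:
            problems.append("duplicate entry: %s" % name)
        seen.add(name)
    return problems
```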
]]></content:encoded>
    </item>
    <item>
      <title>Lab Management with SCVMM 2012 and /labenvironmentplacementpolicy:aggressive</title>
      <link>https://blog.richardfennell.net/posts/lab-management-with-scvmm-2012-and-labenvironmentplacementpolicyaggressive/</link>
      <pubDate>Tue, 26 Mar 2013 11:05:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lab-management-with-scvmm-2012-and-labenvironmentplacementpolicyaggressive/</guid>
      <description>&lt;p&gt;I did a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/01/31/Have-you-tried-switching-it-on-and-off-again-Go-on-be-aggressive!.aspx&#34;&gt;post a year or so ago&lt;/a&gt; about setting up TFS Labs and mentioned the command&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;C:\Program Files\Microsoft Team Foundation Server 2010\Tools&amp;gt;tfsconfig lab /hostgroup /collectionName:myTpc /labenvironmentplacementpolicy:aggressive /edit /Name:&amp;ldquo;My hosts group&amp;rdquo;&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This can be used to tell TFS Lab Management to place VMs using any memory that is assigned to stopped environments. This allowed a degree of over-commitment of resources.&lt;/p&gt;
&lt;p&gt;As I discovered today, this command only works for SCVMM 2010 based systems. If you try it you just get a message saying it is not supported on SCVMM 2012. There appears to be no equivalent for 2012.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I did a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/01/31/Have-you-tried-switching-it-on-and-off-again-Go-on-be-aggressive!.aspx">post a year or so ago</a> about setting up TFS Labs and mentioned the command</p>
<blockquote>
<p><em>C:\Program Files\Microsoft Team Foundation Server 2010\Tools&gt;tfsconfig lab /hostgroup /collectionName:myTpc /labenvironmentplacementpolicy:aggressive /edit /Name:&ldquo;My hosts group&rdquo;</em></p></blockquote>
<p>This can be used to tell TFS Lab Management to place VMs using any memory that is assigned to stopped environments. This allowed a degree of over-commitment of resources.</p>
<p>As I discovered today, this command only works for SCVMM 2010 based systems. If you try it you just get a message saying it is not supported on SCVMM 2012. There appears to be no equivalent for 2012.</p>
<p>However you can use features such as dynamic memory within SCVMM 2012, so all is not lost</p>
]]></content:encoded>
    </item>
    <item>
      <title>Kerbal Space Program - It's educational and written in Mono too!</title>
      <link>https://blog.richardfennell.net/posts/kerbal-space-program-its-educational-and-written-in-mono-too/</link>
      <pubDate>Sun, 24 Mar 2013 11:35:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/kerbal-space-program-its-educational-and-written-in-mono-too/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_95.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_95.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;My son is really taken with &lt;a href=&#34;https://www.kerbalspaceprogram.com/&#34;&gt;Kerbal Space Program&lt;/a&gt;. This great game allows you to design your own spacecraft and run your own ongoing space program, all with a realistic physics engine.&lt;/p&gt;
&lt;p&gt;What is particularly nice is that this &lt;a href=&#34;http://www.mono-project.com/Main_Page&#34;&gt;cross platform Mono based application&lt;/a&gt; is being built in a very agile manner with a new release most weeks, each adding features as well as bug fixes. There also seems to be an active community of people building plug-ins for extra space craft components and rovers.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="/wp-content/uploads/sites/2/historic/image_95.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_95.png" title="image"></a></p>
<p>My son is really taken with <a href="https://www.kerbalspaceprogram.com/">Kerbal Space Program</a>. This great games allows you to design your own  space craft and so run your own on-going space program, all with a realistic physics engine.</p>
<p>What is particularly nice is that this <a href="http://www.mono-project.com/Main_Page">cross platform Mono based application</a> is being built in a very agile manner with a new release most weeks, each adding features as well as bug fixes. There also seems to be an active community of people building plug-ins for extra space craft components and rovers.</p>
<p>I am not sure how much orbital mechanics will appear in his school exams this year, but it is certainly educational in the longer term.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF900548 when using my Typemock 2012 TFS custom build activity</title>
      <link>https://blog.richardfennell.net/posts/tf900548-when-using-my-typemock-2012-tfs-custom-build-activity/</link>
      <pubDate>Sat, 23 Mar 2013 21:52:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf900548-when-using-my-typemock-2012-tfs-custom-build-activity/</guid>
      <description>&lt;p&gt;Using the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/24/Getting-Typemock-Isolator-running-within-a-TFS-2012-build-part-2.aspx&#34;&gt;Typemock TFS 2012 Build activity I created&lt;/a&gt;, I had started seeing the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TF900548: An error occurred publishing the Visual Studio test results. Details: &amp;lsquo;The following id must have a positive value: testRunId.&amp;rsquo;&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;I thought it might be down to having patched our build boxes to TFS 2012 Update 1; maybe it needed to be rebuilt due to some dependency? However, on trying the build activity on my development TFS server I found it ran fine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Using the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/24/Getting-Typemock-Isolator-running-within-a-TFS-2012-build-part-2.aspx">Typemock TFS 2012 Build activity I created</a>, I had started seeing the error</p>
<blockquote>
<p><em>TF900548: An error occurred publishing the Visual Studio test results. Details: &lsquo;The following id must have a positive value: testRunId.&rsquo;</em></p></blockquote>
<p>I thought it might be down to having patched our build boxes to TFS 2012 Update 1; maybe it needed to be rebuilt due to some dependency? However, on trying the build activity on my development TFS server I found it ran fine.</p>
<p>I made sure I had the same custom assemblies, Typemock autorun folder and build definition on both systems. I did, so it was not that.</p>
<p>Next I tried running the build but targeting an agent not on the same VM as the build controller. This worked, so it seemed I had a build controller issue. I ran Windows Update to make sure the OS was patched up to date; it applied a few patches and rebooted, and after that all was OK: my tests ran again.</p>
<p>It does seem that for many build issues the standard 'switch it off and back on again' does the job.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble is hosting the Yorkshire Chapter of the Global Windows Azure Bootcamp on the 27th of April</title>
      <link>https://blog.richardfennell.net/posts/black-marble-is-hosting-the-yorkshire-chapter-of-the-global-windows-azure-bootcamp-on-the-27th-of-april/</link>
      <pubDate>Thu, 21 Mar 2013 12:45:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marble-is-hosting-the-yorkshire-chapter-of-the-global-windows-azure-bootcamp-on-the-27th-of-april/</guid>
      <description>&lt;p&gt;Black Marble is hosting the Yorkshire Chapter of the &lt;a href=&#34;http://bit.ly/BMGWAB&#34;&gt;Global Windows Azure Bootcamp&lt;/a&gt; taking place in several locations globally on April 27th, 2013. This free community organised event is a one-day deep-dive class that will get you up to speed on developing for Windows Azure. The class includes a trainer with deep real-world experience with Windows Azure, as well as a series of labs so you can practice what you just learned.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Black Marble is hosting the Yorkshire Chapter of the <a href="http://bit.ly/BMGWAB">Global Windows Azure Bootcamp</a> taking place in several locations globally on April 27th, 2013. This free community organised event is a one-day deep-dive class that will get you up to speed on developing for Windows Azure. The class includes a trainer with deep real-world experience with Windows Azure, as well as a series of labs so you can practice what you just learned.</p>
<p>Black Marble’s event will be run by <a href="http://blogs.blackmarble.co.uk/blogs/boss">Robert Hogg (Microsoft Integration MVP)</a> and <a href="http://blogs.blackmarble.co.uk/blogs/sspencer/default.aspx">Steve Spencer (Windows Azure MVP)</a>. Come along and join the global Azure event of the year!</p>
<p>Check out the <a href="http://bit.ly/GWABPreReq">prerequisites you need to install on your PC</a> and <a href="http://www.blackmarble.co.uk/events.aspx?event=Global%20Windows%20Azure%20Boot%20Camp">sign up here</a>.</p>
<p><img alt="Global Azure Bootcamp Logo" loading="lazy" src="http://www.blackmarble.co.uk/images/events/AzBootcamp2013.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Installing a DB from a DACPAC using Powershell as part of TFS Lab Management deployment</title>
      <link>https://blog.richardfennell.net/posts/installing-a-db-from-a-dacpac-using-powershell-as-part-of-tfs-lab-management-deployment/</link>
      <pubDate>Thu, 14 Mar 2013 16:45:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/installing-a-db-from-a-dacpac-using-powershell-as-part-of-tfs-lab-management-deployment/</guid>
      <description>&lt;p&gt;I have been battling setting up a DB deployed via the SQL 2012 DAC tools and PowerShell. My environment was a network isolated pair of machines&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DC – the domain controller and SQL 2012 server&lt;/li&gt;
&lt;li&gt;IIS – A web front end&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As this is network isolated I could only run scripts on the IIS server, so my DB deploy needed to be remote. So the script I ended up with was&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been battling setting up a DB deployed via the SQL 2012 DAC tools and PowerShell. My environment was a network isolated pair of machines</p>
<ul>
<li>DC – the domain controller and SQL 2012 server</li>
<li>IIS – A web front end</li>
</ul>
<p>As this is network isolated I could only run scripts on the IIS server, so my DB deploy needed to be remote. So the script I ended up with was</p>
<pre><code>param(
    [string]$sqlserver = $( throw "Missing: parameter sqlserver"),
    [string]$dacpac = $( throw "Missing: parameter dacpac"),
    [string]$dbname = $( throw "Missing: parameter dbname") )

Write-Host "Deploying the DB with the following settings"
Write-Host "sqlserver: $sqlserver"
Write-Host "dacpac: $dacpac"
Write-Host "dbname: $dbname"

# load in DAC DLL (requires config file to support .NET 4.0)
# change file location for a 32-bit OS
add-type -path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

# make DacServices object, needs a connection string
$d = new-object Microsoft.SqlServer.Dac.DacServices "server=$sqlserver"

# register events, if you want 'em
register-objectevent -in $d -eventname Message -source "msg" -action { out-host -in $Event.SourceArgs[1].Message.Message } | Out-Null

# Load dacpac from file &amp; deploy to the named database
$dp = [Microsoft.SqlServer.Dac.DacPackage]::Load($dacpac)
$d.Deploy($dp, $dbname, $true) # the true is to allow an upgrade, could be parameterised, also can add further deploy params

# clean up event
unregister-event -source "msg"
</code></pre>
<p>Remember the SQL 2012 DAC tools only work with PowerShell 3.0 as they have a .NET 4 dependency.</p>
<p>This was called within the Lab Build using the command line</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_94.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_94.png" title="image"></a></p>
<pre><code>cmd /c powershell $(BuildLocation)\SQLDeploy.ps1 dc $(BuildLocation)\Database.dacpac sabs
</code></pre>
<p>All my scripts worked correctly when I ran them locally on the command line; they were also starting from within the build, but failing with errors along the lines of</p>
<pre><code>Deployment Task Logs for Machine: IIS
Accessing the following location using the lab service account: blackmarble\tfslab, \\store\drops.
Deploying the DB with the following settings
sqlserver: dc
dacpac: \\store\drops\Main.CI\Main.CI_20130314.2\Debug\Database.dacpac
dbname: Database1
Initializing deployment (Start)
Exception calling "Deploy" with "3" argument(s): "Could not deploy package."
Initializing deployment (Failed)
At \\store\drops\Main.CI\Main.CI_20130314.2\Debug\SQLDeploy.ps1:26 char:2
+  $d.Deploy($dp, $dbname, $true) # the true is to allow an upgrade
+  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : DacServicesException
Stopped accessing the following location using the lab service account: blackmarble\tfslab, \\store\drops.
</code></pre>
<p>Though not obvious from the error message, the issue was who the script was running as. The TFS agent runs as a machine account, which had no rights to access the SQL instance on the DC. Once I granted the computer account IIS$ suitable rights to the SQL box all was OK. The alternative would have been to enable mixed mode authentication and use a connection string in the form</p>
<blockquote>
<p>“server=dc;User ID=sa;Password=mypassword”</p></blockquote>
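<p>For the Windows authentication route, the grant can be scripted; the following T-SQL is a rough sketch only (the domain and role names are illustrative, not taken from the actual environment):</p>
<pre><code>-- Run against the SQL instance on the DC; DOMAIN\IIS$ is the web server's computer account
CREATE LOGIN [DOMAIN\IIS$] FROM WINDOWS;
-- dbcreator lets the DACPAC deployment create the target database if it does not already exist
ALTER SERVER ROLE [dbcreator] ADD MEMBER [DOMAIN\IIS$];
</code></pre>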
<p>So now I can deploy my DB on a new build.</p>
]]></content:encoded>
    </item>
    <item>
      <title>You don’t half get strange errors when two servers have the same SID</title>
      <link>https://blog.richardfennell.net/posts/you-dont-half-get-strange-errors-when-two-servers-have-the-same-sid/</link>
      <pubDate>Tue, 12 Mar 2013 17:30:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-dont-half-get-strange-errors-when-two-servers-have-the-same-sid/</guid>
      <description>&lt;p&gt;You don’t half get strange errors when building a test environment if, when you SYSPREP each copy of your VM base image, you forget to check the ‘generalize’ box&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_93.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_93.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you forget this, as I did, each VM has a different name but the same SID. Basically the domain/AD is completely confused as to who is what. The commonest error I saw was that I could not set up applications (Reporting Services, SP 2010 and TFS 2012) with domain service accounts. In all cases I got messages about missing rights or being unable to communicate with the domain controller.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You don’t half get strange errors when building a test environment if, when you SYSPREP each copy of your VM base image, you forget to check the ‘generalize’ box</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_93.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_93.png" title="image"></a></p>
<p>If you forget this, as I did, each VM has a different name but the same SID. Basically the domain/AD is completely confused as to who is what. The commonest error I saw was that I could not set up applications (Reporting Services, SP 2010 and TFS 2012) with domain service accounts. In all cases I got messages about missing rights or being unable to communicate with the domain controller.</p>
<p>The fix was basically to start again. I re-SYSPREP’d one of the pair of boxes to reset its SID, stripped off what I was trying to install, re-added the server to the domain and installed the applications again. Once this was done all was fine.</p>
<p><a href="http://blogs.technet.com/b/markrussinovich/archive/2009/11/03/3291024.aspx">For more on SIDs and SYSPREP see Mark Russinovich’s blog</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot run Microsoft Fakes based test if Typemock Isolator enabled</title>
      <link>https://blog.richardfennell.net/posts/cannot-run-microsoft-fakes-based-test-if-typemock-isolator-enabled/</link>
      <pubDate>Sat, 09 Mar 2013 15:07:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-run-microsoft-fakes-based-test-if-typemock-isolator-enabled/</guid>
      <description>&lt;p&gt;With Microsoft Fakes moving to the Premium SKU of Visual Studio in 2012.2 (&lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=36833&#34;&gt;CTP4 is now available&lt;/a&gt;) more people will be looking at using them.&lt;/p&gt;
&lt;p&gt;I have just installed CTP4 and have seen a behaviour I don’t think I have seen in previous versions of Visual Studio (I need to check because, as well as CTP4, I have recently installed the new version of Typemock Isolator 7.3.0 that addresses issues with Windows 8 and Visual Studio 2012).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With Microsoft Fakes moving to the Premium SKU of Visual Studio in 2012.2 (<a href="http://www.microsoft.com/en-us/download/details.aspx?id=36833">CTP4 is now available</a>) more people will be looking at using them.</p>
<p>I have just installed CTP4 and have seen a behaviour I don’t think I have seen in previous versions of Visual Studio (I need to check because, as well as CTP4, I have recently installed the new version of Typemock Isolator 7.3.0 that addresses issues with Windows 8 and Visual Studio 2012).</p>
<p>Anyway the error you see when you run a fakes based test is <em>‘UnitTestIsolation instrumentation failed to initialialize, Please restart Visual Studio and rerun this test’</em></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_92.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_92.png" title="image"></a></p>
<p>The solution is to disable Typemock Isolator (Menu Typemock &gt; Suspend Mocking); when this is done, without a reboot, the Fakes based tests run.</p>
<p>It does mean you can’t have a solution using both Fakes and Isolator, but why would you?</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS TPC Databases and SQL 2012 availability groups</title>
      <link>https://blog.richardfennell.net/posts/tfs-tpc-databases-and-sql-2012-availability-groups/</link>
      <pubDate>Sat, 09 Mar 2013 13:46:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-tpc-databases-and-sql-2012-availability-groups/</guid>
      <description>&lt;p&gt;Worth noting that when you create a new TPC in TFS 2012, when the TFS configuration DB and other TPC DBs are in SQL 2012 availability groups, the new TPC DB is not placed in this or any other availability group. You have to add it manually, and historically remove it when servicing TFS. Though the need to remove it for servicing changes with TFS 2012.2 which &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/03/04/ctp4-march-of-vs-tfs-2012-update-2-is-available.aspx&#34;&gt;allows servicing of high availability DBs&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Worth noting that when you create a new TPC in TFS 2012, when the TFS configuration DB and other TPC DBs are in SQL 2012 availability groups, the new TPC DB is not placed in this or any other availability group. You have to add it manually, and historically remove it when servicing TFS. Though the need to remove it for servicing changes with TFS 2012.2 which <a href="http://blogs.msdn.com/b/bharry/archive/2013/03/04/ctp4-march-of-vs-tfs-2012-update-2-is-available.aspx">allows servicing of high availability DBs</a></p>
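<p>For reference, adding the new TPC database to the availability group manually is a single T-SQL statement on the primary replica. As a hedged sketch with illustrative names (the database needs a full backup before it is eligible to join, and the usual restore to the secondaries still applies):</p>
<pre><code>-- Illustrative names; run on the primary replica
ALTER AVAILABILITY GROUP [TfsAG] ADD DATABASE [Tfs_MyNewCollection];
</code></pre>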
]]></content:encoded>
    </item>
    <item>
      <title>Recovering network isolated lab management environments if you have to recreate your SC-VMM server’s DB</title>
      <link>https://blog.richardfennell.net/posts/recovering-network-isolated-lab-management-environments-if-you-have-to-recreate-your-sc-vmm-servers-db/</link>
      <pubDate>Fri, 08 Mar 2013 17:12:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/recovering-network-isolated-lab-management-environments-if-you-have-to-recreate-your-sc-vmm-servers-db/</guid>
      <description>&lt;p&gt;Whilst &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/03/04/Upgrading-our-TFS-2012-Lab-Management-to-use-SC-VMM-2012-SP1.aspx&#34;&gt;upgrading our Lab Management system&lt;/a&gt; we lost the SC-VMM DB. This meant we needed to recreate environments we already had running on Hyper-V hosts but which were unknown to TFS. If they were not network isolated this is straightforward: just recompose the environment (after clearing out the XML in the VM description fields). However if they are network isolated and running, then you have to play around a bit.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/03/04/Upgrading-our-TFS-2012-Lab-Management-to-use-SC-VMM-2012-SP1.aspx">upgrading our Lab Management system</a> we lost the SC-VMM DB. This meant we needed to recreate environments we already had running on Hyper-V hosts but which were unknown to TFS. If they were not network isolated this is straightforward: just recompose the environment (after clearing out the XML in the VM description fields). However if they are network isolated and running, then you have to play around a bit.</p>
<p>This is the simplest method I have found thus far. I am interested to hear if you have a better way</p>
<ul>
<li>In SC-VMM (or via PowerShell) find all the VMs in your environment. They are going to have names in the form Lab_[GUID]. If you look at the properties of the VMs in the description field you can see the XML that defines the Lab they belong to.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_90.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_90.png" title="image"></a></p>
<p>If you are not sure which VMs you need you can of course cross-reference the internal machine names with the AD within the network isolated environment. Remember this environment is running, so you can log in to it.</p></blockquote>
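<p>If you prefer the PowerShell route, something along these lines should surface the candidates (a sketch assuming the SC-VMM 2012 cmdlets are loaded and connected to your VMM server):</p>
<pre><code># List VMs created by Lab Management; the Description property holds the environment XML
Get-SCVirtualMachine | Where-Object { $_.Name -like 'Lab_*' } |
    Select-Object Name, HostName, Description
</code></pre>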
<ul>
<li>
<p>Via SC-VMM Shutdown each VM</p>
</li>
<li>
<p>Via SC-VMM store the VM in the library</p>
</li>
<li>
<p>Wait a while…….</p>
</li>
<li>
<p>When all the VMs have been stored, navigate to them in SC-VMM. For each one in turn open the properties and</p>
</li>
<li>
<p><em><strong>CHECK THE DESCRIPTION XML TO MAKE SURE YOU HAVE THE RIGHT VM AND KNOW THEIR ROLE</strong></em></p>
</li>
<li>
<p>Change the name to something sensible (not essential if you like GUIDs in environment member names, but I think it helps), e.g. change Lab_[guid] to ‘My Test DC’</p>
</li>
<li>
<p>Delete all the XML in the Description field</p>
</li>
<li>
<p>In the hardware configuration, delete the ‘legacy network’ and connect the ‘Network adaptor’ to your main network – this will all be recreated when you create the new lab</p>
</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_91.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_91.png" title="image"></a></p>
<p>Note that a DC will not have any connections to your main network as it is network isolated. For the purpose of this migration it <strong>DOES</strong> need to be reconnected. Again this will be sorted out by the tooling when you create the new environment.</p></blockquote>
<ul>
<li>When all have been updated in SC-VMM, open MTM and import the stored VMs into the team project</li>
<li>You can now create a new environment using these stored VMs. It should deploy out OK, but I have found you might need to restart it before all the test agents connect correctly</li>
<li>And that should be it; the environment is known to TFS Lab Management and is running network isolated</li>
</ul>
<p>You might want to delete the stored VMs once you have the environment running. But this will be down to your policies; they are not needed, as you can store the environment as a whole to archive or duplicate it with network isolation.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fixing Lab Manager environments with brute force</title>
      <link>https://blog.richardfennell.net/posts/fixing-lab-manager-environments-with-brute-force/</link>
      <pubDate>Tue, 05 Mar 2013 17:22:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-lab-manager-environments-with-brute-force/</guid>
      <description>&lt;p&gt;Another post from Rik &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2013/03/05/Fixing-Lab-Manager-environments-with-brute-force.aspx&#34;&gt;Fixing Lab Manager environments with brute force&lt;/a&gt; following our Lab Management upgrade&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Another post from Rik <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2013/03/05/Fixing-Lab-Manager-environments-with-brute-force.aspx">Fixing Lab Manager environments with brute force</a> following our Lab Management upgrade</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creating VMs for lab management</title>
      <link>https://blog.richardfennell.net/posts/creating-vms-for-lab-management/</link>
      <pubDate>Tue, 05 Mar 2013 12:46:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/creating-vms-for-lab-management/</guid>
      <description>&lt;p&gt;Rik has done a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2013/03/05/Things-to-remember-when-building-virtual-machines-for-a-lab-manager-environment.aspx&#34;&gt;post on creating VMs for Lab management&lt;/a&gt; to follow up on the one I did yesterday &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/default.aspx&#34;&gt;on our Lab upgrade&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Rik has done a <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/post/2013/03/05/Things-to-remember-when-building-virtual-machines-for-a-lab-manager-environment.aspx">post on creating VMs for Lab management</a> to follow up on the one I did yesterday <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/default.aspx">on our Lab upgrade</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading our TFS 2012 Lab Management to use SC-VMM 2012 SP1</title>
      <link>https://blog.richardfennell.net/posts/upgrading-our-tfs-2012-lab-management-to-use-sc-vmm-2012-sp1/</link>
      <pubDate>Mon, 04 Mar 2013 17:45:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-our-tfs-2012-lab-management-to-use-sc-vmm-2012-sp1/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;We have been successfully using our TFS Lab Management system for a while. However, we have noticed an issue that when deploying environments the performance of the system slowed. This was down to I/O congestion between our servers and the SAN that provided their main VM storage because the library store and the hyper-v host servers all shared the same SAN.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/clip_image002.png&#34;&gt;&lt;img alt=&#34;clip_image002&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/clip_image002_thumb.png&#34; title=&#34;clip_image002&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Also we had the need to start to create environments using Windows 8 and Server 2012. This is not possible using System Center Virtual Machine Manager (SCVMM) 2008 or Hyper-V hosted on earlier than Windows 2012.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>We have been successfully using our TFS Lab Management system for a while. However, we have noticed an issue that when deploying environments the performance of the system slowed. This was down to I/O congestion between our servers and the SAN that provided their main VM storage because the library store and the hyper-v host servers all shared the same SAN.</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002.png"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002_thumb.png" title="clip_image002"></a></p>
<p>Also we had the need to start to create environments using Windows 8 and Server 2012. This is not possible using System Center Virtual Machine Manager (SCVMM) 2008 or Hyper-V hosted on earlier than Windows 2012.</p>
<p>So it was time to do an upgrade of both the operating system, and our underlying hardware. The decision was made to dump the SAN and provide each Hyper-V host with its own SAS based RAID 5 disk. The main Lab SCVMM library server would also use local storage. All servers would be moved to Server 2012 and SCVMM would advance to 2012 with Service Pack 1 (required to manage Server 2012).</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image004.png"><img alt="clip_image004" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image004_thumb.png" title="clip_image004"></a></p>
<p>You might ask why we did not do this sooner, especially the move to support Windows 8 and Server 2012 VMs? The answer is that until TFS 2012 quarterly update 1 there was no support for System Center 2012, which itself had to wait until the SP1 release in January. So the start of the year was the first time we could do the update. We had planned to do this very early in the year, but waiting for hardware and the scheduling of our time let it drag on.</p>
<h2 id="hardware">Hardware</h2>
<p>The hardware upgrade was less straightforward than originally anticipated as we needed to change the host controller card to support the required RAID5 array of disks. Once that was done on each of the Hyper-V hosts, installing the OS and configuring Hyper-V was quick and easy.</p>
<p>Meanwhile the library server was moved to a different box to allow for the sliding block puzzle of storage move and OS upgrades.</p>
<h2 id="the-plan">The Plan</h2>
<p>After much thought we decided our upgrade path would be</p>
<ol>
<li>Stop all environments</li>
<li>Rebuild each Hyper-V server with Server 2012 on their mirrored boot disks, adding the new SAS hardware and removing the SAN disks.</li>
<li>Build the new SCVMM 2012 SP1 server on Server 2012. The DB server would also move from an in-place SQL 2008 R2 instance to be hosted on our resilient SQL 2012 setup with Always On (also now supported in SCVMM 2012 SP1). This would mean an upgrade with the existing database, as far as SCVMM is concerned.</li>
<li>Add the rebuilt Hyper-V hosts to SCVMM</li>
<li>Get the deployed VMs off SAN and onto the new SAS disks, along with a transfer of the library share.</li>
<li>Reconfigure TFS to point to the new SCVMM server</li>
</ol>
<h2 id="what-really-happened">What really happened</h2>
<h3 id="scvmm-upgrade">SCVMM Upgrade</h3>
<p>The major problem we found was that you can’t upgrade the database straight from SCVMM 2008 R2 SP1 to SCVMM 2012 SP1. It needed to be upgraded to SCVMM 2012 first.</p>
<p>In fact the SCVMM install process then suffered a cascade of issues:</p>
<ul>
<li>SCVMM 2008 R2 needed the WAIK for Vista. SCVMM 2012 required the Win7 WAIK and SCVMM 2012 SP1 wanted the Windows 8 Automated Deployment kit.</li>
<li>Each time SCVMM wanted to upgrade its database, the database had to be in Simple Recovery mode. This is incompatible with Always On, which requires Full Recovery mode. This meant that the DB couldn’t be moved into the Availability group until the final installation was complete.</li>
</ul>
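<p>For reference, the recovery model juggling in the second point is just two T-SQL statements either side of each upgrade step (the default SCVMM database name is shown; yours may differ):</p>
<pre><code>-- Before each SCVMM upgrade step: the installer insists on the Simple recovery model
ALTER DATABASE [VirtualManagerDB] SET RECOVERY SIMPLE;

-- After the final install: switch back to Full before (re)joining the Always On group
ALTER DATABASE [VirtualManagerDB] SET RECOVERY FULL;
</code></pre>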
<p>In the end we built a temporary VM for SCVMM 2012. The original host was left on Server 2008 R2 with the Vista WAIK; the intermediate host had Server 2008 R2 with the Windows 7 WAIK and the final host had Server 2012 with the Windows 8 ADK.</p>
<p>The problem we then found was that the final SCVMM 2012 SP1 system failed to start the Virtual Machine Manager service. The error we saw was:</p>
<blockquote>
<p><em>Problem signature:<br>
P1: vmmservice<br>
P2: 3.1.6011.0<br>
P3: Utils<br>
P4: 3.1.6011.0<br>
P5: M.V.D.SqlRetryCommand.InternalExecuteReader<br>
P6: M.V.DB.CarmineSqlException<br>
P7: b82c</em></p></blockquote>
<p>We rolled back and repeated the process a number of times, and even tried a final in-place upgrade of the original host. Each resulted in the same fault. The internet turned up no help; two or three people reported the same fault and each one required a clean SCVMM installation to fix it.</p>
<p>Interestingly, our original DB size was around 700Mb. Each time, the upgrade process left a final DB of around 70Mb, suggesting something had gone wrong during the DB updates. Whether this had anything to do with the presence of Lab Management we don’t know.</p>
<p>In the end we had no choice but to install SCVMM 2012 SP1 clean, with a new database. Once we did that everything worked just fine from the SCVMM point of view.</p>
<h3 id="tfs">TFS</h3>
<p>First we repointed TFS at the new SCVMM server</p>
<blockquote>
<p><strong>Tfsconfig.exe lab /settings /scvmmservername:my_new_scvmmservername /force</strong></p></blockquote>
<p>This worked OK. We then tried to run the command to upgrade the schema</p>
<blockquote>
<p><strong>tfsconfig lab /upgradeSCVMM /collectionName:*</strong>.</p></blockquote>
<p>But this errored. Also when we tried to change the library and host groups both via the UI and command line we also got errors. The problem was that TFS thought there were libraries and host groups on the now retired old SCVMM server.</p>
<p>The solution was to open Microsoft Test Manager (MTM) and delete the live environments and stored environments, VMs and templates. This had no effect on the new SCVMM server as these entries referred to the now non-existent SCVMM host. It just removed the entries in the TFS databases.</p>
<p>Once this was done we could run the command</p>
<blockquote>
<p><strong>tfsconfig lab /upgradeSCVMM /collectionName:*</strong>.</p></blockquote>
<p>And we could also reset the library and host groups to point to the new server.</p>
<h2 id="so-what-do-we-have">So what do we have?</h2>
<p>The upgrade (reinstall?) is now done and what do we have? On the Hyper-V hosts we have running VMs, and due to the effort of our IT team we have wired back the virtual networks previously created by Lab Management for network isolation. It is a mess and needs redoing via MTM, but it works for now.</p>
<p>The SCVMM library contains loads of stored environments. However, as they were stored using the GUID-based names used in Lab Management, their purpose is not that obvious.</p>
<p>As we had to delete the SCVMM database and TFS entries we have lost that index of GUIDs to sensible names.</p>
<h2 id="the-next-step">The next step?</h2>
<p>We need to manually take VMs and get them imported correctly into the new SCVMM library so that they can be deployed.</p>
<p>For running environments that don’t require network isolation we can compose new environments to wrap the VMs. However, if we need network isolation we can’t see any method other than to push each VM up into the library and re-deploy them as a new network isolated environment; more posts to follow as I am sure we will learn stuff doing this.</p>
<p>Points of note:</p>
<ul>
<li>Lab Manager shoves a load of XML into the notes field of a VM on Hyper-V. If that XML is present, Lab assumes the VM is part of an environment and won’t show it as a VM available for use in a new environment. Deleting the XML makes the VM magically reappear.</li>
<li>Allowing developers to make lots of snapshots, particularly with save states is a bad idea. When migrating between server versions, the Hyper-V save states are incompatible so you will lose lots and lots and lots of work if you’re not careful.</li>
<li>Having lots of snapshots with configuration variations might also cause confusion. Collapsing snapshots overall is a not a bad idea.</li>
<li>If you have a VM in the library (we now have multiple libraries to make sure people using Lab can’t accidentally delete key VM images) with the same VM ID as a running machine SCVMM won’t show you the one that is running. If you think it through this makes sense – SCVMM doesn’t export and import VM folders, it simply hefts them around the network. This is just like copying a VM form one Win8 box to another – the default option is to import the machine with the same ‘unique’ identifier.<br>
The solution to this one is to import a new copy of the source VM onto the hyper-v host, choosing the ‘make a copy and generate a new ID’ option. This new VM can then be manipulated with SCVMM and you still have a ‘gold master’ in your library.</li>
<li>Enabling server 2012 data deduplication on your SCVMM library shares is a very good plan. Ours is saving 75% of disk space. The only thing to be wary of is that if you pull a VM from the library onto Hyper-V and then store it back to the library the file will take up the ‘correct’ amount of disk space until the dedupe job runs again. If you’re not careful you can ‘overfill’ your disk this way!</li>
<li>SCVMM 2012 and SP1 like clouds and offer all kinds of technology solutions that can build a virtual infrastructure for you with VLANs and all kinds of cleverness. This will confuse the life out of Lab Manager so don’t do it for your Lab Environment. We now have a couple of ‘clouds’ in SCVMM and the Lab one consists of little more than the Hyper-V hosts and associated libraries. There is one virtual switch and one logical switch, both of which exist only to hook up our main network. Anything more complex will confuse Lab.<br>
Meanwhile, Lab still creates new Hyper-V Virtual Switches. SCVMM will know nothing of these so you need to be aware of that when inspecting VMs in SCVMM.</li>
</ul>
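<p>As an aside, the ‘make a copy and generate a new ID’ import mentioned above can also be scripted. The following is a sketch using the Windows Server 2012 Hyper-V PowerShell module rather than SCVMM; the paths are hypothetical, so point them at your own export:</p>
<pre><code># Import an exported VM as a copy with a new unique ID, so it cannot
# clash with the 'gold master' still held in the library
# (paths below are examples only)
Import-VM -Path 'D:\Exports\GoldMaster\Virtual Machines\&lt;vm-guid&gt;.xml' `
          -Copy -GenerateNewId `
          -VirtualMachinePath 'D:\VMs\LabCopy'
</code></pre>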
<h2 id="if-we-were-doing-it-again">If we were doing it again…</h2>
<p>So what have we learnt? The upgrade of the SCVMM database is critical. If this fails you have no option other than to rebuild and manually recreate everything from the basic VMs.</p>
<p>Even with SC-VMM 2012 SP1 and Lab Management there are still many moving parts you have to consider to get the system working. The change from 2008 to 2012 is like going from Imperial to Metric measurements: it is the same basic idea, but it feels like everything has changed.</p>
<p>Thanks to <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx">Rik</a> and <a href="http://blogs.blackmarble.co.uk/blogs/rhancock/">Rob</a> for helping with this post</p>
]]></content:encoded>
    </item>
    <item>
      <title>ALM Rangers ship more guidance</title>
      <link>https://blog.richardfennell.net/posts/alm-rangers-ship-more-guidance/</link>
      <pubDate>Sat, 02 Mar 2013 15:24:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-rangers-ship-more-guidance/</guid>
      <description>&lt;p&gt;Yesterday two new ALM Ranger projects shipped: practical guidance on the use of &lt;strong&gt;Microsoft Fakes&lt;/strong&gt; (which I worked on) and, for Team Foundation Server (TFS), &lt;strong&gt;D&lt;/strong&gt;isaster &lt;strong&gt;R&lt;/strong&gt;ecovery &lt;strong&gt;avoidance&lt;/strong&gt;, planning and step-by-step recovery walkthroughs for the worst-case scenarios.&lt;/p&gt;
&lt;p&gt;Updates to the &lt;strong&gt;Coded UI test tooling guide&lt;/strong&gt; and the &lt;strong&gt;TFS Upgrade guide&lt;/strong&gt; were also released.&lt;/p&gt;
&lt;p&gt;For more details see below to follow the links to the Ranger blogs and CodePlex to download the materials.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Yesterday two new ALM Ranger projects shipped: practical guidance on the use of <strong>Microsoft Fakes</strong> (which I worked on) and, for Team Foundation Server (TFS), <strong>D</strong>isaster <strong>R</strong>ecovery <strong>avoidance</strong>, planning and step-by-step recovery walkthroughs for the worst-case scenarios.</p>
<p>Updates to the <strong>Coded UI test tooling guide</strong> and the <strong>TFS Upgrade guide</strong> were also released.</p>
<p>For more details see below to follow the links to the Ranger blogs and CodePlex to download the materials.</p>
<p>  <strong><a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2013/03/01/better-unit-testing-with-microsoft-fakes-guide-shipped.aspx">Better Unit Testing with Microsoft Fakes v1 eBook</a></strong></p>
<p>Contains practical guidance for migrating to and unit testing with Microsoft Fakes. Walk-throughs allow you to navigate basic and advanced concepts, giving you a comfortable and confident start in implementing Microsoft Fakes as a mocking solution. The eBook PDF format will be complemented with other eReader formats, such as MOBI, in an upcoming update as part of our new content and style dog-fooding adventure.<br>
<a href="http://vsartesttoolingguide.codeplex.com/releases/view/102290"><img alt="clip_image005" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image005.gif" title="clip_image005"></a></p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image003.gif"><img alt="clip_image003" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image003_thumb.gif" title="clip_image003"></a></p>
<p><strong><a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2013/03/01/tfs-disaster-avoidance-and-recovery-planning-shipped.aspx">Team Foundation Server Planning Guide v1.1</a></strong><br>
featuring new sections on <strong>D</strong>isaster Avoidance and <strong>R</strong>ecovery Planning, allowing you to spot the “smoke” before you are confronted by a “raging fire”.<br>
<a href="http://vsarplanningguide.codeplex.com/releases/view/88002"><img alt="clip_image005[1]" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image005%5B1%5D.gif" title="clip_image005[1]"></a></p>
<p><strong><a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2013/03/01/updates-to-the-test-tooling-and-upgrade-guides-shipped.aspx">Test Tooling Guide (Coded UI) v1</a></strong><br>
<a href="http://vsartesttoolingguide.codeplex.com/releases/view/88005"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002.gif" title="clip_image002"></a></p>
<p><strong><a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2013/03/01/updates-to-the-test-tooling-and-upgrade-guides-shipped.aspx">Team Foundation Server Upgrade Guide v2.1</a></strong><br>
<a href="http://vsarupgradeguide.codeplex.com/releases/view/88355"><img alt="clip_image002[1]" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002%5B1%5D.gif" title="clip_image002[1]"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>For those hard to mock moments - Microsoft Fakes or Typemock Isolator?</title>
      <link>https://blog.richardfennell.net/posts/for-those-hard-to-mock-moments-microsoft-fakes-or-typemock-isolator/</link>
      <pubDate>Tue, 26 Feb 2013 19:29:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/for-those-hard-to-mock-moments-microsoft-fakes-or-typemock-isolator/</guid>
      <description>&lt;p&gt;About a year ago I wrote a post ‘&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx&#34;&gt;Now that VS11 has a fake library do I still need Typemock Isolator to fake out SharePoint?&lt;/a&gt;’. Well, this discussion becomes relevant to more people, as with Visual Studio 2012.2 (&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/02/11/ctp-for-visual-studio-2012-update-2-vs-2012-2-is-available.aspx&#34;&gt;currently available as a CTP&lt;/a&gt;) &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2013/02/08/february-ctp-for-visual-studio-update-2.aspx#fakes&#34;&gt;Microsoft Fakes moves from the Ultimate SKU to the Premium SKU&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In my experience the Ultimate SKU is not present on many developers’ PCs. It is most commonly found on the PCs of team leads, software architects or test developers (managing Coded UI/load testing etc. efforts). If a team was historically going to use Microsoft Fakes then they had to buy more Ultimate SKUs: what is the point of a unit test using a mocking framework that only part of the team can run?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>About a year ago I wrote a post ‘<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx">Now that VS11 has a fake library do I still need Typemock Isolator to fake out SharePoint?</a>’. Well, this discussion becomes relevant to more people, as with Visual Studio 2012.2 (<a href="http://blogs.msdn.com/b/bharry/archive/2013/02/11/ctp-for-visual-studio-2012-update-2-vs-2012-2-is-available.aspx">currently available as a CTP</a>) <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2013/02/08/february-ctp-for-visual-studio-update-2.aspx#fakes">Microsoft Fakes moves from the Ultimate SKU to the Premium SKU</a>.</p>
<p>In my experience the Ultimate SKU is not present on many developers’ PCs. It is most commonly found on the PCs of team leads, software architects or test developers (managing Coded UI/load testing etc. efforts). If a team was historically going to use Microsoft Fakes then they had to buy more Ultimate SKUs: what is the point of a unit test using a mocking framework that only part of the team can run?</p>
<p>The Premium SKU of Visual Studio is far more common; I would go as far as to say it is the corporate standard for development. As this SKU now contains Test Manager (since 2012 RTM), it covers most of the jobs most developers do; Ultimate is just needed for the specialists in the team. Adding Fakes to the Premium SKU really makes sense if Microsoft want to drive adoption.</p>
<p>So now the question of whether to use Microsoft Fakes or <a href="http://www.typemock.com/">Typemock Isolator</a> (or <a href="http://www.telerik.com/products/mocking.aspx">Telerik JustMock</a>, a product I have to admit I have not used in anger) is rebalanced, as there is a fair chance a whole development team may be licensed for Microsoft Fakes because they have the Premium SKU. The question becomes: is the cost of Isolator justified by the features it offers over and above Microsoft Fakes?</p>
<p>This is not an uncommon form of question for any third party add-in to Visual Studio. Visual Studio offers refactoring, but I think few would argue that <a href="http://www.jetbrains.com/resharper/">Resharper</a> or <a href="http://www.devexpress.com/Products/Visual_Studio_Add-in/Coding_Assistance/">RefactorPro!</a> don’t offer more features that justify their cost.</p>
<p>For me the big advantage of Typemock is ease of use and consistent syntax across all usage patterns. This could be just due to familiarity, but the fact I don’t need to manually generate the fake assembly is a bonus. Also, Isolator’s fluent API is basically the same as that of <a href="http://code.google.com/p/moq/">Moq</a> and <a href="https://github.com/FakeItEasy/FakeItEasy">FakeItEasy</a>, so it causes less friction when coming to advanced mocking from these tools. A team can use the <a href="http://www.typemock.com/isolator-product-page">free basic version of Typemock Isolator</a> until they need the advanced features, at which point they can buy a license.</p>
<p>Fakes is a different way of working to most other frameworks, operating at a different level inside Visual Studio. A disadvantage of this is that it does not lend itself well to refactoring: you will probably have to regenerate the fake assemblies after any refactor, which can be slow. This also makes refactoring a bit more risky, as you have to touch the unit tests too, a manual operation.</p>
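<p>To make the difference concrete, here is the canonical ‘fake DateTime.Now’ example in both styles. Treat this as an illustrative sketch rather than a copy-paste sample:</p>
<pre><code>// Microsoft Fakes: requires a generated System.Fakes shim assembly
using (ShimsContext.Create())
{
    System.Fakes.ShimDateTime.NowGet = () =&gt; new DateTime(2013, 1, 1);
    // ... exercise the code under test that reads DateTime.Now ...
}

// Typemock Isolator: no generated assembly, the same fluent AAA style throughout
Isolate.WhenCalled(() =&gt; DateTime.Now).WillReturn(new DateTime(2013, 1, 1));
</code></pre>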
<p>I think that, at this time, Isolator still offers advanced features and ease-of-use advantages that justify the license cost. However, as with all tools this is an ever-changing field; I expect to see new features from all the players in the mocking market as they aim to better address the problems caused by the poor architecture of applications/frameworks such as SharePoint and, of course, our own poorly designed legacy code.</p>
]]></content:encoded>
    </item>
    <item>
      <title>VS Anywhere–Have a look at distributed pair programming</title>
      <link>https://blog.richardfennell.net/posts/vs-anywhere-have-a-look-at-distributed-pair-programming/</link>
      <pubDate>Mon, 25 Feb 2013 19:46:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vs-anywhere-have-a-look-at-distributed-pair-programming/</guid>
      <description>&lt;p&gt;Whilst at the MVP Summit in Redmond last week there were MVP2MVP sessions; these are much like the &lt;a href=&#34;http://developerdeveloperdeveloper.com/home/&#34;&gt;DDD conferences we have&lt;/a&gt; in the UK: sessions delivered by MVPs on their experiences and the products they offer.&lt;/p&gt;
&lt;p&gt;One of the most interesting I saw last week was &lt;a href=&#34;https://vsanywhere.com/default.aspx&#34;&gt;VS Anywhere&lt;/a&gt;. This is an extension to Visual Studio that allows distributed pair programming. This is far more than desktop sharing in Skype or Lync; think more like the concurrent document editing in Office 365.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst at the MVP Summit in Redmond last week there were MVP2MVP sessions; these are much like the <a href="http://developerdeveloperdeveloper.com/home/">DDD conferences we have</a> in the UK: sessions delivered by MVPs on their experiences and the products they offer.</p>
<p>One of the most interesting I saw last week was <a href="https://vsanywhere.com/default.aspx">VS Anywhere</a>. This is an extension to Visual Studio that allows distributed pair programming. This is far more than desktop sharing in Skype or Lync; think more like the concurrent document editing in Office 365.</p>
<p>Why not have a look at the <a href="https://www.youtube.com/watch?feature=player_embedded&amp;v=XQQih5zFb6E">promo video</a> or sign up for a free trial. If you have distributed teams, or the need to support a client’s developers remotely, it might be just the thing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Lab Management and Windows 8 and Server 2012</title>
      <link>https://blog.richardfennell.net/posts/tfs-lab-management-and-windows-8-and-server-2012/</link>
      <pubDate>Mon, 25 Feb 2013 19:22:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-lab-management-and-windows-8-and-server-2012/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/lab_management/archive/2009/05/18/vsts-2010-lab-management-basic-concepts.aspx&#34;&gt;TFS Lab Management&lt;/a&gt; can be a good way to manage your development and test environments, providing a means to more easily store, deploy and snapshot environments. Problem is, they are not magic; you need to create the virtual machines you will use. You can use tools such as the &lt;a href=&#34;http://msdn.microsoft.com/en-us/magazine/hh580740.aspx&#34;&gt;ALM Rangers VM Factory&lt;/a&gt; to speed this process, but you still need the source ISOs to create the VMs.&lt;/p&gt;
&lt;p&gt;Until the release of System Center 2012 SP1 (and TFS 2012 QU1) Lab Management did not support Windows 8 or Server 2012. This was a limitation of System Center 2008 Virtual Machine Manager. But with the release of these new versions you now have no excuse not to have a look at TFS Lab Management for even your most cutting-edge projects. And remember, you are licensed for Lab Management (including SC-VMM for the sole use of TFS Lab Management) as part of your TFS 2012 MSDN license.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.msdn.com/b/lab_management/archive/2009/05/18/vsts-2010-lab-management-basic-concepts.aspx">TFS Lab Management</a> can be a good way to manage your development and test environments, providing a means to more easily store, deploy and snapshot environments. Problem is, they are not magic; you need to create the virtual machines you will use. You can use tools such as the <a href="http://msdn.microsoft.com/en-us/magazine/hh580740.aspx">ALM Rangers VM Factory</a> to speed this process, but you still need the source ISOs to create the VMs.</p>
<p>Until the release of System Center 2012 SP1 (and TFS 2012 QU1) Lab Management did not support Windows 8 or Server 2012. This was a limitation of System Center 2008 Virtual Machine Manager. But with the release of these new versions you now have no excuse not to have a look at TFS Lab Management for even your most cutting-edge projects. And remember, you are licensed for Lab Management (including SC-VMM for the sole use of TFS Lab Management) as part of your TFS 2012 MSDN license.</p>
<p>So if you want to have a look at System Center or Windows Server 2012, why not try the new evaluation downloads on TechNet:</p>
<ul>
<li> <a href="http://aka.ms/MVP_SC_1">System Center 2012 SP1</a> </li>
<li> <a href="http://aka.ms/MVP_WS_ISO_1">Windows Server 2012 ISO</a></li>
<li> <a href="http://aka.ms/MVP_WS_VHD_1">Windows Server 2012 VHD</a></li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Running an external command line tool as part of a Wix install</title>
      <link>https://blog.richardfennell.net/posts/running-an-external-command-line-tool-as-part-of-a-wix-install/</link>
      <pubDate>Mon, 25 Feb 2013 15:21:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-an-external-command-line-tool-as-part-of-a-wix-install/</guid>
      <description>&lt;p&gt;I have recently been battling to run a command line tool within a Wix 3.6 installer. I eventually got it going, but learnt a few things along the way. Here is a code fragment that outlines the solution.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;Product ………&amp;gt;
  ……… loads of other Wix bits

  &amp;lt;!-- The command line we wish to run is set via a property. Usually you would set
       this with a &amp;lt;Property /&amp;gt; block, but here it has to be done via a CustomAction,
       because we want to build the command from other Wix properties that can only be
       evaluated at runtime. So we set the whole command line, including its arguments,
       in a CustomAction that runs immediately, i.e. in the first phase of the MSIExec
       process while the execution sequence is being built.

       Note that the documentation says the command line should go in a property called
       QtExecCmdLine, but this is only true if the CustomAction is run immediately. Here
       the CustomAction that sets the property is immediate, but the CustomAction that
       executes the command line is deferred, so we have to name the property after the
       executing CustomAction rather than QtExecCmdLine. --&amp;gt;
  &amp;lt;CustomAction Id=&#39;PropertyAssign&#39; Property=&#39;SilentLaunch&#39;
                Value=&#39;&#34;[INSTALLDIR]mycopier.exe&#34; &#34;[ProgramFilesFolder]Java&#34; &#34;[INSTALLDIR]my.jar&#34;&#39;
                Execute=&#39;immediate&#39; /&amp;gt;

  &amp;lt;!-- Next we define the CustomAction that does the work. This needs to be deferred
       (until after the files are installed) and not impersonated, so it runs under the
       same elevated account as the rest of the MSIExec actions (assuming your command
       line tool needs admin rights). --&amp;gt;
  &amp;lt;CustomAction Id=&#34;SilentLaunch&#34; BinaryKey=&#34;WixCA&#34; DllEntry=&#34;CAQuietExec&#34;
                Execute=&#34;deferred&#34; Return=&#34;check&#34; Impersonate=&#34;no&#34; /&amp;gt;

  &amp;lt;!-- Finally we set where in the install sequence the CustomActions run, and that
       they are only called on a new install. Note that we don&#39;t tidy up after this
       command line tool on an uninstall. --&amp;gt;
  &amp;lt;InstallExecuteSequence&amp;gt;
    &amp;lt;Custom Action=&#34;PropertyAssign&#34; Before=&#34;SilentLaunch&#34;&amp;gt;NOT Installed&amp;lt;/Custom&amp;gt;
    &amp;lt;Custom Action=&#34;SilentLaunch&#34; After=&#34;InstallFiles&#34;&amp;gt;NOT Installed&amp;lt;/Custom&amp;gt;
  &amp;lt;/InstallExecuteSequence&amp;gt;&lt;/code&gt;&lt;/pre&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been battling to run a command line tool within a Wix 3.6 installer. I eventually got it going, but learnt a few things along the way. Here is a code fragment that outlines the solution.</p>
<pre><code>&lt;Product ………&gt;
  ……… loads of other Wix bits

  &lt;!-- The command line we wish to run is set via a property. Usually you would set
       this with a &lt;Property /&gt; block, but here it has to be done via a CustomAction,
       because we want to build the command from other Wix properties that can only be
       evaluated at runtime. So we set the whole command line, including its arguments,
       in a CustomAction that runs immediately, i.e. in the first phase of the MSIExec
       process while the execution sequence is being built.

       Note that the documentation says the command line should go in a property called
       QtExecCmdLine, but this is only true if the CustomAction is run immediately. Here
       the CustomAction that sets the property is immediate, but the CustomAction that
       executes the command line is deferred, so we have to name the property after the
       executing CustomAction rather than QtExecCmdLine. --&gt;
  &lt;CustomAction Id='PropertyAssign' Property='SilentLaunch'
                Value='"[INSTALLDIR]mycopier.exe" "[ProgramFilesFolder]Java" "[INSTALLDIR]my.jar"'
                Execute='immediate' /&gt;

  &lt;!-- Next we define the CustomAction that does the work. This needs to be deferred
       (until after the files are installed) and not impersonated, so it runs under the
       same elevated account as the rest of the MSIExec actions (assuming your command
       line tool needs admin rights). --&gt;
  &lt;CustomAction Id="SilentLaunch" BinaryKey="WixCA" DllEntry="CAQuietExec"
                Execute="deferred" Return="check" Impersonate="no" /&gt;

  &lt;!-- Finally we set where in the install sequence the CustomActions run, and that
       they are only called on a new install. Note that we don't tidy up after this
       command line tool on an uninstall. --&gt;
  &lt;InstallExecuteSequence&gt;
    &lt;Custom Action="PropertyAssign" Before="SilentLaunch"&gt;NOT Installed&lt;/Custom&gt;
    &lt;Custom Action="SilentLaunch" After="InstallFiles"&gt;NOT Installed&lt;/Custom&gt;
  &lt;/InstallExecuteSequence&gt;

&lt;/Product&gt;</code></pre>
<p>So the usual set of non-obvious Wix steps, but we got there in the end.</p>
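<p>One debugging tip: CAQuietExec sends the tool&rsquo;s console output to the MSI log rather than the screen, so run the installer with verbose logging and search the log for your CustomAction names (the MSI name here is a placeholder):</p>
<pre><code>msiexec /i MyInstaller.msi /l*v install.log
</code></pre>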
]]></content:encoded>
    </item>
    <item>
      <title>New release of Typemock Isolator</title>
      <link>https://blog.richardfennell.net/posts/new-release-of-typemock-isolator/</link>
      <pubDate>Mon, 25 Feb 2013 11:49:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-release-of-typemock-isolator/</guid>
      <description>&lt;p&gt;Typemock have released a &lt;a href=&#34;http://www.typemock.com/blog/2013/02/24/isolator-7-3-released/&#34;&gt;new version of Isolator (7.3)&lt;/a&gt; that addresses some problems they have been having with Visual Studio 2012 and Windows 8.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Resolved all issues concerning the breaking MS update&lt;/li&gt;
&lt;li&gt;Boosted performance across the board (&lt;em&gt;up to 44% faster&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;Enhanced SmartRunner UI&lt;/li&gt;
&lt;li&gt;Improved mocking capabilities&lt;/li&gt;
&lt;li&gt;Better integration with .NET development ecosystem&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can &lt;a href=&#34;http://www.typemock.com/download/&#34;&gt;download&lt;/a&gt; this release, or a free trial if you don’t have an Isolator license, from the Typemock site.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Typemock have released a <a href="http://www.typemock.com/blog/2013/02/24/isolator-7-3-released/">new version of Isolator (7.3)</a> that addresses some problems they have been having with Visual Studio 2012 and Windows 8.</p>
<ul>
<li>Resolved all issues concerning the breaking MS update</li>
<li>Boosted performance across the board (<em>up to 44% faster</em>)</li>
<li>Enhanced SmartRunner UI</li>
<li>Improved mocking capabilities</li>
<li>Better integration with .NET development ecosystem</li>
</ul>
<p>You can <a href="http://www.typemock.com/download/">download</a> this release, or a free trial if you don’t have an Isolator license, from the Typemock site.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking on TEE at VISUG Conference in April</title>
      <link>https://blog.richardfennell.net/posts/speaking-on-tee-at-visug-conference-in-april/</link>
      <pubDate>Mon, 18 Feb 2013 21:14:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-on-tee-at-visug-conference-in-april/</guid>
      <description>&lt;p&gt;On the 17th of April I will be speaking at &lt;a href=&#34;http://www.visugday.be/&#34;&gt;VISUG Conference&lt;/a&gt; in Antwerp.&lt;/p&gt;
&lt;p&gt;Looks to be an interesting day with 3 tracks, Win8, Web and ALM. I am on the ALM track with &lt;a href=&#34;http://woodwardweb.com/&#34;&gt;Martin Woodward&lt;/a&gt; and &lt;a href=&#34;http://blog.nenoloje.com/&#34;&gt;Neno Loje&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On the 17th of April I will be speaking at <a href="http://www.visugday.be/">VISUG Conference</a> in Antwerp.</p>
<p>Looks to be an interesting day with 3 tracks, Win8, Web and ALM. I am on the ALM track with <a href="http://woodwardweb.com/">Martin Woodward</a> and <a href="http://blog.nenoloje.com/">Neno Loje</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Lost with all the ALM Ranger projects? A Windows 8 App is here to help</title>
      <link>https://blog.richardfennell.net/posts/lost-with-all-the-alm-ranger-projects-a-windows-8-app-to-here-to-help/</link>
      <pubDate>Wed, 13 Feb 2013 22:09:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lost-with-all-the-alm-ranger-projects-a-windows-8-app-to-here-to-help/</guid>
      <description>&lt;p&gt;The new &lt;a href=&#34;http://apps.microsoft.com/windows/en-GB/app/alm-readiness-treasure-map/98782e69-ba79-4ab9-890a-44139fa8cd7f&#34;&gt;Windows 8 ALM Readiness Treasure Map application&lt;/a&gt; is available on the store. This is a great way to navigate a whole range of resources available for anyone involved in ALM work.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The new <a href="http://apps.microsoft.com/windows/en-GB/app/alm-readiness-treasure-map/98782e69-ba79-4ab9-890a-44139fa8cd7f">Windows 8 ALM Readiness Treasure Map application</a> is available on the store. This is a great way to navigate a whole range of resources available for anyone involved in ALM work.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More on rights being stripped for the [team project]contributors group in TFS 2012 when QU1 is applied, and how to sort it.</title>
      <link>https://blog.richardfennell.net/posts/more-in-rights-being-stripped-for-the-team-projectcontributors-group-in-tfs-2012-when-qu1-applied-and-how-to-sort-it/</link>
      <pubDate>Thu, 07 Feb 2013 10:23:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-in-rights-being-stripped-for-the-team-projectcontributors-group-in-tfs-2012-when-qu1-applied-and-how-to-sort-it/</guid>
      <description>&lt;p&gt;I recently wrote &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/28/Fixing-area-permission-issues-when-creating-new-teams-in-TFS-2012-after-QU1-has-been-installed.aspx&#34;&gt;a post&lt;/a&gt; that discussed how the contributor rights had been stripped from areas in a TFS 2012 server when QU1 was applied; this included details on the patches to apply and the manual steps to resolve the problem.&lt;/p&gt;
&lt;p&gt;Well, today I found that it is not just in the area security that you can see this problem. We found it in the main source code repository too. Again the [Team project]contributors group was completely missing. I had to re-add it manually. Once this was done all was OK for the users.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently wrote <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/28/Fixing-area-permission-issues-when-creating-new-teams-in-TFS-2012-after-QU1-has-been-installed.aspx">a post</a> that discussed how the contributor rights had been stripped from areas in a TFS 2012 server when QU1 was applied; this included details on the patches to apply and the manual steps to resolve the problem.</p>
<p>Well, today I found that it is not just in the area security that you can see this problem. We found it in the main source code repository too. Again the [Team project]contributors group was completely missing. I had to re-add it manually. Once this was done all was OK for the users.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_86.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_86.png" title="image"></a></p>
<p>FYI: You might ask how I missed this before; most of the users on this project had higher levels of rights granted by being members of other groups. It was not until someone was re-assigned between teams that we noticed.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting Windows Phone 7.8 on my Lumia 800</title>
      <link>https://blog.richardfennell.net/posts/getting-windows-phone-7-8-on-my-lumia-800/</link>
      <pubDate>Mon, 04 Feb 2013 18:08:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-windows-phone-7-8-on-my-lumia-800/</guid>
      <description>&lt;p&gt;Microsoft have released Windows Phone 7.8 in the last few days. As usual the rollout appears to be phased, I think based on the serial number of your phone. As with previous versions you can force the update, thus jumping the phased rollout queue. The process is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Put the phone in flight mode (so no data connection)&lt;/li&gt;
&lt;li&gt;Connect it to your PC running Zune, it will look to see if there is an OS update. If it finds it great, let it do the upgrade&lt;/li&gt;
&lt;li&gt;If it does not find it, select the settings menu (top right in Zune)&lt;/li&gt;
&lt;li&gt;You need to select the update menu option on the left menu&lt;/li&gt;
&lt;li&gt;Zune will check for an update; a second or two after it starts this process, disconnect the PC from the Internet. This should allow Zune to get the list of updates, but not the filter list of serial numbers, so it assumes the update is for you.&lt;/li&gt;
&lt;li&gt;You should get the update available message, reconnect the internet (it needs to download the file) and continue to do the upgrade&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;You will probably have to repeat step 5 a few times to get the timing correct&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Microsoft have released Windows Phone 7.8 in the last few days. As usual the rollout appears to be phased, I think based on the serial number of your phone. As with previous versions you can force the update, thus jumping the phased rollout queue. The process is:</p>
<ol>
<li>Put the phone in flight mode (so no data connection)</li>
<li>Connect it to your PC running Zune, it will look to see if there is an OS update. If it finds it great, let it do the upgrade</li>
<li>If it does not find it, select the settings menu (top right in Zune)</li>
<li>You need to select the update menu option on the left menu</li>
<li>Zune will check for an update; a second or two after it starts this process, disconnect the PC from the Internet. This should allow Zune to get the list of updates, but not the filter list of serial numbers, so it assumes the update is for you.</li>
<li>You should get the update available message, reconnect the internet (it needs to download the file) and continue to do the upgrade</li>
</ol>
<p>You will probably have to repeat step 5 a few times to get the timing correct</p>
<p>I also had to repeat the whole process three times for three different firmware and OS updates before I ended up with 7.8.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_85.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_85.png" title="image"></a></p>
<p>But now I have multi size tiles and the new lock screen.</p>
<p>Or, if you don’t fancy all the hassle, you could just wait a few days.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2012.2 changes</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2012-2-changes/</link>
      <pubDate>Mon, 04 Feb 2013 13:40:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2012-2-changes/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/31/My-TFS-session-at-Black-Marbles-Tech-Update-is-already-out-of-date-there-were-announcements-last-night.aspx&#34;&gt;Git support&lt;/a&gt; was not the only announcement for TFS at the ALM Summit last week. On &lt;a href=&#34;http://blogs.msdn.com/b/bharry/&#34;&gt;Brian Harry’s blog&lt;/a&gt; you can see more on the new features either in the TFS/VS 2012.2 (Update 2) CTP or planned to appear in later CTPs. The list is long, but the ones that caught my eye beyond that of Git support are&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Microsoft Fakes moves from the Ultimate SKU to Premium, thus making it a ‘free’ option for many corporate developers as they already have that SKU&lt;/li&gt;
&lt;li&gt;Easier customisation of the Kanban board&lt;/li&gt;
&lt;li&gt;Tagging of workitems to allow flexible filtering&lt;/li&gt;
&lt;li&gt;Testing visibility in the web admin pages&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;… and many other improvements. Have a look at the blog posts, or even better pull down the CTP for a look. Also remember, if you want to see new TFS features, why not try them via a &lt;a href=&#34;http://tfs.visualstudio.com/&#34;&gt;Team Foundation Service&lt;/a&gt; account?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/31/My-TFS-session-at-Black-Marbles-Tech-Update-is-already-out-of-date-there-were-announcements-last-night.aspx">Git support</a> was not the only announcement for TFS at the ALM Summit last week. On <a href="http://blogs.msdn.com/b/bharry/">Brian Harry’s blog</a> you can see more on the new features either in the TFS/VS 2012.2 (Update 2) CTP or planned to appear in later CTPs. The list is long, but the ones that caught my eye beyond that of Git support are</p>
<ul>
<li>Microsoft Fakes moves from the Ultimate SKU to Premium, thus making it a ‘free’ option for many corporate developers as they already have that SKU</li>
<li>Easier customisation of the Kanban board</li>
<li>Tagging of workitems to allow flexible filtering</li>
<li>Testing visibility in the web admin pages</li>
</ul>
<p>… and many other improvements. Have a look at the blog posts, or even better pull down the CTP for a look. Also remember, if you want to see new TFS features, why not try them via a <a href="http://tfs.visualstudio.com/">Team Foundation Service</a> account?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Tales from last weekend’s Imagine Cup North East Hackathon</title>
      <link>https://blog.richardfennell.net/posts/tales-from-last-weekends-imagine-cup-north-east-hackathon/</link>
      <pubDate>Mon, 04 Feb 2013 13:32:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tales-from-last-weekends-imagine-cup-north-east-hackathon/</guid>
      <description>&lt;p&gt;I spent much of the weekend at the &lt;a href=&#34;http://www.imaginecupnortheast.co.uk/&#34;&gt;Imagine Cup North East Hackathon&lt;/a&gt;; for those of you who don’t know the &lt;a href=&#34;http://www.imaginecup.com/#?fbid=-GU3RmJspRO&#34;&gt;Imagine Cup is Microsoft’s world wide student competition&lt;/a&gt;. This event was to help students in the North East of England to get there entries kick started before their regional final in a few weeks, which lead to the UK finals and for one team the worlds in Russia in July.&lt;/p&gt;
&lt;p&gt;The event seemed a great success, everyone seemed to enjoy it and we mentors saw huge progress in the entries of all the teams. Not just in new code written, but arguably the more important areas of better team working and presentation skill.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I spent much of the weekend at the <a href="http://www.imaginecupnortheast.co.uk/">Imagine Cup North East Hackathon</a>; for those of you who don’t know the <a href="http://www.imaginecup.com/#?fbid=-GU3RmJspRO">Imagine Cup is Microsoft’s world wide student competition</a>. This event was to help students in the North East of England to get there entries kick started before their regional final in a few weeks, which lead to the UK finals and for one team the worlds in Russia in July.</p>
<p>The event seemed a great success, everyone seemed to enjoy it and we mentors saw huge progress in the entries of all the teams. Not just in new code written, but arguably the more important areas of better team working and presentation skill.</p>
<p>You can see the event’s Twitter comments on the hashtag <a href="https://twitter.com/search?q=%23ICNE&amp;src=hash">#ICNE</a> and there are some photos on the <a href="http://www.facebook.com/media/set/?set=a.10151403841619916.499013.89550244915&amp;type=1">Black Marble Facebook page</a>.</p>
<p>So does this sound interesting, whether as a student or a mentor? If so, have a think about entering or helping next year (it is getting a bit late for this year), and encourage teams near you to get involved.</p>
<p><a href="http://www.imaginecup.com/#?fbid=-GU3RmJspRO"><img alt="Logo[1]" loading="lazy" src="/wp-content/uploads/sites/2/historic/Logo%5B1%5D.jpg" title="Logo[1]"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>My TFS session at Black Marble’s Tech Update is already out of date, there were announcements last night</title>
      <link>https://blog.richardfennell.net/posts/my-tfs-session-at-black-marbles-tech-update-is-already-out-of-date-there-were-announcements-last-night/</link>
      <pubDate>Thu, 31 Jan 2013 13:53:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-tfs-session-at-black-marbles-tech-update-is-already-out-of-date-there-were-announcements-last-night/</guid>
      <description>&lt;p&gt;At the &lt;a href=&#34;http://www.alm-summit.com/&#34;&gt;ALM Summit&lt;/a&gt; yesterday Brian Harry made some major TFS and Visual Studio announcements&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/01/30/git-init-vs.aspx&#34;&gt;Git support for the hosted visualstudio.com&lt;/a&gt;; this allows you to choose whether you want a centralised (existing TFS) source control repository or DVCS (using Git). There are also new tools with VS2012 to make using Git easier. &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/01/30/git-init-vs.aspx&#34;&gt;Read more&lt;/a&gt; in his blog as to why Microsoft has made this addition to their offering. For those of you using on-premises TFS, you will have to wait for the next major release of TFS; don’t expect to see this in a quarterly update.&lt;/li&gt;
&lt;li&gt;He also outlined &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/01/30/announcing-visual-studio-2012-update-2-vs2012-2.aspx&#34;&gt;what is to be in Visual Studio 2012 Update 2&lt;/a&gt;: loads of tool enhancements.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Have a look at the posts to find out more.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the <a href="http://www.alm-summit.com/">ALM Summit</a> yesterday Brian Harry made some major TFS and Visual Studio announcements</p>
<ul>
<li><a href="http://blogs.msdn.com/b/bharry/archive/2013/01/30/git-init-vs.aspx">Git support for the hosted visualstudio.com</a>; this allows you to choose whether you want a centralised (existing TFS) source control repository or DVCS (using Git). There are also new tools with VS2012 to make using Git easier. <a href="http://blogs.msdn.com/b/bharry/archive/2013/01/30/git-init-vs.aspx">Read more</a> in his blog as to why Microsoft has made this addition to their offering. For those of you using on-premises TFS, you will have to wait for the next major release of TFS; don’t expect to see this in a quarterly update.</li>
<li>He also outlined <a href="http://blogs.msdn.com/b/bharry/archive/2013/01/30/announcing-visual-studio-2012-update-2-vs2012-2.aspx">what is to be in Visual Studio 2012 Update 2</a>: loads of tool enhancements.</li>
</ul>
<p>Have a look at the posts to find out more.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fixing area permission issues when creating new teams in TFS 2012 after QU1 has been installed</title>
      <link>https://blog.richardfennell.net/posts/fixing-area-permission-issues-when-creating-new-teams-in-tfs-2012-after-qu1-has-been-installed/</link>
      <pubDate>Mon, 28 Jan 2013 16:30:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixing-area-permission-issues-when-creating-new-teams-in-tfs-2012-after-qu1-has-been-installed/</guid>
      <description>&lt;p&gt;&lt;em&gt;[Updated 4 Fe 2013 See &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx&#34;&gt;http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx&lt;/a&gt; for the latest on this ]&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;One of the side effects of the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/18/TF237111-errors-when-trying-to-add-work-items-to-the-backlog-after-TFS-2012-QU1-is-applied.aspx&#34;&gt;problems we had with TFS 2012 QU1&lt;/a&gt; was that when we created a new team within a team project contributors had no rights to the teams default Area. The workaround was that we had to add these rights manually, remembering to add these as you would expect is something you forget all the time, so it would be nice to fix the default.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>[Updated 4 Fe 2013 See <a href="http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx">http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx</a> for the latest on this ]</em></p>
<p>One of the side effects of the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/18/TF237111-errors-when-trying-to-add-work-items-to-the-backlog-after-TFS-2012-QU1-is-applied.aspx">problems we had with TFS 2012 QU1</a> was that when we created a new team within a team project contributors had no rights to the teams default Area. The workaround was that we had to add these rights manually, remembering to add these as you would expect is something you forget all the time, so it would be nice to fix the default.</p>
<p>The solution, it turns out, is straightforward: any new team gets its area rights inherited from the default team/root of the team project.</p>
<ol>
<li>Open the TFS web based control panel </li>
<li>Select the Team Project Collection</li>
<li>Select the Team Project</li>
<li>Select the ‘Areas’</li>
<li>Select the root node (has the same name as the Team Project)</li>
<li>Using the drop down menu to the left of the checkbox, select security</li>
<li>Add the Contributor TFS Group and grant it the following rights</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_84.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_84.png" title="image"></a></p>
<p>These settings will be used as the template for any new teams created within the Team Project.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Leeds# Meetup 8 - Coding Dojo - Thursday 31 January</title>
      <link>https://blog.richardfennell.net/posts/leeds-meetup-8-coding-dojo-thursday-31-january/</link>
      <pubDate>Thu, 24 Jan 2013 12:26:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/leeds-meetup-8-coding-dojo-thursday-31-january/</guid>
      <description>&lt;p&gt;If you are at a loose end next Thursday and in Leeds, why not check out the &lt;a href=&#34;http://www.leeds-sharp.org/events/2013/1&#34;&gt;Coding Dojo at the Leeds Sharp user group.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Unfortunately it looks like I am on a client visit, but it does sound interesting.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are at a loose end next Thursday and in Leeds, why not check out the <a href="http://www.leeds-sharp.org/events/2013/1">Coding Dojo at the Leeds Sharp user group.</a></p>
<p>Unfortunately it looks like I am on a client visit, but it does sound interesting.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My session today at Modern Jago</title>
      <link>https://blog.richardfennell.net/posts/my-session-today-at-modern-jago/</link>
      <pubDate>Wed, 23 Jan 2013 20:06:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-session-today-at-modern-jago/</guid>
      <description>&lt;p&gt;Thanks  to everyone who came along to the Microsoft event today at Modern Jago. I hope you all found it useful. I got feedback from a few people that my tip on not trusting company WIFI when trying to do remote debugging of Windows RT devices was useful (or any other type of device for that matter).&lt;/p&gt;
&lt;p&gt;I have seen too many corporate level Wifi implementation, and a surprising number of home ASDL/Wifi routers, doing isolation between WiFi clients. So each client can see the internet fine, but not any another Wifi devices. My usual solution is as I did today, use a MiFi or phone as a basic Wifi hub, they are both too dumb to try anything as complex as client isolation. Or look on your Wifi hub to check if you can disable client isolation.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks  to everyone who came along to the Microsoft event today at Modern Jago. I hope you all found it useful. I got feedback from a few people that my tip on not trusting company WIFI when trying to do remote debugging of Windows RT devices was useful (or any other type of device for that matter).</p>
<p>I have seen too many corporate level Wifi implementation, and a surprising number of home ASDL/Wifi routers, doing isolation between WiFi clients. So each client can see the internet fine, but not any another Wifi devices. My usual solution is as I did today, use a MiFi or phone as a basic Wifi hub, they are both too dumb to try anything as complex as client isolation. Or look on your Wifi hub to check if you can disable client isolation.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More on HDD2 boot problems with my Crucial M4-mSATA</title>
      <link>https://blog.richardfennell.net/posts/more-on-hdd2-boot-problems-with-my-crucial-m4-msata/</link>
      <pubDate>Tue, 22 Jan 2013 11:37:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-hdd2-boot-problems-with-my-crucial-m4-msata/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/23/Moving-to-an-SSD-on-Lenovo-W520.aspx&#34;&gt;I have been battling my Crucial M4-mSATA 256Gb SDD for a while now&lt;/a&gt;. The drive seems OK most of the time, but if for any reason my PC crashes (i.e. a blue screen, which I have found is luckily rare on Windows8) the PC will not start-up giving a ‘HDD2 cannot be found’ error during POST.&lt;/p&gt;
&lt;p&gt;I had not had this problem for a few months, so though it was fixed, then BANG yesterday Windows crashed out the blue (I was writing a document in Word whilst listening to music, not exactly a huge load for Core i7) and I hit the start-up problem. Of course I had been working on the document all afternoon and was relying on auto-save, not doing a real Ctrl S save to a remote network drive, so I expected to have lost everything.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/23/Moving-to-an-SSD-on-Lenovo-W520.aspx">I have been battling my Crucial M4-mSATA 256Gb SDD for a while now</a>. The drive seems OK most of the time, but if for any reason my PC crashes (i.e. a blue screen, which I have found is luckily rare on Windows8) the PC will not start-up giving a ‘HDD2 cannot be found’ error during POST.</p>
<p>I had not had this problem for a few months, so though it was fixed, then BANG yesterday Windows crashed out the blue (I was writing a document in Word whilst listening to music, not exactly a huge load for Core i7) and I hit the start-up problem. Of course I had been working on the document all afternoon and was relying on auto-save, not doing a real Ctrl S save to a remote network drive, so I expected to have lost everything.</p>
<p>A few attempts at a reboot, using tricks that had worked in the past, got me nowhere. After a bit more digging in forums I found this new process suggested as a ‘fix’ from Crucial</p>
<ol>
<li>Plug the system into the mains, then start the system; when you get the disk not found error, go into the BIOS settings</li>
<li>Leave the PC running, but doing nothing, for 20 minutes. As you are in the BIOS there will be no activity for the SSD; this gives it a chance to do a self-test and sort itself out.</li>
<li>Switch off the system, unplug from the mains and pull the battery out for 30 seconds</li>
<li>Plug the system back in and hopefully it will restart without error</li>
<li>If not, repeat steps 1–4 until you have had enough.</li>
</ol>
<p>Well this process got me going, and it does sort of fit with the procedures I had tried before; they all gave the SSD time to self-test after a crash. However, I really needed a better fix; this is my main PC and it needs to be reliable. So I checked to see <a href="http://www.crucial.com/support/firmware.aspx">if there were any new firmware releases from Crucial</a>, and it seems there were. I had 04MF and now there is 04MH. Version 04MH includes the following changes:</p>
<ul>
<li>Improved robustness in the event of an unexpected power loss. Significantly reduces the incidence of long reboot times after an unexpected power loss.</li>
<li>Corrected minor status reporting error during SMART Drive Self Test execution (does not affect SMART attribute data).</li>
<li>Streamlined firmware update command for smoother operation in Windows 8.</li>
<li>Improved wear leveling algorithms to improve data throughput when foreground wear leveling is required.</li>
</ul>
<p>So well worth a try it would seem. The only issue is that my SSD is BitLockered; was this going to be a problem? It takes ages to remove BitLocker and reapply it.</p>
<p>Well, I thought I would risk the update without changing BitLocker (as I had now got the important data off the SSD). So I</p>
<ol>
<li>Downloaded the Windows 8 firmware tool and current release from Crucial.</li>
<li>Ran it, it warned about backups, and BIOS encryption (which had me a bit worried, but what the hell!)</li>
<li>Accepted the license</li>
<li>Selected my SDD and told it to upgrade</li>
<li>And waited……..</li>
<li>And waited…….. The issue is that the tool does not really give you much indication that you actually hit the update button, and disk activity is also very patchy; basically the PC looks to have hung.</li>
<li>However, after about 5 minutes the application came back, tried to run again (as I had pressed update twice) and promptly crashed. It had, however, done the upgrade.</li>
<li>I re-ran the tool and it told me the drive was now at 04MH</li>
</ol>
<p>I rebooted the PC and all seemed OK, but only time will tell.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF237111 errors when trying to add work items to the backlog after TFS 2012 QU1 is applied</title>
      <link>https://blog.richardfennell.net/posts/tf237111-errors-when-trying-to-add-work-items-to-the-backlog-after-tfs-2012-qu1-is-applied/</link>
      <pubDate>Fri, 18 Jan 2013 12:58:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf237111-errors-when-trying-to-add-work-items-to-the-backlog-after-tfs-2012-qu1-is-applied/</guid>
      <description>&lt;p&gt;&lt;em&gt;[Updated 4 Feb 2013 See&lt;/em&gt; &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx&#34;&gt;&lt;em&gt;http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx&lt;/em&gt;&lt;/a&gt;_ for the latest on this ]_&lt;/p&gt;
&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/14/Experiences-applying-TFS-2012-QU1-and-it-subsequent-hotfix.aspx&#34;&gt;posted earlier in the week about my experiences with the post TFS 2012 QU1 hotfix&lt;/a&gt;. When I posted I thought we had all our problems sorted, we did for new team projects, but it seems still had an issue for teams on our team projects that were created prior to the upgraded from RTM to QU1. As I said in the past post we got into this position due to trying to upgraded a TPC form RTM to QU1 by detaching from the 2012 RTM server and attaching to a 2012 QU1 server – this is not the recommended route and caused us to suffer the problem the &lt;a href=&#34;http://support.microsoft.com/kb/2795609&#34;&gt;KB2795609 patch&lt;/a&gt; addresses.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>[Updated 4 Feb 2013 See</em> <a href="http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx"><em>http://blogs.msdn.com/b/bharry/archive/2013/02/01/hotfixes-for-tfs-2012-update-1-tfs-2012-1.aspx</em></a>_ for the latest on this ]_</p>
<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/14/Experiences-applying-TFS-2012-QU1-and-it-subsequent-hotfix.aspx">posted earlier in the week about my experiences with the post TFS 2012 QU1 hotfix</a>. When I posted I thought we had all our problems sorted, we did for new team projects, but it seems still had an issue for teams on our team projects that were created prior to the upgraded from RTM to QU1. As I said in the past post we got into this position due to trying to upgraded a TPC form RTM to QU1 by detaching from the 2012 RTM server and attaching to a 2012 QU1 server – this is not the recommended route and caused us to suffer the problem the <a href="http://support.microsoft.com/kb/2795609">KB2795609 patch</a> addresses.</p>
<p>The problem we still had was as follows:</p>
<ul>
<li>
<p>I have two users in a Team Project called ‘BM’, who are in the team called ‘Bad TP’</p>
</li>
<li>
<p>Richard (the Team project creator and administrator)</p>
</li>
<li>
<p>Fred (a Team Project contributor)</p>
</li>
<li>
<p>All is fine for Richard, he can see the team’s product backlog and add items to it.</p>
</li>
<li>
<p>Fred can get to the team backlog page in the web client, but cannot see any work items and gets a TF237111 error if they try to add a new work item</p>
</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_82.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_82.png" title="image"></a></p>
<ul>
<li>
<p>The quick fix was to make Fred a team project administrator, but not a long term solution</p>
</li>
<li>
<p>We checked the following rights</p>
</li>
<li>
<p>Richard was a member of basically all the groups on the ‘BM’ team project (he was the creator so that was expected); the important ones were [BM]Project administrators, [BM]contributors and ‘Bad TP’</p>
</li>
<li>
<p>Fred was a member of the [BM]contributors  and ‘Bad TP’ team</p>
</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image001%5B6%5D.png"><img alt="clip_image001[6]" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image001%5B6%5D_thumb.png" title="clip_image001[6]"></a></p></blockquote>
<ul>
<li>The ‘Bad TP’ team had the following permissions</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image001_1.png"><img alt="clip_image001" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image001_thumb_1.png" title="clip_image001"></a></p></blockquote>
<p>So all these permissions looked OK, as you would expect. What I had forgotten was that the team model in TFS 2012 is built around the Areas hierarchy. This has security permissions too. To check this</p>
<ul>
<li>Go to the Admin page for ‘Bad TP’</li>
<li>Click the “Areas” tab</li>
<li>Right click the “default area” for the team and select “security”</li>
<li>We had expected to see something like this</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_83.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_83.png" title="image"></a></p></blockquote>
<ul>
<li>However there was no entry at all for the Contributors group.</li>
<li>I added this in and had to explicitly set the four ‘inherited allow’ permissions to ‘allow’, and everything started to work.</li>
</ul>
<p>So the problem was that during the problematic upgrade we had managed to strip off all the contributor group entries from areas in the existing Team Project. The clue was actually in the TF237111 error, as this does mention permissions on the area path.</p>
<p>So now we know we can fix the issue. It should be noted that any new teams created in the team project seem not to get this right applied, so we have to remember to add it when we create a new team.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Incorrect IIS IP Bindings and TFS Server Url</title>
      <link>https://blog.richardfennell.net/posts/incorrect-iis-ip-bindings-and-tfs-server-url/</link>
      <pubDate>Tue, 15 Jan 2013 12:27:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/incorrect-iis-ip-bindings-and-tfs-server-url/</guid>
      <description>&lt;p&gt;By default the TFS server uses &lt;a href=&#34;http://localhost:8080/tfs&#34;&gt;http://localhost:8080/tfs&lt;/a&gt; as it’s Server URL, this is the URL used for internal communication, whereas the Notification Url is the one TFS tells client to communicate to it via. Both these Urls can be changed via the Team Foundation Server Console, but I find you do not usually need to change the Server Url, only the notification one.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_80.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_80.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I hit a problem recently on a site where if you tried to edit the Team Project Collection Group Membership (via the web or TFS admin console) you got a dialog popping up saying ‘HTTP 400 error’. Now this, you have to say, looks like a URL/binding issue: the tools cannot find an end point.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>By default the TFS server uses <a href="http://localhost:8080/tfs">http://localhost:8080/tfs</a> as it’s Server URL, this is the URL used for internal communication, whereas the Notification Url is the one TFS tells client to communicate to it via. Both these Urls can be changed via the Team Foundation Server Console, but I find you do not usually need to change the Server Url, only the notification one.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_80.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_80.png" title="image"></a></p>
<p>I hit a problem recently on a site where if you tried to edit the Team Project Collection Group Membership (via the web or TFS admin console) you got a dialog popping up saying ‘HTTP 400 error’. Now this, you have to say, looks like a URL/binding issue: the tools cannot find an end point.</p>
<p>Turns out the issue was that there had been an IP addressing schema change on the network. The different services on the network had been assigned their own IP addresses (as well as the host having its own IP address), e.g. on our TFS server we might have</p>
<ul>
<li>10.0.0.1 – physicalservername.domain.com</li>
<li>10.0.1.1 – tfs2012.domain.com</li>
<li>10.0.1.2 – sharepoint.domain.com</li>
</ul>
<p>This is all well and good, but a mistake had been made in the IIS bindings during the reconfiguration.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_81.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_81.png" title="image"></a></p>
<p>The HTTPS binding was correct: the hostname matched the IP address, which has to be the case or SSL does not work. However, the HTTP port 8080 binding should have been bound to all IP addresses (i.e. no hostname and the * IP address, as above). On the site, HTTP was bound to a specific IP address. This was fine if a client connected to <a href="http://tfs2012.domain.com:8080/tfs">http://tfs2012.domain.com:8080/tfs</a> (which resolved to the correct address), but failed for <a href="http://localhost:8080/tfs">http://localhost:8080/tfs</a> as the binding did not match.</p>
<p>Once the edit was made to remove the hostname all was OK (the other option would have been to alter the Server URL to match).</p>
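<p>For reference, this sort of binding fix can also be made from the command line with IIS’s appcmd tool rather than the IIS Manager UI. This is only a sketch: the site name ‘Team Foundation Server’ and the old binding shown are assumptions, not taken from the server above; check the output of <code>appcmd list site</code> for the real values on your own server.</p>

```shell
:: Sketch only - site name and old binding are examples, not real values.
:: Show the current bindings for the TFS web site.
%windir%\system32\inetsrv\appcmd.exe list site "Team Foundation Server"

:: Remove the HTTP binding tied to a single IP address/hostname...
%windir%\system32\inetsrv\appcmd.exe set site "Team Foundation Server" ^
  /-bindings.[protocol='http',bindingInformation='10.0.1.1:8080:tfs2012.domain.com']

:: ...and bind HTTP port 8080 to all IP addresses with no hostname instead.
%windir%\system32\inetsrv\appcmd.exe set site "Team Foundation Server" ^
  /+bindings.[protocol='http',bindingInformation='*:8080:']
```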
<p>So problem fixed. The strangest thing is that this issue only appeared to affect setting TPC group membership; everything else was fine.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences applying TFS 2012 QU1 and its subsequent hotfix</title>
      <link>https://blog.richardfennell.net/posts/experiences-applying-tfs-2012-qu1-and-it-subsequent-hotfix/</link>
      <pubDate>Mon, 14 Jan 2013 14:57:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-applying-tfs-2012-qu1-and-it-subsequent-hotfix/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2013/01/11/tfs-2012-update-1-hotfix.aspx&#34;&gt;Brian Harry posted last week about a hotfix for TFS 2012 QU1 (KB2795609)&lt;/a&gt;. This should not be needed by most people, but as his post points out does fix issues for a few customers. Well we were one of those customers. When upgrading from 2012 RTM to 2012 QU1 we had attempted what with hindsight was an over ambitious hardware migration too. This involved swapping our data tier from a SQL 2012 instance to a new 2012 availability group and merging team project collections from different server as well as applying the QU1. Our migration plan contained some team project collection detach/attach steps hence getting into the area this hotfix addresses.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.msdn.com/b/bharry/archive/2013/01/11/tfs-2012-update-1-hotfix.aspx">Brian Harry posted last week about a hotfix for TFS 2012 QU1 (KB2795609)</a>. This should not be needed by most people, but as his post points out does fix issues for a few customers. Well we were one of those customers. When upgrading from 2012 RTM to 2012 QU1 we had attempted what with hindsight was an over ambitious hardware migration too. This involved swapping our data tier from a SQL 2012 instance to a new 2012 availability group and merging team project collections from different server as well as applying the QU1. Our migration plan contained some team project collection detach/attach steps hence getting into the area this hotfix addresses.</p>
<p>The end result was that we had a QU1-upgraded server, but we could only get users connected if we made them team project administrators – a valid short-term solution, but something we needed to fix.</p>
<p>We therefore applied the new <a href="http://support.microsoft.com/kb/2795609">KB2795609 patch</a>, but hit a gotcha that you should be aware of:</p>
<ul>
<li>We ran the patch EXE on our TFS server that was showing the problem.</li>
<li>This ran without error, taking about 5 minutes</li>
<li>We tried to connect to the patched TFS server via the web client and VS2012; we could make a connection to TFS but could not open any TPCs</li>
<li>On checking the TFS admin console we saw the TPC was offline and reporting that the servicing had failed (but this had <strong>not</strong> been reported back via the patch tool)</li>
<li>We re-ran the servicing job (via the TFS admin console) but it failed in the core step; in the logs we saw</li>
</ul>
<blockquote>
<p><em>[Error] TF400744: An error occurred while executing the following script: TurnOnRCSI.sql. Failed batch starts on the line 1. Statement line: 1. Script line: 1. Error: 5069 ALTER DATABASE statement failed.</em></p></blockquote>
<ul>
<li>Our TFS DBs are now stored within a SQL 2012 availability group; during the upgrade to QU1 we had seen problems applying the upgrade unless we removed the DBs from the availability group. So we removed the tfs_configuration and tfs_[mytpc] databases from the availability group, re-applied the servicing job, and all was OK</li>
<li>Once the servicing of the TPC was completed it went online as expected.</li>
<li>We then put the DBs back into the availability group</li>
<li>We could then remove the users from the team project administrators group as their previous rights were working again.</li>
</ul>
<p>So we now had a patched and working TFS 2012 QU1 server. Let’s hope that QU2 is a little smoother and we don’t need the direct help of the product group, who I must say have been great in getting this problem addressed. I really like the openness we see in <a href="http://blogs.msdn.com/b/bharry/">Brian’s blog</a> of both the good and the bad.</p>
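<p>For anyone needing the same workaround, the availability group steps can also be scripted rather than done through Management Studio. This is only a sketch using SQL Server 2012’s <code>Invoke-Sqlcmd</code>; the server, group and database names here (PRIMARYNODE, TfsAG, Tfs_MyTpc) are placeholders, not our real ones:</p>
<pre><code># Run against the primary replica: take the TFS DBs out of the
# availability group before re-running the servicing job
Invoke-Sqlcmd -ServerInstance "PRIMARYNODE" -Query "ALTER AVAILABILITY GROUP [TfsAG] REMOVE DATABASE [Tfs_Configuration];"
Invoke-Sqlcmd -ServerInstance "PRIMARYNODE" -Query "ALTER AVAILABILITY GROUP [TfsAG] REMOVE DATABASE [Tfs_MyTpc];"

# Once servicing has completed, put them back (the secondary replicas
# will need the databases restored/joined again before they synchronise)
Invoke-Sqlcmd -ServerInstance "PRIMARYNODE" -Query "ALTER AVAILABILITY GROUP [TfsAG] ADD DATABASE [Tfs_Configuration];"
Invoke-Sqlcmd -ServerInstance "PRIMARYNODE" -Query "ALTER AVAILABILITY GROUP [TfsAG] ADD DATABASE [Tfs_MyTpc];"
</code></pre>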
]]></content:encoded>
    </item>
    <item>
      <title>Speaking on Windows 8 development</title>
      <link>https://blog.richardfennell.net/posts/speaking-on-windows-8-development/</link>
      <pubDate>Thu, 10 Jan 2013 21:22:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-on-windows-8-development/</guid>
      <description>&lt;p&gt;On the 23rd of January I am doing a short presentation on the various options for Windows 8 development in Visual Studio 2012 as part of the Modern Jango ‘Tools for Windows 8 - Tips &amp;amp; Tricks on Visual Studio 2012 to inspire beautiful app development’ event.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/ukmsdn/archive/2013/01/10/event-tools-for-windows-8-tips-amp-tricks-on-visual-studio-2012-to-inspire-beautiful-app-development.aspx&#34;&gt;For more details check out the UK MSDN site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On the 23rd of January I am doing a short presentation on the various options for Windows 8 development in Visual Studio 2012 as part of the Modern Jango ‘Tools for Windows 8 - Tips &amp; Tricks on Visual Studio 2012 to inspire beautiful app development’ event.</p>
<p><a href="http://blogs.msdn.com/b/ukmsdn/archive/2013/01/10/event-tools-for-windows-8-tips-amp-tricks-on-visual-studio-2012-to-inspire-beautiful-app-development.aspx">For more details check out the UK MSDN site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Lost in nLog configuration files</title>
      <link>https://blog.richardfennell.net/posts/lost-in-nlog-configuration-files/</link>
      <pubDate>Tue, 08 Jan 2013 18:43:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lost-in-nlog-configuration-files/</guid>
      <description>&lt;p&gt;I have been trying to track down a performance problem today on an ASP.NET MVC site. It all turned out to be down to an incorrect connection string in a &lt;a href=&#34;http://nlog-project.org/wiki/Configuration_file&#34;&gt;nLog.config&lt;/a&gt; file for logging to a SQL DB. As soon as I commented out the Db target for nLog my login to the web site was virtually instant as opposed to taking 30 seconds (what I assume is a SQL timeout).&lt;/p&gt;
&lt;p&gt;I had suspected a problem with the logging as I was not seeing anything in &lt;a href=&#34;http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx&#34;&gt;DebugView&lt;/a&gt;, but it all took a while to track down as I did not seem to get any logging output.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to track down a performance problem today on an ASP.NET MVC site. It all turned out to be down to an incorrect connection string in a <a href="http://nlog-project.org/wiki/Configuration_file">nLog.config</a> file for logging to a SQL DB. As soon as I commented out the Db target for nLog my login to the web site was virtually instant as opposed to taking 30 seconds (what I assume is a SQL timeout).</p>
<p>I had suspected a problem with the logging as I was not seeing anything in <a href="http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx">DebugView</a>, but it all took a while to track down as I did not seem to get any logging output.</p>
<p>It seems the blocker was that if I had Visual Studio 2012 running in debug mode then any output to the <a href="http://nlog-project.org/wiki/OutputDebugString_target">OutputDebugString</a> target was lost; this took forever to realise. The only truly reliable target was a text file. I had expected the logging messages to appear in the VS debug window – they did not.</p>
<p>So this is what worked as a user who is a local administrator, but with UAC enabled and NOT running any of the tools as administrator (so nothing special).</p>
<ol>
<li>Load DebugView  with default settings.</li>
<li>From in VS2012 start the web site without debugging (so it loads IIS Express and IE)</li>
<li>Load the web page with some logging</li>
<li>The logging appears in the file, DebugView and the DB</li>
</ol>
<p>This was done using the following nlog.config file:</p>
<pre><code>&lt;?xml version="1.0" encoding="utf-8" ?&gt;
&lt;nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"&gt;

  &lt;targets&gt;
    &lt;target xsi:type="Database" name="db"
            commandText="INSERT INTO [LogEntries](TimeStamp, Message, Level, Logger) VALUES(getutcdate(), @msg, @level, @logger)"
            connectionString="server=.\sql2012;database=MyDb;integrated security=sspi"
            dbProvider="System.Data.SqlClient"&gt;
      &lt;parameter name="@msg" layout="${message}" /&gt;
      &lt;parameter name="@level" layout="${level}" /&gt;
      &lt;parameter name="@logger" layout="${logger}" /&gt;
    &lt;/target&gt;

    &lt;target name="ds" xsi:type="OutputDebugString" layout=" ${message}"/&gt;

    &lt;target xsi:type="File" name="f" fileName="${basedir}/logs/${shortdate}.log"
            layout="${longdate} ${uppercase:${level}} ${message}" /&gt;
  &lt;/targets&gt;

  &lt;rules&gt;
    &lt;logger name="*" minlevel="Trace" writeTo="db" /&gt;
    &lt;logger name="*" minlevel="Trace" writeTo="ds" /&gt;
    &lt;logger name="*" minlevel="Trace" writeTo="f" /&gt;
  &lt;/rules&gt;
&lt;/nlog&gt;
</code></pre>
]]></content:encoded>
    </item>
    <item>
      <title>EPG for Windows Media Center is fixed</title>
      <link>https://blog.richardfennell.net/posts/epg-for-windows-media-center-is-fixed/</link>
      <pubDate>Fri, 04 Jan 2013 15:53:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/epg-for-windows-media-center-is-fixed/</guid>
      <description>&lt;p&gt;It seems that the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/03/More-fun-with-Windows-8-Media-Center.aspx&#34;&gt;EPG problem with Media Center&lt;/a&gt; has been &lt;a href=&#34;http://www.theregister.co.uk/2013/01/04/windows_media_center_epg_fixed/&#34;&gt;fixed&lt;/a&gt; by Microsoft. They are publishing the EPG data again, so we don’t have to rely on the 7 day guide embedded within the TV signal.&lt;/p&gt;
&lt;p&gt;It is interesting to see the happy MCE users coming out of the woodwork on &lt;a href=&#34;http://forums.theregister.co.uk/forum/1/2013/01/04/windows_media_center_epg_fixed/&#34;&gt;The Register’s comments section&lt;/a&gt;. We MCE users might not be too numerous, but the people who use it really seem to like it. However, after reading the comments I should have a look at &lt;a href=&#34;http://xbmc.org/&#34;&gt;XBMC&lt;/a&gt; too as it now has PVR features. Though it does look a bit more complex to set up!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It seems that the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/03/More-fun-with-Windows-8-Media-Center.aspx">EPG problem with Media Center</a> has been <a href="http://www.theregister.co.uk/2013/01/04/windows_media_center_epg_fixed/">fixed</a> by Microsoft. They are publishing the EPG data again, so we don’t have to rely on the 7 day guide embedded within the TV signal.</p>
<p>It is interesting to see the happy MCE users coming out of the woodwork on <a href="http://forums.theregister.co.uk/forum/1/2013/01/04/windows_media_center_epg_fixed/">The Register’s comments section</a>. We MCE users might not be too numerous, but the people who use it really seem to like it. However, after reading the comments I should have a look at <a href="http://xbmc.org/">XBMC</a> too as it now has PVR features. Though it does look a bit more complex to set up!</p>
]]></content:encoded>
    </item>
    <item>
      <title>More fun with Windows 8 Media Center</title>
      <link>https://blog.richardfennell.net/posts/more-fun-with-windows-8-media-center/</link>
      <pubDate>Thu, 03 Jan 2013 16:47:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-fun-with-windows-8-media-center/</guid>
      <description>&lt;h2 id=&#34;signal-strength&#34;&gt;Signal Strength&lt;/h2&gt;
&lt;p&gt;Since &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/12/03/Upgrading-my-Windows-7-Media-Center-to-Windows-8.aspx&#34;&gt;upgrading my Media Center PC to Windows 8&lt;/a&gt;, on the same hardware, I have seen that it seems to perform better than on Windows 7. However some channels seem to be dropping out, with the usual digital pixelation or a message saying no signal (but often with some audio).&lt;/p&gt;
&lt;p&gt;As I had not changed any hardware I thought this strange, but I have always seemed to have borderline signal strength even though I can see my local terrestrial transmitter; it is about 15 miles away in direct line of sight. After a &lt;a href=&#34;http://www.stevelarkins.freeuk.com/freeview_digital_tv.htm&#34;&gt;bit of reading&lt;/a&gt; on the subject I found that my signal booster may be the issue, amplifying the noise and not the signal (it seems you should only use masthead amps for digital). This was a hangover from my old flat with an awful signal. Once this was removed from the system my problems appear to have gone away. However, as I did not see the issue all the time I will wait a while before declaring it a complete success.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="signal-strength">Signal Strength</h2>
<p>Since <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/12/03/Upgrading-my-Windows-7-Media-Center-to-Windows-8.aspx">upgrading my Media Center PC to Windows 8</a>, on the same hardware, I have seen that it seems to perform better than on Windows 7. However some channels seem to be dropping out, with the usual digital pixelation or a message saying no signal (but often with some audio).</p>
<p>As I had not changed any hardware I thought this strange, but I have always seemed to have borderline signal strength even though I can see my local terrestrial transmitter; it is about 15 miles away in direct line of sight. After a <a href="http://www.stevelarkins.freeuk.com/freeview_digital_tv.htm">bit of reading</a> on the subject I found that my signal booster may be the issue, amplifying the noise and not the signal (it seems you should only use masthead amps for digital). This was a hangover from my old flat with an awful signal. Once this was removed from the system my problems appear to have gone away. However, as I did not see the issue all the time I will wait a while before declaring it a complete success.</p>
<h2 id="where-did-my-epg-go">Where did my EPG go?</h2>
<p>More irritating is the fact that the EPG for my HD channels has disappeared. This could well be down to the fact that <a href="http://www.theregister.co.uk/2013/01/03/windows_media_center/">Microsoft appear to have switched off the EPG service from Red Bee</a>. MCE can get EPG data for Freeview channels via the TV broadcast, but it seems not for HD (or cable or satellite, which I don’t have).</p>
<p>Hopefully Microsoft will see sense and get this re-enabled, I don’t want to really have to jump through the hoop on the <a href="http://www.thegreenbutton.tv/forums/viewtopic.php?f=5&amp;t=1659&amp;start=420">greenbutton</a>.</p>
<p>It does worry me that this is a sign that MCE is being killed. I know it has not taken off as a mainstream product, but I still prefer it to any set top box I have owned.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why can’t I create an environment using a running VM on my Lab Management system?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-create-an-environment-using-a-running-vm-on-my-lab-management-system/</link>
      <pubDate>Tue, 04 Dec 2012 21:47:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-create-an-environment-using-a-running-vm-on-my-lab-management-system/</guid>
      <description>&lt;p&gt;With TFS Lab Management you can build environments from VMs and VM templates stored in an SCVMM library, or from VMs running on a Hyper-V host within your lab infrastructure. This second form is what used to be called composing an environment in TFS 2010. Recently when I tried to compose an environment I had a problem. After selecting the running VM inside the new environment wizard I got the red star that shows an error in the machine properties&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With TFS Lab Management you can build environments from VMs and VM templates stored in an SCVMM library, or from VMs running on a Hyper-V host within your lab infrastructure. This second form is what used to be called composing an environment in TFS 2010. Recently when I tried to compose an environment I had a problem. After selecting the running VM inside the new environment wizard I got the red star that shows an error in the machine properties</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_75.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_75.png" title="image"></a></p>
<p>Now I would only expect to see this when creating an environment with VM templates, as a red star usually means the OS profile is not set e.g. you have missed a product key, or passwords don’t match. However, this was a running VM so there were no settings I could make, and no obvious way to diagnose the problem. After a few emails with the <a href="http://blogs.msdn.com/b/lab_management/">Microsoft Lab Management team</a> we got to the bottom of the problem: it was all down to the Hyper-V host’s network connections. But that is rushing ahead; first let’s see why it was a confusing problem.</p>
<p><strong>First the red herring</strong></p>
<p>We now know the issue was the Hyper-V host network, but at first it looked like I could compose some guest VMs but not others. I wrongly assumed the issue was some bad meta-data or corrupt settings within the VMs. This problem all started after a server crash and so we were fearing corruption, which clouded our thoughts.</p>
<p>The actual reason some VMs could be composed and some could not was dependent on which Hyper-V host they were running on, not the VMs themselves.</p>
<p><strong>The diagnostic steps</strong></p>
<p>To get to the root of this issue a few commands and tools were used. Don’t think for a second there was not a lot of random jumping about and trial and error. In this post I am just going to point out what was helpful.</p>
<p>Firstly you need to use the TFSConfig command on your TFS server to find out your network location setting. So run</p>
<blockquote>
<p><em>C:\Program Files\Microsoft Team Foundation Server 11.0\Tools&gt;tfsconfig lab /settings /list<br>
SCVMM Server Name: vmm.blackmarble.co.uk<br>
Network Location: VSLM Network Location<br>
IP Block: 192.168.23.0/24<br>
DNS Suffix: blackmarble.co.uk</em></p></blockquote>
<p>Next you need to see which, if any, of your Hyper-V hosts are connected to this location. You can do this in a few graphical ways in SCVMM (and I am sure via PowerShell too)</p>
<p>If you select a Hyper-V host in SCVMM, right click and select View networking. On a healthy host you see the VSLM network location connected to the external network adapter the VMs are using</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_76.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_76.png" title="image"></a></p>
<p>On my failing Hyper-V host the VSLM network was connected to an empty network port</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_77.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_77.png" title="image"></a></p>
<p>You can also see this via SCVMM &gt; host (right click) &gt; properties. If you look on the networking tab for the main virtual network you should see the VSLM network as the location. On the failing Hyper-V host this location was empty.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_78.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_78.png" title="image"></a></p>
<p><strong>The solution</strong></p>
<p>You would naively think selecting the edit option on the screenshot above would allow you to enter the VSLM Network as the location, but no, not on that tab. You need to select the hardware tab.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_79.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_79.png" title="image"></a></p>
<p>You can then select the correct network adapter and override the discovered network location to point to the VSLM Network Location. Once this was done I could compose environments as I would expect.</p>
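<p>As mentioned above, this check and fix should also be possible from the VMM PowerShell console. I have not run this exact script, so treat it as a sketch: the host name filter is a placeholder, and the cmdlet and parameter names are as I understand them from the VMM 2008 R2 snap-in:</p>
<pre><code># List each physical NIC on a host with its discovered network location
Get-VMMServer "vmm.blackmarble.co.uk"
$vmhost = Get-VMHost | Where-Object { $_.Name -like "myhypervhost*" }
$vmhost | Get-VMHostNetworkAdapter | Format-Table Name, NetworkLocation

# Override the location on the adapter the VMs use so it matches the
# lab network location reported by 'tfsconfig lab /settings /list'
$nic = $vmhost | Get-VMHostNetworkAdapter | Select-Object -First 1
Set-VMHostNetworkAdapter -VMHostNetworkAdapter $nic -NetworkLocation "VSLM Network Location" -OverrideNetworkLocation $true
</code></pre>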
<p>I have said it before, but Lab Management has a lot of moving parts, and they all must be setup right else nothing works. A small configuration error can seriously ruin your day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Agile Yorkshire - December Lightning Talks</title>
      <link>https://blog.richardfennell.net/posts/agile-yorkshire-december-lightening-talks/</link>
      <pubDate>Tue, 04 Dec 2012 09:31:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agile-yorkshire-december-lightening-talks/</guid>
      <description>&lt;p&gt;December&amp;rsquo;s Agile Yorkshire, on Tuesday, 11 December 2012, will be &lt;a href=&#34;http://en.wikipedia.org/wiki/Lightning_talk&#34;&gt;Lightning Talks&lt;/a&gt; covering a range of agile topics and will be held at a new venue, The Round Foundry Media Centre.&lt;/p&gt;
&lt;p&gt;Submissions for Lightning Talks are still arriving so the precise agenda may change, but proposed topics include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Visual agile assessment for improvement&lt;/li&gt;
&lt;li&gt;Continuous Integration&lt;/li&gt;
&lt;li&gt;Agile Interviewing&lt;/li&gt;
&lt;li&gt;Difficult conversations&lt;/li&gt;
&lt;li&gt;Time management&lt;/li&gt;
&lt;li&gt;Refactoring using the Mikado Method&lt;/li&gt;
&lt;li&gt;PowerShell&lt;/li&gt;
&lt;li&gt;Agile testing&lt;/li&gt;
&lt;li&gt;Burn up charts&lt;/li&gt;
&lt;li&gt;Multi-team, multiple kanban board problems.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more details and registration see the &lt;a href=&#34;http://www.eventbrite.com/event/4966755700/?ref=enivtefor001&amp;amp;invite=MjgyMjA4MC9yZmVubmVsbEBibGFja21hcmJsZS5jby51ay8x&amp;amp;utm_source=eb_email&amp;amp;utm_medium=email&amp;amp;utm_campaign=inviteformal001&amp;amp;utm_term=eventpage&#34;&gt;event site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>December&rsquo;s Agile Yorkshire, on Tuesday, 11 December 2012, will be <a href="http://en.wikipedia.org/wiki/Lightning_talk">Lightning Talks</a> covering a range of agile topics and will be held at a new venue, The Round Foundry Media Centre.</p>
<p>Submissions for Lightning Talks are still arriving so the precise agenda may change, but proposed topics include:</p>
<ul>
<li>Visual agile assessment for improvement</li>
<li>Continuous Integration</li>
<li>Agile Interviewing</li>
<li>Difficult conversations</li>
<li>Time management</li>
<li>Refactoring using the Mikado Method</li>
<li>PowerShell</li>
<li>Agile testing</li>
<li>Burn up charts</li>
<li>Multi-team, multiple kanban board problems.</li>
</ul>
<p>For more details and registration see the <a href="http://www.eventbrite.com/event/4966755700/?ref=enivtefor001&amp;invite=MjgyMjA4MC9yZmVubmVsbEBibGFja21hcmJsZS5jby51ay8x&amp;utm_source=eb_email&amp;utm_medium=email&amp;utm_campaign=inviteformal001&amp;utm_term=eventpage">event site</a></p>
<p><strong>IMPORTANT - LET’S SAY IT AGAIN - THERE IS A NEW VENUE</strong></p>
<p>This event is at a new venue, The Round Foundry Media Centre. It has plenty of free (evening) parking right in front and is 5 minutes’ walk from Leeds train station. The Midnight Bell pub is also next door.</p>
<p>Tuesday, 11 December 2012 from 18:30 to 21:00 (GMT)</p>
<p><strong>The Round Foundry Media Centre</strong><br>
Foundry St<br>
LS11 5QP Leeds<br>
United Kingdom<br>
<a href="http://maps.google.com/maps?q=Foundry&#43;St,&#43;Leeds,&#43;West&#43;York&#43;LS11&#43;5QP&#43;United&#43;Kingdom&amp;hl=en">View Map</a></p>
<p><a href="http://www.eventbrite.com/event/4966755700/?ref=enivtefor001&amp;invite=MjgyMjA4MC9yZmVubmVsbEBibGFja21hcmJsZS5jby51ay8x&amp;utm_source=eb_email&amp;utm_medium=email&amp;utm_campaign=invitenew&amp;utm_term=eventimage&amp;ref=enivtefor001"><img loading="lazy" src="https://ebmedia.eventbrite.com/s3-build/images/856610/7527575743/4/logo.png"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading my Windows 7 Media Center to Windows 8</title>
      <link>https://blog.richardfennell.net/posts/upgrading-my-windows-7-media-center-to-windows-8/</link>
      <pubDate>Mon, 03 Dec 2012 22:27:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-my-windows-7-media-center-to-windows-8/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 4th December 2012&lt;/strong&gt; – Roku setup&lt;/p&gt;
&lt;p&gt;I have been a happy user of Windows Media Center since (even) XP. I have found it reasonably stable and more importantly it has a nice user interface compared to most pvr/set-top boxes I have owned. So as my Windows 7 based system has been stable for just over a year (&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/11/14/experiences-upgrading-my-media-center-to-receive-freeview-hd.aspx&#34;&gt;since a move to HD tuners prompted by a motherboard failure&lt;/a&gt;) I thought it high time to destabilise it with an upgrade to Windows 8. What actually prompted this was &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/boss&#34;&gt;Robert&lt;/a&gt; had upgraded his Media Center which is also based on similar Acer Revo (Intel Atom) hardware to mine and he had found the general performance much improved. The Atom CPU is only just up to the job, but I do like the Revo as it is a nice low wattage package for Media Center&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 4th December 2012</strong> – Roku setup</p>
<p>I have been a happy user of Windows Media Center since (even) XP. I have found it reasonably stable and, more importantly, it has a nice user interface compared to most PVR/set-top boxes I have owned. So as my Windows 7 based system has been stable for just over a year (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/11/14/experiences-upgrading-my-media-center-to-receive-freeview-hd.aspx">since a move to HD tuners prompted by a motherboard failure</a>) I thought it high time to destabilise it with an upgrade to Windows 8. What actually prompted this was that <a href="http://blogs.blackmarble.co.uk/blogs/boss">Robert</a> had upgraded his Media Center, which is based on similar Acer Revo (Intel Atom) hardware to mine, and he had found the general performance much improved. The Atom CPU is only just up to the job, but I do like the Revo as it is a nice low wattage package for Media Center.</p>
<p>The main issue with this upgrade is that Microsoft have chosen to put Media Center in an add-in pack for Windows 8, not part of the base package; however, this pack is currently <a href="http://winsupersite.com/windows-8/windows-8-tip-get-media-center-free">free</a>. Armed with this information, this was my upgrade process:</p>
<ol>
<li>Applied for my Windows 8 <a href="http://winsupersite.com/windows-8/windows-8-tip-get-media-center-free">Media Center product code</a>. It says this can take up to 72 hours, mine took nearer 150. Looking at the comments this seems not uncommon. Note that you don’t need to apply for the code on the same PC you wish to use it on. But as the comments mention it is one code per email address used, and the code has to be activated before February.</li>
<li>Downloaded the Windows 8 Professional ISO from MSDN, I got the EN-GB specific version. Just make sure it is the retail version not the volume license, as people have commented that the Media Center product code does not work for VL editions.</li>
<li>So now ready to start the upgrade…..
<ol>
<li>I first removed my external RAID disk array that stores pictures, music, recordings etc.</li>
<li>I backed up my Windows 7 boot disk (using <a href="http://technet.microsoft.com/en-us/library/cc766068%28v=WS.10%29.aspx">imagex</a> off a <a href="http://en.wikipedia.org/wiki/Windows_pe">Windows 8 PE boot USB</a>) to an external USB disk (just in case I wanted to go back)</li>
<li>Booted my PC in Windows 7, inserted the Windows 8 Professional USB media and did an in-place upgrade (frankly it moved over so little I might as well have wiped the disk, see what I had to reinstall below). After entering my MSDN-sourced product key the upgrade found all the hardware without issue except my PCTV 290e tuners, but a check for new drivers got these from Windows Update without any other intervention.</li>
<li>I now used the add feature option in the control panel to enter my Media Center product key. It accepted this, and said it was downloading the new feature and it might take a while. In my case this was the best part of two hours, so be prepared to wait…</li>
</ol>
</li>
<li>Once complete (and a good few automatic reboots later) I could run the Media Center wizard, but before I did this I re-attached my RAID disk</li>
</ol>
<p>So eventually I had a Windows 8 install with Media Center, but what had I lost in the process?</p>
<ul>
<li>All my Media Center configuration
<ul>
<li>Monitor preferences</li>
<li>My series recording settings</li>
<li>Interestingly I <strong>did not</strong> need to re-enter my settings to point to my RAID disk for libraries</li>
</ul>
</li>
<li>Had to re-join the PC to my home group (of which it was the creator; I would have expected it to remember more than just its name)</li>
<li>Had to re-install the Windows desktop SkyDrive application, which I use to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/26/Thoughts-on-the-new-Skydrive.aspx">provide a backup</a> from my RAID drive for photos. I do need to see if I can think of an easier solution under Windows 8. I also had to re-create the scheduled batch file that copies the files I wish to backup to this folder. The copying changed all the time stamps, which meant the files had to re-sync to SkyDrive.</li>
<li>Had to re-authorise my Roku Soundbridge media player as an audio client.</li>
</ul>
<p>So is it any faster? Only time will tell. However, I do think that pinning browser windows to the Windows 8 home page for <a href="http://bbc.co.uk/iplayer">iPlayer</a> and other web streaming services will make life a bit easier until they provide Windows 8 apps to do the job.</p>
<p><strong>4th Dec Update</strong></p>
<p>When I tried to stream MP3s to my old <a href="http://soundbridge.roku.com/soundbridge/index.php">Roku M1000</a> this morning I could connect to my Windows 8 PC (the file source) but got the error “invalid result received” when it tried to play a file. Last night I had only checked that it connected. <a href="http://forums.roku.com/viewtopic.php?p=357347">I am not alone with this problem</a>; it seems the Roku is too old, and changes to the Media Player streaming server in Windows 8 (from Windows 7) mean it will not work.</p>
<p>The fix was to swap my media streaming server. I downloaded and installed <a href="http://www.mysqueezebox.com/download">SqueezeBox</a>; once this easy install was done it all leapt into life.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS &amp;amp; Visual Studio 2012 Update 1 is available</title>
      <link>https://blog.richardfennell.net/posts/tfs-visual-studio-2012-update-1-is-available/</link>
      <pubDate>Tue, 27 Nov 2012 22:36:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-visual-studio-2012-update-1-is-available/</guid>
      <description>&lt;p&gt;For those who attended &lt;a href=&#34;http://www.blackmarble.co.uk/events&#34;&gt;Black Marble’s event&lt;/a&gt; today I mentioned there was an upcoming TFS and Visual Studio 2012 Update 1. Well I was out of date whilst talking, it had already been released overnight.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2012/11/26/visual-studio-2012-update-1-is-available.aspx&#34;&gt;Brian Harry’s blog&lt;/a&gt; for details and &lt;a href=&#34;http://blogs.msdn.com/b/granth/archive/2012/11/27/visual-studio-and-team-foundation-server-2012-update-1-is-now-available.aspx&#34;&gt;Grant Holiday’s blogs&lt;/a&gt;  for a nice explanation of the download options.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For those who attended <a href="http://www.blackmarble.co.uk/events">Black Marble’s event</a> today I mentioned there was an upcoming TFS and Visual Studio 2012 Update 1. Well I was out of date whilst talking, it had already been released overnight.</p>
<p>See <a href="http://blogs.msdn.com/b/bharry/archive/2012/11/26/visual-studio-2012-update-1-is-available.aspx">Brian Harry’s blog</a> for details and <a href="http://blogs.msdn.com/b/granth/archive/2012/11/27/visual-studio-and-team-foundation-server-2012-update-1-is-now-available.aspx">Grant Holiday’s blog</a> for a nice explanation of the download options.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Did I delete the right lab?</title>
      <link>https://blog.richardfennell.net/posts/did-i-delete-the-right-lab/</link>
      <pubDate>Thu, 22 Nov 2012 16:18:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/did-i-delete-the-right-lab/</guid>
      <description>&lt;p&gt;It was bound to happen in the end, the wrong environment got deleted on our TFS Lab Management instance. The usual selection of rushing, minor mistakes, misunderstandings and not reading the final dialog properly and BANG you get that sinking feeling as you see the wrong set of VMs being deleted. Well this happened yesterday, so was there anything that can be done? Luckily the answer is yes, if you are quick.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It was bound to happen in the end, the wrong environment got deleted on our TFS Lab Management instance. The usual selection of rushing, minor mistakes, misunderstandings and not reading the final dialog properly and BANG you get that sinking feeling as you see the wrong set of VMs being deleted. Well this happened yesterday, so was there anything that can be done? Luckily the answer is yes, if you are quick.</p>
<p>Firstly, we knew SCVMM operations are slow, so I RDP’d onto the Hyper-V host and quickly copied the folders that contained the VMs scheduled to be deleted. We now had a copy of the VHDs.</p>
<p>On the SCVMM host I cancelled the delete jobs. Turns out this did not really help, as the jobs just get rescheduled. In fact it may make matters worse, as the failing of jobs and their restarting seems to confuse SCVMM; it took hours before it was happy again, kept giving ‘can’t run job as XXX in use’ errors and losing sight of the Hyper-V hosts (we needed to restart the VMM service in the end).</p>
<p>I now had copies of three network isolated VMs, so I:</p>
<ul>
<li>Created <strong>new</strong> VMs on a Hyper-V host using Hyper-V manager with the saved VHDs as their disks. I then made sure they ran and were not corrupted</li>
<li>In SCVMM cleared down the saved state so they were stopped (I forgot to do this the first time I went through this process and it meant I could not deploy the stored VMs into an isolated environment, that wasted hours!)</li>
<li>In SCVMM put them into the library on a path our Lab Management server knows about (gotcha here is SCVMM deletes the VM after putting it into the library, this is unlike MTM Lab Center which leaves the original in place, always scares me when I forget)</li>
<li>In MTM Lab Center import the new VMs from the library</li>
<li>Create a new network isolated environment with the VMs</li>
<li>Wait……………………….</li>
</ul>
<p>When it eventually started I had a network isolated environment back to the state it was in when we, in effect, pulled the power out. It all took about 24 hours, but most of this was waiting for copies to and from the library to complete.</p>
<p>So the top tip is to try to avoid the problem in the first place; frankly, this is down to process:</p>
<ul>
<li>Use the ‘mark as in use’ feature to record who is using a VM</li>
<li>Put a process in place to manage the lab resources. It does not matter how much Hyper-V resource you have, you will run out in the end and be unable to add that extra VM. You need a way to delete/archive whatever is not currently needed</li>
<li>Read the confirmation dialogs, they are there for a reason</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>New book from Gojko Adzic ‘Impact Mapping’</title>
      <link>https://blog.richardfennell.net/posts/new-book-from-gojko-adzic-impact-mapping/</link>
      <pubDate>Wed, 21 Nov 2012 17:24:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-book-from-gojko-adzic-impact-mapping/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.amazon.co.uk/dp/B009KWDKVA/ref=as_li_ss_til?tag=buitwoonmypc-21&amp;amp;camp=2902&amp;amp;creative=19466&amp;amp;linkCode=as4&amp;amp;creativeASIN=B009KWDKVA&amp;amp;adid=095KPRP5WN9M1QRSWRCZ&amp;amp;&amp;amp;ref-refURL=http%3A%2F%2Fblogs.blackmarble.co.uk%2Fblogs%2Frfennell%2Fpage%2FReading-List.aspx&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://impactmapping.org/site/cover500.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A common problem with getting software developed is getting everyone aiming for the same goal. This too often gets lost in the development process; the real goal of the business is not communicated to the development team. It may be that the goal professed by the business is not the one they really want, but their current viewpoint obscures the true goal.&lt;/p&gt;
&lt;p&gt;In this new book, Gojko Adzic provides an excellent introduction to &lt;a href=&#34;http://impactmapping.org/about.php&#34;&gt;Impact Mapping&lt;/a&gt; as a tool to help address this problem. It describes using workshops and simple graphical tools to keep an eye on the true goal. These are tools to use well before starting down the user story/ALM path, to make sure the goal of your project is sound, known and measurable.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.amazon.co.uk/dp/B009KWDKVA/ref=as_li_ss_til?tag=buitwoonmypc-21&amp;camp=2902&amp;creative=19466&amp;linkCode=as4&amp;creativeASIN=B009KWDKVA&amp;adid=095KPRP5WN9M1QRSWRCZ&amp;&amp;ref-refURL=http%3A%2F%2Fblogs.blackmarble.co.uk%2Fblogs%2Frfennell%2Fpage%2FReading-List.aspx"><img loading="lazy" src="http://impactmapping.org/site/cover500.png"></a></p>
<p>A common problem with getting software developed is getting everyone aiming for the same goal. This too often gets lost in the development process; the real goal of the business is not communicated to the development team. It may be that the goal professed by the business is not the one they really want, but their current viewpoint obscures the true goal.</p>
<p>In this new book, Gojko Adzic provides an excellent introduction to <a href="http://impactmapping.org/about.php">Impact Mapping</a> as a tool to help address this problem. It describes using workshops and simple graphical tools to keep an eye on the true goal. These are tools to use well before starting down the user story/ALM path, to make sure the goal of your project is sound, known and measurable.</p>
<p>This is a refreshingly thin book that should be easily accessible to anyone involved in software projects, irrespective of their technical skill level or team role. Well worth a look for everyone</p>
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble at the 2012 Abbey Dash</title>
      <link>https://blog.richardfennell.net/posts/black-marble-at-the-2012-abbey-dash/</link>
      <pubDate>Mon, 19 Nov 2012 12:28:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marble-at-the-2012-abbey-dash/</guid>
      <description>&lt;p&gt;Another good turnout this year for the &lt;a href=&#34;http://www.ageuk.org.uk/get-involved/events-and-challenges/leeds-abbey-dash-in-aid-of-age-uk/&#34;&gt;Age UK Abbey Dash 10K in Leeds&lt;/a&gt;. Over 9,000 runners this year; it certainly seemed much busier than in previous years.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_73.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_73.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Again Black Marble had staff members running. As with last year we ran a &lt;a href=&#34;http://en.wikipedia.org/wiki/The_Wisdom_of_Crowds&#34;&gt;wisdom of crowds&lt;/a&gt; based handicap race for the impressive Black Marble trophy (we all estimate each other’s expected times, and the winner is whoever beats their mean estimate by the most). This year there was a tie to the second between Jon and Becky, who are as we speak negotiating over trophy sharing for the next year.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Another good turnout this year for the <a href="http://www.ageuk.org.uk/get-involved/events-and-challenges/leeds-abbey-dash-in-aid-of-age-uk/">Age UK Abbey Dash 10K in Leeds</a>. Over 9,000 runners this year; it certainly seemed much busier than in previous years.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_73.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_73.png" title="image"></a></p>
<p>Again Black Marble had staff members running. As with last year we ran a <a href="http://en.wikipedia.org/wiki/The_Wisdom_of_Crowds">wisdom of crowds</a> based handicap race for the impressive Black Marble trophy (we all estimate each other’s expected times, and the winner is whoever beats their mean estimate by the most). This year there was a tie to the second between Jon and Becky, who are as we speak negotiating over trophy sharing for the next year.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_74.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_74.png" title="image"></a></p>
<p>Congratulations to all who took part, I am sure plenty of good causes benefited from the efforts of everyone who ran</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upcoming free events from Black Marble</title>
      <link>https://blog.richardfennell.net/posts/upcoming-free-events-from-black-marble/</link>
      <pubDate>Wed, 14 Nov 2012 21:07:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upcoming-free-events-from-black-marble/</guid>
      <description>&lt;p&gt;Winter is rushing on, so it is time for a new season of Black Marble events in Yorkshire.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;21 Nov 2012 (All day) -&lt;/strong&gt; &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Whats%20New%20in%20SharePoint%202013&#34;&gt;&lt;strong&gt;Whats New in SharePoint 2013&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;All the updates and announcements from the Microsoft SharePoint Conference around SharePoint 2013.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;27 Nov 2012 (AM) -&lt;/strong&gt; &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=An%20Introduction%20to%20Windows%208%20Development&#34;&gt;&lt;strong&gt;An Introduction to Windows 8 Development&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We welcome Microsoft Evangelist Mike Taulty to present a morning on Windows 8 for developers.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Winter is rushing on, so it is time for a new season of Black Marble events in Yorkshire.</p>
<p><strong>21 Nov 2012 (All day) -</strong> <a href="http://www.blackmarble.co.uk/events.aspx?event=Whats%20New%20in%20SharePoint%202013"><strong>Whats New in SharePoint 2013</strong></a></p>
<p>All the updates and announcements from the Microsoft SharePoint Conference around SharePoint 2013.</p>
<p><strong>27 Nov 2012 (AM) -</strong> <a href="http://www.blackmarble.co.uk/events.aspx?event=An%20Introduction%20to%20Windows%208%20Development"><strong>An Introduction to Windows 8 Development</strong></a></p>
<p>We welcome Microsoft Evangelist Mike Taulty to present a morning on Windows 8 for developers.</p>
<p><strong>27 Nov 2012 (PM) -</strong> <a href="http://www.blackmarble.co.uk/events.aspx?event=ReBuild"><strong>ReBuild</strong></a></p>
<p>Bringing the key announcements and developments from Microsoft Build about Windows 8</p>
<p><strong>5 Dec 2012 (All day) -</strong> <a href="http://www.blackmarble.co.uk/events.aspx?event=Architecture%20Forum%20in%20the%20North%205"><strong>Architecture Forum in the North 5</strong></a></p>
<p>Our Architecture Forum returns, now in its 5th year, and Black Marble and Microsoft once again invite you to join us for a unique opportunity to learn about the latest technologies and best practices from luminaries in the field of computing.</p>
<p><strong>17 Jan 2013 (AM) -</strong> <a href="http://www.blackmarble.co.uk/events.aspx?event=A%20Windows%20Azure%20Update"><strong>A Windows Azure Update</strong></a></p>
<p>Take time out to explore a round-up of what’s new in Windows Azure from the last 12 months. Join our experts for their views on what’s been happening, what’s coming and how it will impact your business.</p>
<p><strong>30 Jan 2013 -</strong> <a href="http://www.blackmarble.co.uk/events.aspx?event=The%20Tenth%20Annual%20Technical%20Update%20-%20AM"><strong>The Tenth Annual Technical Update - AM</strong></a></p>
<p>From Windows 8 to Windows Phone 8, to Visual Studio 2012 and SharePoint and Office 2013 – join us as ever for our views on the new releases, as we focus on the story for IT managers and Business Decision Makers.</p>
<p><strong>30 Jan 2013 -</strong> <a href="http://www.blackmarble.co.uk/events.aspx?event=The%20Tenth%20Annual%20Technical%20Update%20-%20PM"><strong>The Tenth Annual Technical Update - PM</strong></a></p>
<p>From Windows 8 to Windows Phone 8, to Visual Studio 2012 and SharePoint and Office 2013 – join us as ever for our views on the new releases, as we focus on the story for developers and Technical Decision Makers.</p>
<p><a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events">For full details and registration check out the Black Marble site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgraded to BlogEngine .NET 2.7</title>
      <link>https://blog.richardfennell.net/posts/upgraded-to-blogengine-net-2-7/</link>
      <pubDate>Wed, 14 Nov 2012 09:47:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgraded-to-blogengine-net-2-7/</guid>
      <description>&lt;p&gt;Just upgraded this blog server to &lt;a href=&#34;http://dotnetblogengine.net/page/BlogEngineNET-27-Features-Notes.aspx&#34;&gt;BlogEngine .NET 2.7&lt;/a&gt; from 2.6. Nice straight forward upgrade.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just upgraded this blog server to <a href="http://dotnetblogengine.net/page/BlogEngineNET-27-Features-Notes.aspx">BlogEngine .NET 2.7</a> from 2.6. Nice straight forward upgrade.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why does ‘Send to &amp;gt; email link’ in SharePoint open Chrome on my PC?</title>
      <link>https://blog.richardfennell.net/posts/why-does-send-to-email-link-in-sharepoint-open-chrome-on-my-pc/</link>
      <pubDate>Tue, 13 Nov 2012 19:44:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-does-send-to-email-link-in-sharepoint-open-chrome-on-my-pc/</guid>
      <description>&lt;p&gt;I must have clicked something in error on my Win 8 PC: when I open one of our SharePoint 2010 sites, select a file, right-click and select Send To &amp;gt; Email Link, instead of an Outlook email opening my PC tries to open Chrome.&lt;/p&gt;
&lt;p&gt;A bit of quick digging showed the issue was that the file association for mailto: was wrong. You can check this setting in IE &amp;gt; Internet Options &amp;gt; Programs &amp;gt; Internet Programs &amp;gt; set Programs (button)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I must have clicked something in error on my Win 8 PC: when I open one of our SharePoint 2010 sites, select a file, right-click and select Send To &gt; Email Link, instead of an Outlook email opening my PC tries to open Chrome.</p>
<p>A bit of quick digging showed the issue was that the file association for mailto: was wrong. You can check this setting in IE &gt; Internet Options &gt; Programs &gt; Internet Programs &gt; Set Programs (button)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_72.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_72.png" title="image"></a></p>
<p>Once I changed this to Outlook I got the behaviour I expected</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF900546, can’t run Windows 8 App Store unit tests in a TFS build</title>
      <link>https://blog.richardfennell.net/posts/tf900546-cant-run-windows-8-app-store-unit-tests-in-a-tfs-build/</link>
      <pubDate>Mon, 12 Nov 2012 16:20:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf900546-cant-run-windows-8-app-store-unit-tests-in-a-tfs-build/</guid>
      <description>&lt;p&gt;Today has been one of &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/11/12/More-thoughts-on-addressing-TF900546-Unable-to-load-one-or-more-of-the-requested-types-on-TFS2012.aspx&#34;&gt;purging build system problems&lt;/a&gt;. On my TFS 2012 Windows 8 build box I was getting the following error when trying to run Windows 8 App Store unit tests&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TF900546: An unexpected error occurred while running the RunTests activity: &amp;lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&amp;rsquo;.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;On further investigation, I am not really sure anything was working too well on this box. To give a bit of background&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today has been one of <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/11/12/More-thoughts-on-addressing-TF900546-Unable-to-load-one-or-more-of-the-requested-types-on-TFS2012.aspx">purging build system problems</a>. On my TFS 2012 Windows 8 build box I was getting the following error when trying to run Windows 8 App Store unit tests</p>
<blockquote>
<p><em>TF900546: An unexpected error occurred while running the RunTests activity: &lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&rsquo;.</em></p></blockquote>
<p>On further investigation, I am not really sure anything was working too well on this box. To give a bit of background</p>
<ul>
<li>I have one build controller <strong>build2012</strong></li>
<li>with a number of build agents spread across various VMs. I use tags to target the correct agent e.g. SUR40 or WIN8</li>
</ul>
<p>In the case of Windows 8 builds (where the TFS build agent has to run on a Windows 8 box) the build seemed to run, but tests failed with the TF900546 ‘it’s broken, but I am not saying why’ error. As usual there was nothing in the logs to help.</p>
<p>To try to debug the error I added a build controller to this box and eventually, just like <a href="http://blog.hinshelwood.com/tfs-2012-issue-stuck-builds-in-team-foundation-build-with-no-build-number/">Martin in his post</a>, noticed, after far too long, that I was getting an error on the build service on the Windows 8 box and that the agent was not fully online.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_71.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_71.png" title="image"></a></p>
<p>The main symptom is that the build agent says ready, but shows a red box (stopped). If you hit the details link that appears you get the error dialog. Martin had a 500 error; I was getting a 404. I had seen <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/12/TFS-build-service-cannot-connect-to-a-TFS-2012-server-seeing-EventID-206-MessageQueue-in-error-log.aspx">similar problems before</a>; I really should read (or at least remember) my own blog posts.</p>
<p><strong>I can’t stress this enough: if you don’t see a green icon on your build controllers and agents, you have a problem. It might not be obvious at that point, but it will bite you later!</strong></p>
<p>For me the fix was the URL I was using to connect to the TFS server. I was using HTTPS (SSL); as soon as I switched to HTTP all was OK. In this case this was fine, as both the TFS server and build box were in the same rack, so SSL was not really needed. I suspect that the solution, if I had wanted SSL, would be as Martin outlined: a config file edit to sort out the bindings.</p>
<p><strong>But remember….</strong></p>
<p>That having a working build system is not enough for Windows 8 App Store unit tests. You also have to manually install the application certificate for test assembly as detailed in <a href="http://msdn.microsoft.com/en-us/library/tfs/hh691189%28v=vs.110%29.aspx#agent_test">MSDN</a> as well as getting the build service running in interactive mode.</p>
<p>Once this was done my application build and the tests ran OK</p>
]]></content:encoded>
    </item>
    <item>
      <title>More thoughts on addressing TF900546 ‘Unable to load one or more of the requested types’ on TFS2012</title>
      <link>https://blog.richardfennell.net/posts/more-thoughts-on-addressing-tf900546-unable-to-load-one-or-more-of-the-requested-types-on-tfs2012/</link>
      <pubDate>Mon, 12 Nov 2012 11:33:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-thoughts-on-addressing-tf900546-unable-to-load-one-or-more-of-the-requested-types-on-tfs2012/</guid>
      <description>&lt;p&gt;A while ago I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/09/TF900546-error-on-a-TFS-2012-build.aspx&#34;&gt;posted&lt;/a&gt; about seeing the TF900546 error when running unit tests in a previously working TFS 2012 build. The full error being:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TF900546: An unexpected error occurred while running the RunTests activity: &amp;lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&amp;rsquo;.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Well late last week this problem came back with a vengeance on a number of builds run on the same build controller/agent(s). Irritatingly I first noticed it after a major refactor of a codebase, so I had plenty of potential root causes, as assemblies had been renamed and it was possible they might not be found. However, after a bit of testing there were no obvious candidates: all tests worked fine locally on my development PC, and a new very simple test application showed the same issues. It was definitely an issue on the build system.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A while ago I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/09/TF900546-error-on-a-TFS-2012-build.aspx">posted</a> about seeing the TF900546 error when running unit tests in a previously working TFS 2012 build. The full error being:</p>
<blockquote>
<p><em>TF900546: An unexpected error occurred while running the RunTests activity: &lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&rsquo;.</em></p></blockquote>
<p>Well late last week this problem came back with a vengeance on a number of builds run on the same build controller/agent(s). Irritatingly I first noticed it after a major refactor of a codebase, so I had plenty of potential root causes, as assemblies had been renamed and it was possible they might not be found. However, after a bit of testing there were no obvious candidates: all tests worked fine locally on my development PC, and a new very simple test application showed the same issues. It was definitely an issue on the build system.</p>
<p>I can still find no good way to debug this error. <a href="http://stackoverflow.com/questions/1091853/unable-to-load-one-or-more-of-the-requested-types-retrieve-the-loaderexceptions">Stack Overflow</a> mentions Fuslogvw and WinDbg, as well as various copy-local settings and the like. Again this all seemed too much, as this build had been working in the past and just seemed to stop. I tried a couple of these approaches but got no real information, and the error logs were empty.</p>
<p>In the end I just tried what I did before (as I could think of no better tactic to pin down the true issue). I went into the build controller config, removed the reference to the custom assemblies, saved the settings (causing a controller restart), then put the reference back (another restart of the controller)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_70.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_70.png" title="image"></a></p>
<p>After this my tests started working again, with no other changes</p>
<p>Interestingly, a restart of the VM running the build controller did not fix the problem. However, this does somewhat chime with comments in the Stack Overflow thread that forcing the AppPool in MVC apps to rebuild completely, ignoring any cached assemblies, seems to fix the issue.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Change in the System.AssignedTo in TFS SOAP alerts with TFS 2012</title>
      <link>https://blog.richardfennell.net/posts/change-in-the-system-assignedto-in-tfs-soap-alerts-with-tfs-2012/</link>
      <pubDate>Fri, 09 Nov 2012 20:29:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/change-in-the-system-assignedto-in-tfs-soap-alerts-with-tfs-2012/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.ewaldhofman.nl/post/2010/08/02/How-to-use-WCF-to-subscribe-to-the-TFS-2010-Event-Service-rolling-up-hours.aspx&#34;&gt;Ewald’s post&lt;/a&gt; explains how to create a WCF web service to act as an end point for TFS Alerts. I have been using the model with a TFS 2010 to check for work item changed events, using the work item’s &lt;strong&gt;System.AssignedTo&lt;/strong&gt; field to retrieve the owner of the work item (via the TFS API) so I can send an email, as well as other tasks (I know I could just send the email with a standard alert).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.ewaldhofman.nl/post/2010/08/02/How-to-use-WCF-to-subscribe-to-the-TFS-2010-Event-Service-rolling-up-hours.aspx">Ewald’s post</a> explains how to create a WCF web service to act as an end point for TFS Alerts. I have been using the model with a TFS 2010 to check for work item changed events, using the work item’s <strong>System.AssignedTo</strong> field to retrieve the owner of the work item (via the TFS API) so I can send an email, as well as other tasks (I know I could just send the email with a standard alert).</p>
<p>In TFS 2010 this worked fine: if the work item was assigned to me I got back the name <strong>richard</strong>, which I could use as the ‘to’ address for the email by appending our domain name.</p>
<p>When I moved this WCF event receiver onto a TFS 2012 (using the TFS 2012 API) I had not expected any problems, but the emails did not arrive. On checking my logging I saw they were being sent to <a href="mailto:fennell@blackmarble.co.uk"><strong>fennell@blackmarble.co.uk</strong></a>. Turns out the issue was that the API call</p>
<blockquote>
<p>value = this.workItem.Fields["System.AssignedTo"].Value.ToString();</p></blockquote>
<p>was returning the display name ‘Richard Fennell’, which was not a valid part of the email address.</p>
<p>The best solution I found, thus far, was to check whether the value was a display name in AD, using the method I found on <a href="http://stackoverflow.com/questions/9845444/how-to-get-a-username-in-active-directory-from-a-display-name-in-c">Stack Overflow</a>. If I got a user name back I used that; if I got an empty string (because I had been passed a non-display name) I just used the initial value, assuming it was a valid address.</p>
<p>Seems to work, but is there an easier solution?</p>
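<p>The lookup described above can be sketched roughly as below. This is a minimal illustration only, assuming a domain-joined machine with rights to query AD; the method name and the <code>domainSuffix</code> parameter are my own inventions, not from the original code.</p>

```csharp
using System.DirectoryServices;

// Hypothetical sketch: resolve the value of System.AssignedTo (a display
// name on TFS 2012) to an account name, falling back to the original value.
static string ResolveToEmail(string assignedTo, string domainSuffix)
{
    using (var searcher = new DirectorySearcher())
    {
        // Look for a user whose displayName matches the field value
        searcher.Filter = "(&(objectClass=user)(displayName=" + assignedTo + "))";
        SearchResult result = searcher.FindOne();
        if (result != null && result.Properties["samaccountname"].Count > 0)
        {
            // Found a match: build the address from the account name
            return result.Properties["samaccountname"][0] + domainSuffix;
        }
    }
    // No display-name match: assume we were passed a plain user name
    return assignedTo + domainSuffix;
}
```

<p>In the TFS 2010 case the field already held the account name, so the fallback branch gives the old behaviour unchanged.</p>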
]]></content:encoded>
    </item>
    <item>
      <title>Cannot run coded ui test on a TFS lab due to lack of rights to the drops folder</title>
      <link>https://blog.richardfennell.net/posts/cannot-run-coded-ui-test-on-a-tfs-lab-due-to-lack-of-rights-to-the-drops-folder/</link>
      <pubDate>Fri, 09 Nov 2012 20:09:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-run-coded-ui-test-on-a-tfs-lab-due-to-lack-of-rights-to-the-drops-folder/</guid>
      <description>&lt;p&gt;Whilst setting up a TFS 2012 Lab Management environment for Coded UI testing we hit the problem that none of the tests were running; in fact, we could see that no tests were even being passed to the agent in the lab&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_65.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_65.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;On the build report I clicked on the ‘View Test Results’ link, which loaded the result in Microsoft Test Manager (MTM)&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_66.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_66.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;and viewed the test run log, and we saw&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst setting up a TFS 2012 Lab Management environment for Coded UI testing we hit the problem that none of the tests were running; in fact, we could see that no tests were even being passed to the agent in the lab</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_65.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_65.png" title="image"></a></p>
<p>On the build report I clicked on the ‘View Test Results’ link, which loaded the result in Microsoft Test Manager (MTM)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_66.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_66.png" title="image"></a></p>
<p>and viewed the test run log, and we saw</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_67.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_67.png" title="image"></a></p>
<p>The issue, it claimed, was that the build controller did not have rights to access the drop folder containing the assembly with the Coded UI tests.</p>
<p>Initially I thought the issue was that the test controller was running as ‘Local Service’, so I changed it to the domain\tfsbuild account (which obviously has rights to the drops folder, as it put the files there), but I still got the same error. I was confused.</p>
<p>So I checked the event log on the build controller and found the following</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_68.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_68.png" title="image"></a></p>
<p>The problem was my tfslab account, not the Local Service or tfsbuild one, so the message shown in the build report was just confusing, mentioning the wrong user. The lab account is the one configured in the test controller (yes, you have to ask how I had missed that when I had been into the same tool to change the user the test controller ran as!)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_69.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_69.png" title="image"></a></p>
<p>As soon as I granted this tfslab user rights to the drops folder all was OK</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using IISExpress for addresses other than localhost</title>
      <link>https://blog.richardfennell.net/posts/using-iisexpress-for-addresses-other-than-localhost/</link>
      <pubDate>Tue, 06 Nov 2012 22:34:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-iisexpress-for-addresses-other-than-localhost/</guid>
      <description>&lt;p&gt;I recently had the need to use IISExpress on Windows 8 to provide a demo server to a number of Surface RT clients. This took me longer than I expected; it might be me, but the documentation did not leap out.&lt;/p&gt;
&lt;p&gt;So as a summary this is what I had to do, let us say for example that I want to serve out &lt;a href=&#34;http://mypc:1234&#34;&gt;http://mypc:1234&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Make sure you have a project  &lt;strong&gt;MyWebProject&lt;/strong&gt; in Visual Studio that works for &lt;a href=&#34;http://localhost:1234&#34;&gt;http://localhost:1234&lt;/a&gt; using IISExpress&lt;/li&gt;
&lt;li&gt;Open the TCP port 1234 on the PC in the Control Panel &amp;gt; Admin Tools &amp;gt; Firewall&lt;/li&gt;
&lt;li&gt;Edit &lt;strong&gt;C:\Users\[current user]\Documents\IISExpress\config\applicationhost.config&lt;/strong&gt; and find the site section for your Visual Studio project. Change &lt;strong&gt;&lt;binding protocol=&#34;http&#34; bindingInformation=&#34;*:1234:localhost&#34; /&gt;&lt;/strong&gt; to &lt;strong&gt;&lt;binding protocol=&#34;http&#34; bindingInformation=&#34;*:1234:*&#34; /&gt;&lt;/strong&gt;. This means IISExpress can now listen on this port for any IP address&lt;/li&gt;
&lt;li&gt;You finally need to run IISExpress with administrative privileges. I did this by opening a PowerShell prompt with administrative privileges and running the command &lt;strong&gt;&amp;amp; &#34;C:\Program Files\IIS Express\iisexpress.exe&#34; /site:MyWebProject&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once all this was done my client PCs could connect&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently had the need to use IISExpress on Windows 8 to provide a demo server to a number of Surface RT clients. This took me longer than I expected; it might be me, but the documentation did not leap out.</p>
<p>So as a summary this is what I had to do, let us say for example that I want to serve out <a href="http://mypc:1234">http://mypc:1234</a></p>
<ul>
<li>Make sure you have a project  <strong>MyWebProject</strong> in Visual Studio that works for <a href="http://localhost:1234">http://localhost:1234</a> using IISExpress</li>
<li>Open the TCP port 1234 on the PC in the Control Panel &gt; Admin Tools &gt; Firewall</li>
<li>Edit <strong>C:\Users\[current user]\Documents\IISExpress\config\applicationhost.config</strong> and find the site section for your Visual Studio project. Change <strong>&lt;binding protocol="http" bindingInformation="*:1234:localhost" /&gt;</strong> to <strong>&lt;binding protocol="http" bindingInformation="*:1234:*" /&gt;</strong>. This means IISExpress can now listen on this port for any IP address</li>
<li>You finally need to run IISExpress with administrative privileges. I did this by opening a PowerShell prompt with administrative privileges and running the command <strong>&amp; "C:\Program Files\IIS Express\iisexpress.exe" /site:MyWebProject</strong></li>
</ul>
<p>Once all this was done my client PCs could connect</p>
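<p>For reference, the edited site entry in applicationhost.config ends up looking something like this (the site name and port are from the example above; the application path shown is a made-up placeholder, and the rest of the entry will match whatever Visual Studio originally generated on your machine):</p>

```xml
<!-- applicationhost.config (sketch): the binding now accepts any host name -->
<site name="MyWebProject" id="2">
  <application path="/">
    <!-- physicalPath is a hypothetical example -->
    <virtualDirectory path="/" physicalPath="C:\Projects\MyWebProject" />
  </application>
  <bindings>
    <!-- was: bindingInformation="*:1234:localhost" -->
    <binding protocol="http" bindingInformation="*:1234:*" />
  </bindings>
</site>
```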
]]></content:encoded>
    </item>
    <item>
      <title>Reinstalling again…</title>
      <link>https://blog.richardfennell.net/posts/reinstalling-again/</link>
      <pubDate>Fri, 02 Nov 2012 17:10:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/reinstalling-again/</guid>
      <description>&lt;p&gt;Just completing the second reinstall of &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/23/Moving-to-an-SSD-on-Lenovo-W520.aspx&#34;&gt;Windows 8 on my Lenovo W520&lt;/a&gt; in 10 days due to my new SSD failing and needing to be replaced.&lt;/p&gt;
&lt;p&gt;To try to ease the process I thought I would try putting on the miscellaneous tools I use, such as &lt;a href=&#34;http://chocolatey.org/packages/7zip&#34;&gt;7Zip&lt;/a&gt;, &lt;a href=&#34;http://chocolatey.org/packages/filezilla&#34;&gt;Filezilla&lt;/a&gt; etc., using &lt;a href=&#34;http://chocolatey.org/&#34; title=&#34;http://chocolatey.org/&#34;&gt;Chocolatey&lt;/a&gt;. I have to say first impressions are good: one command and the product is installed, the files pulled from the appropriate site.&lt;/p&gt;
&lt;p&gt;Obviously there is the issue that packages may not be kept up to date; unlike NuGet (which is at Chocolatey’s core) the packages are not stored on the Chocolatey site. I noticed the &lt;a href=&#34;http://chocolatey.org/packages/sysinternals&#34;&gt;SysInternals&lt;/a&gt; package is a bit behind, but I could always submit the updated package myself, couldn’t I?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just completing the second reinstall of <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/23/Moving-to-an-SSD-on-Lenovo-W520.aspx">Windows 8 on my Lenovo W520</a> in 10 days due to my new SSD failing and needing to be replaced.</p>
<p>To try to ease the process I thought I would try putting on the miscellaneous tools I use, such as <a href="http://chocolatey.org/packages/7zip">7Zip</a>, <a href="http://chocolatey.org/packages/filezilla">Filezilla</a> etc., using <a href="http://chocolatey.org/" title="http://chocolatey.org/">Chocolatey</a>. I have to say first impressions are good: one command and the product is installed, the files pulled from the appropriate site.</p>
<p>Obviously there is the issue that packages may not be kept up to date; unlike NuGet (which is at Chocolatey’s core) the packages are not stored on the Chocolatey site. I noticed the <a href="http://chocolatey.org/packages/sysinternals">SysInternals</a> package is a bit behind, but I could always submit the updated package myself, couldn’t I?</p>
<p>Emboldened by my success with simple utilities I tried <a href="http://chocolatey.org/packages/eclipse-java-juno">Eclipse</a> and <a href="http://chocolatey.org/packages/javaruntime.x64">Java</a>, they worked fine.</p>
<p>The biggest gain was <a href="http://chocolatey.org/packages/poshgit">git, posh git</a> and <a href="http://chocolatey.org/packages/Git-TF">git-tf</a>. Usually there is a degree of file/path editing here, but with Chocolatey it was just a single command for each.</p>
<p>To find out more, why not listen to the <a href="http://herdingcode.com/?p=489">Herding Code podcast on the subject</a>?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Team Foundation Service RTMs</title>
      <link>https://blog.richardfennell.net/posts/team-foundation-service-rtms/</link>
      <pubDate>Wed, 31 Oct 2012 17:15:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/team-foundation-service-rtms/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/somasegar/archive/2012/10/31/team-foundation-service-is-released.aspx&#34;&gt;Today at Build 2012 it was announced&lt;/a&gt; that &lt;a href=&#34;https://tfspreview.com&#34;&gt;https://tfspreview.com&lt;/a&gt; has RTM&amp;rsquo;d as Team Foundation Service on &lt;a href=&#34;https://tfs.visualstudio.com&#34;&gt;https://tfs.visualstudio.com&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Up until now there has been no pricing information, which had been a barrier to some people I have spoken to, as they did not want to start using something without knowing the future cost.&lt;/p&gt;
&lt;p&gt;So to the really good news, as of now&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It is free for up to 5 users&lt;/li&gt;
&lt;li&gt;If you have an active MSDN subscription it is also free. So a team of any size can use it as long as they all have MSDN&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The announcement said to look out for further price options next year.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.msdn.com/b/somasegar/archive/2012/10/31/team-foundation-service-is-released.aspx">Today at Build 2012 it was announced</a> that <a href="https://tfspreview.com">https://tfspreview.com</a> has RTM&rsquo;d as Team Foundation Service on <a href="https://tfs.visualstudio.com">https://tfs.visualstudio.com</a>.</p>
<p>Up until now there has been no pricing information, which had been a barrier to some people I have spoken to, as they did not want to start using something without knowing the future cost.</p>
<p>So to the really good news, as of now</p>
<ul>
<li>It is free for up to 5 users</li>
<li>If you have an active MSDN subscription it is also free. So a team of any size can use it as long as they all have MSDN</li>
</ul>
<p>The announcement said to look out for further price options next year.</p>
<p>Check the full details at <a href="http://blogs.msdn.com/b/somasegar/archive/2012/10/31/team-foundation-service-is-released.aspx">Soma&rsquo;s</a> blog</p>
]]></content:encoded>
    </item>
    <item>
      <title>403 and 413 errors when publishing to a local Nuget Server</title>
      <link>https://blog.richardfennell.net/posts/403-and-413-errors-when-publishing-to-a-local-nuget-server/</link>
      <pubDate>Wed, 31 Oct 2012 12:03:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/403-and-413-errors-when-publishing-to-a-local-nuget-server/</guid>
      <description>&lt;p&gt;We have an &lt;a href=&#34;http://www.nuget.org/packages/NuGet.Server&#34;&gt;internal Nuget Server&lt;/a&gt; we use to manage our software packages. As part of our upgrade to TFS2012 this needed to be moved to a new server VM and I took the chance to upgrade it from 1.7 to 2.1.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The problem&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Now we had had a problem: we could publish to the server via a file copy to its underlying &lt;em&gt;Packages&lt;/em&gt; folder (a UNC share), but could never publish using the NuGet command, e.g.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have an <a href="http://www.nuget.org/packages/NuGet.Server">internal Nuget Server</a> we use to manage our software packages. As part of our upgrade to TFS2012 this needed to be moved to a new server VM and I took the chance to upgrade it from 1.7 to 2.1.</p>
<p><strong>The problem</strong></p>
<p>Now we had had a problem: we could publish to the server via a file copy to its underlying <em>Packages</em> folder (a UNC share), but could never publish using the NuGet command, e.g.</p>
<p><em>Nuget push mypackage.nupkg -s <a href="http://mynugetserver">http://mynugetserver</a></em></p>
<p>I had never had the time to get around to sorting this out until now.</p>
<p>The reported error if I used the URL above was</p>
<p><em>Failed to process request. &lsquo;Access denied for package &lsquo;Mypackage.&rsquo;.</em><br>
<em>The remote server returned an error: (403) Forbidden..</em></p>
<p>If I changed the URL to</p>
<p><em>Nuget push mypackage.nupkg -s <a href="http://mynugetserver/nuget">http://mynugetserver/nuget</a></em></p>
<p>The error became</p>
<p><em>Failed to process request. &lsquo;Request Entity Too Large&rsquo;.</em><br>
<em>The remote server returned an error: (413) Request Entity Too Large..</em></p>
<p>Important: This second error was a red herring, you don&rsquo;t need the <em>/nuget</em> on the end of the URL</p>
<p><strong>The solution</strong></p>
<p>The solution was actually simple, and in the documentation, though it took me a while to find.</p>
<p>I had not specified an APIKey in the web.config on my server; obviously my access was blocked, as I did not have the shared key. The 413 errors just caused me to waste loads of time looking at WCF packet sizes, because I had convinced myself I needed to use the same URL as you enter in <em>Visual Studio &gt; Tools &gt; Options &gt; Package Management &gt; Add Source</em>, which you don&rsquo;t.</p>
<p>Once I had edited my web.config file to add the key (or I could have switched off the requirement as an alternative solution)</p>
<pre><code>&lt;appSettings&gt;
  &lt;!--
      Determines if an Api Key is required to push/delete packages from the server.
  --&gt;
  &lt;add key="requireApiKey" value="true" /&gt;
  &lt;!--
      Set the value here to allow people to push/delete packages from the server.
      NOTE: This is a shared key (password) for all users.
  --&gt;
  &lt;add key="apiKey" value="myapikey" /&gt;
  &lt;!--
      Change the path to the packages folder. Default is ~/Packages.
      This can be a virtual or physical path.
  --&gt;
  &lt;add key="packagesPath" value="" /&gt;
&lt;/appSettings&gt;</code></pre>
<p>I could then publish using</p>
<p><em>Nuget push mypackage.nupkg myapikey -s <a href="http://mynugetserver/">http://mynugetserver</a></em></p>
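<p>The two failure modes above can be boiled down to a tiny sketch. This is purely a hypothetical helper summarising the lessons of this post (the hint text is mine, not an official NuGet error table):</p>

```python
# Hypothetical helper: map the HTTP status codes seen when pushing to a
# local NuGet.Server instance to the causes found in this post.
def diagnose_push_error(status_code):
    hints = {
        403: "Access denied: the server requires the shared apiKey from its web.config",
        413: "Request Entity Too Large: likely a red herring; drop /nuget from the URL",
    }
    return hints.get(status_code, "Unknown error: check the server logs")

print(diagnose_push_error(403))
print(diagnose_push_error(413))
```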
]]></content:encoded>
    </item>
    <item>
      <title>Nice introduction to the new features of VS2012</title>
      <link>https://blog.richardfennell.net/posts/nice-introduction-to-the-new-features-of-vs2012/</link>
      <pubDate>Tue, 30 Oct 2012 20:01:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/nice-introduction-to-the-new-features-of-vs2012/</guid>
      <description>&lt;p&gt;If you are looking for a nice introduction to the new features of Visual Studio 2012, I can heartily recommend &lt;a href=&#34;http://www.amazon.co.uk/dp/1849686521/ref=as_li_ss_til?tag=buitwoonmypc-21&amp;amp;camp=2902&amp;amp;creative=19466&amp;amp;linkCode=as4&amp;amp;creativeASIN=1849686521&amp;amp;adid=1J4X5AYKARWKPG4877ZQ&amp;amp;&amp;amp;ref-refURL=http%3A%2F%2Fblogs.blackmarble.co.uk%2Fblogs%2Frfennell%2Fpage%2FReading-List.aspx&#34;&gt;Richard Banks &amp;lsquo;Visual Studio 2012 Cookbook&amp;rsquo;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This book covers a wide range of subjects including the IDE, .NET 4.5 features, Windows 8 development, Web development, C++, debugging, async and TFS 2012. This is all done in an easy-to-read format that will get you going with the key concepts, providing samples and links to further reading. A great starting-off point.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are looking for a nice introduction to the new features of Visual Studio 2012, I can heartily recommend <a href="http://www.amazon.co.uk/dp/1849686521/ref=as_li_ss_til?tag=buitwoonmypc-21&amp;camp=2902&amp;creative=19466&amp;linkCode=as4&amp;creativeASIN=1849686521&amp;adid=1J4X5AYKARWKPG4877ZQ&amp;&amp;ref-refURL=http%3A%2F%2Fblogs.blackmarble.co.uk%2Fblogs%2Frfennell%2Fpage%2FReading-List.aspx">Richard Banks &lsquo;Visual Studio 2012 Cookbook&rsquo;</a>.</p>
<p>This book covers a wide range of subjects including the IDE, .NET 4.5 features, Windows 8 development, Web development, C++, debugging, async and TFS 2012. This is all done in an easy-to-read format that will get you going with the key concepts, providing samples and links to further reading. A great starting-off point.</p>
<p>There is stuff in the book for people new to any of the subjects, as well as nuggets for the more experienced users. I particularly like the sections on what is not in 2012 but was in previous versions, and what to do about it. This type of information is too often left out of new product books.</p>
<p>So a book that is well worth a look, and as it has been <a href="http://www.packtpub.com/visual-studio-2012-cookbook/book">published by Packt</a> there is no shortage of formats to choose from.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problems re-publishing an Access site to SharePoint 2010</title>
      <link>https://blog.richardfennell.net/posts/problems-re-publishing-an-access-site-to-sharepoint-2010/</link>
      <pubDate>Thu, 25 Oct 2012 22:33:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-re-publishing-an-access-site-to-sharepoint-2010/</guid>
      <description>&lt;p&gt;After applying SP1 to a SharePoint 2010 farm we found we were unable to run any macros in an Access Services site, it gave a –4002 error. We had seen this error in the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/04/error-4002-on-access-services-on-sharepoint-2010.aspx&#34;&gt;past&lt;/a&gt;, but the solutions that worked then did not help. As this site was critical, as a workaround, we moved the site to a non-patched SP2010 instance. This was done via a quick site collection backup and restore process.  This allowed us to dig into the problem at our leisure.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After applying SP1 to a SharePoint 2010 farm we found we were unable to run any macros in an Access Services site, it gave a –4002 error. We had seen this error in the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/10/04/error-4002-on-access-services-on-sharepoint-2010.aspx">past</a>, but the solutions that worked then did not help. As this site was critical, as a workaround, we moved the site to a non-patched SP2010 instance. This was done via a quick site collection backup and restore process.  This allowed us to dig into the problem at our leisure.</p>
<p>Eventually we fixed the problem by deleting and recreating the Access Services application within SharePoint on the patched farm. We assume some property was changed/corrupted/deleted in the application of the service pack.</p>
<p>So we now had a working patched farm, but also a duplicate of Access Services site with changed data. We could not just backup and restore this as other sites in the collection had also changed. Turns out getting this data back onto the production farm took a bit of work, more than we expected. This is the process we used</p>
<ol>
<li>Open the Access Services site in a browser on the duplicate server</li>
<li>Select the open in Access option, we used Access 2010, which it had originally been created in</li>
<li>When Access had opened the site, use the ‘save as’ option to save a local copy of the DB. We now had a disconnected local copy on a PC. We thought we could just re-publish this, how wrong we were.</li>
<li>We ran the web compatibility checker expecting no errors, but it reported a couple of them. In one form and one query extra column references had been added that referenced the auto-created SharePoint library columns (date and id stamps basically). These had to be deleted by hand.</li>
<li>We then could publish back to the production server</li>
<li>We watched as the structure and data were published</li>
<li>Then it errored. On checking the log we saw that it claimed a lookup reference had invalid data (though we could not see the offending rows and it was working OK). Luckily the table in question contained temporary data we could just delete, so we tried to publish again</li>
<li>Then it errored again. On checking the logs we saw it reported it could not copy to <a href="http://127.0.0.1">http://127.0.0.1</a> – no idea why it was looking for localhost! Interestingly, if we tried to publish back to another site URL on the non-patched server it worked! Very strange</li>
<li>On a whim I repeated this whole process but using Access 2013 RC, and strangely it worked</li>
</ol>
<p>So I now had my Access Services site re-published and fully working on a patched farm. That was all a bit too complex for my tastes.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving to an SSD on Lenovo W520</title>
      <link>https://blog.richardfennell.net/posts/moving-to-an-ssd-on-lenovo-w520/</link>
      <pubDate>Tue, 23 Oct 2012 10:20:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-to-an-ssd-on-lenovo-w520/</guid>
      <description>&lt;p&gt;[Also see &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/22/More-on-HDD2-boot-problems-with-my-Crucial-M4-mSATA.aspx&#34;&gt;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/22/More-on-HDD2-boot-problems-with-my-Crucial-M4-mSATA.aspx&lt;/a&gt;]&lt;/p&gt;
&lt;p&gt;I have just reinstalled Windows 8 (again) on my Lenovo W520. This time it was because I moved to a Crucial m4 256Gb 2.5” internal SSD as my primary disk. There is a special slot for this type of drive under the keyboard, so I could also keep my 750Gb Hybrid SATA drive to be used for storing virtual machines.&lt;/p&gt;
&lt;p&gt;I had initially planned to backup/restore my previous installation using IMAGEX as I had all I needed in my PC, but after two days of fiddling I had got nowhere, the problems being&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>[Also see <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/22/More-on-HDD2-boot-problems-with-my-Crucial-M4-mSATA.aspx">http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/01/22/More-on-HDD2-boot-problems-with-my-Crucial-M4-mSATA.aspx</a>]</p>
<p>I have just reinstalled Windows 8 (again) on my Lenovo W520. This time it was because I moved to a Crucial m4 256Gb 2.5” internal SSD as my primary disk. There is a special slot for this type of drive under the keyboard, so I could also keep my 750Gb Hybrid SATA drive to be used for storing virtual machines.</p>
<p>I had initially planned to backup/restore my previous installation using IMAGEX as I had all I needed in my PC, but after two days of fiddling I had got nowhere, the problems being</p>
<ul>
<li>The IMAGEX from my hybrid drive to an external disk (only 150Gb of data after I had moved out all my virtual PCs) took well over 10 hours. I thought this was due to using an old USB1 (maybe it was a USB2 at a push) disk caddy, but it was just as slow with eSATA. The restore from the same hardware only took an hour or so. One suggestion made, that I did not try, was to enable compression in the image, as this would reduce the bandwidth on the disk connection; it is not as if my i7 CPU could not handle the compression load.</li>
<li>When the images restored, we had to fiddle with Bcdedit to get the PC to boot</li>
<li>Eventually the Windows 8 SSD-based image came up; you could open the login page with no issues but got no cursor for a long time, and it was sloooow to do anything – I have no idea why.</li>
</ul>
<p>So in the end I gave up and installed anew; including Visual Studio and Office it took about 30-45 minutes. There were still a couple of gotchas though</p>
<ol>
<li>I still had to enable the Nvidia Optimus graphics mode in BIOS, thus enabling both the Intel and Nvidia graphics subsystems. I usually only run on the discrete Nvidia one as this does not get <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/12/The-battle-of-the-Lenovo-W520-and-projectors.aspx">confused by projectors</a>. If you don’t enable the Intel-based one then the Windows 8 install hangs after installing drivers and before the reboot that then allows you to login and choose colours etc. As soon as this stage is passed you can switch back to discrete graphics as you wish. I assume the Windows 8 media is missing some Nvidia bits that it finds after this first reboot or via Windows Update.</li>
<li>Windows 8 is still missing a couple of drivers for the Ricoh card reader and Power management, but these are both released on the <a href="http://support.lenovo.com/en_US/" title="http://support.lenovo.com/en_US/">http://support.lenovo.com/en_US/</a> site. You do have to download these manually and install them. All the other Lenovo drivers (including updated audio I have mentioned before) come down from Windows update.</li>
</ol>
<p>So the moral of the story is reinstall, don’t try to move disk images. Make sure your data is in SkyDrive, Dropbox, SharePoint, source control etc. so it is just applications you are missing which are quick to sort out. The only pain of a job I had was to sort out my podcasts, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/16/Moving-podcast-subscriptions-with-Zune.aspx">but even that was not too bad</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>New video on unit testing in VS2012 and TFS</title>
      <link>https://blog.richardfennell.net/posts/new-video-on-unit-testing-in-vs2012-and-tfs/</link>
      <pubDate>Mon, 08 Oct 2012 13:23:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-video-on-unit-testing-in-vs2012-and-tfs/</guid>
      <description>&lt;p&gt;A &lt;a href=&#34;http://www.youtube.com/watch?v=FREohi-47S8&amp;amp;list=PLEA6C7FB5CD6818CF&amp;amp;index=1&amp;amp;feature=plpp_video&#34;&gt;video&lt;/a&gt; has just be uploaded that I did on the new unit testing features in Visual Studio  and TFS 2012. This is quick 10 minute introduction to some of the material I will be covering at &lt;a href=&#34;http://developerdeveloperdeveloper.com/north2/&#34;&gt;DDDNorth next weekend&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Hope you find it useful&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A <a href="http://www.youtube.com/watch?v=FREohi-47S8&amp;list=PLEA6C7FB5CD6818CF&amp;index=1&amp;feature=plpp_video">video</a> has just be uploaded that I did on the new unit testing features in Visual Studio  and TFS 2012. This is quick 10 minute introduction to some of the material I will be covering at <a href="http://developerdeveloperdeveloper.com/north2/">DDDNorth next weekend</a>.</p>
<p>Hope you find it useful</p>
]]></content:encoded>
    </item>
    <item>
      <title>More fun with creating TFS 2012 SC-VMM environments</title>
      <link>https://blog.richardfennell.net/posts/more-fun-with-creating-tfs-2012-sc-vmm-environments/</link>
      <pubDate>Fri, 05 Oct 2012 17:14:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-fun-with-creating-tfs-2012-sc-vmm-environments/</guid>
      <description>&lt;p&gt;Whilst setting up new a new SC-VMM based lab environment I managed to find some new ways to fail above and beyond the ones I have found &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/20/Getting-TFS-2012-Agents-to-communicate-cross-domain.aspx&#34;&gt;before&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We needed to build a new environment for testing a CRM application; this needed to have its own DC, an IIS server and a CRM server. The aim was to have this as a network isolated environment, but you first have to build the various VMs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst setting up new a new SC-VMM based lab environment I managed to find some new ways to fail above and beyond the ones I have found <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/20/Getting-TFS-2012-Agents-to-communicate-cross-domain.aspx">before</a>.</p>
<p>We needed to build a new environment for testing a CRM application; this needed to have its own DC, an IIS server and a CRM server. The aim was to have this as a network isolated environment, but you first have to build the various VMs.</p>
<p>So we did the following</p>
<ul>
<li>On the Hyper-V hosts managed by our SC-VMM server, create 3 new VMs connected to our corporate LAN</li>
<li>Install the OS on the three VMs</li>
<li>Make one of them a DC for dev.local</li>
<li>Join the others to the DC’s domain dev.local (they are not joined to our corporate domain)</li>
<li>On the IIS box add the web server role</li>
<li>On the CRM box install Microsoft CRM</li>
</ul>
<p>So we now have a three-box domain that does what we want, but it is not network isolated. We could have used the features of SC-VMM to push these VMs into the library and hence import them into the Lab Management library. However, we chose to make sure we could connect to them first as an environment.</p>
<p>So first I tried to create a standard environment, not using SC-VMM. I had to create a local hosts file on the PC running MTM, but once this was done I could verify the environment, so all was OK. I did not actually create it.</p>
<p>Next I tried to create a SC-VMM based environment and this is where I hit a problem. I was basically trying to do something I have done before with all our pre-Lab Management test VMs: wrap existing VMs in an environment. When I tried to do this the verification failed, saying I could not connect to any of the VMs. First we made sure file sharing was enabled, firewalls were not blocking, etc. All to no effect.</p>
<p>To cut a long story short I had a number of issues, mostly down to the reuse of names</p>
<ul>
<li>The SC-VMM VM names for the VMs (e.g. LabDC) did not match the actual host name (DevDC). I had to rename the VM in SC-VMM to match the name of the host in the operating system (I am still unsure if this is a red herring and not really that important, but I think it is good practice anyway)</li>
<li>We had to have a hosts file on the MTM box with the fully qualified names for the three boxes (not just the server name). Note that this hosts entry (or DNS entry if you prefer) is only needed until the environment is built</li>
</ul>
<blockquote>
<p>192.168.200.102 devcrm.dev.local<br>
192.168.200.103 deviis.dev.local<br>
192.168.200.104 devdc.dev.local</p></blockquote>
<ul>
<li>The name DevDC had been used on another VM that was being run on one of our developers’ Windows 8 Hyper-V setups. This was causing problems when MTM tried to resolve the machine name via SMB (NetBIOS; IP resolution was fine). Switching off this other VM fixed it; we only spotted it by using <a href="http://www.wireshark.org/">Wireshark</a> on the PC running MTM (note you have to run the installer in Win7 compatibility mode to get it to work with Windows 8)</li>
<li>When entering the login details for the development domain when creating the new environment in MTM, the user ID had to be entered as <a href="mailto:administrator@dev.local">administrator@dev.local</a> and not dev\administrator</li>
</ul>
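<p>A quick way to confirm the fully qualified names resolve correctly before MTM tries to verify the environment is a small sanity-check script like this (the names and addresses are the examples from the hosts file above; this is my own sketch, not part of MTM):</p>

```python
# Sanity-check sketch: do the fully qualified lab names resolve to the
# addresses we put in the hosts file? (Example names/IPs from this post.)
import socket

EXPECTED = {
    "devcrm.dev.local": "192.168.200.102",
    "deviis.dev.local": "192.168.200.103",
    "devdc.dev.local": "192.168.200.104",
}

def check_resolution(expected):
    results = {}
    for name, ip in expected.items():
        try:
            # True only if the name resolves to the address we expect
            results[name] = (socket.gethostbyname(name) == ip)
        except socket.gaierror:
            results[name] = False  # no hosts/DNS entry at all
    return results

for name, ok in check_resolution(EXPECTED).items():
    print(name, "OK" if ok else "MISSING or WRONG")
```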
<p>Once this was all done I could verify my environment and create it; the TFS agent was installed, but did not connect to the test controller. This is exactly as expected, as detailed in my previous <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/20/Getting-TFS-2012-Agents-to-communicate-cross-domain.aspx">post</a>.</p>
<p>I now have a few choices</p>
<ul>
<li>If I don’t want to network isolate it I can install a Test Controller in the domain</li>
<li>I can save each of the three VMs into the SC-VMM library via MTM and create an isolated environment.</li>
</ul>
<p>So I hope this helps you avoid some of the problems I have seen. I just wish that the MTM environment creation step gave out a better log file, so I don’t have to second-guess it or use Wireshark.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting a SQL LocalDB to create an ASPNETDB data base without aspnet_regsql</title>
      <link>https://blog.richardfennell.net/posts/getting-a-sql-localdb-to-create-an-aspnetdb-data-base-without-aspnet_regsql/</link>
      <pubDate>Tue, 02 Oct 2012 22:42:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-a-sql-localdb-to-create-an-aspnetdb-data-base-without-aspnet_regsql/</guid>
      <description>&lt;p&gt;I started working today on a solution I had not worked on for a while. It makes use of an ASP.NET web application as a site to host SharePoint webparts (&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/06/update-on-using-typemock-isolator-to-allow-webpart-development-without-a-sharepoint-server.aspx&#34;&gt;using Typemock to mock out any troublesome calls&lt;/a&gt;). The problem I had was that when I opened this VS2010 solution in VS2012 I could not run up this test web site. As the test web pages have WebpartManager controls it needs an ASPNETDB in the AppData folder to persist the settings data. This is usually auto created when SQLExpress is installed, problem is with VS2012 you get the newer LocalDB and I am trying to avoid installing SQLExpress&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I started working today on a solution I had not worked on for a while. It makes use of an ASP.NET web application as a site to host SharePoint webparts (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/06/update-on-using-typemock-isolator-to-allow-webpart-development-without-a-sharepoint-server.aspx">using Typemock to mock out any troublesome calls</a>). The problem I had was that when I opened this VS2010 solution in VS2012 I could not run up this test web site. As the test web pages have WebpartManager controls it needs an ASPNETDB in the AppData folder to persist the settings data. This is usually auto created when SQLExpress is installed, problem is with VS2012 you get the newer LocalDB and I am trying to avoid installing SQLExpress</p>
<p>So the first step was to modify the web.config to point to the right place, by adding</p>
<pre><code>&lt;connectionStrings&gt;
  &lt;clear/&gt;
  &lt;add name="LocalSQLServer"
       connectionString="Data Source=(LocalDB)\projects; Integrated Security=true; AttachDbFileName=|DataDirectory|\ASPNETDB.mdf"
       providerName="System.Data.SqlClient"/&gt;
&lt;/connectionStrings&gt;</code></pre>
<p>I then loaded the web site but got the error</p>
<p><em>An error occurred during the execution of the SQL file &lsquo;InstallMembership.sql&rsquo;. The SQL error number is -2 and the SqlException message is: Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.</em></p>
<p>after a retry I saw</p>
<p><em>An error occurred during the execution of the SQL file &lsquo;InstallCommon.sql&rsquo;. The SQL error number is 5170 and the SqlException message is: Cannot create file &lsquo;C:PROJECTSSABSSOURCESABSSABSWEBSERVICETESTHARNESSAPP_DATAASPNETDB_TMP.MDF&rsquo; because it already exists. Change the file path or the file name, and retry the operation.<br>
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.<br>
Creating the ASPNETDB_5689a053209d438db3622d593ea632fb database&hellip;</em></p>
<p>So I decided to try the aspnet_regsql.exe in wizard mode from the .NET 4 framework folder to populate a pre-created DB; this gave the same timeout errors as when it was run by the web process</p>
<p>So finally I tried the following process</p>
<ol>
<li>Created a new empty DB in the APPDATA folder attached to my LocalDB instance in SQL Server Object Explorer</li>
<li>From the .NET framework folder loaded and then ran the following SQL scripts (in the order they were listed in the folder)</li>
</ol>
<pre><code>16/08/2012  12:59   24603  InstallCommon.sql
16/08/2012  12:59   56073  InstallMembership.sql
16/08/2012  12:59   52347  InstallPersistSqlState.sql
16/08/2012  12:59   34950  InstallPersonalization.sql
16/08/2012  12:59   20891  InstallProfile.SQL
16/08/2012  12:59   34264  InstallRoles.sql</code></pre>
<ol start="3">
<li>Made sure my test harness targeted .NET 4</li>
<li>And then my test harness loaded</li>
</ol>
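As a side note, step 2 can also be scripted rather than running each file by hand. This is only a sketch of one possible route, not what the original post did: it assumes the sqlcmd tool is available, that the LocalDB instance is named as in the web.config above, and that the attached database is called ASPNETDB; run from a PowerShell prompt in the .NET 4 Framework folder.

```shell
# Illustrative sketch only: the instance name "(LocalDB)\projects" and the
# database name "ASPNETDB" are assumptions, as is having sqlcmd on the path.
# Run from C:\Windows\Microsoft.NET\Framework\v4.0.30319
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallCommon.sql
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallMembership.sql
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallPersistSqlState.sql
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallPersonalization.sql
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallProfile.SQL
sqlcmd -S "(LocalDB)\projects" -d ASPNETDB -i InstallRoles.sql
```

Running InstallCommon.sql first matters, as the other scripts depend on objects it creates.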
<p>Not a great solution, but it got me working, especially as I could find little written about ASPNETDB and LocalDB.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Test Agent cannot connect to Test Controller – Part 2</title>
      <link>https://blog.richardfennell.net/posts/tfs-test-agent-cannot-connect-to-test-controller-part-2/</link>
      <pubDate>Mon, 01 Oct 2012 22:33:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-test-agent-cannot-connect-to-test-controller-part-2/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/09/29/TFS-Test-Agent-cannot-connect-to-Test-Controller-gives-No-connection-could-be-made-because-the-target-machine-actively-refused-it-1270016910.aspx&#34;&gt;posted last week&lt;/a&gt; on the problems I had had getting the test agents and controller in a TFS2012 Standard environment talking to each other and a workaround. Well after a good few email with various people at Microsoft and other consultants at Black Marble I have a whole range of workarounds solutions.&lt;/p&gt;
&lt;p&gt;First a reminder of my architecture, and note that this could be part of the problem, it is all running on a single Hyper-V host. Remember this is a demo rig to show the features of Standard Environments. I think it is unlikely that this problem will be seen in a more ‘realistic’ environment i.e. running on multiple boxes&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/09/29/TFS-Test-Agent-cannot-connect-to-Test-Controller-gives-No-connection-could-be-made-because-the-target-machine-actively-refused-it-1270016910.aspx">posted last week</a> on the problems I had had getting the test agents and controller in a TFS2012 Standard environment talking to each other, and a workaround. Well, after a good few emails with various people at Microsoft and other consultants at Black Marble, I have a whole range of workaround solutions.</p>
<p>First, a reminder of my architecture; note that this could be part of the problem, as it is all running on a single Hyper-V host. Remember, this is a demo rig to show the features of Standard Environments. I think it is unlikely that this problem will be seen in a more ‘realistic’ environment, i.e. one running on multiple boxes</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_64.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_64.png" title="image"></a></p>
<p>The problem is that the test agent running on the Server2008 VM should ask the test controller (running on the VSTFS server) to call it back on either its 169.254.x.x address or on an address obtained via DHCP from the external virtual switch. However, it is requesting a call back on 127.0.0.1, as can be seen in the error log</p>
<blockquote>
<p><em>Unable to connect to the controller on &lsquo;vstfs:6901&rsquo;. The agent can connect to the controller but the controller cannot connect to the agent because of following reason: No connection could be made because the target machine actively refused it 127.0.0.1:6910. Make sure that the firewall on the test agent machine is not blocking the connection.</em></p></blockquote>
<p><strong>The root cause</strong></p>
<p>It turns out the root cause of this problem was that I had edited the <em>c:\windows\system32\drivers\etc\hosts</em> file on the test server VM to add an entry allowing a URL used in CodedUI tests to resolve to localhost</p>
<blockquote>
<p><em>127.0.0.1   <a href="https://www.mytestsite.com">www.mytestsite.com</a></em></p></blockquote>
<p><strong>Solution 1 – Edit the test agent config to bind to a specific address</strong></p>
<p>The first solution is the one I outlined in my previous post, tell the test agent to bind to a specific IP address. Edit</p>
<blockquote>
<p><em>C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\QTAgentService.exe.config</em></p>
<p>and added a <strong>BindTo</strong> line with the correct address for the controller to call back to the agent</p>
<blockquote>
<p><em>&lt;appSettings&gt;<br>
    &lt;!-- other bits … --&gt;<br>
    &lt;add key="BindTo" value="169.254.1.1"/&gt;<br>
&lt;/appSettings&gt;</em></p></blockquote>
<p>The problem with this solution is that you need to remember to edit a config file; it all seems a bit complex!</p>
<p><strong>Solution 2 – Don’t resolve the test URL to localhost</strong></p>
<p>Change the hosts file entry used by the CodedUI test to resolve to the actual address of the test VM e.g.</p>
<blockquote>
<p><em>169.254.1.1   <a href="https://www.mytestsite.com">www.mytestsite.com</a></em></p></blockquote>
<p>The downside here is that you need to know the test agent’s IP address, which, depending on the system in use, could change, and will certainly be different on each test VM in an environment. Again, it all seems a bit complex and prone to human error.</p>
<p><strong>Solution 3 – Add an actual loopback entry to the hosts file.</strong></p>
<p>The simplest workaround, which <a href="http://blogs.blackmarble.co.uk/blogs/rhancock/">Robert Hancock at Black Marble</a> came up with, was to add a second loopback entry to the hosts file</p>
<blockquote>
<p><em>127.0.0.1   localhost<br>
127.0.0.1   <a href="https://www.mytestsite.com">www.mytestsite.com</a></em></p></blockquote>
<p>Once this was done the test agent could connect. I did not have to edit any agent config files, or know the address the agent needed to bind to. By far the best solution.</p>
<p>So thanks to all who helped get to the bottom of this surprisingly complex issue.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Test Agent cannot connect to Test Controller gives ‘No connection could be made because the target machine actively refused it 127.0.0.1:6910’</title>
      <link>https://blog.richardfennell.net/posts/tfs-test-agent-cannot-connect-to-test-controller-gives-no-connection-could-be-made-because-the-target-machine-actively-refused-it-127-0-0-16910/</link>
      <pubDate>Sat, 29 Sep 2012 14:38:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-test-agent-cannot-connect-to-test-controller-gives-no-connection-could-be-made-because-the-target-machine-actively-refused-it-127-0-0-16910/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 1st October&lt;/strong&gt; – See the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/01/TFS-Test-Agent-cannot-connect-to-Test-Controller-Part-2.aspx&#34;&gt;Part 2 post&lt;/a&gt; which provides more workaround solutions&lt;/p&gt;
      <description>&lt;p&gt;Whilst setting up a &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2011/10/31/lab-management-improvements-in-tfs-11.aspx&#34;&gt;TFS 2012 Standard Lab Environment&lt;/a&gt; for an upcoming demo I hit a problem. Initially my environment had worked fine: I could deploy to my server VM in the environment without error. However, after a reboot of the TFS server (which has the build and test controllers on it) and of the single server VM in the environment, the test agent on the VM could not connect to the test controller on the TFS server. The VM’s event log showed&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 1st October</strong> – See the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/10/01/TFS-Test-Agent-cannot-connect-to-Test-Controller-Part-2.aspx">Part 2 post</a> which provides more workaround solutions</p>
<p>Whilst setting up a <a href="http://blogs.msdn.com/b/bharry/archive/2011/10/31/lab-management-improvements-in-tfs-11.aspx">TFS 2012 Standard Lab Environment</a> for an upcoming demo I hit a problem. Initially my environment had worked fine: I could deploy to my server VM in the environment without error. However, after a reboot of the TFS server (which has the build and test controllers on it) and of the single server VM in the environment, the test agent on the VM could not connect to the test controller on the TFS server. The VM’s event log showed</p>
<blockquote>
<p><em>Unable to connect to the controller on &rsquo;tfsserver:6901&rsquo;. The agent can connect to the controller but the controller cannot connect to the agent because of following reason: No connection could be made because the target machine actively refused it 127.0.0.1:6910. Make sure that the firewall on the test agent machine is not blocking the connection.</em></p></blockquote>
<p>The key here is that the test controller was being told to call back to the test agent on 127.0.0.1 – which is obviously wrong, being the loopback address.</p>
<p>So it seems the test agent was telling the test controller the wrong IP address. I am not sure why it was resolving this address, but I did find a workaround: on the test VM I edited</p>
<blockquote>
<p>‘C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\QTAgentService.exe.config’</p>
<p>and added the BindTo line with the correct address for the controller to call back to the agent</p>
<blockquote>
<p><em>&lt;appSettings&gt;<br>
    &lt;!-- other bits … --&gt;<br>
    &lt;add key="BindTo" value="10.0.0.1"/&gt;<br>
&lt;/appSettings&gt;</em></p></blockquote>
<p>Once I restarted the test agent it connected to the controller and I could run my builds.</p>
<p>For more details on this config file see <a href="http://msdn.microsoft.com/en-us/library/ff934571.aspx">http://msdn.microsoft.com/en-us/library/ff934571.aspx</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Release of Typemock Isolator Basic edition</title>
      <link>https://blog.richardfennell.net/posts/release-of-typemock-isolator-basic-edition/</link>
      <pubDate>Thu, 27 Sep 2012 21:05:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/release-of-typemock-isolator-basic-edition/</guid>
      <description>&lt;p&gt;Some great news today from Typemock: &lt;a href=&#34;http://us2.campaign-archive2.com/?u=e888bab428ec5dcaea84550e3&amp;amp;id=bdb6ce40b0&#34;&gt;there is now a free basic edition of Typemock Isolator.&lt;/a&gt; This addresses a key historic problem with Isolator: its cost when you don’t need its advanced features all the time.&lt;/p&gt;
&lt;p&gt;Now, if you need the cool advanced mocking features of Isolator, such as mocking sealed private classes, then the cost is not really a factor: you buy the product or you don’t get the features. However, what do you do if you just want to do ‘normal mocking’ in a project, e.g. mock out an interface? Do you use Typemock as you already have it, or swap to a different mocking framework, only using Typemock when you have to use its advanced features?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Some great news today from Typemock: <a href="http://us2.campaign-archive2.com/?u=e888bab428ec5dcaea84550e3&amp;id=bdb6ce40b0">there is now a free basic edition of Typemock Isolator.</a> This addresses a key historic problem with Isolator: its cost when you don’t need its advanced features all the time.</p>
<p>Now, if you need the cool advanced mocking features of Isolator, such as mocking sealed private classes, then the cost is not really a factor: you buy the product or you don’t get the features. However, what do you do if you just want to do ‘normal mocking’ in a project, e.g. mock out an interface? Do you use Typemock as you already have it, or swap to a different mocking framework, only using Typemock when you have to use its advanced features?</p>
<p>This is a particular problem for consulting/bespoke development companies such as mine: we write code for clients that they will, in the future, have to maintain themselves, and they are not that happy with us passing over code with a dependency on a licensed mocking framework unless it is essential to their project. This means in the past I have tended to use other mocking frameworks, usually <a href="https://github.com/FakeItEasy/FakeItEasy">FakeItEasy</a> as its syntax is very similar to Typemock Isolator’s, unless I needed the advanced features of Typemock, such as in SharePoint projects.</p>
<p>However, with this new basic edition release from Typemock this is no longer an issue. I can use Typemock in all my projects. If a client needs to run the tests, as long as they are ‘normal mocking’ ones, all they need to do is install this new free version of Typemock and the project builds and the tests run. There is only a need to purchase a license if the advanced features of Typemock are required.</p>
<p>No longer do I need to swap mocking frameworks for licensing reasons alone, hence reducing the friction I have had in the past when changing mocking syntax.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences upgrading an MVC 1 application to MVC 3</title>
      <link>https://blog.richardfennell.net/posts/experiences-upgrading-an-mvc-1-application-to-mvc-3/</link>
      <pubDate>Wed, 05 Sep 2012 16:16:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-upgrading-an-mvc-1-application-to-mvc-3/</guid>
      <description>&lt;p&gt;I have recently had to do some work on an MVC 1 application and thought it sensible to bring it up to MVC 3; you don’t want to be left behind if you can avoid it. This was a simple data capture application written in MVC 1 in Visual Studio 2008 that had never needed to be touched since. A user fills in a form, and the controller then takes the form contents and stores it. The key point to note here is that it was using the &lt;em&gt;Controller.UpdateModel&amp;lt;TModel&amp;gt;(TModel, IValueProvider)&lt;/em&gt; method, so most of the controller actions look like&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently had to do some work on an MVC 1 application and thought it sensible to bring it up to MVC 3; you don’t want to be left behind if you can avoid it. This was a simple data capture application written in MVC 1 in Visual Studio 2008 that had never needed to be touched since. A user fills in a form, and the controller then takes the form contents and stores it. The key point to note here is that it was using the <em>Controller.UpdateModel&lt;TModel&gt;(TModel, IValueProvider)</em> method, so most of the controller actions look like</p>
<blockquote>
<p>[AcceptVerbs(HttpVerbs.Post)]<br>
public ActionResult PostDataToApplication(FormCollection form)<br>
{<br>
    NewApplication data = new NewApplication();<br>
    try<br>
    {<br>
        UpdateModel(data, form.ToValueProvider());<br>
<br>
        // process data object<br>
        bool success = DoSomething(data);<br>
<br>
        if (success)<br>
        {<br>
            return RedirectToAction("FormSuccess", "Home");<br>
        }<br>
        else<br>
        {<br>
            return RedirectToAction("FormUnsuccessful", "Home");<br>
        }<br>
    }<br>
    catch (InvalidOperationException)<br>
    {<br>
        return View();<br>
    }<br>
}</p></blockquote>
<p>This worked fine with MVC 1 on VS2008, but I wanted to move it onto MVC 3 on VS2012 if possible, with as few changes as possible, as this is a small web site that changes very rarely and so is not worth a major investment in time to keep on current frameworks. So these were the steps I took and the gotchas I found.</p>
<p><strong>The upgrade itself</strong></p>
<p>First I opened the VS2008 solution in VS2010 and it automatically upgraded to MVC2, a good start!</p>
<p>I then used the <a href="http://blogs.msdn.com/b/marcinon/archive/2011/01/13/mvc-3-project-upgrade-tool.aspx">MVC2 to MVC3 tool on Codeplex</a>. This initially failed, and it took me a while to spot that you can only use this tool if your MVC 2 application targets .NET 4. Once I changed the MVC 2 application to target .NET 4 as opposed to 3.5, the upgrade tool worked fine.</p>
<p>I could now load my MVC3 application in either VS2010 or VS2012.</p>
<p><strong>Using the Web Site</strong></p>
<p>At this point I thought I had better test it, and instantly saw a problem. Pages that did not submit data worked fine, but submitting a data capture form failed with null reference errors. It turns out the problem was a change in the default model binding behaviour between MVC releases. In MVC 1 empty fields on the form were passed as empty strings; with MVC 2(?) and later they are passed as nulls.</p>
<p>Luckily the fix was simple. Previously my model had been</p>
<blockquote>
<p>public class SomeModel: IDataErrorInfo<br>
{<br>
     public string AgentNumber { get; set; }<br>
     …..<br>
}</p></blockquote>
<p>I needed to add a <em>[DisplayFormat(ConvertEmptyStringToNull = false)]</em> attribute on each string property to get back to the previous behaviour my controller expected.</p>
<blockquote>
<p>public class SomeModel: IDataErrorInfo<br>
{<br>
    [DisplayFormat(ConvertEmptyStringToNull = false)]<br>
     public string AgentNumber { get; set; }<br>
     …..<br>
}</p></blockquote>
<p>Now my web site ran as I had expected.</p>
<p><strong>Unit Tests</strong></p>
<p>I had previously noticed my unit tests were failing. I had expected the change to the model to fix this too, but it did not. On the web there are a good many posts on how unit testing of MVC 2 and later fails unless you mock out the controller context. You see errors in the form</p>
<blockquote>
<p>Test method Website.Tests.Controllers.HomeControllerTest.DetailsValidation_AlphaInValidAccountID_ErrorMessage threw exception:<br>
System.ArgumentNullException: Value cannot be null.<br>
Parameter name: controllerContext<br>
Result StackTrace:   <br>
at System.Web.Mvc.ModelValidator..ctor(ModelMetadata metadata, ControllerContext controllerContext)<br>
   at System.Web.Mvc.ModelValidator.CompositeModelValidator..ctor(ModelMetadata metadata, ControllerContext controllerContext)<br>
   at System.Web.Mvc.ModelValidator.GetModelValidator(ModelMetadata metadata, ControllerContext context)<br>
   at System.Web.Mvc.DefaultModelBinder.OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)<br>
   at System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model)<br>
   at System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext)<br>
   at System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)<br>
   at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)<br>
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)<br>
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, IValueProvider valueProvider)<br>
   at Website.Controllers.HomeController.Details(FormCollection form)</p></blockquote>
<p>The fix is to not just new up a controller in your unit tests like this</p>
<blockquote>
<p>HomeController controller = new HomeController();</p></blockquote>
<p>But to have a helper method to mock it all out (which is created for MVC associated test projects for you, so it is easy)</p>
<blockquote>
<p>private static HomeController GetHomeController()<br>
{<br>
   IFormsAuthentication formsAuth = new MockFormsAuthenticationService();     <br>
   MembershipProvider membershipProvider = new MockMembershipProvider();<br>
   RoleProvider roleProvider = new MockRoleProvider();</p>
<p>   AccountMembershipService membershipService = new AccountMembershipService(membershipProvider, roleProvider);<br>
   HomeController controller = new HomeController(formsAuth, membershipService);<br>
   MockHttpContext mockHttpContext = new MockHttpContext();</p>
<p>   ControllerContext controllerContext = new ControllerContext(mockHttpContext, new RouteData(), controller);<br>
   controller.ControllerContext = controllerContext;<br>
   return controller;<br>
}</p></blockquote>
<p>However, this problem with a missing context was not my problem; I was already doing this. The error my test runner was showing did not mention the context, but rather binding errors.</p>
<blockquote>
<p>Test method CollectorWebsite.Tests.Controllers.HomeControllerTest.CardValidation_AlphaInValidAccountID_ErrorMessage threw exception:<br>
System.NullReferenceException: Object reference not set to an instance of an object.<br>
Result StackTrace:   <br>
at CollectorWebsite.Models.CardRecovery.get_Item(String columnName) <br>
   at System.Web.Mvc.DataErrorInfoModelValidatorProvider.DataErrorInfoPropertyModelValidator.Validate(Object container)<br>
   at System.Web.Mvc.ModelValidator.CompositeModelValidator.<Validate>d__5.MoveNext()<br>
   at System.Web.Mvc.DefaultModelBinder.OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)<br>
   at System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model)<br>
   at System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext)<br>
   at System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)<br>
   at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)<br>
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)<br>
   at System.Web.Mvc.Controller.UpdateModel[TModel](TModel model, IValueProvider valueProvider)<br>
   at CollectorWebsite.Controllers.HomeController.CardRecovery(FormCollection form)</p></blockquote>
<p>I got stuck here for a good while…</p>
<p>Then it occurred to me: if the behaviour has changed such that on the web site I see nulls where I expected empty strings, I bet the same is happening in the unit tests. It is trying to iterate through what was a collection of strings and is now, at best, a collection of nulls, or just an empty collection. The bind failed as it could not match the form to the data.</p>
<p>The fix was to make sure that in my unit tests I passed in a FormCollection that had all the expected fields (with suitable empty values, e.g. string.Empty). This meant my unit tests changed from</p>
<blockquote>
<p>[TestMethod, Isolated]<br>
public void ApplicationValidation_CurrentPostcodeNumbersOnly_ErrorMessage()<br>
{<br>
    // Arrange<br>
    HomeController controller = GetHomeController();<br>
    FormCollection form = new FormCollection();<br>
    form.Add("CurrentPostcode", "12345");<br>
<br>
    // Act<br>
    ViewResult result = controller.Application(form) as ViewResult;<br>
<br>
    // Assert<br>
    Assert.IsNotNull(result);<br>
    Assert.AreEqual("Please provide a valid postcode", result.ViewData.ModelState["CurrentPostcode"].Errors[0].ErrorMessage);<br>
}</p></blockquote>
<p>To</p>
<blockquote>
<p>[TestMethod, Isolated]<br>
public void ApplicationValidation_CurrentPostcodeNumbersOnly_ErrorMessage()<br>
{<br>
    // Arrange<br>
    HomeController controller = GetHomeController();<br>
    FormCollection form = GetEmptyApplicationFormCollection();<br>
    form.Set("CurrentPostcode", "12345");<br>
<br>
    // Act<br>
    ViewResult result = controller.Application(form) as ViewResult;<br>
<br>
    // Assert<br>
    Assert.IsNotNull(result);<br>
    Assert.AreEqual("Please provide a valid postcode", result.ViewData.ModelState["CurrentPostcode"].Errors[0].ErrorMessage);<br>
}</p></blockquote>
<p>where the <em>GetEmptyApplicationFormCollection()</em> helper method just creates a <em>FormCollection</em> with all the form’s fields.</p>
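The helper itself is not shown in the post; a minimal sketch of what it might look like is below. The field names here are illustrative only, not the real form’s.

```csharp
// Hypothetical sketch of the helper: builds a FormCollection containing
// every field the form posts, each set to an empty string, so the MVC 3
// model binder sees the same values MVC 1 would have supplied.
private static FormCollection GetEmptyApplicationFormCollection()
{
    // Field names are illustrative; list every field on the real form
    string[] fieldNames = { "AgentNumber", "CurrentPostcode" };

    FormCollection form = new FormCollection();
    foreach (string fieldName in fieldNames)
    {
        form.Add(fieldName, string.Empty);
    }
    return form;
}
```

Tests then use <em>form.Set(…)</em> to overwrite just the field under test, as shown above.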
<p>Once this was done my unit test passed.</p>
<p><strong>Summary</strong></p>
<p>So I now have an MVC3 application that works and passes unit tests. You could argue I should do more work so it does not need these special fixes, but it meets my needs for now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Update on my experiences with Lenovo W520 Drivers and Windows 8</title>
      <link>https://blog.richardfennell.net/posts/update-on-my-experiences-with-lenovo-w520-drivers-and-windows-8/</link>
      <pubDate>Mon, 03 Sep 2012 12:50:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-on-my-experiences-with-lenovo-w520-drivers-and-windows-8/</guid>
      <description>&lt;p&gt;After I installed Windows 8 RTM I still had two devices missing drivers. I have made a little progress&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Base System Device - was the Ricoh PCIe SDXC/MMC Host controller &lt;a href=&#34;http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS014960&#34;&gt;I used this Win 8 beta driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Unknown driver – seems to be the Lenovo Power Management devices. However the &lt;a href=&#34;http://support.lenovo.com/en_US/downloads/detail.page?&amp;amp;DocID=DS030737&#34;&gt;Win 8 Beta driver&lt;/a&gt; fails to install. I had to use the Windows 7 driver, which installed OK and seems to show the right information in the tool tray, but Device Manager still shows an unknown device.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I guess I really need to wait until Lenovo ship their release drivers&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After I installed Windows 8 RTM I still had two devices missing drivers. I have made a little progress</p>
<ul>
<li>Base System Device - was the Ricoh PCIe SDXC/MMC Host controller <a href="http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS014960">I used this Win 8 beta driver</a></li>
<li>Unknown driver – seems to be the Lenovo Power Management devices. However the <a href="http://support.lenovo.com/en_US/downloads/detail.page?&amp;DocID=DS030737">Win 8 Beta driver</a> fails to install. I had to use the Windows 7 driver, which installed OK and seems to show the right information in the tool tray, but Device Manager still shows an unknown device.</li>
</ul>
<p>I guess I really need to wait until Lenovo ship their release drivers</p>
]]></content:encoded>
    </item>
    <item>
      <title>Its events time again</title>
      <link>https://blog.richardfennell.net/posts/its-events-time-again/</link>
      <pubDate>Sun, 02 Sep 2012 18:57:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/its-events-time-again/</guid>
      <description>&lt;p&gt;If you missed yesterday’s DDD10, as I did, then don’t worry. You can come to &lt;a href=&#34;http://developerdeveloperdeveloper.com/north2/&#34;&gt;DDD North in Bradford&lt;/a&gt; at the University Management Centre on the 13th of October. There is another great set of speakers; I was lucky enough to get my session on Unit Testing and Fakes in VS2012 accepted. Hope to see you there.&lt;/p&gt;
&lt;p&gt;Also, my company Black Marble has announced our free autumn and winter events; check them out on our &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events&#34;&gt;events page&lt;/a&gt;. There is a good, varied selection this year, something for everyone.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you missed yesterday’s DDD10, as I did, then don’t worry. You can come to <a href="http://developerdeveloperdeveloper.com/north2/">DDD North in Bradford</a> at the University Management Centre on the 13th of October. There is another great set of speakers; I was lucky enough to get my session on Unit Testing and Fakes in VS2012 accepted. Hope to see you there.</p>
<p>Also, my company Black Marble has announced our free autumn and winter events; check them out on our <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events">events page</a>. There is a good, varied selection this year, something for everyone.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Type ‘InArgument(mtbwa:BuildSettings)’ of property ‘BuildSettings’ errors in TFS 2012 RTM builds</title>
      <link>https://blog.richardfennell.net/posts/type-inargumentmtbwabuildsettings-of-property-buildsettings-errors-in-tfs-2012-rtm-builds/</link>
      <pubDate>Thu, 30 Aug 2012 15:30:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/type-inargumentmtbwabuildsettings-of-property-buildsettings-errors-in-tfs-2012-rtm-builds/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/30/Two-problems-editing-TFS2012-build-workflows-with-the-same-solution.aspx&#34;&gt;posted a while ago&lt;/a&gt; that you saw errors when trying to edit TFS 2012 RC build process templates in VS 2012 RC if the Visual Studio class library project you were using to manage the process template editing was targeting .NET 4.5; it needed to target .NET 4.0. Well, with Visual Studio 2012 RTM this is no longer the case; in fact, it is the other way around.&lt;/p&gt;
&lt;p&gt;I have recently upgraded our TFS 2012 RC –&amp;gt; RTM and I today came to edit one of our build process templates (&lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;amp;referringTitle=Documentation&#34;&gt;using the standard method to edit a process template with custom activities&lt;/a&gt;) and got the following error when I tried to open the XAML process template for editing&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/30/Two-problems-editing-TFS2012-build-workflows-with-the-same-solution.aspx">posted a while ago</a> that you saw errors when trying to edit TFS 2012 RC build process templates in VS 2012 RC if the Visual Studio class library project you were using to manage the process template editing was targeting .NET 4.5; it needed to target .NET 4.0. Well, with Visual Studio 2012 RTM this is no longer the case; in fact, it is the other way around.</p>
<p>I have recently upgraded our TFS 2012 RC –&gt; RTM and I today came to edit one of our build process templates (<a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;referringTitle=Documentation">using the standard method to edit a process template with custom activities</a>) and got the following error when I tried to open the XAML process template for editing</p>
<blockquote>
<p><em>System.Xaml.XamlException: &lsquo;The type ‘InArgument(mtbwa:BuildSettings)’ of property ‘BuildSettings’ could not be resolved.&rsquo; Line number &lsquo;3&rsquo; and line position &lsquo;38&rsquo;.</em></p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_63.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_63.png" title="image"></a></p>
<p>At first I assumed it was my custom activities, so I tried editing the <strong>DefaultTemplate.11.1.xaml</strong> in the same manner, but got the same problem.</p>
<p>Strangely I found that if I had no solution open in Visual Studio then I could just double click on the <strong>DefaultTemplate.11.1.xaml</strong> file in Source Control Explorer and it opened without error. However, if I had a solution open in the same instance of VS2012 that contained a class library project that linked to the same XAML file I got the error. Unloading the project within the solution allowed me to open the file via Source Control Explorer, reloading the project again stopped it loading.</p>
<p>So it all pointed to something about the containing class library project stopping referenced assemblies loading. On checking the project properties I saw that it was targeting .NET 4.0 (as required for the RC), as soon as I changed this to .NET 4.5 it was able to load all the required Team Foundation assemblies and I was able to edit both the default template and my custom build process template.</p>
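<p>For reference, the fix is just the target framework setting in the class library project used to edit the template; an illustrative .csproj fragment (the rest of the project file is unchanged):</p>

```xml
<!-- Illustrative fragment: under VS2012 RTM the class library used to
     edit build process templates must target .NET 4.5, not 4.0 -->
<PropertyGroup>
  <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
</PropertyGroup>
```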
]]></content:encoded>
    </item>
    <item>
      <title>Two bits of good news – DDDNorth and Raspberry PI</title>
      <link>https://blog.richardfennell.net/posts/two-bits-of-good-news-dddnorth-and-raspberry-pi/</link>
      <pubDate>Tue, 28 Aug 2012 14:06:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/two-bits-of-good-news-dddnorth-and-raspberry-pi/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://developerdeveloperdeveloper.com/north2/&#34;&gt;registration has opened for DDDNorth 2 (In Bradford)&lt;/a&gt; where I will be speaking on Unit testing in VS2012&lt;/p&gt;
&lt;p&gt;Also my Raspberry Pi has just arrived – after only 6 months of waiting – but I don’t think I can work it into my DDDNorth session&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://developerdeveloperdeveloper.com/north2/">registration has opened for DDDNorth 2 (In Bradford)</a> where I will be speaking on Unit testing in VS2012</p>
<p>Also my Raspberry Pi has just arrived – after only 6 months of waiting – but I don’t think I can work it into my DDDNorth session</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting Typemock Isolator running within a TFS 2012 build – part 2</title>
      <link>https://blog.richardfennell.net/posts/getting-typemock-isolator-running-within-a-tfs-2012-build-part-2/</link>
      <pubDate>Fri, 24 Aug 2012 15:05:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-typemock-isolator-running-within-a-tfs-2012-build-part-2/</guid>
      <description>&lt;p&gt;I posted &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx&#34;&gt;previously on getting Typemock 7.x running in a TFS 2012 RC build process&lt;/a&gt;. Well, it seems the activities I previously published did not work with the TFS 2012 RTM build; i.e. if you do nothing other than upgrade your TFS server from RC to RTM, a previously working build fails. No attempt was made to run any tests and I got the unhelpful error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TF900546: An unexpected error occurred while running the RunTests activity: &amp;lsquo;Executor process exited.&amp;rsquo;.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx">previously on getting Typemock 7.x running in a TFS 2012 RC build process</a>. Well, it seems the activities I previously published did not work with the TFS 2012 RTM build; i.e. if you do nothing other than upgrade your TFS server from RC to RTM, a previously working build fails. No attempt was made to run any tests and I got the unhelpful error</p>
<blockquote>
<p><em>TF900546: An unexpected error occurred while running the RunTests activity: &lsquo;Executor process exited.&rsquo;.</em></p></blockquote>
<p><strong>Note:</strong> TF900546 seems to be the generic &lsquo;test failed&rsquo; error number. If you see it, you will usually have to look elsewhere for anything helpful.</p>
<p>So I assumed the problem must be some difference in the Team Foundation assemblies I was referencing between the RC and RTM versions, so I rebuilt my activities – all to no effect; I got the same error. So I did some more digging into the code and found a number of issues (why these had not caused a problem before I don’t know):</p>
<p><strong>Target property</strong></p>
<p>If you do not specify a .NET version via the Target property of the TypeMockRegister activity it does not attempt to start interception. As setting this property every time you want to use the activity is a pain, I modified the activity so that if no Target property is passed the value <strong>v4.0.30319</strong> is used, the version of .NET 4.5 as it appears in the C:\Windows\Microsoft.NET\Framework folder.</p>
<p><strong>Note:</strong> Missing off the leading v from the Target value causes the TFS build agent to hang; I have no idea why.</p>
<p>Once this change was made the build ran and it tried to run all my tests, but the ones involving Typemock failed, with the message</p>
<blockquote>
<p><em>Test method BuildProcessValidation.Tests.MSTestTypemockTests.DirtyTrickMockingWithTypemock_Email_is_sent_when_client_order_is_processed threw exception:<br>
System.TypeInitializationException: The type initializer for &lsquo;f5&rsquo; threw an exception. &mdash;&gt; System.TypeInitializationException: The type initializer for &lsquo;TypeMock.InterceptorsWrapper&rsquo; threw an exception. &mdash;&gt; TypeMock.TypeMockException:<br>
*** Typemock Isolator needs to be linked with Coverage Tool to run, to enable do one of the following:<br>
   1. link the Coverage tool through the Typemock Isolator Configuration<br>
   2. run tests via TMockRunner.exe -link<br>
   3. use TypeMockStart tasks for MSBuild or NAnt with Link<br>
For more information consult the documentation (see Code Coverage with Typemock Isolator topic)</em></p></blockquote>
<p>On looking in the build box’s event log I saw the message</p>
<blockquote>
<p><em>.NET Runtime version 4.0.30319.17929 - Loading profiler failed during CoCreateInstance.  Profiler CLSID: &lsquo;{B146457E-9AED-4624-B1E5-968D274416EC}&rsquo;.  HRESULT: 0x8007007e.  Process ID (decimal): 2068.  Message ID: [0x2504].</em></p></blockquote>
<p><strong>AutoDeployment</strong></p>
<p>Basically the issue was that the Typemock interceptor, the profiler, was not being started because Typemock was not installed on the build box. To prove this I manually installed Typemock on the build box and the error went away; all my tests ran. So, happy that my activity basically worked, I removed Typemock from the build box and the problem returned, so I knew I had an autodeployment issue.</p>
<p>On checking the activity code again I found I was not handling the nullable boolean correctly for the AutoDeploy build argument of the type TypemockSettings. As soon as this was fixed and deployed my build leapt into life.</p>
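<p>Taken together, the two fixes amount to the following logic; a Python sketch for illustration only – the real code is a .NET build activity, and <em>resolve_target</em>/<em>should_auto_deploy</em> are invented names, not the actual activity API:</p>

```python
# Illustrative sketch of the two fixes described above; the real
# TypeMockRegister activity is .NET and these function names are invented.

DEFAULT_TARGET = "v4.0.30319"  # .NET 4.5 CLR folder under C:\Windows\Microsoft.NET\Framework

def resolve_target(target):
    # Fix 1: if no Target property was passed, fall back to the default.
    # Note the leading 'v' must be present; without it the build agent hangs.
    return target if target else DEFAULT_TARGET

def should_auto_deploy(auto_deploy):
    # Fix 2: handle the nullable boolean AutoDeploy argument correctly;
    # an unset (None) value must be treated as False, not as an error.
    return auto_deploy is True
```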
<p><strong>In summary</strong></p>
<p>So I am pleased to say I have a working activity again. As I said in my previous post, I see this as a stopgap measure until Typemock release their official version. This set of activities has had minimal testing and I am not sure the undeploy logic is fully working, but as I don’t need this feature I am not worrying about it for now.</p>
<p>Hope you find it useful in its current state.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences upgrading our TFS2012RC to RTM</title>
      <link>https://blog.richardfennell.net/posts/experiences-upgrading-our-tfs2012rc-to-rtm/</link>
      <pubDate>Mon, 20 Aug 2012 12:56:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-upgrading-our-tfs2012rc-to-rtm/</guid>
      <description>&lt;p&gt;We have just completed the upgrade of our TFS2012 server from RC to RTM. All went smoothly, just a few comments worth mentioning&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The install of TFS2012 (after the removal of the RC) required three reboots: two for C++ components and one for .NET 4.5. So if you see reboots, don’t worry too much.&lt;/li&gt;
&lt;li&gt;When running the upgrade wizard we got a verify warning over port 443 already being in use (we had manually configured via IIS Manager for our server to use 8080 and 443). We ignored this warning. However, after the upgrade wizard had completed with no errors, we found that the new web server could not start. Turns out it had been left bound as HTTP on port 443, so it was very confused. We just deleted this binding and re-added HTTP on 8080 and HTTPS on 443 with our wildcard certificate and it was fine. So in hindsight we should have heeded the warning and removed our custom bindings.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So now off to the long job of upgrading build box, test controller and the rest.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have just completed the upgrade of our TFS2012 server from RC to RTM. All went smoothly, just a few comments worth mentioning</p>
<ul>
<li>The install of TFS2012 (after the removal of the RC) required three reboots: two for C++ components and one for .NET 4.5. So if you see reboots, don’t worry too much.</li>
<li>When running the upgrade wizard we got a verify warning over port 443 already being in use (we had manually configured via IIS Manager for our server to use 8080 and 443). We ignored this warning. However, after the upgrade wizard had completed with no errors, we found that the new web server could not start. Turns out it had been left bound as HTTP on port 443, so it was very confused. We just deleted this binding and re-added HTTP on 8080 and HTTPS on 443 with our wildcard certificate and it was fine. So in hindsight we should have heeded the warning and removed our custom bindings.</li>
</ul>
<p>So now off to the long job of upgrading build box, test controller and the rest.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Weekend trip to see family – 1 OS upgrade, 1 network printer installed and 3 machines de-virused</title>
      <link>https://blog.richardfennell.net/posts/weekend-trip-to-see-family-1-os-upgrade-1-network-printer-installed-and-3-machines-de-virused/</link>
      <pubDate>Mon, 20 Aug 2012 11:48:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/weekend-trip-to-see-family-1-os-upgrade-1-network-printer-installed-and-3-machines-de-virused/</guid>
      <description>&lt;p&gt;I have been away over the weekend seeing family, and as anyone who is in IT (or is a medical doctor, I suspect) would expect, I had the standard experience – everyone wanted to show me something they were worried about that turned out to be virus related. This trip I did one operating system upgrade, one network printer installation and de-virused three PCs. So nothing out of the ordinary.&lt;/p&gt;
&lt;p&gt;The one thing I would mention was how useful I found the contents of &lt;a href=&#34;http://blogs.msdn.com/b/deva/archive/2012/06/16/teched-2012-mark-russinovich-s-malware-hunting-with-the-sysinternals-tools.aspx&#34;&gt;Mark Russinovich’s TechEd Session ‘Malware Hunting with the Sysinternals Tools’&lt;/a&gt;. This saved me the complete machine rebuild I had feared for one PC which had got infected with a bit of poor quality &lt;a href=&#34;http://en.wikipedia.org/wiki/Ransomware_%28malware%29&#34;&gt;ransomware&lt;/a&gt; that turned out to be only a splash screen that I could easily spot with &lt;a href=&#34;http://en.wikipedia.org/wiki/Ransomware_%28malware%29&#34;&gt;Autoruns from the Sysinternals Suite&lt;/a&gt;. The video is well worth a watch for all of us in the family IT support game.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been away over the weekend seeing family, and as anyone who is in IT (or is a medical doctor, I suspect) would expect, I had the standard experience – everyone wanted to show me something they were worried about that turned out to be virus related. This trip I did one operating system upgrade, one network printer installation and de-virused three PCs. So nothing out of the ordinary.</p>
<p>The one thing I would mention was how useful I found the contents of <a href="http://blogs.msdn.com/b/deva/archive/2012/06/16/teched-2012-mark-russinovich-s-malware-hunting-with-the-sysinternals-tools.aspx">Mark Russinovich’s TechEd Session ‘Malware Hunting with the Sysinternals Tools’</a>. This saved me the complete machine rebuild I had feared for one PC which had got infected with a bit of poor quality <a href="http://en.wikipedia.org/wiki/Ransomware_%28malware%29">ransomware</a> that turned out to be only a splash screen that I could easily spot with <a href="http://en.wikipedia.org/wiki/Ransomware_%28malware%29">Autoruns from the Sysinternals Suite</a>. The video is well worth a watch for all of us in the family IT support game.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A move to Windows 8 RTM (and VS2012 RTM), not too painful</title>
      <link>https://blog.richardfennell.net/posts/a-move-to-windows-8-rtm-and-vs2012-rtm-not-too-painful/</link>
      <pubDate>Thu, 16 Aug 2012 16:45:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-move-to-windows-8-rtm-and-vs2012-rtm-not-too-painful/</guid>
      <description>&lt;p&gt;First I thought I could do an in-place upgrade of my Windows 8 RC PC; turns out you can’t (not sure if it would have been wise anyway), so it was format-disk time.&lt;/p&gt;
&lt;p&gt;I had hoped for a seamless install of Windows 8, but it hung after detecting devices. It seems it was the ‘old problem’ down to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/06/01/Windows-8-RP-and-my-Lenovo-W520.aspx&#34;&gt;Lenovo/Nvidia Optimus graphics drivers&lt;/a&gt; issues. So I checked my BIOS settings, which had Nvidia Optimus mode disabled (the only way I could get the RC to install), changed it to Optimus mode enabled, and the install worked without an issue. As a laptop it then worked fine, including with a second external monitor. However, I did have to set Optimus back to disabled and run in discrete video mode if I wanted to use a projector. It seems Optimus mode gets confused by the Benq projector we have in the office: it will allow you to extend your desktop but not duplicate it. As soon as you switch back to discrete graphics mode all is OK (though you do lose the ability to run two external monitors)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>First I thought I could do an in-place upgrade of my Windows 8 RC PC; turns out you can’t (not sure if it would have been wise anyway), so it was format-disk time.</p>
<p>I had hoped for a seamless install of Windows 8, but it hung after detecting devices. It seems it was the ‘old problem’ down to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/06/01/Windows-8-RP-and-my-Lenovo-W520.aspx">Lenovo/Nvidia Optimus graphics drivers</a> issues. So I checked my BIOS settings, which had Nvidia Optimus mode disabled (the only way I could get the RC to install), changed it to Optimus mode enabled, and the install worked without an issue. As a laptop it then worked fine, including with a second external monitor. However, I did have to set Optimus back to disabled and run in discrete video mode if I wanted to use a projector. It seems Optimus mode gets confused by the Benq projector we have in the office: it will allow you to extend your desktop but not duplicate it. As soon as you switch back to discrete graphics mode all is OK (though you do lose the ability to run two external monitors)</p>
<p>On completing the installation I ran a Windows update, which found an updated Lenovo display driver (it had no effect on the Optimus issue) and a Conexant audio driver (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/06/07/Drivers-needed-on-my-Lenovo-W520-with-Windows-8-RP.aspx">one I had to manually update on the RC to get Lync 2013 working</a>).</p>
<p>However, on checking the Device Manager I was still missing drivers for a couple of devices</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_62.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_62.png" title="image"></a></p>
<p>I think they are Power Management and Intel AMT, but the <a href="http://support.lenovo.com/en_US/downloads/detail.page?LegacyDocID=WIN8-BETA">beta drivers from Lenovo</a> don’t seem to work so I will need to keep looking.</p>
<p>So now to see how it runs; first impressions are good, it seems quick.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving podcast subscriptions with Zune</title>
      <link>https://blog.richardfennell.net/posts/moving-podcast-subscriptions-with-zune/</link>
      <pubDate>Thu, 16 Aug 2012 16:29:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-podcast-subscriptions-with-zune/</guid>
      <description>&lt;p&gt;If like me you listen to many podcasts, then swapping the PC your Phone7 syncs with to collect the podcasts is a real pain. The problem is that, as far as I can see, Zune has no podcast subscription export/import, so you are left with a lot of copy typing to re-enter them.&lt;/p&gt;
&lt;p&gt;Whilst rebuilding my PC with Windows 8 today I have at least found a work around&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;On your old PC open your ‘C:\Users\[user]\My Podcasts’ folder (shown in Windows Explorer as the ‘Podcast’ folder).&lt;/li&gt;
&lt;li&gt;You will see a folder for each podcast you are subscribed to.&lt;/li&gt;
&lt;li&gt;Copy the whole folder to the same location on your new PC (I did it via a USB drive as I was reformatting the disk on the same PC).&lt;/li&gt;
&lt;li&gt;Install Zune on the new PC.&lt;/li&gt;
&lt;li&gt;Open Zune and look in Collection&amp;gt;Podcasts; you should see all your podcasts – but you are not subscribed yet.&lt;/li&gt;
&lt;li&gt;In Zune, highlight and select all the podcasts.&lt;/li&gt;
&lt;li&gt;Right click and you should see a Subscribe option; select it.&lt;/li&gt;
&lt;li&gt;Zune now sorts itself out, re-subscribing and checking for new programmes.&lt;/li&gt;
&lt;li&gt;It gets a bit confused over what you have watched so might pull programmes down again. Also you might want to alter the subscription settings for specific podcasts as they will default back to just 3 programmes.&lt;/li&gt;
&lt;li&gt;When you are happy with your settings just drag the podcasts onto your newly resync’d mobile device to finish the job.&lt;/li&gt;
&lt;li&gt;You might need to check the podcasts already on the device, as it seems Zune does not remove ones previously put there (again it seems unsure of what you have listened to).&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Not perfect, but better than having to re-enter a load of site URLs&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If like me you listen to many podcasts, then swapping the PC your Phone7 syncs with to collect the podcasts is a real pain. The problem is that, as far as I can see, Zune has no podcast subscription export/import, so you are left with a lot of copy typing to re-enter them.</p>
<p>Whilst rebuilding my PC with Windows 8 today I have at least found a work around</p>
<ol>
<li>On your old PC open your ‘C:\Users\[user]\My Podcasts’ folder (shown in Windows Explorer as the ‘Podcast’ folder).</li>
<li>You will see a folder for each podcast you are subscribed to.</li>
<li>Copy the whole folder to the same location on your new PC (I did it via a USB drive as I was reformatting the disk on the same PC).</li>
<li>Install Zune on the new PC.</li>
<li>Open Zune and look in Collection&gt;Podcasts; you should see all your podcasts – but you are not subscribed yet.</li>
<li>In Zune, highlight and select all the podcasts.</li>
<li>Right click and you should see a Subscribe option; select it.</li>
<li>Zune now sorts itself out, re-subscribing and checking for new programmes.</li>
<li>It gets a bit confused over what you have watched so might pull programmes down again. Also you might want to alter the subscription settings for specific podcasts as they will default back to just 3 programmes.</li>
<li>When you are happy with your settings just drag the podcasts onto your newly resync’d mobile device to finish the job.</li>
<li>You might need to check the podcasts already on the device, as it seems Zune does not remove ones previously put there (again it seems unsure of what you have listened to).</li>
</ol>
<p>Not perfect, but better than having to re-enter a load of site URLs</p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2012 RTM and the ALM Rangers SIM Ship best practice guidance</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2012-rtm-and-the-alm-rangers-sim-ship-best-practice-guidance/</link>
      <pubDate>Thu, 16 Aug 2012 10:07:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2012-rtm-and-the-alm-rangers-sim-ship-best-practice-guidance/</guid>
      <description>&lt;p&gt;Overnight &lt;a href=&#34;http://msdn.microsoft.com/en-US/&#34;&gt;Visual Studio and TFS 2012 became available on MSDN&lt;/a&gt; for download; I am really pleased to say that the &lt;a href=&#34;http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/08/15/cowabunga-visual-studio-alm-rangers-readiness-gig-goes-rtm.aspx&#34;&gt;ALM Rangers also SIM shipped all our 2012 guidance at the same time&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I have been working on two of the Ranger projects for the past year and have to say I have learnt a lot working with such a great crowd of Microsoft and external TFS experts. Some of what I learnt was about the 2012 release, it is true, but there was plenty on working with a globally distributed team with a whole host of logistical and time constraints.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Overnight <a href="http://msdn.microsoft.com/en-US/">Visual Studio and TFS 2012 became available on MSDN</a> for download; I am really pleased to say that the <a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2012/08/15/cowabunga-visual-studio-alm-rangers-readiness-gig-goes-rtm.aspx">ALM Rangers also SIM shipped all our 2012 guidance at the same time</a>.</p>
<p>I have been working on two of the Ranger projects for the past year and have to say I have learnt a lot working with such a great crowd of Microsoft and external TFS experts. Some of what I learnt was about the 2012 release, it is true, but there was plenty on working with a globally distributed team with a whole host of logistical and time constraints.</p>
<p>It has been a blast, but there is still more to do. Keep an eye open for supplementary ranger guidance releases in the next few months, but for now why not download and read the <a href="http://msdn.microsoft.com/en-us/vstudio/ee358787">released guidance</a> and do the hands on labs. I really do think they will answer many questions you have about TFS.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF900546 error on a TFS 2012 build</title>
      <link>https://blog.richardfennell.net/posts/tf900546-error-on-a-tfs-2012-build/</link>
      <pubDate>Thu, 09 Aug 2012 12:27:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf900546-error-on-a-tfs-2012-build/</guid>
      <description>&lt;p&gt;Whilst moving over to our new TFS 2012 installation I got the following error when a build tried to run tests&lt;/p&gt;
&lt;p&gt;&lt;em&gt;TF900546: An unexpected error occurred while running the RunTests activity: &amp;lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&amp;rsquo;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This was a new one on me, and nothing of much use on the web other than a &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/tfs/microsoft.teamfoundation.build.workflow.activities.activitiesresources.unexpectedagiletestplatformexception%28v=vs.110%29.aspx&#34;&gt;basic MSDN page&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Turns out the immediate fix is to just restart the build controller. Initially I did this after switching to the default build process template and setting it to NOT load any custom activities, but it seems a simple restart would have been enough, as once I re-enabled all the custom activities it still worked.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst moving over to our new TFS 2012 installation I got the following error when a build tried to run tests</p>
<p><em>TF900546: An unexpected error occurred while running the RunTests activity: &lsquo;Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.&rsquo;.</em></p>
<p>This was a new one on me, and nothing of much use on the web other than a <a href="http://msdn.microsoft.com/en-us/library/tfs/microsoft.teamfoundation.build.workflow.activities.activitiesresources.unexpectedagiletestplatformexception%28v=vs.110%29.aspx">basic MSDN page</a>.</p>
<p>Turns out the immediate fix is to just restart the build controller. Initially I did this after switching to the default build process template and setting it to NOT load any custom activities, but it seems a simple restart would have been enough, as once I re-enabled all the custom activities it still worked.</p>
<p>As to the root cause I have no idea; it is one to keep an eye on, especially as I am currently on the RC. Let’s see what the RTM build does.</p>
]]></content:encoded>
    </item>
    <item>
      <title>At last my Nokia Lumia 800 gets its firmware upgrade to allow tethering</title>
      <link>https://blog.richardfennell.net/posts/at-last-my-nokia-lumia-800-gets-its-firmware-upgrade-to-allow-tethering/</link>
      <pubDate>Tue, 07 Aug 2012 14:14:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/at-last-my-nokia-lumia-800-gets-its-firmware-upgrade-to-allow-tethering/</guid>
      <description>&lt;p&gt;At last my Nokia Lumia 800 gets its firmware upgrade to allow tethering. The &lt;a href=&#34;http://www.nokia.com/gb-en/support/product/lumia800/&#34;&gt;8773&lt;/a&gt; update seems to be made up of three updates, two operating system ones (&lt;a href=&#34;http://www.engadget.com/2011/09/28/psa-force-windows-phone-7-5-mango-to-update-right-now/&#34;&gt;which I managed to force down in the usual way&lt;/a&gt;), but this Nokia firmware one has taken weeks to get to me, forcing did not help. I had to wait.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://discussions.nokia.com/t5/Nokia-with-Windows-Phone/Got-8773-update-in-uk-but-no-firmware-update/td-p/1489210&#34;&gt;I don’t think I am alone&lt;/a&gt; in not being too impressed with the update process. The throttling/delaying update process is probably Ok for the man in the street, who just wants a working phone, but there should be an easier way to get updates if you want/need them ASAP for development purposes or are trying to run consistent versions for all phones in an organisation.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At last my Nokia Lumia 800 gets its firmware upgrade to allow tethering. The <a href="http://www.nokia.com/gb-en/support/product/lumia800/">8773</a> update seems to be made up of three updates, two operating system ones (<a href="http://www.engadget.com/2011/09/28/psa-force-windows-phone-7-5-mango-to-update-right-now/">which I managed to force down in the usual way</a>), but this Nokia firmware one has taken weeks to get to me, forcing did not help. I had to wait.</p>
<p><a href="http://discussions.nokia.com/t5/Nokia-with-Windows-Phone/Got-8773-update-in-uk-but-no-firmware-update/td-p/1489210">I don’t think I am alone</a> in not being too impressed with the update process. The throttling/delaying update process is probably Ok for the man in the street, who just wants a working phone, but there should be an easier way to get updates if you want/need them ASAP for development purposes or are trying to run consistent versions for all phones in an organisation.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using an internal Nuget server to manage the Typemock assembly references.</title>
      <link>https://blog.richardfennell.net/posts/using-an-internal-nuget-server-to-manage-the-typemock-assembly-references/</link>
      <pubDate>Mon, 06 Aug 2012 10:38:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-an-internal-nuget-server-to-manage-the-typemock-assembly-references/</guid>
      <description>&lt;p&gt;In &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx&#34;&gt;my last post&lt;/a&gt; I discussed the process I needed to go through to get &lt;a href=&#34;http://www.typemock.com&#34;&gt;Typemock Isolator&lt;/a&gt; running under TFS 2012. In this process I used the Auto Deploy feature of Isolator. However, this raised the question of how to manage the references within projects. You cannot just assume the Typemock assemblies are in the GAC; they are not on a build box using auto deploy. You could get all projects to reference the auto deployment location in source control. However, if you use build process templates across projects, it might be that you do not want production code directly referencing build tools in the build process area.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx">my last post</a> I discussed the process I needed to go through to get <a href="http://www.typemock.com">Typemock Isolator</a> running under TFS 2012. In this process I used the Auto Deploy feature of Isolator. However, this raised the question of how to manage the references within projects. You cannot just assume the Typemock assemblies are in the GAC; they are not on a build box using auto deploy. You could get all projects to reference the auto deployment location in source control. However, if you use build process templates across projects, it might be that you do not want production code directly referencing build tools in the build process area.</p>
<p>For most issues of this nature we now use <a href="http://nuget.codeplex.com/">Nuget</a>. At Black Marble we make use of the public Nuget repository for tools such as XUnit, SpecFlow etc. but we also have an internal Nuget repository for our own cross project code libraries. This includes licensing modules, utility and data loggers etc.</p>
<p>It struck me after writing the last post that the best way to manage my Typemock references was with a Nuget package – obviously not a public one, that would be for Typemock to produce. So I created one to place on our internal Nuget server that just contained the two DLLs I needed to reference (I could include more, but we usually only need the core and Arrange Act Assert assemblies).</p>
<p><em>[Update 6th Aug PM] – After playing with this today seems I need the following in my Nuget package</em></p>
<blockquote>
<p><em>Lib<br>
    Net20<br>
         Configuration.dll<br>
         Typemock.ArrangeActAssert.dll<br>
         TypeMock.dll</em></p></blockquote>
<p><em>If you miss out the Configuration.dll it all works locally on a developer’s PC, but you get a ‘cannot load assembly’ error when trying to run a TFS build with Typemock auto deployment. Can’t see why, obviously, but adding the assembly to the package is a quick fix.</em></p>
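<p>A package with that layout can be described by a minimal .nuspec along these lines; the id, version and author here are illustrative, not our actual internal package:</p>

```xml
<?xml version="1.0"?>
<!-- Illustrative .nuspec matching the Lib\Net20 layout above -->
<package>
  <metadata>
    <id>Internal.Typemock.References</id>
    <version>7.0.0</version>
    <authors>Black Marble</authors>
    <description>Typemock Isolator reference assemblies (internal use only).</description>
  </metadata>
  <files>
    <file src="Lib\Net20\Configuration.dll" target="lib\Net20" />
    <file src="Lib\Net20\Typemock.ArrangeActAssert.dll" target="lib\Net20" />
    <file src="Lib\Net20\TypeMock.dll" target="lib\Net20" />
  </files>
</package>
```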
<p><a href="/wp-content/uploads/sites/2/historic/image_61.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_61.png" title="image"></a></p>
<p>IT IS IMPORTANT TO NOTE that using a Nuget package here in no way alters the Typemock licensing. Your developers still each need a license; they also need to install Typemock Isolator to be able to run the tests, and your build box needs to use auto deployment. All using Nuget means is that you are now managing references for Typemock in the same way as for any other Nuget-managed set of assemblies. You are internally consistent, which I like.</p>
<p>So in theory as new versions of Typemock are released I can update my internal Nuget package allowing projects to use the version they require. It will be interesting to see how well this works in practice.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting Typemock Isolator running within a TFS 2012 build</title>
      <link>https://blog.richardfennell.net/posts/getting-typemock-isolator-running-within-a-tfs-2012-build/</link>
      <pubDate>Sat, 04 Aug 2012 12:01:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-typemock-isolator-running-within-a-tfs-2012-build/</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Update 23rd Aug 2012:&lt;/strong&gt; This blog post was produced testing against the 2012 RC; it seems it does not work against the 2012 RTM release. I am seeing the error in my build logs&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;TF900546: An unexpected error occurred while running the RunTests activity: &amp;lsquo;Executor process exited.&amp;rsquo;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;I am looking into this, expect to see a new blog post soon&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Update 24th Aug 2012&lt;/strong&gt;: I have fixed the issues with TFS 2012 RTM; the links to the zip containing a working version of the activity in this post should now work. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/24/Getting-Typemock-Isolator-running-within-a-TFS-2012-build-part-2.aspx&#34;&gt;For more details see my follow up part 2 post&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em><strong>Update 23rd Aug 2012:</strong> This blog post was produced testing against the 2012 RC; it seems it does not work against the 2012 RTM release. I am seeing the error in my build logs</em></p>
<p><em>TF900546: An unexpected error occurred while running the RunTests activity: &lsquo;Executor process exited.&rsquo;</em></p>
<p><em>I am looking into this, expect to see a new blog post soon</em></p>
<p><em><strong>Update 24th Aug 2012</strong>: I have fixed the issues with TFS 2012 RTM; the links to the zip containing a working version of the activity in this post should now work. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/24/Getting-Typemock-Isolator-running-within-a-TFS-2012-build-part-2.aspx">For more details see my follow up part 2 post</a></em></p>
<p>I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx">posted in the past about getting Typemock Isolator to function within the TFS build process</a>. In <a href="http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/MSBuild.html">TFS 2008 it was easy</a>: you just ran a couple of MSBUILD tasks that started/stopped the Typemock Isolator interception process (the bit that does the magic other mocking frameworks cannot do). However, with TFS 2010’s move to a Windows Workflow based build model it became more difficult. This was due to the parallel processing nature of the 2010 build process; running a single task to enable interception cannot be guaranteed to occur in the correct thread (or maybe even on the correct build agent). So <a href="http://www.typemock.com/files/Addons/VS2010%20TypemockBuildActivity%201.0.0.0.zip">I wrote a wrapper build activity for MSTest to get around this problem</a>. However, with the release of <a href="http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/TFSBuild.html">Typemock Isolator 6.2 direct support for TFS 2010 was added</a> and these TFS build activities have been refined in later releases. In the current beta (7.0.8) you get a pre-created TFS build process template to get you going and some great auto deploy features, but more on that later.</p>
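<p>For context, the TFS 2008 era MSBuild integration was roughly of this shape. This is a sketch only; the exact task and property names should be checked against the Typemock MSBuild documentation linked above:</p>

```xml
<!-- Sketch of the TFS 2008 / plain MSBuild style of Typemock integration.
     Task names and the import path are illustrative; check the Typemock docs. -->
<Import Project="$(TypeMockLocation)\TypeMock.MSBuild.Tasks" />
<Target Name="TestWithTypemock">
  <TypeMockStart />  <!-- enable interception before the tests run -->
  <Exec Command="mstest /testcontainer:MyTests.dll" />
  <TypeMockStop />   <!-- disable interception afterwards -->
</Target>
```

<p>The point is that start and stop are simple sequential tasks, which is exactly what the parallel 2010 workflow model broke.</p>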
<p>The problem was I wanted to put Isolator based tests within a TFS 2012 build process. I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/06/23/Why-Typemock-Isolator-does-not-work-on-TFS-2012-Build-and-what-you-can-do-about-it.aspx">posted before about my initial thoughts on the problem</a>. The main problem is that TFS build activities have to be built against the correct version of the TFS API assemblies (this is the reason the <a href="http://tfsbuildextensions.codeplex.com/">community custom activities</a> have two sets of DLLs in the release ZIP file). So out the box you can’t use the <strong>Typemock.TFS2010.DLL</strong> with TFS 2012 as it is built against the 2010 API.</p>
<p>Also you cannot just use the Typemock provided sample build process template. This is built against 2010 too, so is full of 2010 activities which all fail.</p>
<p><strong>What I tried that did not work (so don’t waste your time)</strong></p>
<p>So I took a copy of the default TFS 2012 build process template and followed the process to add the <strong>Typemock.TFS2010.DLL</strong> containing the Typemock activities to the Visual Studio 2012 toolbox (<a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;referringTitle=Documentation">the community activity documentation provides a good overview of this strangely complex process</a>; also see the <a href="http://vsarbuildguide.codeplex.com/">ALM Rangers guidance</a>). I then added the <strong>TypemockRegister</strong> and <strong>TypemockStart</strong> activities at the start of the testing block. For initial tests I did not bother adding the <strong>TypemockStop</strong> activity.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_56.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_56.png" title="image"></a></p>
<p>I then made sure that</p>
<ul>
<li>Typemock was installed on the build agent PC</li>
<li>The <strong>Typemock.TFS2010.dll</strong> was in the correct <strong>CustomActivities</strong> folder in source control</li>
<li>The build controller was set to load activities from the <strong>CustomActivities</strong> folder.</li>
</ul>
<p>However, when I tried to queue this build I got an error </p>
<blockquote>
<p>Exception Message: Object reference not set to an instance of an object. (type NullReferenceException)<br>
Exception Stack Trace:    at TypeMock.CLI.Common.TypeMockRegisterInfo.Execute()</p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_57.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_57.png" title="image"></a> </p>
<p>The issue was that though Typemock was installed, the required DLLs could not be found. Checking in a bit more detail (by running the build with the diagnostic level of logging and using <a href="http://msdn.microsoft.com/en-us/library/e74a18c4.aspx">Fuslogvw</a>) I saw it was, as expected, trying to load the wrong versions of the DLLs. So the first thing I tried was binding redirection (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2009/05/29/addressing-binding-issues-with-with-ivonna-2-0-0-using-lt-dependentassembly-gt-in-web-config.aspx">a technique I had used before with a similar Typemock issue</a>). This in effect told the Typemock activity to use the 2012 DLLs when it asks for the 2010 ones. This is done by using an XML config file <strong>(Typemock.TFS2010.DLL.config)</strong> in the same folder as the DLL file.</p>
<blockquote>
<configuration>  
   <runtime>  
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">  
       <dependentAssembly>  
         <assemblyIdentity name="Microsoft.TeamFoundation.Build.Workflow"  
                           publicKeyToken="b03f5f7f11d50a3a"  
                           culture="neutral" />  
         <bindingRedirect oldVersion="10.0.0.0"  
                          newVersion="11.0.0.0"/>  
       </dependentAssembly>  
       <dependentAssembly>  
         <assemblyIdentity name="Microsoft.TeamFoundation.Build.Client"  
                           publicKeyToken="b03f5f7f11d50a3a"  
                           culture="neutral" />  
         <bindingRedirect oldVersion="10.0.0.0"  
                          newVersion="11.0.0.0"/>  
       </dependentAssembly>  
          <publisherPolicy apply="no" />  
       </assemblyBinding>  
   </runtime>  
</configuration></blockquote>
<p>I first tried to add this file to the <strong>CustomActivities</strong> source control folder, where the custom activities are loaded from by the build agent, but that did not work. I could only get it to work if I put both the DLL and the config files in the <strong>C:\Program Files\Microsoft Team Foundation Server 11.0\Tools</strong> folder on the build agent. This is not a way I like to work; it is too messy having to fiddle with the build agent file system.</p>
<p>Once this setting was made I tried a build again and got the build process to load, but the <strong>TypemockRegister</strong> activity failed as the <strong>Typemock</strong> settings argument was not set. Strangely, Typemock have chosen to pass in their parameters as a complex type (of the type <strong>TypemockSettings</strong>) as opposed to four strings. Also, you would expect this argument to be passed directly into their custom activities by binding activity properties to argument values, but this is not how it is done: the Typemock activities know to look directly for an argument called <strong>Typemock</strong>. This does make adding the activities easier, but it is not obvious if you are not expecting it. So I added this argument to the build definition in Visual Studio 2012 and checked it in, but when I tried to set the argument value for a specific build it gave the error that the DLL containing the type <strong>Typemock.TFS2010.TypemockSettings</strong> could not be loaded; again the TFS 2010/2012 API issue, this time within Visual Studio 2012</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002_2.jpg"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002_thumb_2.jpg" title="clip_image002"></a></p>
<p>At this point I gave up on binding redirection, I had wasted a lot more time than this post makes it sound. So I removed all the work I had previously done and thought again.</p>
<p><strong>What did work</strong></p>
<p>I decided that the only sensible option was to recreate the functionality of the Typemock activity against the 2012 API. So I used <a href="http://www.telerik.com/products/decompiler.aspx">Telerik JustDecompile</a> to open up the <strong>Typemock.Tfs2010.dll</strong> assembly and had a look inside. In Visual Studio 2012 I then created a new C# class library project called <strong>Typemock.BM.TFS2012</strong> targeting .NET 4. I then basically cut and pasted the classes read from JustDecompile into classes of the same name in the new project. I then added references to the TFS 2012 API assemblies and any other assemblies needed and compiled the project. The one class I had problems with was <strong>TypemockStart</strong>, specifically the unpacking of the properties in the <strong>InternalExecute</strong> method. The code reflected by JustDecompile was full of what looked to be duplicated array copying which did not compile, so I simplified this to map the properties to the right names.</p>
<p>I now had a TFS 2012 custom build activity. I took this new activity and put it in the <strong>CustomActivities</strong> folder. Next I took an unedited version of the default 2012 build process template and added the new <strong>TypemockRegister</strong>, <strong>TypemockStart</strong> (at the start of the test block) and <strong>TypemockStop</strong> (at the end of the test block) activities, as well as a Typemock argument (of <strong>TypemockSettings</strong> type). I checked this new template into TFS, and then created a build setting the Typemock argument values.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_58.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_58.png" title="image"></a></p>
<p>Now at this point it is worth mentioning the nice AutoDeploy feature. This allows you to use Typemock without having it installed on the build agent, thus making build agent management easier. You copy the <strong>AutoDeploy</strong> folder from the Typemock installation folder into source control (though a rename might be sensible so you remember it is for Typemock auto deployment and not anything else). You can then set the argument properties</p>
<ul>
<li>The location of the auto deployment folder in source control</li>
<li>A switch to enable auto deployment</li>
<li>Your Typemock license settings.</li>
</ul>
<p>By using the auto deployment feature I was able to uninstall Typemock on the build agent.</p>
<p>So I tried a build using these settings; all the build activities loaded OK and the <strong>TypemockSettings</strong> argument was read, but my project compile failed. As I had uninstalled Typemock on the build agent, all the references to Typemock assemblies in the GAC failed. These references were fine on a development PC, which had Typemock installed, but not on the build agent, which did not.</p>
<p>So I needed to point the references in my project to another location. Typemock have thought of this too and provide a tool to remap the references, which you can find on the Typemock menu</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_59.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_59.png" title="image"></a></p>
<p>You can use this tool, or do it manually.</p>
<p>You could re-point the references to the same location you used for the AutoDeploy feature. However, I prefer to keep my project references separate from my infrastructure (build activities etc.) as I use the same build templates across projects. For our projects we arrange source control so we have a structure in the following general form (ignoring branches for simplicity)</p>
<blockquote>
<p>/$<br>
A team project<br>
      BuildProcessTemplate<br>
             CustomActivities<br>
             AutoDeploy<br>
      MyProject-1<br>
             Src <br>
                   Solution1A.sln<br>
             Lib <br>
                   [Nuget packages]<br>
                   AutoDeploy<br>
                   other assemblies<br>
      MyProject-2<br>
             Src <br>
                   Solution2a.sln<br>
             Lib <br>
                   [Nuget packages]<br>
                   AutoDeploy<br>
                   other assemblies</p></blockquote>
<p>I make sure we put all referenced assemblies in the Lib folder, including those from Nuget, by using a nuget.config file in the Src folder alongside the SLN file e.g.</p>
<blockquote>
<settings>  
  <repositoryPath>..\Lib</repositoryPath>  
</settings></blockquote>
<p>This structure might not be to your taste, but I like it as it means all projects are independent, and so is the build process. The downside is that you have to manage the references for each project and build separately, but I see this as good practice. You probably don’t want to share references and Nuget packages between separate projects/solutions.</p>
<p>So now we have a 2012 build process that can start Typemock Isolator, and a sample project that contains Typemock based tests, some using MSTest and some using XUnit (remember Visual Studio 2012 supports multiple unit testing frameworks, not just MSTest, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/27/Unit-testing-in-VS11Beta-and-getting-your-tests-to-run-on-the-new-TFSPreview-build-service.aspx">see here on how to set this up for TFS build</a>). When the build is run I can see all my unit tests pass, so Typemock Isolator must be starting correctly.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_60.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_60.png" title="image"></a></p>
<p>So for me this is a reasonable work around until Typemock ship a TFS 2012 specific version.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Audio problem on Windows 8 RP and Lenovo W520 with Lync 2013</title>
      <link>https://blog.richardfennell.net/posts/audio-problem-on-windows-8-rp-and-lenovo-w520-with-lync-2013/</link>
      <pubDate>Tue, 31 Jul 2012 09:33:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/audio-problem-on-windows-8-rp-and-lenovo-w520-with-lync-2013/</guid>
      <description>&lt;p&gt;I have been really pleased with Windows 8 RP on my Lenovo W520, I have had no major problems. I have seen the issues with slow start-up of networking after a sleep as others have seen, but nothing else.&lt;/p&gt;
&lt;p&gt;However today I tried to do a Lync audio call with Lync 2013 Beta and found I had no audio. Up to now I had just used the drivers Windows 8 installed which had seemed OK. It turns out I had to install the &lt;a href=&#34;http://download.lenovo.com/ibmdl/pub/pc/pccbbs/mobiles/8aad11ww.exe&#34;&gt;Conexant Audio Software&lt;/a&gt;  8.32.23.5 from the Lenovo site. Once I did this and Lync was restarted the audio leapt into life. As I remember I had a similar issue with Windows 7 and getting the audio to work correctly on my Lenovo base station.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been really pleased with Windows 8 RP on my Lenovo W520, I have had no major problems. I have seen the issues with slow start-up of networking after a sleep as others have seen, but nothing else.</p>
<p>However today I tried to do a Lync audio call with Lync 2013 Beta and found I had no audio. Up to now I had just used the drivers Windows 8 installed which had seemed OK. It turns out I had to install the <a href="http://download.lenovo.com/ibmdl/pub/pc/pccbbs/mobiles/8aad11ww.exe">Conexant Audio Software</a>  8.32.23.5 from the Lenovo site. Once I did this and Lync was restarted the audio leapt into life. As I remember I had a similar issue with Windows 7 and getting the audio to work correctly on my Lenovo base station.</p>
<p>Top tip: use the up to date drivers</p>
]]></content:encoded>
    </item>
    <item>
      <title>Two problems editing TFS2012 build workflows with the same solution</title>
      <link>https://blog.richardfennell.net/posts/two-problems-editing-tfs2012-build-workflows-with-the-same-solution/</link>
      <pubDate>Mon, 30 Jul 2012 20:27:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/two-problems-editing-tfs2012-build-workflows-with-the-same-solution/</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Updated 30th Aug 2012:&lt;/strong&gt; This post is specific to TFS/VS 2012 RC - &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/30/Type-InArgument%28mtbwaBuildSettings%29-of-property-BuildSettings-errors-in-TFS-2012-RTM-builds.aspx&#34;&gt;for details on the RTM see this updated post&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Whilst moving over to our new TFS2012 system I have been editing build templates, pulling the best bits from the selection of templates we used in 2010 into one master build process to be used for most future projects. Doing this I have hit a couple of problems, turns out the cure is the same for both&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em><strong>Updated 30th Aug 2012:</strong> This post is specific to TFS/VS 2012 RC - <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/30/Type-InArgument%28mtbwaBuildSettings%29-of-property-BuildSettings-errors-in-TFS-2012-RTM-builds.aspx">for details on the RTM see this updated post</a></em></p>
<p>Whilst moving over to our new TFS2012 system I have been editing build templates, pulling the best bits from the selection of templates we used in 2010 into one master build process to be used for most future projects. Doing this I have hit a couple of problems, turns out the cure is the same for both</p>
<p><strong>Problem 1 : When adding custom activities to the toolbox Visual Studio crashes</strong></p>
<p><a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;referringTitle=Documentation">See the community activities documentation for the process to add items to the toolbox</a>; when you get to the step to browse for the custom assembly you get a crash.</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002_1.jpg"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002_thumb_1.jpg" title="clip_image002"></a></p>
<p><strong>Problem 2: When editing a process template in any way the process is corrupted and the build fails</strong></p>
<p>When the build runs you get the error (amongst others)</p>
<p><em>The build process failed validation. Details:<br>
Validation Error: The private implementation of activity &lsquo;1: DynamicActivity&rsquo; has the following validation error:   Compiler error(s) encountered processing expression &ldquo;BuildDetail.BuildNumber&rdquo;.<br>
Type &lsquo;IBuildDetail&rsquo; is not defined.</em></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_55.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_55.png" title="image"></a></p>
<p><strong>The Solution</strong></p>
<p>Turns out the issue that caused both these problems was that the Visual Studio class library project I was using to host the XAML workflow for editing was targeting .NET 4.5, the default for VS2012. I changed the project to target .NET 4.0, rolled the XAML file back to an unedited version, reapplied my changes and all was OK.</p>
<p>Yes I know it is strange, as you never build the containing project, but the targeted .NET version is passed around VS for building lists and the like, hence the problem.</p>
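<p>For reference, the change corresponds to the <strong>TargetFrameworkVersion</strong> property in the class library’s project file, which you can also edit by hand:</p>

```xml
<!-- In the .csproj that hosts the XAML build process templates -->
<PropertyGroup>
  <!-- was v4.5, the VS2012 default for new projects -->
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
</PropertyGroup>
```

<p>After editing the project file, reload the project in Visual Studio before reopening the workflow.</p>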
]]></content:encoded>
    </item>
    <item>
      <title>SUR-40 (what used to be a Surface) event at Leeds Sharp User group</title>
      <link>https://blog.richardfennell.net/posts/sur-40-what-used-to-be-a-surface-event-at-leeds-sharp-user-group/</link>
      <pubDate>Mon, 30 Jul 2012 11:19:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sur-40-what-used-to-be-a-surface-event-at-leeds-sharp-user-group/</guid>
      <description>&lt;p&gt;I am pleased to say that Black Marble will be doing a session at the &lt;a href=&#34;http://leeds-sharp.org/&#34;&gt;Leeds Sharp User group on the evening of the 30th of August&lt;/a&gt; on the &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Specialisations&amp;amp;subsection=Samsung&#34;&gt;Samsung SUR40 with Microsoft PixelSense (what used to be called a Surface 2)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Should be an interesting session as we will be bringing one of our &lt;a href=&#34;http://www.blackmarble.co.uk/SurfaceRental.aspx&#34;&gt;rental units&lt;/a&gt; along for everyone to have a go on.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am pleased to say that Black Marble will be doing a session at the <a href="http://leeds-sharp.org/">Leeds Sharp User group on the evening of the 30th of August</a> on the <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Specialisations&amp;subsection=Samsung">Samsung SUR40 with Microsoft PixelSense (what used to be called a Surface 2)</a>.</p>
<p>Should be an interesting session as we will be bringing one of our <a href="http://www.blackmarble.co.uk/SurfaceRental.aspx">rental units</a> along for everyone to have a go on.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD10 registration open</title>
      <link>https://blog.richardfennell.net/posts/ddd10-registration-open/</link>
      <pubDate>Mon, 30 Jul 2012 11:10:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd10-registration-open/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd10/Default.aspx&#34;&gt;DDD10 registration opens today&lt;/a&gt;, and probably closes about 10 minutes later (server overloads allowing) from past experience. If you can’t make the date, or don’t fancy the trip down south to TVP, remember &lt;a href=&#34;http://developerdeveloperdeveloper.com/north2/&#34;&gt;DDDNorth in October at Bradford University&lt;/a&gt; has many of the same speakers and is located in a far more convenient Yorkshire location.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://developerdeveloperdeveloper.com/ddd10/Default.aspx">DDD10 registration opens today</a>, and probably closes about 10 minutes later (server overloads allowing) from past experience. If you can’t make the date, or don’t fancy the trip down south to TVP, remember <a href="http://developerdeveloperdeveloper.com/north2/">DDDNorth in October at Bradford University</a> has many of the same speakers and is located in a far more convenient Yorkshire location.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problems I had to address when setting up TFS 2012 Lab Environments with existing Hyper-V VMs</title>
      <link>https://blog.richardfennell.net/posts/problems-i-had-to-address-when-setting-up-tfs-2012-lab-environments-with-existing-hyper-v-vms/</link>
      <pubDate>Mon, 23 Jul 2012 23:23:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-i-had-to-address-when-setting-up-tfs-2012-lab-environments-with-existing-hyper-v-vms/</guid>
      <description>&lt;p&gt;Whilst moving all our older test Hyper-V VMs into a new TFS 2012 Lab Management instance I have had to address a few problems. I already &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/20/Getting-TFS-2012-Agents-to-communicate-cross-domain.aspx&#34;&gt;posted about the main one of cross domain communications&lt;/a&gt;. This post aims to list the other workarounds I have used.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;MTM can’t communicate with the VMs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;When setting up an environment that includes existing VMs it is vital that the PC running MTM (Lab Center) can communicate with all the VMs involved. The best indication I have found that you will not have problems is to use a simple Ping. If you are creating a SCVMM environment you need to be able to Ping the fully qualified machine name as it has been picked up by Hyper-V e.g: server1.test.local. If creating a standard environment you only need to be able to Ping the name you specify for the machine e.g: server1 or maybe server.corp.com.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst moving all our older test Hyper-V VMs into a new TFS 2012 Lab Management instance I have had to address a few problems. I already <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/07/20/Getting-TFS-2012-Agents-to-communicate-cross-domain.aspx">posted about the main one of cross domain communications</a>. This post aims to list the other workarounds I have used.</p>
<p><strong>MTM can’t communicate with the VMs</strong></p>
<p>When setting up an environment that includes existing VMs it is vital that the PC running MTM (Lab Center) can communicate with all the VMs involved. The best indication I have found that you will not have problems is to use a simple Ping. If you are creating a SCVMM environment you need to be able to Ping the fully qualified machine name as it has been picked up by Hyper-V e.g: server1.test.local. If creating a standard environment you only need to be able to Ping the name you specify for the machine e.g: server1 or maybe server.corp.com.</p>
<p>If Ping fails then you can be sure that the MTM create environment verify step will also fail. The most likely reasons both are failing are</p>
<ul>
<li>There are DNS issues: the VM names are missing, leases have expired, they are not in the expected domains, or they are just plain wrong. I found the best solution for me is to edit the local hosts file on the PC running MTM. Just add the name and fully qualified name as well as the correct IP address. You should then be able to Ping the VM (unless there is a firewall issue, see below). The hosts file is only needed on the MTM PC whilst the environment is created; once the environment is set up the hosts file is not needed.</li>
<li>File and print sharing needs to be opened through the firewall on the VM (control panel &gt; firewall &gt; allow applications through firewall)</li>
<li>Missing/out of date Hyper-V extensions on the VM. This only matters if it is a SCVMM environment being created, as this is how the fully qualified name is found. This is best spotted in MTM as you get an error on the <strong>Machine properties</strong> tab. The fix is to reinstall the extensions via the Hyper-V Manager (Actions &gt; Insert Integration Services Disk, and maybe run the setup on the VM if it does not start)</li>
</ul>
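<p>For the hosts file fix above, the entries might look like this (the machine names and addresses here are made up):</p>

```text
# C:\Windows\System32\drivers\etc\hosts on the PC running MTM
# Hypothetical names and addresses for the test VMs
192.168.10.21   server1
192.168.10.21   server1.test.local
192.168.10.22   server2
192.168.10.22   server2.test.local
```

<p>Remember these entries can be removed once the environment has been created and verified.</p>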
<p><strong>Can’t see a running VM in the list of available VMs</strong></p>
<p>When composing an environment from running VMs, one problem I had was that though a VM was running it did not appear in the list in MTM. This turned out to be due to the fact that the VM had metadata associating it with a different environment (in my case one dating back to our TFS 2010 instance).</p>
<p>This is easy to fix, in SCVMM or Hyper-V Manager open the VM settings and make sure the name/ note field (red box below) is empty.</p>
<p> <a href="/wp-content/uploads/sites/2/historic/image_54.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_54.png" title="image"></a></p>
<p>Once the settings are saved you will have to wait a little while before SCVMM picks up the changes and lets your copy of MTM know the VM is available.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting TFS 2012 Agents to communicate cross domain</title>
      <link>https://blog.richardfennell.net/posts/getting-tfs-2012-agents-to-communicate-cross-domain/</link>
      <pubDate>Fri, 20 Jul 2012 16:27:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-tfs-2012-agents-to-communicate-cross-domain/</guid>
      <description>&lt;p&gt;I don’t know about your systems but historically we have had VMs running in test domains that are connected to our corporate LAN. Thus allowing our staff and external testers to access them from their development PCs or through our firewall after providing suitable &lt;strong&gt;test&lt;/strong&gt; domain credentials. These test setups are great candidates for the new TFS Lab Management 2012 feature Standard environments. It does not matter if they are hosted as physical devices, or on Hyper-V or VMware.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I don’t know about your systems but historically we have had VMs running in test domains that are connected to our corporate LAN. Thus allowing our staff and external testers to access them from their development PCs or through our firewall after providing suitable <strong>test</strong> domain credentials. These test setups are great candidates for the new TFS Lab Management 2012 feature Standard environments. It does not matter if they are hosted as physical devices, or on Hyper-V or VMware.</p>
<p>However, the use of separate domains raises issues of cross domain authentication, irrespective of the virtualisation technology. It is always a potentially confusing area. If we want the ability to use the deployment and testing features of Lab Management, what we need to achieve is Test Agents on each VM, that talks to a Test Controller which is registered to a TFS Team Project Collection. Not too easy when spread across multiple domains.</p>
<p>With TFS 2012 the whole process of getting agents to talk to their controller was greatly eased. Lab Management does it for you much of the time if you provide it with a <strong>corp\tfslab</strong> domain account which is a member of the <strong>Project collection test service accounts</strong> group in TFS.</p>
<p>The summary of the scenarios is as follows</p>
<table>
<thead>
<tr><th>Scenario</th><th>How to achieve it</th></tr>
</thead>
<tbody>
<tr><td>Your test VMs are in either a SCVMM managed or standard environment but are joined to your <strong>corp</strong> domain</td><td>Lab Management wires it all up automatically using your <strong>corp\tfslab</strong> account</td></tr>
<tr><td>Your test VMs are in either a SCVMM managed or standard environment that is not domain joined i.e. just in a workgroup</td><td>Lab Management wires it all up automatically using your <strong>corp\tfslab</strong> account</td></tr>
<tr><td>Your test VMs are in a SCVMM managed network isolated environment</td><td>Lab Management wires it all up automatically using your <strong>corp\tfslab</strong> account</td></tr>
<tr><td>Your test VMs are in either a SCVMM managed (not network isolated) or standard environment and are in their own <strong>test</strong> domain</td><td>You have to do some work</td></tr>
</tbody>
</table>
<p>If like me you end up with the fourth scenario, the key is to provide a test controller within the <strong>test</strong> domain. This must be configured to talk back to TFS on the <strong>corp</strong> domain. This can all be done with local machine accounts on the test controller and the TFS server with matching names and passwords, what I think of as shadow accounts.</p>
<p>So, for example, we have the following scenario of a <strong>corp</strong> domain with a DC and various TFS servers and controllers, and a <strong>test</strong> domain containing three servers.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_52.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_52.png" title="image"></a></p>
<p>So the process to get the test agents in the <strong>test</strong> domain talking to TFS on the <strong>corp</strong> domain is as follows:</p>
<ol>
<li>On the TFS server (called <strong>tfsserver.corp.com</strong> in above graphic)
<ol>
<li>Open the Control Panel &gt; Computer Manager and create a new local user called <strong>tfslabshadow</strong>. Set the password, ticking that the user does not need to change it at first login and that the password does not expire</li>
<li>In the TFS administration console add the new user <strong>tfsserver\tfslabshadow</strong> to the <strong>Project collection test service accounts</strong> group</li>
</ol>
</li>
<li>On a machine (called <strong>server.test.local</strong> in the above graphic) within the test domain (this can be any VM in the domain running Windows, other than the DC)
<ol>
<li>
<p>Open the Control Panel &gt; Computer Manager and create a new local user called <strong>tfslabshadow</strong> with the same password as the matching account on <strong>tfsserver</strong></p>
</li>
<li>
<p>Add this user to the local administrators group for that server.</p>
</li>
<li>
<p>Login as this user</p>
</li>
<li>
<p>Install the Visual Studio 2012 Test controller</p>
</li>
<li>
<p>When the installation is complete the configuration tool will launch. Set the service to run as <strong>tfslabshadow</strong> and register it to connect to the TFS server with this account too.<br>
Note - When you first load the configuration tool you need to browse for the TFS server and enter its URL. If you have your shadow accounts working correctly you should not need to enter any other credentials at this point.<br>
Note - You can enter the local user name in either the <strong>.\tfslabshadow</strong> or <strong>server\tfslabshadow</strong> format</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_53.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_53.png" title="image"></a></p>
</li>
<li>
<p>If you have all the settings correct then you should be able to apply the changes without any errors and the new test controller should be registered. Any errors at this point are usually fairly clear when you look in the log; most likely you forgot to place a user in some group somewhere.</p>
</li>
</ol>
</li>
<li>From a PC running Test Manager 2012 (MTM) on the <strong>corp</strong> domain
<ol>
<li>Go into the Lab Center</li>
<li>Create a new environment (can be SCVMM or Standard) containing the machines in the <strong>test</strong> domain (or open an existing environment if you have one that was not correctly configured)</li>
<li>On the Advanced tab you should be able to select the new test controller <strong>server</strong> that is hosted within the <strong>test</strong> domain</li>
<li>You can make any other setting changes you require (remember on the machines tab to enter the <strong>test</strong> domain login credentials, as they will have defaulted to your current ones). When you are done select Verify. I had a problem here due to DNS entries: from the PC running MTM I could ping <strong>server</strong>, but MTM was trying to communicate using the name <strong>server.test.local</strong>. To get around this I added an entry to my local hosts file. I have also seen VMs that are not registered in DNS at all; again a local hosts file entry fixes the problem. This is only required for the initial verification and deployment/configuration; once this is done the hosts entries can be removed if you want.</li>
<li>Once verification has passed, save the changes; after a short wait the environment should finish configuring itself, showing no errors</li>
</ol>
</li>
</ol>
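<p>For those who prefer the command line, the shadow account creation in steps 1 and 2 (and the optional hosts entry from step 3) can be scripted from an elevated command prompt. This is only a sketch of the technique described above; the password and IP address are placeholders you must replace with your own values:</p>

```bat
REM Create the shadow account (run on BOTH tfsserver.corp.com and
REM server.test.local, using the SAME name and password on each)
net user tfslabshadow P@ssw0rd-Placeholder /add
REM Command line equivalent of ticking 'password never expires'
wmic useraccount where "name='tfslabshadow'" set PasswordExpires=FALSE
REM On server.test.local only: the account must be a local administrator
net localgroup Administrators tfslabshadow /add
REM On the MTM PC, if DNS cannot resolve the test VM, add a hosts entry
REM (the IP address here is a placeholder for your environment)
echo 192.0.2.10 server.test.local>>%windir%\System32\drivers\etc\hosts
```

<p>Remember the accounts only work as shadows if the name and password match exactly on both machines.</p>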
<p>So I hope I have provided a step-by-step guide to help you get around issues with cross domain testing in Lab Management. However, it is still important to remember the exceptions</p>
<ol>
<li>As we are using local machine accounts, you cannot have the TFS server or the test controller running on a domain controller (a DC cannot have local machine accounts). If your environment is a single box that is a DC then you either have to set up a two-way cross domain trust between <strong>test</strong> and <strong>corp</strong> or rebuild the environment as a workgroup or network isolated environment.</li>
<li>The shadow account cannot have the same name as the <strong>corp\tfslab</strong> account, i.e. <strong>tfslab</strong>. If you try to use the same name for the local machine and domain accounts, the matching of the two local machine accounts will fail, as at the TFS server end it will not be able to decide whether to use <strong>corp\tfslab</strong> or <strong>tfsserver\tfslab</strong></li>
</ol>
<p><a href="http://msdn.microsoft.com/en-us/library/hh546460%28v=vs.110%29.aspx">For more details on this general area see MSDN</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Where did my custom Word templates go in 2013?</title>
      <link>https://blog.richardfennell.net/posts/where-did-my-custom-word-templates-go-in-2013/</link>
      <pubDate>Wed, 18 Jul 2012 11:43:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-did-my-custom-word-templates-go-in-2013/</guid>
      <description>&lt;p&gt;I installed Office 2013 customer preview yesterday and all seemed good. Today I needed to create a new document using one of our company standard templates. I opened Word, went to the new document section and was presented with a list of great looking templates, but not my custom ones. The page suggested I use search to find more templates; it did find more templates from the Internet, but did not find my locally stored ones.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I installed Office 2013 customer preview yesterday and all seemed good. Today I needed to create a new document using one of our company standard templates. I opened Word, went to the new document section and was presented with a list of great looking templates, but not my custom ones. The page suggested I use search to find more templates; it did find more templates from the Internet, but did not find my locally stored ones.</p>
<p>Turns out the issue is that the path to my custom templates had been removed as part of the upgrade (from memory, in previous versions of Word this path always defaulted to your local {USER}\AppData\Roaming\Microsoft\Templates, so you did not usually need to set it).</p>
<p>So I opened the options and added the path to my existing templates folder</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_49.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_49.png" title="image"></a></p>
<p>Restarted Word and tried to create a new document, and this time I got a Personal tab on the new document page</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_50.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_50.png" title="image"></a></p>
<p>When I clicked this I got my list of templates. I really must get round to resaving them with previews so they look better in the list</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_51.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_51.png" title="image"></a></p>
<p><strong>Update 22 Jan 2014</strong>: And if you are looking for the Workgroup templates location, as opposed to the personal one, it is now set in Word &gt; File &gt; Options &gt; Advanced &gt; in the General section (near the bottom) press the File Locations button.</p>
]]></content:encoded>
    </item>
    <item>
      <title>_atomic_fetch_sub_4 error running VS2012RC after Office 2013 Customer Preview is installed</title>
      <link>https://blog.richardfennell.net/posts/_atomic_fetch_sub_4-error-running-vs2012rc-after-office-2013-customer-preview-is-installed/</link>
      <pubDate>Tue, 17 Jul 2012 12:59:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/_atomic_fetch_sub_4-error-running-vs2012rc-after-office-2013-customer-preview-is-installed/</guid>
      <description>&lt;p&gt;When I installed &lt;a href=&#34;http://www.microsoft.com/office/preview/en?WT.mc_id=MSCOM_EN_US_HP_FEATUREWORK_131L1ENUS21367&#34;&gt;Office 2013 customer preview&lt;/a&gt; all seemed good, loads of new metro look Office features. However when I tried to load my previously working &lt;a href=&#34;http://www.microsoft.com/visualstudio/11/en-us/downloads&#34;&gt;Visual Studio 2012RC&lt;/a&gt; I got the error&lt;/p&gt;
&lt;p&gt;“The procedure entry point _atomic_fetch_sub_4 could not be located in the dynamic link library devenv.exe”.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_48.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_48.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is &lt;a href=&#34;http://support.microsoft.com/kb/2703187&#34;&gt;a known issue with the C++ runtime&lt;/a&gt; and a &lt;a href=&#34;http://www.microsoft.com/en-us/download/details.aspx?id=30178&#34;&gt;patch was released last week&lt;/a&gt;; install this and all should be OK&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When I installed <a href="http://www.microsoft.com/office/preview/en?WT.mc_id=MSCOM_EN_US_HP_FEATUREWORK_131L1ENUS21367">Office 2013 customer preview</a> all seemed good, loads of new metro look Office features. However when I tried to load my previously working <a href="http://www.microsoft.com/visualstudio/11/en-us/downloads">Visual Studio 2012RC</a> I got the error</p>
<p>“The procedure entry point _atomic_fetch_sub_4 could not be located in the dynamic link library devenv.exe”.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_48.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_48.png" title="image"></a></p>
<p>This is <a href="http://support.microsoft.com/kb/2703187">a known issue with the C++ runtime</a> and a <a href="http://www.microsoft.com/en-us/download/details.aspx?id=30178">patch was released last week</a>; install this and all should be OK</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot access the ‘site setting’ on a reporting services instance using IE</title>
      <link>https://blog.richardfennell.net/posts/cannot-access-the-site-setting-on-a-reporting-services-instance-using-ie/</link>
      <pubDate>Fri, 13 Jul 2012 12:28:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-access-the-site-setting-on-a-reporting-services-instance-using-ie/</guid>
      <description>&lt;p&gt;On site recently I had a problem that I could not access the site settings in reporting services if I used Internet Explorer from a client PC. IE worked fine on the server and other browsers were OK on the client, just not IE. Initially I thought it was &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/21/stupid-gotchas-on-a-sql-2008-reporting-services-are-why-i-cannot-see-the-report-builder-button.aspx&#34;&gt;just rights&lt;/a&gt;, but that was not the case&lt;/p&gt;
&lt;p&gt;Turns out this is down to Kerberos negotiation as discussed in the &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/cc281253.aspx&#34;&gt;MSDN&lt;/a&gt; article. To fix the issue, on this site where we did not need Kerberos, we just disabled Kerberos negotiation in the &lt;em&gt;[Program Files]\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer\RSreportServer.config&lt;/em&gt; file, e.g.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On site recently I had a problem that I could not access the site settings in reporting services if I used Internet Explorer from a client PC. IE worked fine on the server and other browsers were OK on the client, just not IE. Initially I thought it was <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/21/stupid-gotchas-on-a-sql-2008-reporting-services-are-why-i-cannot-see-the-report-builder-button.aspx">just rights</a>, but that was not the case</p>
<p>Turns out this is down to Kerberos negotiation as discussed in the <a href="http://msdn.microsoft.com/en-us/library/cc281253.aspx">MSDN</a> article. To fix the issue, on this site where we did not need Kerberos, we just disabled Kerberos negotiation in the <em>[Program Files]\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer\RSreportServer.config</em> file, e.g.</p>
<pre><code>&lt;AuthenticationTypes&gt;
  &lt;RSWindowsNegotiate /&gt;
  &lt;RSWindowsKerberos /&gt;
  &lt;RSWindowsNTLM /&gt;
&lt;/AuthenticationTypes&gt;
</code></pre>
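<p>For clarity, the ‘disable negotiation’ edit on that site amounted to leaving only the NTLM element active in that section. A sketch of the resulting section, based on my reading of the MSDN article, so check it against your own SQL Server version before editing:</p>

```xml
<AuthenticationTypes>
  <RSWindowsNTLM />
</AuthenticationTypes>
```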
<p>If you need Kerberos you need to sort out the SPNs as detailed in the MSDN article</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS build service cannot connect to a TFS 2012 server - seeing EventID 206 MessageQueue in error log</title>
      <link>https://blog.richardfennell.net/posts/tfs-build-service-cannot-connect-to-a-tfs-2012-server-seeing-eventid-206-messagequeue-in-error-log/</link>
      <pubDate>Thu, 12 Jul 2012 21:03:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-build-service-cannot-connect-to-a-tfs-2012-server-seeing-eventid-206-messagequeue-in-error-log/</guid>
      <description>&lt;p&gt;Whilst setting up our new TFS 2012 instance I had a problem getting the build box to connect to the TFS server.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;When I started the build service (on a dedicated VM configured as a controller and single agent, connected to a TPC on the server on another VM), all appeared OK: the controller and agents said they were running and the state icons went green&lt;/li&gt;
&lt;li&gt;About 5 seconds later the state icons went red, but the message said the controller and agents were still running; from past experience I know this means it is all dead.&lt;/li&gt;
&lt;li&gt;On the build service section a new ‘details’ link appeared, but if you tried to click it you got a 404 error (see below)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_47.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_47.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst setting up our new TFS 2012 instance I had a problem getting the build box to connect to the TFS server.</p>
<ol>
<li>When I started the build service (on a dedicated VM configured as a controller and single agent, connected to a TPC on the server on another VM), all appeared OK: the controller and agents said they were running and the state icons went green</li>
<li>About 5 seconds later the state icons went red, but the message said the controller and agents were still running; from past experience I know this means it is all dead.</li>
<li>On the build service section a new ‘details’ link appeared, but if you tried to click it you got a 404 error (see below)</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_47.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_47.png" title="image"></a></p>
<p>In the windows event log (TFS/Build-Service/Operational section) I got the error</p>
<blockquote>
<p><em>Build machine build2012 lost connectivity to message queue tfsmq://buildservicehost-2/.</em><br>
<em>Reason: HTTP code 404: Not Found</em></p></blockquote>
<p>It is recorded as EventID 206 in the category MessageQueue.</p>
<p>I tried reinstalling the build VM and checked the firewalls on the build VM and the TFS Server VM, all to no effect.</p>
<p>The issue turned out to be the TFS URL I had used: on the build service VM I had connected to the TFS server using an HTTPS/SSL URL. As soon as I changed it to an HTTP URL the build service started to work. This was OK for me as the build VM and server VM were in the same machine room, so I did not really need SSL; I had just used it out of habit as this is what our developer PCs use.</p>
<p>However, if you do want to keep using SSL you need to do the following</p>
<ul>
<li>Open the following configuration file: C:\Program Files\Microsoft Team Foundation Server 2012\Application Tier\Message Queue\web.config</li>
<li>Find a section like the bindings section below</li>
<li>Alter httpTransport to say httpsTransport</li>
</ul>
<pre><code>&lt;bindings&gt;
  &lt;customBinding&gt;
    &lt;binding name="TfsSoapBinding"&gt;
      &lt;textMessageEncoding messageVersion="Soap12WSAddressing10" /&gt;
      &lt;httpTransport authenticationScheme="Ntlm" manualAddressing="true" /&gt;
      &lt;httpsTransport authenticationScheme="Ntlm" manualAddressing="true" /&gt;
    &lt;/binding&gt;
  &lt;/customBinding&gt;
&lt;/bindings&gt;
</code></pre>
<ul>
<li>Save the file</li>
<li>Recycle the IIS app pool</li>
<li>Restart the build service on the build VM</li>
</ul>
<p>Thanks to <a href="http://blogs.msdn.com/b/patcarna/">Patrick on the TFS team</a> for helping me get to the bottom of this.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Re-awarded as a MVP for Visual Studio ALM</title>
      <link>https://blog.richardfennell.net/posts/re-awarded-as-a-mvp-for-visual-studio-alm/</link>
      <pubDate>Sun, 01 Jul 2012 19:47:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/re-awarded-as-a-mvp-for-visual-studio-alm/</guid>
      <description>&lt;p&gt;I am really happy to say that I have had my &lt;a href=&#34;http://mvp.microsoft.com/en-US/Pages/default.aspx&#34;&gt;Microsoft MVP for Visual Studio ALM Re-awarded&lt;/a&gt;; it is a privilege to get to work with such a great group of people as I have met via the MVP programme.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really happy to say that I have had my <a href="http://mvp.microsoft.com/en-US/Pages/default.aspx">Microsoft MVP for Visual Studio ALM Re-awarded</a>; it is a privilege to get to work with such a great group of people as I have met via the MVP programme.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft Cloud Day slides</title>
      <link>https://blog.richardfennell.net/posts/microsoft-cloud-day-slides/</link>
      <pubDate>Tue, 26 Jun 2012 14:56:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-cloud-day-slides/</guid>
      <description>&lt;p&gt;My slides and those of other presenters at last week’s &lt;a href=&#34;http://www.elastacloud.com/Community/Details/2&#34;&gt;Microsoft cloud day have been published&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My slides and those of other presenters at last week’s <a href="http://www.elastacloud.com/Community/Details/2">Microsoft cloud day have been published</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>2012 Editions of TFS books are out</title>
      <link>https://blog.richardfennell.net/posts/2012-editions-of-tfs-books-are-out/</link>
      <pubDate>Sat, 23 Jun 2012 10:14:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/2012-editions-of-tfs-books-are-out/</guid>
      <description>&lt;p&gt;As &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2012/06/22/a-couple-of-great-books-on-2012.aspx&#34;&gt;Brian Harry announced on his blog there are new 2012 editions of the two main TFS books on the way&lt;/a&gt;. I have added links to them from &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/page/Reading-List.aspx&#34;&gt;my reading list page&lt;/a&gt; but as yet you can only pre-order them as the release dates are not available.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As <a href="http://blogs.msdn.com/b/bharry/archive/2012/06/22/a-couple-of-great-books-on-2012.aspx">Brian Harry announced on his blog there are new 2012 editions of the two main TFS books on the way</a>. I have added links to them from <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/page/Reading-List.aspx">my reading list page</a> but as yet you can only pre-order them as the release dates are not available.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why Typemock Isolator does not work on TFS 2012 Build and what you can do about it</title>
      <link>https://blog.richardfennell.net/posts/why-typemock-isolator-does-not-work-on-tfs-2012-build-and-what-you-can-do-about-it/</link>
      <pubDate>Sat, 23 Jun 2012 09:58:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-typemock-isolator-does-not-work-on-tfs-2012-build-and-what-you-can-do-about-it/</guid>
      <description>&lt;p&gt;Update 4/Aug/2012: &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx&#34;&gt;See post on my implementation of a 2012 version of this activity&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;If you are using Typemock Isolator in your unit testing then you will be wanting to include those tests in your automated build process. Unlike other mocking frameworks you have to do a bit of work to achieve this with Isolator; this is because, to enable its advanced features of mocking items like sealed private classes, you have to start the interceptor that does the magic prior to running the tests and then switch it off afterwards.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Update 4/Aug/2012: <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/08/04/Getting-Typemock-Isolator-running-within-a-TFS-2012-build.aspx">See post on my implementation of a 2012 version of this activity</a></p>
<p>If you are using Typemock Isolator in your unit testing then you will be wanting to include those tests in your automated build process. Unlike other mocking frameworks you have to do a bit of work to achieve this with Isolator; this is because, to enable its advanced features of mocking items like sealed private classes, you have to start the interceptor that does the magic prior to running the tests and then switch it off afterwards.</p>
<p>In the past <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=TFS%20Build%202010%20Activity%20to%20run%20Typemock%20Isolator%20based%20tests&amp;referringTitle=Home&amp;ProjectName=tfsbuildextensions">I posted on how you could use a build activity I wrote to wrapper MSTest in a TFS 2010 build using TMockRunner</a>. Since I wrote this, <a href="http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/TFSBuild.html">Typemock released a set of build activities to do the starting and stopping of the interceptor as separate activities</a>, a much more flexible solution which I would always recommend for TFS.</p>
<p>However when you try to use either with TFS 2012 you get a problem: the activities fail to load. This is a problem we saw on the <a href="http://tfsbuildextensions.codeplex.com/">TFS Extensions Codeplex project</a>; you have to build your activities against either the 2010 TFS API or the 2012 TFS API. You don’t need to alter the code in your activities, but you do need to make a specific build.</p>
<p>So at this time there is no solution; one or both of these activities need to be rebuilt. For the MSTest wrapper I wrote, the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=TFS%20Build%202010%20Activity%20to%20run%20Typemock%20Isolator%20based%20tests&amp;referringTitle=Home&amp;ProjectName=tfsbuildextensions">source is available</a> so you can do it yourself if you want to, but the way Typemock have implemented their activities is a better solution. This is because it is not reliant on TMockRunner and MSTest; it can wrapper any activities. This is important as, to be able to use the new ‘any framework’ unit testing features in VS2012, you want to use Microsoft’s new test running activities and not just the old MSTest activity.</p>
<p>I understand that Typemock are looking at releasing a TFS 2012 version of their activities soon, but I know of no release date as yet. If you want an immediate solution you will need to do a bit of work.</p>
<ul>
<li>You could rebuild my MSTest based activity</li>
<li>You could use the standard InvokeMethod activity and put the contents of my MSTest activity’s generated command line into this</li>
<li>But the one I favour is to use a decompiler such as <a href="http://www.telerik.com/products/decompiler.aspx">Telerik JustDecompile</a> to get the code from the Typemock.TFS2010.DLL and build a new activity.</li>
</ul>
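<p>For context, the command line my MSTest based activity generates is essentially the standard TMockRunner wrapper around MSTest, so that is what you would reproduce in the second option above. A sketch only; the install paths, Isolator version number, assembly and results file names are all placeholders for your own environment:</p>

```bat
REM TMockRunner starts the Isolator interceptor, runs the given command
REM under it, then switches the interceptor off again afterwards
"C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" ^
  "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" ^
  /testcontainer:MyProject.Tests.dll /resultsfile:Results.trx
```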
<p>However, it must be said I see this as just a temporary measure until the official Typemock 2012 activity is released. I am not sure I will get around to doing this before the Typemock release, we shall see.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Next Leeds Sharp user group meeting is a coding kata session</title>
      <link>https://blog.richardfennell.net/posts/next-leeds-sharp-user-group-meeting-is-a-coding-kata-session/</link>
      <pubDate>Thu, 21 Jun 2012 08:43:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-leeds-sharp-user-group-meeting-is-a-coding-kata-session/</guid>
      <description>&lt;p&gt;The next Leeds Sharp user group meeting on the 28th of June is a Kata session using Conway’s game of life. &lt;a href=&#34;http://leeds-sharp.org/events/2012/6&#34;&gt;For more details see their site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Not sure if I can make it yet; I have to keep an eye on my ever-filling diary&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The next Leeds Sharp user group meeting on the 28th of June is a Kata session using Conway’s game of life. <a href="http://leeds-sharp.org/events/2012/6">For more details see their site</a>.</p>
<p>Not sure if I can make it yet; I have to keep an eye on my ever-filling diary</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDDNorth session submission is open</title>
      <link>https://blog.richardfennell.net/posts/dddnorth-session-submission-is-open/</link>
      <pubDate>Mon, 18 Jun 2012 09:47:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dddnorth-session-submission-is-open/</guid>
      <description>&lt;p&gt;Session submission has opened for the next &lt;a href=&#34;http://developerdeveloperdeveloper.com/North2/Default.aspx&#34;&gt;DDDNorth on the 13th October at Bradford University&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Why not submit a session?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Session submission has opened for the next <a href="http://developerdeveloperdeveloper.com/North2/Default.aspx">DDDNorth on the 13th October at Bradford University</a>.</p>
<p>Why not submit a session?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where did my Visual Studio 2010 link go from my Windows 8 desktop after I installed SSDT?</title>
      <link>https://blog.richardfennell.net/posts/where-did-my-visual-studio-2010-link-go-from-my-windows-8-desktop-after-i-installed-ssdt/</link>
      <pubDate>Wed, 13 Jun 2012 17:59:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-did-my-visual-studio-2010-link-go-from-my-windows-8-desktop-after-i-installed-ssdt/</guid>
      <description>&lt;p&gt;I have been finding the ‘hit the Windows key and type’ way of launching desktop applications in Windows 8 quite nice. It means I get the same behaviour in Windows or Ubuntu to launch things: no need to remember menu locations, just type a name, all very &lt;a href=&#34;http://www.bayden.com/SlickRun/&#34;&gt;slickrun&lt;/a&gt;. However, I hit a problem today: I hit the Windows key, typed &lt;strong&gt;Visual&lt;/strong&gt; and expected to see Visual Studio 2012 and 2010, but I only saw Visual Studio 2012&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been finding the ‘hit the Windows key and type’ way of launching desktop applications in Windows 8 quite nice. It means I get the same behaviour in Windows or Ubuntu to launch things: no need to remember menu locations, just type a name, all very <a href="http://www.bayden.com/SlickRun/">slickrun</a>. However, I hit a problem today: I hit the Windows key, typed <strong>Visual</strong> and expected to see Visual Studio 2012 and 2010, but I only saw Visual Studio 2012</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_45.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_45.png" title="image"></a></p>
<p>But both were there yesterday!</p>
<p>The issue was I had installed <a href="http://msdn.microsoft.com/en-us/data/hh297027">SSDT (SQL Server Data Tools)</a>. This is hosted within the Visual Studio 2010 shell and had renamed my Metro desktop Visual Studio 2010 app to <strong>Microsoft SQL Server Data Tools</strong>. If I typed this the app was found and it launched Visual Studio 2010; I could then choose whether to use SSDT or Ultimate features as you would expect. This is the same behaviour as on Windows 7; it is just that on Windows 7 you would have two menu items, one for SSDT and one for VS2010, both pointing to the same place.</p>
<p>Now I am a creature of habit, even if it is a newly formed one, and I like to just type <strong>Vis</strong>. So this is how I got the link back onto the Metro desktop; there might be other ways, but this is the one that worked for me</p>
<ol>
<li>Found the Visual Studio 2010 devenv.exe file in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE</li>
<li>Right click and create a shortcut on the desktop</li>
<li>Rename the new desktop shortcut to ‘Visual Studio 2010’</li>
<li>Right clicked on the renamed shortcut and selected ‘pin to start’</li>
<li>Deleted the desktop shortcut; it is no longer needed as pinning takes a copy. Yes, I found this a bit strange too, but I do like a clean desktop, so delete it I did. You don’t have to delete it if you want a desktop shortcut.</li>
</ol>
<p>I can now press the Windows key, type <strong>Vis</strong>, and see both VS 2010 and 2012; and I can still type <strong>SQL</strong> and get to SSDT</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_46.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_46.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at NEBytes on TFS2010</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-nebytes-on-tfs2010/</link>
      <pubDate>Tue, 12 Jun 2012 23:23:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-nebytes-on-tfs2010/</guid>
      <description>&lt;p&gt;I am speaking at NEBytes next week on VS/TFS2012 with Andy Westgarth, check the &lt;a href=&#34;http://www.nebytes.net/post/NEBytes-June-2012-Enough-About-the-Colour-Tell-Me-About-VS2012.aspx&#34;&gt;NEBytes site for details&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at NEBytes next week on VS/TFS2012 with Andy Westgarth, check the <a href="http://www.nebytes.net/post/NEBytes-June-2012-Enough-About-the-Colour-Tell-Me-About-VS2012.aspx">NEBytes site for details</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TFSPreview.com no longer needs an invite code</title>
      <link>https://blog.richardfennell.net/posts/tfspreview-com-no-longer-needs-an-invite-code/</link>
      <pubDate>Tue, 12 Jun 2012 14:51:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfspreview-com-no-longer-needs-an-invite-code/</guid>
      <description>&lt;p&gt;At &lt;a href=&#34;http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/FDN02&#34;&gt;TechEd USA 2012 in the keynote and the ALM session&lt;/a&gt; it was announced that you no longer need an invite code to access the Azure hosted &lt;a href=&#34;http://www.tfspreview.com&#34;&gt;TFSPreview.com&lt;/a&gt;, it is now open to all.&lt;/p&gt;
&lt;p&gt;There are no final details as yet over the pricing when it goes fully to production but they did say there will be some form of free offering going forward. We will have to wait for more details on that front.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At <a href="http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/FDN02">TechEd USA 2012 in the keynote and the ALM session</a> it was announced that you no longer need an invite code to access the Azure hosted <a href="http://www.tfspreview.com">TFSPreview.com</a>, it is now open to all.</p>
<p>There are no final details as yet over the pricing when it goes fully to production but they did say there will be some form of free offering going forward. We will have to wait for more details on that front.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot update local workspace in TFS 2012RC due to UAC</title>
      <link>https://blog.richardfennell.net/posts/cannot-update-local-workspace-in-tfs-2012rc-due-to-uac/</link>
      <pubDate>Mon, 11 Jun 2012 14:50:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-update-local-workspace-in-tfs-2012rc-due-to-uac/</guid>
      <description>&lt;p&gt;Whilst preparing the demos for my &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=How%20to%20Plan%20your%20Application%20Lifecycle%20Implementation%20with%20TFS11&#34;&gt;TFS 2012 ALM session on Wednesday (still places available if you are in the Yorkshire area)&lt;/a&gt;, I got the following error when trying to add a new solution of sample code to TFS&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Access to the path &amp;lsquo;C:\ProgramData\Microsoft Team Foundation Local Workspaces\43519e4-72cf-4cd0-a711-f4bb8b817f30\TYPHOON;432cfb95-26d6-4a68-af26-b102c950a90d\properties.tf1&amp;rsquo; is denied.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;It seems that the …43519… folder had been created when I was running VS2012 using ‘run as administrator’. When I tried to browse it I was prompted for UAC elevated privileges, so if I was not running VS2012 as administrator I could not access the folder. For me the solution was to just delete the folder and let VS/TFS recreate it. I had no checked-out files, so it did not matter. Once done, all was OK&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst preparing the demos for my <a href="http://www.blackmarble.co.uk/events.aspx?event=How%20to%20Plan%20your%20Application%20Lifecycle%20Implementation%20with%20TFS11">TFS 2012 ALM session on Wednesday (still places available if you are in the Yorkshire area)</a>, I got the following error when trying to add a new solution of sample code to TFS:</p>
<blockquote>
<p><em>Access to the path &lsquo;C:\ProgramData\Microsoft Team Foundation Local Workspaces\43519e4-72cf-4cd0-a711-f4bb8b817f30\TYPHOON;432cfb95-26d6-4a68-af26-b102c950a90d\properties.tf1&rsquo; is denied.</em></p></blockquote>
<p>It seems that the …43519… folder had been created when I was running VS2012 using ‘run as administrator’. When I tried to browse it I was prompted for UAC elevated privileges, so if I was not running VS2012 as administrator I could not access the folder. For me the solution was to just delete the folder and let VS/TFS recreate it. I had no checked-out files, so it did not matter. Once done, all was OK.</p>
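The fix amounts to deleting the offending cache folder; a hedged sketch follows (the cache root matches the error message, but the GUID folder name here is purely illustrative, and only do this when you have no checked-out files):

```python
import shutil
import ntpath

# Hedged sketch of the fix described above: delete a stale
# local-workspace cache folder so VS/TFS recreates it with the
# current (non-elevated) user's permissions. Only safe when there
# are no checked-out files. The GUID below is illustrative.
CACHE_ROOT = r"C:\ProgramData\Microsoft Team Foundation Local Workspaces"

def reset_workspace_cache(guid_folder, root=CACHE_ROOT):
    target = ntpath.join(root, guid_folder)
    # ignore_errors so an already-missing folder is not a failure
    shutil.rmtree(target, ignore_errors=True)
    return target

print(reset_workspace_cache("00000000-0000-0000-0000-000000000000"))
```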
<p>One to keep an eye on.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Drivers needed on my Lenovo W520 with Windows 8 RP</title>
      <link>https://blog.richardfennell.net/posts/drivers-needed-on-my-lenovo-w520-with-windows-8-rp/</link>
      <pubDate>Thu, 07 Jun 2012 18:27:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/drivers-needed-on-my-lenovo-w520-with-windows-8-rp/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/06/01/Windows-8-RP-and-my-Lenovo-W520.aspx&#34;&gt;I posted recently on running Win8 RP on my Lenovo W520&lt;/a&gt;; a few days on it is still working fine. I have adopted my usual practice of installing as few hardware vendor drivers/tools as possible, as I tend to find these make matters worse most of the time. However, as expected, I could not rely on just what came with Windows 8.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;On installing Windows the only device not discovered was the ‘Intel Chipset’; this needed the &lt;a href=&#34;http://support.lenovo.com/en_US/downloads/detail.page?DocID=HT072084&#34;&gt;Intel Chipset Support driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;I noticed today that when on my docking station my speakers did not work. I expected to have to install the Lenovo-provided audio drivers, but a check for ‘updated drivers’ on the audio device in Device Manager got the right ones from Windows Update.&lt;/li&gt;
&lt;li&gt;The fingerprint reader just worked; I had to do nothing other than enrol my fingerprint in Control Panel –&amp;gt; Biometrics. Unlike with the Lenovo version of the UI I had on Windows 7, I can now use a fingerprint to elevate my privileges via UAC. One issue seems to be that the green LED by the reader does not light up, but as it works I can live with that.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I will report more things I need to do as I find them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/06/01/Windows-8-RP-and-my-Lenovo-W520.aspx">I posted recently on running Win8 RP on my Lenovo W520</a>; a few days on it is still working fine. I have adopted my usual practice of installing as few hardware vendor drivers/tools as possible, as I tend to find these make matters worse most of the time. However, as expected, I could not rely on just what came with Windows 8.</p>
<ul>
<li>On installing Windows the only device not discovered was the ‘Intel Chipset’; this needed the <a href="http://support.lenovo.com/en_US/downloads/detail.page?DocID=HT072084">Intel Chipset Support driver</a></li>
<li>I noticed today that when on my docking station my speakers did not work. I expected to have to install the Lenovo-provided audio drivers, but a check for ‘updated drivers’ on the audio device in Device Manager got the right ones from Windows Update.</li>
<li>The fingerprint reader just worked; I had to do nothing other than enrol my fingerprint in Control Panel –&gt; Biometrics. Unlike with the Lenovo version of the UI I had on Windows 7, I can now use a fingerprint to elevate my privileges via UAC. One issue seems to be that the green LED by the reader does not light up, but as it works I can live with that.</li>
</ul>
<p>I will report more things I need to do as I find them.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF215097 when running a build</title>
      <link>https://blog.richardfennell.net/posts/tf215097-when-running-a-build/</link>
      <pubDate>Thu, 07 Jun 2012 18:16:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf215097-when-running-a-build/</guid>
      <description>&lt;p&gt;I have been working on one of our build boxes today, restructuring our &lt;a href=&#34;http://www.blackmarble.co.uk/surface&#34;&gt;Surface solutions&lt;/a&gt; to make better use of NuGet. This involved upgrading the Azure SDK on the build box to the new &lt;a href=&#34;http://www.windowsazure.com/en-us/develop/downloads/&#34;&gt;June release&lt;/a&gt;, which needed a reboot halfway through the process. After the reboot I tried a new build and got the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TF215097: An error occurred while initializing a build for build definition SurfaceRetailApplication.Main CI: Could not load file or assembly &amp;lsquo;System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a&amp;rsquo; or one of its dependencies. The located assembly&amp;rsquo;s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working on one of our build boxes today, restructuring our <a href="http://www.blackmarble.co.uk/surface">Surface solutions</a> to make better use of NuGet. This involved upgrading the Azure SDK on the build box to the new <a href="http://www.windowsazure.com/en-us/develop/downloads/">June release</a>, which needed a reboot halfway through the process. After the reboot I tried a new build and got the error:</p>
<blockquote>
<p><em>TF215097: An error occurred while initializing a build for build definition SurfaceRetailApplication.Main CI: Could not load file or assembly &lsquo;System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a&rsquo; or one of its dependencies. The located assembly&rsquo;s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)</em></p></blockquote>
<p>Basically the build did not start at all. Various forum posts point to a corrupt build template .XAML or missing assemblies, but as it had been working, and the assembly named is part of the core framework, it all seemed a bit strange.</p>
<p>The fix was the old favourite: stop and restart the build service from within the TFS Admin console on the build box. Once this was done all was fine, so I guess some rubbish was cached.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD North to be held at Bradford University on 13th October</title>
      <link>https://blog.richardfennell.net/posts/ddd-north-to-be-held-at-bradford-university-on-13th-october/</link>
      <pubDate>Thu, 07 Jun 2012 10:53:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-north-to-be-held-at-bradford-university-on-13th-october/</guid>
      <description>&lt;p&gt;This year’s DDD North is to be held at Bradford University on 13th October; keep an eye open for the call for speakers.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This year’s DDD North is to be held at Bradford University on 13th October; keep an eye open for the call for speakers.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows 8 RP and my Lenovo W520</title>
      <link>https://blog.richardfennell.net/posts/windows-8-rp-and-my-lenovo-w520/</link>
      <pubDate>Fri, 01 Jun 2012 22:49:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-8-rp-and-my-lenovo-w520/</guid>
      <description>&lt;p&gt;I tried to install the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/14/first-try-with-windows8-and-it-won-t-boot.aspx&#34;&gt;Windows 8 CP on my Lenovo W520 with no success, as I posted about in the past&lt;/a&gt;. I was only able to install Windows 8 CP if I disabled the Nvidia graphics card and just used the integrated Intel controller. This was of no real use as I need dual monitors.&lt;/p&gt;
&lt;p&gt;With the release of Windows 8 RP I thought I would try again.&lt;/p&gt;
&lt;p&gt;The first attempt failed with the same problem: it hung detecting devices. I checked the BIOS and noticed it was set to discrete only (the Nvidia setting). I knew the RP should work on the W520 as others at the office have got it working. So I changed the BIOS to Nvidia Optimus (the automatic swap system between Intel and Nvidia) and tried again – and it worked. So this is the first post from Windows 8.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I tried to install the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/14/first-try-with-windows8-and-it-won-t-boot.aspx">Windows 8 CP on my Lenovo W520 with no success, as I posted about in the past</a>. I was only able to install Windows 8 CP if I disabled the Nvidia graphics card and just used the integrated Intel controller. This was of no real use as I need dual monitors.</p>
<p>With the release of Windows 8 RP I thought I would try again.</p>
<p>The first attempt failed with the same problem: it hung detecting devices. I checked the BIOS and noticed it was set to discrete only (the Nvidia setting). I knew the RP should work on the W520 as others at the office have got it working. So I changed the BIOS to Nvidia Optimus (the automatic swap system between Intel and Nvidia) and tried again – and it worked. So this is the first post from Windows 8.</p>
<p>Now to find drivers for the other devices in the W520</p>
]]></content:encoded>
    </item>
    <item>
      <title>ALM Rangers sim-ship guidance with the VS11 RC</title>
      <link>https://blog.richardfennell.net/posts/alm-rangers-sim-ship-guidance-with-the-vs11-rc/</link>
      <pubDate>Fri, 01 Jun 2012 08:50:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-rangers-sim-ship-guidance-with-the-vs11-rc/</guid>
      <description>&lt;p&gt;I am really proud to have been involved in the team of ALM Rangers who have &lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2012/05/31/welcome-to-visual-studio-2012-alm-rangers-readiness-rc-wave.aspx&#34;&gt;&lt;strong&gt;SIM&lt;/strong&gt;ultaneous-&lt;strong&gt;SHIP&lt;/strong&gt;ped best practice guidance with Visual Studio 11 RC&lt;/a&gt;, which became available last night.&lt;/p&gt;
&lt;p&gt;I am sure anyone working with Visual Studio and TFS will find the guidance of value, I have certainly learned a lot whilst helping produce the material. It has been a great experience working with a great crowd of people both inside and outside of Microsoft.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really proud to have been involved in the team of ALM Rangers who have <a href="http://blogs.msdn.com/b/visualstudioalm/archive/2012/05/31/welcome-to-visual-studio-2012-alm-rangers-readiness-rc-wave.aspx"><strong>SIM</strong>ultaneous-<strong>SHIP</strong>ped best practice guidance with Visual Studio 11 RC</a>, which became available last night.</p>
<p>I am sure anyone working with Visual Studio and TFS will find the guidance of value, I have certainly learned a lot whilst helping produce the material. It has been a great experience working with a great crowd of people both inside and outside of Microsoft.</p>
<p><img alt="Visual Studio" loading="lazy" src="http://i.microsoft.com/visualstudio/11/images/visual_studio_logo.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>First meeting of the Leeds-Sharp user group</title>
      <link>https://blog.richardfennell.net/posts/first-meeting-of-the-leeds-sharp-user-group/</link>
      <pubDate>Fri, 01 Jun 2012 07:57:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-meeting-of-the-leeds-sharp-user-group/</guid>
      <description>&lt;p&gt;Last night I attended the first meeting of the &lt;a href=&#34;http://leeds-sharp.org/&#34;&gt;new .NET user group Leeds Sharp&lt;/a&gt;. This is a flavour of user group we don’t have in Leeds; we have SQL, Ruby, Agile user groups, the list goes on, but not a dedicated .NET one. So hopefully this .NET usergroup will take off. Especially as Microsoft conveniently decided to launch &lt;a href=&#34;http://windows.microsoft.com/en-US/windows-8/download&#34;&gt;Windows  8 Release Preview&lt;/a&gt; and &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2012/05/31/visual-studio-tfs-2012-release-candidate-available-today.aspx&#34;&gt;Visual Studio 11 RC&lt;/a&gt; during our opening session, so there are loads of new subjects to talk about.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Last night I attended the first meeting of the <a href="http://leeds-sharp.org/">new .NET user group Leeds Sharp</a>. This is a flavour of user group we don’t have in Leeds; we have SQL, Ruby, Agile user groups, the list goes on, but not a dedicated .NET one. So hopefully this .NET usergroup will take off. Especially as Microsoft conveniently decided to launch <a href="http://windows.microsoft.com/en-US/windows-8/download">Windows  8 Release Preview</a> and <a href="http://blogs.msdn.com/b/bharry/archive/2012/05/31/visual-studio-tfs-2012-release-candidate-available-today.aspx">Visual Studio 11 RC</a> during our opening session, so there are loads of new subjects to talk about.</p>
<p>A nice idea is the use of <a href="http://leedssharp.uservoice.com/forums/162867-topic-suggestions">Uservoice to pick sessions</a>, I would encourage anyone who thinks they might attend to get onto uservoice, see the sessions proposed, add your own suggestions and then vote.</p>
<p>Also if you are looking for an opportunity to speak at usergroups then why not suggest a session?</p>
<p>Well done to everyone trying to get this group going. I hope to be able to attend, work allowing. Keep an eye on the <a href="http://leeds-sharp.org/">user group web site</a>, <a href="https://twitter.com/#!/LeedsSharp">Twitter @LeedsSharp</a> and <a href="https://groups.google.com/forum/?fromgroups#!forum/leeds-sharp">discussion group</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Solution to “We couldn&#39;t get your developer Licence for Windows 8 Consumer Preview” on Win8 Server beta</title>
      <link>https://blog.richardfennell.net/posts/solution-to-we-couldnt-get-your-developer-licence-for-windows-8-consumer-preview-on-win8-server-beta/</link>
      <pubDate>Tue, 29 May 2012 10:38:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/solution-to-we-couldnt-get-your-developer-licence-for-windows-8-consumer-preview-on-win8-server-beta/</guid>
      <description>&lt;p&gt;Whilst preparing for the &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=A%20Day%20with%20Windows%208&#34;&gt;Black Marble Windows 8 event tomorrow (still places available for this free event)&lt;/a&gt; I hit a problem with VS11 Beta and Metro projects.&lt;/p&gt;
&lt;p&gt;I was sorting out a demonstration of remote debugging: I had a Samsung tablet PC running Win8CP and intended to use a Win8 Server CP instance running inside a VirtualBox VM on a Windows 7 PC.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;I installed VS11 Ultimate Beta on both devices&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the Win8 Tablet I loaded the VS11 remote debugger monitor&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst preparing for the <a href="http://www.blackmarble.co.uk/events.aspx?event=A%20Day%20with%20Windows%208">Black Marble Windows 8 event tomorrow (still places available for this free event)</a> I hit a problem with VS11 Beta and Metro projects.</p>
<p>I was sorting out a demonstration of remote debugging: I had a Samsung tablet PC running Win8CP and intended to use a Win8 Server CP instance running inside a VirtualBox VM on a Windows 7 PC.</p>
<ol>
<li>
<p>I installed VS11 Ultimate Beta on both devices</p>
</li>
<li>
<p>On the Win8 Tablet I loaded the VS11 remote debugger monitor</p>
</li>
<li>
<p>On the Win8 Server I loaded VS11</p>
</li>
<li>
<p>I created a new Metro application project</p>
</li>
<li>
<p>I set the project properties to use remote debugging and target my tablet</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_44.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_44.png" title="image"></a></p>
</li>
<li>
<p>I pressed F5 to debug</p>
</li>
<li>
<p>And got the error <em>&ldquo;We couldn&rsquo;t get your developer Licence for Windows 8 Consumer Preview. Please check your internet connection and try again.&rdquo;</em></p>
</li>
</ol>
<p>I had a good look around for a solution; most posts pointed to issues with upgrading Win7 to Win8, with talk of hotfixes, patches and waiting 2 days for licenses to expire. It turns out it was none of these. I eventually found the <a href="http://social.msdn.microsoft.com/Forums/nl-NL/Vsexpressinstall/thread/7fee6518-97a7-4619-9bd4-bb6bdfe04dda">answer on the MSDN forums</a>: you just can’t do this at present with the Win8 Server beta. It seems it did work with the Build conference release, but not now. You have to use the desktop build of Win8CP for remote debugging.</p>
<p>So I created another VM using the desktop Win8CP, followed the same process, and it was all fine. At the point where I had previously got the error dialog, it asked for a LiveID and all proceeded as expected. I can now do a nice remote debugging demo.</p>
]]></content:encoded>
    </item>
    <item>
      <title>On the panel at Tech.Days: Visual Studio 11 Online Event</title>
      <link>https://blog.richardfennell.net/posts/on-the-panel-at-tech-days-visual-studio-11-online-event/</link>
      <pubDate>Mon, 28 May 2012 21:08:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/on-the-panel-at-tech-days-visual-studio-11-online-event/</guid>
      <description>&lt;p&gt;A while ago I recorded a video &lt;a href=&#34;http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone&#34;&gt;Visual Studio Team Foundation for Everyone&lt;/a&gt;, this forms part of &lt;a href=&#34;http://blogs.msdn.com/b/ukmsdn/archive/2012/05/21/tech-days-visual-studio-11-online-event-28th-june-2012-1pm-to-3pm.aspx&#34;&gt;Tech.Days: Visual Studio 11 Online Event, 28th June 2012, 1pm to 3pm&lt;/a&gt;. To lift the agenda from the MSDN site&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This event will cover the key new features and capabilities that Visual Studio 11 offers software development teams, and the opportunity to ask questions to the UK Developer Tools team and partners. There’ll be something for almost anyone involved in software development, from Project Managers &amp;amp; Scrum Masters to developers and testers.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A while ago I recorded a video, <a href="http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone">Visual Studio Team Foundation for Everyone</a>, which forms part of <a href="http://blogs.msdn.com/b/ukmsdn/archive/2012/05/21/tech-days-visual-studio-11-online-event-28th-june-2012-1pm-to-3pm.aspx">Tech.Days: Visual Studio 11 Online Event, 28th June 2012, 1pm to 3pm</a>. To lift the agenda from the MSDN site:</p>
<p><em>This event will cover the key new features and capabilities that Visual Studio 11 offers software development teams, and the opportunity to ask questions to the UK Developer Tools team and partners. There’ll be something for almost anyone involved in software development, from Project Managers &amp; Scrum Masters to developers and testers.</em></p>
<p>So if you have any questions on TEE or any of the new features of VS/TFS11, why not <a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032513917&amp;Culture=en-GB">register</a>?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Skydrive pushes me over my broadband usage</title>
      <link>https://blog.richardfennell.net/posts/skydrive-pushes-me-over-my-broadband-usage/</link>
      <pubDate>Mon, 28 May 2012 21:00:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/skydrive-pushes-me-over-my-broadband-usage/</guid>
      <description>&lt;p&gt;I got back from a trip away to find an unexpected bill for broadband  through the letterbox. I have paid about the same each quarter for broadband for a good while now, I don’t see much variation as I rarely use my home phone, then again who does?&lt;/p&gt;
&lt;p&gt;This bill was nearly double, why?&lt;/p&gt;
&lt;p&gt;I think it was mostly due to setting up &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/26/Thoughts-on-the-new-Skydrive.aspx&#34;&gt;Skydrive&lt;/a&gt; to mirror my family photos and video as a backup, though this can’t explain it all; then again, my son has found &lt;a href=&#34;http://www.roblox.com/&#34;&gt;Roblox&lt;/a&gt;. In each month I went over my &lt;a href=&#34;http://bt.custhelp.com/app/answers/detail/a_id/10495/c/346/?s_cid=con_FURL_broadbandusagepolicy&#34;&gt;usage allowance&lt;/a&gt; it was costing me £5 per 5GB block. It adds up fast.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I got back from a trip away to find an unexpected bill for broadband  through the letterbox. I have paid about the same each quarter for broadband for a good while now, I don’t see much variation as I rarely use my home phone, then again who does?</p>
<p>This bill was nearly double, why?</p>
<p>I think it was mostly due to setting up <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/26/Thoughts-on-the-new-Skydrive.aspx">Skydrive</a> to mirror my family photos and video as a backup, though this can’t explain it all; then again, my son has found <a href="http://www.roblox.com/">Roblox</a>. In each month I went over my <a href="http://bt.custhelp.com/app/answers/detail/a_id/10495/c/346/?s_cid=con_FURL_broadbandusagepolicy">usage allowance</a> it was costing me £5 per 5GB block. It adds up fast.</p>
<p>On calling BT I found I could upgrade my package to a larger allowance and it worked out less than £1 more. The most irritating thing was they had been emailing me about my usage on my BT provided email address, an address I have never used.</p>
<p>So the top tip is: make sure your usage notifications go to an address you actually read.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD South West session on Unit testing in VS11</title>
      <link>https://blog.richardfennell.net/posts/ddd-south-west-session-on-unit-testing-in-vs11/</link>
      <pubDate>Sat, 26 May 2012 13:04:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-south-west-session-on-unit-testing-in-vs11/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session at &lt;a href=&#34;http://dddsouthwest.com&#34;&gt;DDDSW&lt;/a&gt; today. The session was completely demo driven so there are no slides to share, but the content of the session is covered in these blog posts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/27/Unit-testing-in-VS11Beta-and-getting-your-tests-to-run-on-the-new-TFSPreview-build-service.aspx&#34;&gt;Unit testing in VS11Beta and getting your tests to run on the new TFSPreview build service&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4 id=&#34;now-that-vs11-has-a-fake-library-do-i-still-need-typemock-isolator-to-fake-out-sharepoint&#34;&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx&#34;&gt;Now that VS11 has a fake library do I still need Typemock Isolator to fake out SharePoint?&lt;/a&gt;&lt;/h4&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;h4 id=&#34;more-on-using-the-vs11-fake-library-to-fake-out-sharepoint&#34;&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/05/05/More-on-using-the-VS11-fake-library-to-fake-out-SharePoint.aspx&#34;&gt;More on using the VS11 fake library to fake out SharePoint&lt;/a&gt;&lt;/h4&gt;
&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session at <a href="http://dddsouthwest.com">DDDSW</a> today. The session was completely demo driven so there are no slides to share, but the content of the session is covered in these blog posts:</p>
<ul>
<li>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/27/Unit-testing-in-VS11Beta-and-getting-your-tests-to-run-on-the-new-TFSPreview-build-service.aspx">Unit testing in VS11Beta and getting your tests to run on the new TFSPreview build service</a></p>
</li>
<li>
<h4 id="now-that-vs11-has-a-fake-library-do-i-still-need-typemock-isolator-to-fake-out-sharepoint"><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx">Now that VS11 has a fake library do I still need Typemock Isolator to fake out SharePoint?</a></h4>
</li>
<li>
<h4 id="more-on-using-the-vs11-fake-library-to-fake-out-sharepoint"><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/05/05/More-on-using-the-VS11-fake-library-to-fake-out-SharePoint.aspx">More on using the VS11 fake library to fake out SharePoint</a></h4>
</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Links for todays Typemock webinar</title>
      <link>https://blog.richardfennell.net/posts/links-for-todays-typemock-webinar/</link>
      <pubDate>Tue, 22 May 2012 13:39:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/links-for-todays-typemock-webinar/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my Typemock Isolator and SharePoint session. The links I mentioned were&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx&#34;&gt;Mocking Sharepoint for Design with Typemock Isolator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/06/update-on-using-typemock-isolator-to-allow-webpart-development-without-a-sharepoint-server.aspx&#34;&gt;Mocking SP2010 64bit Assemblies with Typemock Isolator&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I will post a link to the recording as soon as it is made available&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my Typemock Isolator and SharePoint session. The links I mentioned were</p>
<ul>
<li><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">Mocking Sharepoint for Design with Typemock Isolator</a></li>
<li><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/06/update-on-using-typemock-isolator-to-allow-webpart-development-without-a-sharepoint-server.aspx">Mocking SP2010 64bit Assemblies with Typemock Isolator</a> </li>
</ul>
<p>I will post a link to the recording as soon as it is made available</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading our blog server from BlogEngine 2.5 to 2.6</title>
      <link>https://blog.richardfennell.net/posts/upgrading-our-blog-server-from-blogengine-2-5-to-2-6/</link>
      <pubDate>Mon, 21 May 2012 20:51:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-our-blog-server-from-blogengine-2-5-to-2-6/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.dotnetblogengine.net/post/The-Next-Chapter-of-BlogEngineNET-Version-26.aspx&#34;&gt;A week ago version 2.6 of BlogEngine.net was released&lt;/a&gt;. This has plenty of new features such as a new &lt;a href=&#34;http://dotnetblogengine.net/image.axd?picture=2012%2f4%2ffilemanager.png&#34;&gt;file manager&lt;/a&gt;  and &lt;a href=&#34;http://dotnetblogengine.net/image.axd?picture=2012%2f4%2fimagetools.png&#34;&gt;image tools&lt;/a&gt;, but for us the most important was &lt;a href=&#34;http://allben.net/post/2012/04/15/New-Multiple-Blog-Feature-Site-Aggregation.aspx&#34;&gt;site aggregation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/08/My-experiences-moving-to-BlogEngineNET.aspx&#34;&gt;I posted about previously, we moved to BlogEngine from Community Server&lt;/a&gt; because we needed multi-blog hosting, but with BlogEngine 2.5 we had to write our own basic site aggregation by creating a custom theme that managed some RSS feed merging behind the scenes. Now with BlogEngine 2.6 this type of feature is available &lt;a href=&#34;http://allben.net/post/2012/04/15/New-Multiple-Blog-Feature-Site-Aggregation.aspx&#34;&gt;out of the box&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.dotnetblogengine.net/post/The-Next-Chapter-of-BlogEngineNET-Version-26.aspx">A week ago version 2.6 of BlogEngine.net was released</a>. This has plenty of new features such as a new <a href="http://dotnetblogengine.net/image.axd?picture=2012%2f4%2ffilemanager.png">file manager</a>  and <a href="http://dotnetblogengine.net/image.axd?picture=2012%2f4%2fimagetools.png">image tools</a>, but for us the most important was <a href="http://allben.net/post/2012/04/15/New-Multiple-Blog-Feature-Site-Aggregation.aspx">site aggregation</a>.</p>
<p>As <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/08/My-experiences-moving-to-BlogEngineNET.aspx">I posted about previously, we moved to BlogEngine from Community Server</a> because we needed multi-blog hosting, but with BlogEngine 2.5 we had to write our own basic site aggregation by creating a custom theme that managed some RSS feed merging behind the scenes. Now with BlogEngine 2.6 this type of feature is available <a href="http://allben.net/post/2012/04/15/New-Multiple-Blog-Feature-Site-Aggregation.aspx">out of the box</a>.</p>
<p>The <a href="http://blogengine.codeplex.com/wikipage?title=Upgrading%20to%20BlogEngine.NET%202.6">upgrade process was OK</a>: replace the contents of the IIS site folder with the new bits, set the SQL connection string, copy in our App_data and custom Themes, and run an upgrade SQL script.</p>
<p>The only issue I had was that in this process it seemed I had lost all our user accounts. A quick check showed that our 2.5 setup was using the blogs\[blogname]\users.xml file to hold the user IDs for each blog, whereas 2.6 uses the be_users SQL table. I think this was a by-product of our import process from Community Server.</p>
<p>The fix was not too bad</p>
<ul>
<li>
<p>Complete the BlogEngine 2.5-2.6 upgrade</p>
</li>
<li>
<p>I now had a be_users table with a row for each blog’s admin user, but no password set</p>
</li>
<li>
<p>Selecting a blog, I opened the [IISroot]\App_data\blogs\[blogname]\users.xml file to find a couple of entries, one for the admin account and one for the blog’s owner</p>
<pre><code>&lt;Users&gt;
  &lt;User&gt;
    &lt;UserName&gt;Admin&lt;/UserName&gt;
    &lt;Password&gt;abababababababababab#&lt;/Password&gt;
    &lt;Email&gt;webmaster@blackmarble.co.uk&lt;/Email&gt;
    &lt;LastLoginTime&gt;2007-12-05 20:46:40&lt;/LastLoginTime&gt;
  &lt;/User&gt;
  &lt;User&gt;
    &lt;UserName&gt;richard&lt;/UserName&gt;
    &lt;Password&gt;zyzyzyzyzyzyzyzyzyzyzyzyz#&lt;/Password&gt;
    &lt;Email&gt;richard@blackmarble.co.uk&lt;/Email&gt;
    &lt;LastLoginTime&gt;2011-12-05 14:37:59&lt;/LastLoginTime&gt;
  &lt;/User&gt;
&lt;/Users&gt;</code></pre>
</li>
<li>
<p>Firstly, you need to log in to the blog using the default admin account (password: admin). It is of course a good idea to reset the admin password and contact email at this point.</p>
</li>
<li>
<p>Next go to the blog’s control panel users section and add a user matching the missing account, in my case an admin user called Richard. You can also set the email address and the password if you know what you want, and the job is done.<br>
In my case, though I knew what to set the admin user’s password to, I did not know the blog owner’s old password. Setting it back was easy though: all I had to do was copy the password hash block from the XML file and paste it into the be_users password column for the newly created user account.</p>
</li>
</ul>
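<p>The manual hash copy above can be expressed as a single SQL update. This is a sketch only: the table and column names are assumed from a typical BlogEngine 2.6 be_users schema, the hash value is a placeholder, and if you run multiple blogs you will also need to filter on the blog ID column, so check it against your own database first.</p>
<pre><code>-- Copy the old password hash from blogs\[blogname]\users.xml
-- into the newly created be_users row (values are placeholders)
UPDATE be_users
SET    Password = 'zyzyzyzyzyzyzyzyzyzyzyzyz#'  -- hash copied from users.xml
WHERE  UserName = 'richard';
</code></pre>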
<p>Once this was done we could all login with our existing accounts.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New .NET usergroup in Leeds</title>
      <link>https://blog.richardfennell.net/posts/new-net-usergroup-in-leeds/</link>
      <pubDate>Thu, 17 May 2012 20:17:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-net-usergroup-in-leeds/</guid>
      <description>&lt;p&gt;Please to say there is a &lt;a href=&#34;http://leeds-sharp.org/&#34;&gt;new .NET usergroup leeds-sharp.org&lt;/a&gt; in Leeds, their inaugural meeting is on the 31st of May. I hope to be able to make it if I am in Yorkshire that day.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Pleased to say there is a <a href="http://leeds-sharp.org/">new .NET usergroup leeds-sharp.org</a> in Leeds; their inaugural meeting is on the 31st of May. I hope to be able to make it if I am in Yorkshire that day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A bit busy with upcoming presentations</title>
      <link>https://blog.richardfennell.net/posts/a-bit-busy-with-upcoming-presentations/</link>
      <pubDate>Tue, 15 May 2012 11:22:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-bit-busy-with-upcoming-presentations/</guid>
      <description>&lt;p&gt;I am a bit busy with upcoming  presentations, all of which are free to attend&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Monday 21st May - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=How%20to%20Kickstart%20your%20ALM%20Process%20%28Online%29&#34;&gt;Black Marble Webinar ‘How to Kick start your ALM Process ‘&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Tuesday 22nd May – &lt;a href=&#34;http://www.typemock.com/using-typemock-isolator-to-speed-up-sharepoint-development-may-webinar&#34;&gt;Typemock Webinar ‘Using Typemock Isolator to speed up the development of SharePoint Web Parts’&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Saturday 26th May – &lt;a href=&#34;http://www.dddsouthwest.com/Agenda/tabid/55/Default.aspx&#34;&gt;DDD South West (Bristol) ‘Unit Testing with Visual Studio 11 Beta’&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Wednesday 30th May – &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=A%20Day%20with%20Windows%208&#34;&gt;Black Marble (Holiday Inn Leeds) ‘A day with Windows 8’&lt;/a&gt; – I’m talking on development in the afternoon&lt;/li&gt;
&lt;li&gt;Wednesday 13th  June  - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=How%20to%20Plan%20your%20Application%20Lifecycle%20Implementation%20with%20TFS11&#34;&gt;Black Marble (BM Offices) ‘How to Plan your Application Lifecycle Implementation with TFS11’&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Wednesday 20th June – &lt;a href=&#34;http://www.nebytes.net/category/events.aspx&#34;&gt;NEBytes (Newcastle) ‘Overview of Visual Studio/TFS 11’&lt;/a&gt;  - Event To be confirmed&lt;/li&gt;
&lt;li&gt;Friday 22 June – &lt;a href=&#34;http://azureconference2012.eventbrite.com/?ebtv=C&amp;amp;ebtv=&#34;&gt;Black Marble Architect Track at the Microsoft Cloud Day (London)&lt;/a&gt; – My subject to be confirmed&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I think that is all for now; wow, it looks worse when they are all written down in one place. I had better get writing….&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am a bit busy with upcoming  presentations, all of which are free to attend</p>
<ul>
<li>Monday 21st May - <a href="http://www.blackmarble.co.uk/events.aspx?event=How%20to%20Kickstart%20your%20ALM%20Process%20%28Online%29">Black Marble Webinar ‘How to Kick start your ALM Process ‘</a></li>
<li>Tuesday 22nd May – <a href="http://www.typemock.com/using-typemock-isolator-to-speed-up-sharepoint-development-may-webinar">Typemock Webinar ‘Using Typemock Isolator to speed up the development of SharePoint Web Parts’</a></li>
<li>Saturday 26th May – <a href="http://www.dddsouthwest.com/Agenda/tabid/55/Default.aspx">DDD South West (Bristol) ‘Unit Testing with Visual Studio 11 Beta’</a></li>
<li>Wednesday 30th May – <a href="http://www.blackmarble.co.uk/events.aspx?event=A%20Day%20with%20Windows%208">Black Marble (Holiday Inn Leeds) ‘A day with Windows 8’</a> – I’m talking on development in the afternoon</li>
<li>Wednesday 13th  June  - <a href="http://www.blackmarble.co.uk/events.aspx?event=How%20to%20Plan%20your%20Application%20Lifecycle%20Implementation%20with%20TFS11">Black Marble (BM Offices) ‘How to Plan your Application Lifecycle Implementation with TFS11’</a></li>
<li>Wednesday 20th June – <a href="http://www.nebytes.net/category/events.aspx">NEBytes (Newcastle) ‘Overview of Visual Studio/TFS 11’</a>  - Event To be confirmed</li>
<li>Friday 22 June – <a href="http://azureconference2012.eventbrite.com/?ebtv=C&amp;ebtv=">Black Marble Architect Track at the Microsoft Cloud Day (London)</a> – My subject to be confirmed</li>
</ul>
<p>I think that is all for now; wow, it looks worse when they are all written down in one place. I had better get writing….</p>
]]></content:encoded>
    </item>
    <item>
      <title>Interested in Windows 8 Development?</title>
      <link>https://blog.richardfennell.net/posts/interested-in-windows-8-development/</link>
      <pubDate>Sun, 13 May 2012 16:15:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/interested-in-windows-8-development/</guid>
      <description>&lt;p&gt;Are you interested in Windows 8 Development?&lt;/p&gt;
&lt;p&gt;Well if you are Black Marble are running  &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=A%20Day%20with%20Windows%208&#34;&gt;a free event on Windows 8 on the 30th of May in Leeds&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Also Jon Fowler, one of our development leads, has just started a series of &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/jfowler/post/2012/05/12/Converting-Prism-to-Net-for-Metro-Style-Apps.aspx&#34;&gt;blog posts on converting Prism to .NET for Windows 8 metro style apps&lt;/a&gt;. This is all tied to his port of &lt;a href=&#34;http://metroprism.codeplex.com/&#34;&gt;Prism to Metro which you can download from Codeplex&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Enjoy….&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Are you interested in Windows 8 Development?</p>
<p>Well if you are Black Marble are running  <a href="http://www.blackmarble.co.uk/events.aspx?event=A%20Day%20with%20Windows%208">a free event on Windows 8 on the 30th of May in Leeds</a>.</p>
<p>Also Jon Fowler, one of our development leads, has just started a series of <a href="http://blogs.blackmarble.co.uk/blogs/jfowler/post/2012/05/12/Converting-Prism-to-Net-for-Metro-Style-Apps.aspx">blog posts on converting Prism to .NET for Windows 8 metro style apps</a>. This is all tied to his port of <a href="http://metroprism.codeplex.com/">Prism to Metro which you can download from Codeplex</a>.</p>
<p>Enjoy….</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD10 1st September 2012</title>
      <link>https://blog.richardfennell.net/posts/ddd10-1st-september-2012/</link>
      <pubDate>Sat, 12 May 2012 17:46:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd10-1st-september-2012/</guid>
      <description>&lt;p&gt;I see after a few days of issues the &lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd10/&#34;&gt;DDD10 site&lt;/a&gt; is back up and ready for submission. I have submitted a session, why don’t you – it will be fun!&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;08/05 – Call for speakers opens&lt;/li&gt;
&lt;li&gt;02/07 – Call for speakers closes&lt;/li&gt;
&lt;li&gt;03/07 – Voting opens&lt;/li&gt;
&lt;li&gt;24/07 – Voting closes&lt;/li&gt;
&lt;li&gt;27/07 – Schedule opens&lt;/li&gt;
&lt;li&gt;30/07 – Registration opens&lt;/li&gt;
&lt;li&gt;01/09 – DDD10&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>I see after a few days of issues the <a href="http://developerdeveloperdeveloper.com/ddd10/">DDD10 site</a> is back up and ready for submission. I have submitted a session, why don’t you – it will be fun!</p>
<ul>
<li>08/05 – Call for speakers opens</li>
<li>02/07 – Call for speakers closes</li>
<li>03/07 – Voting opens</li>
<li>24/07 – Voting closes</li>
<li>27/07 – Schedule opens</li>
<li>30/07 – Registration opens</li>
<li>01/09 – DDD10</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Doing a webinar on Typemock Isolator and Sharepoint</title>
      <link>https://blog.richardfennell.net/posts/doing-a-webinar-on-typemock-isolator-and-sharepoint/</link>
      <pubDate>Thu, 10 May 2012 23:05:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/doing-a-webinar-on-typemock-isolator-and-sharepoint/</guid>
      <description>&lt;p&gt;I am  presenting a webinar on Typemock Isolator and Sharepoint on the 22nd of May. &lt;a href=&#34;http://www.typemock.com/using-typemock-isolator-to-speed-up-sharepoint-development-may-webinar&#34;&gt;For more details and to register see the Typemock site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am  presenting a webinar on Typemock Isolator and Sharepoint on the 22nd of May. <a href="http://www.typemock.com/using-typemock-isolator-to-speed-up-sharepoint-development-may-webinar">For more details and to register see the Typemock site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>You do need that PS2 keyboard on that old PC, not a USB one</title>
      <link>https://blog.richardfennell.net/posts/you-do-that-that-ps2-keyboard-on-that-old-pc-not-a-usb-one/</link>
      <pubDate>Thu, 10 May 2012 12:52:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-do-that-that-ps2-keyboard-on-that-old-pc-not-a-usb-one/</guid>
      <description>&lt;p&gt;Whilst using &lt;a href=&#34;http://www.dban.org/&#34;&gt;DBAN&lt;/a&gt; to clean down some very old PCs for disposal to new homes at good causes I hit a stupid gotcha.&lt;/p&gt;
&lt;p&gt;I booted off the DBAN CDRom but could not continue beyond the first screen as it did not detect I had pressed the return key.&lt;/p&gt;
&lt;p&gt;Turns out the PCs were so old that, though their BIOS allowed USB keyboards (and I could enter setup and edit BIOS settings with a USB keyboard) the Linux kernel on the CDRom could not detect them. Once I switched to an old PS2 keyboard it all worked fine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst using <a href="http://www.dban.org/">DBAN</a> to clean down some very old PCs for disposal to new homes at good causes I hit a stupid gotcha.</p>
<p>I booted off the DBAN CDRom but could not continue beyond the first screen as it did not detect I had pressed the return key.</p>
<p>Turns out the PCs were so old that, though their BIOS allowed USB keyboards (and I could enter setup and edit BIOS settings with a USB keyboard) the Linux kernel on the CDRom could not detect them. Once I switched to an old PS2 keyboard it all worked fine.</p>
<p>I found the same problem when putting Ubuntu onto the same PCs; during the install I had to use a PS2 keyboard, but once the install completed it was perfectly happy with a USB keyboard.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A fix for my failure to login to TFSpreview.com problems</title>
      <link>https://blog.richardfennell.net/posts/a-fix-for-my-failure-to-login-to-tfspreview-com-problems/</link>
      <pubDate>Thu, 10 May 2012 10:49:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-fix-for-my-failure-to-login-to-tfspreview-com-problems/</guid>
      <description>&lt;p&gt;I use a number of site collections on the Azure hosted Team Foundation Service (&lt;a href=&#34;http://tfspreview.com&#34;&gt;http://tfspreview.com&lt;/a&gt;); I have just solved a problem where I could not log in to one of them via Visual Studio (2010, Dev11 and also TEE 11; I tried them all), but could log in to my other collections. I could also access the collection if I logged in via a browser, just not with VS; all very good for work item management, but not much help for source code check-ins.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I use a number of site collections on the Azure hosted Team Foundation Service (<a href="http://tfspreview.com">http://tfspreview.com</a>); I have just solved a problem where I could not log in to one of them via Visual Studio (2010, Dev11 and also TEE 11; I tried them all), but could log in to my other collections. I could also access the collection if I logged in via a browser, just not with VS; all very good for work item management, but not much help for source code check-ins.</p>
<p><strong>The Problem</strong></p>
<p>The problem was that when I loaded Visual Studio and tried to select the collection <a href="https://mycollection.tfspreview.com">https://mycollection.tfspreview.com</a> in Team Explorer, the ‘Sign into Team Foundation Server’ form loaded and unloaded a few times whilst trying to redirect to an authentication provider. I then ended up with a TF31003 error. A retry or use of different credentials did not help.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_42.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_42.png" title="image"></a></p>
<p>If I deleted the server from the list and tried to re-add it I got similar results, ending up at the LiveID sign-in screen, but with just an error message and no means to enter details.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image6.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image6_thumb.png" title="image"></a></p>
<p><strong>The Solution</strong></p>
<p>The problem was due to cached LiveID credentials. It was suggested I clear IE9 cookies but this did not help. In the end I found the solution in the Credential Manager (Control Panel &gt; User Accounts &gt; Manage Users &gt; Advanced &gt; Manage Passwords).</p>
<p>I had recently installed <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/26/Thoughts-on-the-new-Skydrive.aspx">Skydrive on my PC</a>. This had stored a cached LiveID, and it seems this cached Skydrive LiveID was being used to access TFSpreview. Unfortunately this was my personal LiveID, not my work one. The personal LiveID had no rights to access the problem site collection, but I could get into the other collections because both my personal and work LiveIDs had access.</p>
<p>So I deleted the offending cached LiveID and tried Team Explorer again and this time I was prompted for a LiveID (though the user name field did contain the wrong LiveID, I could correct it) and I could login.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_43.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_43.png" title="image"></a></p>
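<p>As an aside, if you prefer a command line to the Control Panel route, cached credentials can also be listed and removed with the built-in cmdkey tool. A sketch only; the target name to delete is whatever the LiveID entry is called in your /list output, not a value I can give here:</p>
<pre><code>REM list all cached credentials
cmdkey /list

REM delete the offending cached entry (use the target name shown by /list)
cmdkey /delete:&lt;target name from the list&gt;
</code></pre>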
<p>I then loaded SkyDrive (which I had exited) and it prompted me to re-enter my credentials. It recreated its cached credentials and seemed happy.</p>
<p>Interestingly, they did not seem to cause a problem this time; maybe it is an entry-order issue?</p>
<p>I need to keep an eye on it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Build error, &#39;Index (zero based) must be greater than or equal to zero and less than the size of the argument list” when building XAML projects</title>
      <link>https://blog.richardfennell.net/posts/tfs-build-error-index-zero-based-must-be-greater-than-or-equal-to-zero-and-less-than-the-size-of-the-argument-list-when-building-xaml-projects/</link>
      <pubDate>Wed, 09 May 2012 16:45:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-build-error-index-zero-based-must-be-greater-than-or-equal-to-zero-and-less-than-the-size-of-the-argument-list-when-building-xaml-projects/</guid>
      <description>&lt;p&gt;We had an interesting issue of late building a &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=News&amp;amp;title=Rent%20your%20Samsung%20SUR40%20for%20Microsoft%20Surface%20from%20Black%20Marble&#34;&gt;Surface2 application solution&lt;/a&gt; within a TFS 2010 build system. The solution built fine in VS2010 on both my development PC and also using VS2010 on my TFS build box (both Windows 7 64bit PC), so I know I had all the right SDKs in place. However if I built it via TFS 2010 Team Build I got the error&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_40.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_40.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;App.xaml (136): Unknown build error, &amp;lsquo;Index (zero based) must be greater than or equal to zero and less than the size of the argument list…&amp;rsquo;&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We had an interesting issue of late building a <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=News&amp;title=Rent%20your%20Samsung%20SUR40%20for%20Microsoft%20Surface%20from%20Black%20Marble">Surface2 application solution</a> within a TFS 2010 build system. The solution built fine in VS2010 on both my development PC and also using VS2010 on my TFS build box (both Windows 7 64bit PC), so I know I had all the right SDKs in place. However if I built it via TFS 2010 Team Build I got the error</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_40.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_40.png" title="image"></a></p>
<p><em>App.xaml (136): Unknown build error, &lsquo;Index (zero based) must be greater than or equal to zero and less than the size of the argument list…&rsquo;</em></p>
<p>This error appeared after we added this new block of XAML code</p>
<blockquote>
<pre><code>&lt;VisualTransition GeneratedDuration="0:0:0.2"&gt;
  &lt;VisualTransition.GeneratedEasingFunction&gt;
    &lt;CircleEase EasingMode="EaseInOut"/&gt;  &lt;!-- this was the line the error was reported on --&gt;
  &lt;/VisualTransition.GeneratedEasingFunction&gt;
&lt;/VisualTransition&gt;</code></pre>
</blockquote>
<p>I assumed the issue was that Visual Studio was somehow able to resolve an assembly reference that MSBuild could not.</p>
<p>So to try to resolve this, I copied the MSBuild command line being run by the TFS build from the build log and ran it in a command prompt on my build box. Happily I got the same error, so at least it was repeatable. I then removed options from the command line until I had the minimum that still gave the error. I ended up with</p>
<blockquote>
<p><em>C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe &ldquo;C:\Builds\7\Surface\External Concierge CI\Sources\BlackMarble Concierge.sln&rdquo;</em></p></blockquote>
<p>If I changed to</p>
<blockquote>
<p><em>MSBuild.exe &ldquo;C:\Builds\7\Surface\External Concierge CI\Sources\BlackMarble Concierge.sln&rdquo;</em></p></blockquote>
<p>the error went away, so it had to be the version of MSBuild. When I used the 32-bit version (picked up by default via the PATH) all was OK; the 64-bit version gave the error.</p>
<p>So my fix was just to force the build to run x86 and all was OK</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_41.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_41.png" title="image"></a></p>
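<p>For reference, forcing the 32-bit toolchain from a command line is simply a matter of which MSBuild.exe you pick. The paths below assume a default .NET 4.0 install and a placeholder solution name; they sketch the idea rather than the exact build definition change:</p>
<pre><code>REM 64-bit MSBuild - gave the error in this case
%windir%\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe MySolution.sln

REM 32-bit MSBuild - built cleanly
%windir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe MySolution.sln
</code></pre>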
]]></content:encoded>
    </item>
    <item>
      <title>More on using the VS11 fake library to fake out SharePoint</title>
      <link>https://blog.richardfennell.net/posts/more-on-using-the-vs11-fake-library-to-fake-out-sharepoint/</link>
      <pubDate>Sat, 05 May 2012 15:35:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-using-the-vs11-fake-library-to-fake-out-sharepoint/</guid>
      <description>&lt;p&gt;I recently &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx&#34;&gt;posted&lt;/a&gt; on how you could use the new fakes tools in VS11 to fake out SharePoint for testing purposes. I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx#comment&#34;&gt;received comments&lt;/a&gt; on how I could make my Shim logic easier to read, so thought I would revisit the post. This led me down a bit of a complex trail, with thanks to &lt;a href=&#34;http://www.peterprovost.org/&#34;&gt;Pete Provost&lt;/a&gt; for pointing the way out!&lt;/p&gt;
&lt;p&gt;When I did the previous post I had used SP2007, as I was comparing using Microsoft Fakes with a similar sample I had written ages ago for Typemock Isolator. There was no real plan to this choice, it was just what I had to hand at the time. This time I decided to use SP2010; this was the process that actually worked (more on my mistakes later) …&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx">posted</a> on how you could use the new fakes tools in VS11 to fake out SharePoint for testing purposes. I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx#comment">received comments</a> on how I could make my Shim logic easier to read, so thought I would revisit the post. This led me down a bit of a complex trail, with thanks to <a href="http://www.peterprovost.org/">Pete Provost</a> for pointing the way out!</p>
<p>When I did the previous post I had used SP2007, as I was comparing using Microsoft Fakes with a similar sample I had written ages ago for Typemock Isolator. There was no real plan to this choice, it was just what I had to hand at the time. This time I decided to use SP2010; this was the process that actually worked (more on my mistakes later) …</p>
<ol>
<li>Using a Windows 7 PC that did not have SP2010 installed, I created a new C# Class Library project in VS11 Beta</li>
<li>I added a reference to Microsoft.SharePoint.DLL (this was referenced from a local folder that contained all the DLLs from the <a href="http://mysharepointwork.blogspot.co.uk/2010/02/sharepoint-14-hive-directory-structure.html">SP2010 14 hive</a> and also the GAC)</li>
<li>THIS IS THE IMPORTANT BIT – I changed the project to target .NET 4.0, not the default 4.5. Now, I could have changed to .NET 3.5, which is what SP2010 targets, but this would mean I could not use MSTest as, since VS2010, this has targeted .NET 4.0. I could of course have changed to another testing framework that can target .NET 3.5, such as nUnit, as discussed in my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/27/Unit-testing-in-VS11Beta-and-getting-your-tests-to-run-on-the-new-TFSPreview-build-service.aspx">previous post on the VS11 test runner</a>.</li>
<li>You can now right click on the Microsoft.SharePoint.DLL reference and ‘add fakes assembly’. A warning here: adding this reference is a bit slow, it took well over a minute on my PC. If you look in the VS Output window you see a message that the process is starting, then nothing until it finishes; be patient, you only have to do it once! I understand that you can edit the .fakes XML file to reduce the scope of what is faked, which might help reduce the generation time. I have not experimented here yet.</li>
<li>You should now see a new reference to the Microsoft.SharePoint.14.0.0.0.Fakes.DLL, and you can start to write your tests</li>
</ol>
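<p>The framework retarget in step 3 is just a one-line project file change. A sketch of the relevant fragment of the .csproj, with all other properties omitted:</p>
<pre><code>&lt;PropertyGroup&gt;
  &lt;!-- retarget from the VS11 default of v4.5 --&gt;
  &lt;TargetFrameworkVersion&gt;v4.0&lt;/TargetFrameworkVersion&gt;
&lt;/PropertyGroup&gt;
</code></pre>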
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_39.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_39.png" title="image"></a></p></blockquote>
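<p>On the point in step 4 about editing the .fakes file to reduce generation time: the idea is to clear the default shim generation and list only the types you need. A sketch, assuming the schema from the VS11 Beta fakes documentation and two example SharePoint types; as I said, I have not measured the effect myself:</p>
<pre><code>&lt;Fakes xmlns="http://schemas.microsoft.com/fakes/2011/"&gt;
  &lt;Assembly Name="Microsoft.SharePoint" Version="14.0.0.0"/&gt;
  &lt;ShimGeneration&gt;
    &lt;Clear/&gt;
    &lt;Add FullName="Microsoft.SharePoint.SPSite"/&gt;
    &lt;Add FullName="Microsoft.SharePoint.SPWeb"/&gt;
  &lt;/ShimGeneration&gt;
&lt;/Fakes&gt;
</code></pre>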
<p>So why did I get lost? Well, before I changed the targeted framework, I had tried to keep adding extra references to DLLs that were referenced by the DLL I was trying to fake, just as <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx">mentioned in my previous post</a>. This went on and on, adding many SharePoint and supporting DLLs, and I still ended up with errors and no Microsoft.SharePoint.14.0.0.0.Fakes.DLL. In fact this is a really bad way to try to get out of the problem, as it does not help and you get strange warnings and errors about failures in faking that are not important or relevant, e.g.</p>
<p><em>&ldquo;ShimTest\obj\Debug\Fakes\mspf.csproj&rdquo; (default target) (1) -&gt; (CoreCompile target) -&gt; &ldquo;ShimTest\f.cs(279923,32): error CS0544: &lsquo;Microsoft.SharePoint.ApplicationPages.WebControls.Fakes.StubAjaxCalendarView.ItemType&rsquo;: cannot override because &lsquo;Microsoft.SharePoint.WebControls.SPCalendarBase.ItemType&rsquo; is not a property&rdquo;</em></p>
<p>The key here is that you must target a framework compatible with the thing you are trying to fake. For SP2010 this should really be .NET 3.5, but you seem to get away with .NET 4.0; 4.5 is certainly a step too far. If you target the wrong framework you can end up in this chain of added dependency references that you don’t need, which are confusing at best and may be causing the errors rather than fixing them. In my case it seems a reference to Microsoft.SharePoint.Library.DLL stopped everything working, even if you then switch to the correct framework. When all is working you don’t need to add the dependent references; this is all resolved behind the scenes, not by adding them explicitly.</p>
<p>So once I had my new clean project, with the correct framework targeted and just the right assemblies referenced and faked I could write my tests, so now to experiment a bit more.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Thoughts on my Channel9 post</title>
      <link>https://blog.richardfennell.net/posts/thoughts-on-my-channel9-post/</link>
      <pubDate>Sat, 05 May 2012 09:49:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thoughts-on-my-channel9-post/</guid>
      <description>&lt;p&gt;After hearing my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/24/My-video-on-using-Team-Explorer-Everywhere-is-published-on-Channel9.aspx&#34;&gt;TEE video on Channel9&lt;/a&gt; mentioned on &lt;a href=&#34;http://www.radiotfs.com/Show/42/WhatWasTheQuestion?&#34;&gt;Radio TFS&lt;/a&gt; I thought I should watch it through, I had only found time to do a quick look previously. This is all part of my on-going &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/07/25/once-more-with-feeling-watching-yourself-on-video.aspx&#34;&gt;self review process&lt;/a&gt;, a form of self torture.&lt;/p&gt;
&lt;p&gt;It seems the issues I mentioned last time are still there, I still have too many err’s. The thing that stood out the most was I looked like a very shifty newsreader. My movement behind the table and losing eye contact with the camera were too noticeable to me.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After hearing my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/04/24/My-video-on-using-Team-Explorer-Everywhere-is-published-on-Channel9.aspx">TEE video on Channel9</a> mentioned on <a href="http://www.radiotfs.com/Show/42/WhatWasTheQuestion?">Radio TFS</a>, I thought I should watch it through; previously I had only found time for a quick look. This is all part of my ongoing <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/07/25/once-more-with-feeling-watching-yourself-on-video.aspx">self-review process</a>, a form of self-torture.</p>
<p>It seems the issues I mentioned last time are still there: I still have too many errs. What stood out most was that I looked like a very shifty newsreader; my movement behind the table and loss of eye contact with the camera were too noticeable to me.</p>
<p>That said, I am happy with how it came out. It was great working with a professional crew, and you can see how good they can make a video look with proper lights, camera work and editing.</p>
<p>On the whole I am very happy with it; I just need to ‘love the camera’ a bit more.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Thoughts on the new Skydrive</title>
      <link>https://blog.richardfennell.net/posts/thoughts-on-the-new-skydrive/</link>
      <pubDate>Thu, 26 Apr 2012 21:36:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thoughts-on-the-new-skydrive/</guid>
      <description>&lt;p&gt;I have swapped to the &lt;a href=&#34;http://windows.microsoft.com/en-GB/skydrive/home&#34;&gt;new version of Microsoft Skydrive&lt;/a&gt;, replacing my old &lt;a href=&#34;http://windows.microsoft.com/en-US/windows-live/mesh-devices-sync-upgrade-ui&#34;&gt;Mesh&lt;/a&gt; setup. It is a nice slick experience, allowing easy viewing of files on Skydrive from Windows and WP7. However, I do have couple of issues&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I used Mesh to sync photos from my Windows 7 Media Center up to cloud storage as a backup; I don’t want to lose all the family photos to a disk failure. This was simple with Mesh: just set up a sync. It is not so easy with the new Skydrive, which appears only as a folder in your user area. The only solution I can spot is to copy my photos into this folder e.g. xcopy d:\photos c:\users\richard\skydrive\photos. Once the copy is done the files are synced up to the cloud. With Mesh, if I added a file to my PC it synced without me doing anything; now I need to remember the xcopy (or whatever sync copy tool I am using), or have the copy run on a regular basis via a timer.&lt;/li&gt;
&lt;li&gt;Letting Skydrive start automatically on a laptop Windows PC is dangerous. I was on site today using my MiFi and in about 10 minutes used a whole day’s credit. So I would recommend changing your notification area settings to make sure you can see the Skydrive icon all the time, so you have a chance to see when it is syncing and can stop it when on a connection that costs you money.&lt;/li&gt;
&lt;/ol&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_38.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_38.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have swapped to the <a href="http://windows.microsoft.com/en-GB/skydrive/home">new version of Microsoft Skydrive</a>, replacing my old <a href="http://windows.microsoft.com/en-US/windows-live/mesh-devices-sync-upgrade-ui">Mesh</a> setup. It is a nice slick experience, allowing easy viewing of files on Skydrive from Windows and WP7. However, I do have a couple of issues:</p>
<ol>
<li>I used Mesh to sync photos from my Windows 7 Media Center up to cloud storage as a backup; I don’t want to lose all the family photos to a disk failure. This was simple with Mesh: just set up a sync. It is not so easy with the new Skydrive, which appears only as a folder in your user area. The only solution I can spot is to copy my photos into this folder e.g. xcopy d:\photos c:\users\richard\skydrive\photos. Once the copy is done the files are synced up to the cloud. With Mesh, if I added a file to my PC it synced without me doing anything; now I need to remember the xcopy (or whatever sync copy tool I am using), or have the copy run on a regular basis via a timer.</li>
<li>Letting Skydrive start automatically on a laptop Windows PC is dangerous. I was on site today using my MiFi and in about 10 minutes used a whole day’s credit. So I would recommend changing your notification area settings to make sure you can see the Skydrive icon all the time, so you have a chance to see when it is syncing and can stop it when on a connection that costs you money.</li>
</ol>
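<p>As a concrete sketch of the &lsquo;copy on a timer&rsquo; idea above (the paths and task name here are illustrative, not my actual setup), a mirrored robocopy run can be registered with the Windows Task Scheduler; both robocopy and schtasks ship with Windows 7:</p>

```bat
:: Mirror the photo folder into the SkyDrive folder; /MIR copies new and
:: changed files and removes files deleted from the source. Paths are examples.
robocopy "D:\Photos" "C:\Users\Richard\SkyDrive\Photos" /MIR /R:2 /W:5

:: Register the same command to run nightly at 2am (illustrative task name).
schtasks /Create /TN "SkyDrivePhotoSync" /SC DAILY /ST 02:00 ^
  /TR "robocopy D:\Photos C:\Users\Richard\SkyDrive\Photos /MIR /R:2 /W:5"
```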
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_38.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_38.png" title="image"></a></p></blockquote>
<p>So any comments, or better ways to do the sync?</p>
]]></content:encoded>
    </item>
    <item>
      <title>My video on using Team Explorer Everywhere is published on  Channel9</title>
      <link>https://blog.richardfennell.net/posts/my-video-on-using-team-explorer-everywhere-is-published-on-channel9/</link>
      <pubDate>Tue, 24 Apr 2012 16:58:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-video-on-using-team-explorer-everywhere-is-published-on-channel9/</guid>
      <description>&lt;p&gt;I recently recorded a &lt;a href=&#34;http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone&#34;&gt;video on using Visual Studio Team Explorer Everywhere&lt;/a&gt;, this has today been published in the &lt;a href=&#34;http://channel9.msdn.com/Events/TechDays/UK-Tech-Days&#34;&gt;UK Techdays section of Channel 9&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Hope you find it useful.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently recorded a <a href="http://channel9.msdn.com/Events/TechDays/UK-Tech-Days/Visual-Studio-Team-Foundation-for-Everyone">video on using Visual Studio Team Explorer Everywhere</a>; it has today been published in the <a href="http://channel9.msdn.com/Events/TechDays/UK-Tech-Days">UK Techdays section of Channel 9</a>.</p>
<p>Hope you find it useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Back from holiday to find my DDD Southwest session has been accepted</title>
      <link>https://blog.richardfennell.net/posts/back-from-holiday-to-find-my-ddd-southwest-session-has-been-accepted/</link>
      <pubDate>Mon, 16 Apr 2012 10:28:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/back-from-holiday-to-find-my-ddd-southwest-session-has-been-accepted/</guid>
      <description>&lt;p&gt;Got back from holiday today to find my &lt;a href=&#34;http://dddsouthwest.com/Agenda/tabid/55/Default.aspx&#34;&gt;DDDSW (26th May) session&lt;/a&gt; on unit testing in &lt;a href=&#34;http://dddsouthwest.com/ProposedSessions/tabid/69/Default.aspx&#34;&gt;Visual Studio 11 has been accepted&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I see the event is already full, but I hope to see some of you there.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://www.dddsouthwest.com/Portals/0/DDDSW4/DDDSouthWest4BadgeSmall.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Got back from holiday today to find my <a href="http://dddsouthwest.com/Agenda/tabid/55/Default.aspx">DDDSW (26th May) session</a> on unit testing in <a href="http://dddsouthwest.com/ProposedSessions/tabid/69/Default.aspx">Visual Studio 11 has been accepted</a>.</p>
<p>I see the event is already full, but I hope to see some of you there.</p>
<p><img loading="lazy" src="http://www.dddsouthwest.com/Portals/0/DDDSW4/DDDSouthWest4BadgeSmall.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Fix for problem faking two SPLists in a single unit test with Typemock Isolator has been released</title>
      <link>https://blog.richardfennell.net/posts/fix-for-problem-faking-two-splists-in-a-single-unit-test-with-typemock-isolator-has-been-released/</link>
      <pubDate>Mon, 02 Apr 2012 16:00:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fix-for-problem-faking-two-splists-in-a-single-unit-test-with-typemock-isolator-has-been-released/</guid>
      <description>&lt;p&gt;A &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/09/10/problem-faking-multiple-splists-with-typemock-isolator-in-a-single-test.aspx&#34;&gt;blogged a while ago about a problem faking multiple SPList with Typemock Isolator in a single test&lt;/a&gt;. With the release of &lt;a href=&#34;http://www.typemock.com/&#34;&gt;Typemock Isolator 7.0.4.0&lt;/a&gt; you no longer have to use the workaround I documented.&lt;/p&gt;
&lt;p&gt;You can now use the code if the originally planned, and it works as expected&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;public partial class TestPage : System.Web.UI.Page
{
    public TestPage()
    {
        var fakeWeb = Isolate.Fake.Instance&amp;lt;SPWeb&amp;gt;();

        Isolate.WhenCalled(() =&amp;gt; SPControl.GetContextWeb(null)).WillReturn(fakeWeb);

        // fake the &amp;#34;Centre Locations&amp;#34; list
        Isolate.WhenCalled(() =&amp;gt; fakeWeb.Lists[&amp;#34;Centre Locations&amp;#34;].Items).WillReturnCollectionValuesOf(CreateCentreList());
        // fake the &amp;#34;Map Zoom Areas&amp;#34; list
        Isolate.WhenCalled(() =&amp;gt; fakeWeb.Lists[&amp;#34;Map Zoom Areas&amp;#34;].Items).WillReturnCollectionValuesOf(CreateZoomAreaList());
    }

    private static List&amp;lt;SPListItem&amp;gt; CreateZoomAreaList()
    {
        var fakeZoomAreas = new List&amp;lt;SPListItem&amp;gt;();
        fakeZoomAreas.Add(CreateZoomAreaSPListItem(&amp;#34;London&amp;#34;, 51.49275, -0.137722222, 2, 14));
        return fakeZoomAreas;
    }

    private static List&amp;lt;SPListItem&amp;gt; CreateCentreList()
    {
        var fakeSites = new List&amp;lt;SPListItem&amp;gt;();
        fakeSites.Add(CreateCentreSPListItem(&amp;#34;Aberdeen &amp;#34;, &amp;#34;1 The Road,  Aberdeen &amp;#34;, &amp;#34;Aberdeen@test.com&amp;#34;, &amp;#34;www.Aberdeen.test.com&amp;#34;, &amp;#34;1111&amp;#34;, &amp;#34;2222&amp;#34;, 57.13994444, -2.113333333));
        fakeSites.Add(CreateCentreSPListItem(&amp;#34;Altrincham &amp;#34;, &amp;#34;1 The Road,  Altrincham &amp;#34;, &amp;#34;Altrincham@test.com&amp;#34;, &amp;#34;www.Altrincham.test.com&amp;#34;, &amp;#34;3333&amp;#34;, &amp;#34;4444&amp;#34;, 53.38977778, -2.349916667));
        return fakeSites;
    }

    private static SPListItem CreateCentreSPListItem(string title, string address, string email, string url, string telephone, string fax, double lat, double lng)
    {
        var fakeItem = Isolate.Fake.Instance&amp;lt;SPListItem&amp;gt;();
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Title&amp;#34;]).WillReturn(title);
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Address&amp;#34;]).WillReturn(address);
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Email Address&amp;#34;]).WillReturn(email);
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Site URL&amp;#34;]).WillReturn(url);
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Telephone&amp;#34;]).WillReturn(telephone);
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Fax&amp;#34;]).WillReturn(fax);
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Latitude&amp;#34;]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Longitude&amp;#34;]).WillReturn(lng.ToString());
        return fakeItem;
    }

    private static SPListItem CreateZoomAreaSPListItem(string areaName, double lat, double lng, double radius, int zoom)
    {
        var fakeItem = Isolate.Fake.Instance&amp;lt;SPListItem&amp;gt;();
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Title&amp;#34;]).WillReturn(areaName);
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Latitude&amp;#34;]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Longitude&amp;#34;]).WillReturn(lng.ToString());
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Radius&amp;#34;]).WillReturn(radius.ToString());
        Isolate.WhenCalled(() =&amp;gt; fakeItem[&amp;#34;Zoom&amp;#34;]).WillReturn(zoom.ToString());
        return fakeItem;
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;A check of the returned values shows&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/09/10/problem-faking-multiple-splists-with-typemock-isolator-in-a-single-test.aspx">blogged a while ago about a problem faking multiple SPLists with Typemock Isolator in a single test</a>. With the release of <a href="http://www.typemock.com/">Typemock Isolator 7.0.4.0</a> you no longer have to use the workaround I documented.</p>
<p>You can now use the code as originally planned, and it works as expected:</p>
<pre tabindex="0"><code>public partial class TestPage : System.Web.UI.Page
{
    public TestPage()
    {
        var fakeWeb = Isolate.Fake.Instance&lt;SPWeb&gt;();

        Isolate.WhenCalled(() =&gt; SPControl.GetContextWeb(null)).WillReturn(fakeWeb);

        // fake the &#34;Centre Locations&#34; list
        Isolate.WhenCalled(() =&gt; fakeWeb.Lists[&#34;Centre Locations&#34;].Items).WillReturnCollectionValuesOf(CreateCentreList());
        // fake the &#34;Map Zoom Areas&#34; list
        Isolate.WhenCalled(() =&gt; fakeWeb.Lists[&#34;Map Zoom Areas&#34;].Items).WillReturnCollectionValuesOf(CreateZoomAreaList());
    }

    private static List&lt;SPListItem&gt; CreateZoomAreaList()
    {
        var fakeZoomAreas = new List&lt;SPListItem&gt;();
        fakeZoomAreas.Add(CreateZoomAreaSPListItem(&#34;London&#34;, 51.49275, -0.137722222, 2, 14));
        return fakeZoomAreas;
    }

    private static List&lt;SPListItem&gt; CreateCentreList()
    {
        var fakeSites = new List&lt;SPListItem&gt;();
        fakeSites.Add(CreateCentreSPListItem(&#34;Aberdeen &#34;, &#34;1 The Road,  Aberdeen &#34;, &#34;Aberdeen@test.com&#34;, &#34;www.Aberdeen.test.com&#34;, &#34;1111&#34;, &#34;2222&#34;, 57.13994444, -2.113333333));
        fakeSites.Add(CreateCentreSPListItem(&#34;Altrincham &#34;, &#34;1 The Road,  Altrincham &#34;, &#34;Altrincham@test.com&#34;, &#34;www.Altrincham.test.com&#34;, &#34;3333&#34;, &#34;4444&#34;, 53.38977778, -2.349916667));
        return fakeSites;
    }

    private static SPListItem CreateCentreSPListItem(string title, string address, string email, string url, string telephone, string fax, double lat, double lng)
    {
        var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Title&#34;]).WillReturn(title);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Address&#34;]).WillReturn(address);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Email Address&#34;]).WillReturn(email);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Site URL&#34;]).WillReturn(url);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Telephone&#34;]).WillReturn(telephone);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Fax&#34;]).WillReturn(fax);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Latitude&#34;]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Longitude&#34;]).WillReturn(lng.ToString());
        return fakeItem;
    }

    private static SPListItem CreateZoomAreaSPListItem(string areaName, double lat, double lng, double radius, int zoom)
    {
        var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Title&#34;]).WillReturn(areaName);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Latitude&#34;]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Longitude&#34;]).WillReturn(lng.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Radius&#34;]).WillReturn(radius.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Zoom&#34;]).WillReturn(zoom.ToString());
        return fakeItem;
    }
}
</code></pre><p>A check of the returned values shows</p>
<ul>
<li>web.Lists[&ldquo;Centre Locations&rdquo;].Items.Count returns 2</li>
<li>web.Lists[&ldquo;Map Zoom Areas&rdquo;].Items.Count returns 1</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>More community TFS build extensions documentation</title>
      <link>https://blog.richardfennell.net/posts/more-community-tfs-build-extensions-documentation/</link>
      <pubDate>Thu, 29 Mar 2012 22:27:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-community-tfs-build-extensions-documentation/</guid>
      <description>&lt;p&gt;As part of the on-going effort in documentation I have recently published more &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/documentation&#34;&gt;documentation for the TFS build extension project&lt;/a&gt; activities&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;AssemblyInfo&lt;/li&gt;
&lt;li&gt;CodeMetric (updated) and CodeMetricHistory&lt;/li&gt;
&lt;li&gt;File&lt;/li&gt;
&lt;li&gt;Twitter&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>As part of the ongoing documentation effort, I have recently published more <a href="http://tfsbuildextensions.codeplex.com/documentation">documentation for the TFS build extensions project</a> activities:</p>
<ul>
<li>AssemblyInfo</li>
<li>CodeMetric (updated) and CodeMetricHistory</li>
<li>File</li>
<li>Twitter</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Unit testing in VS11Beta and getting your tests to run on the new TFSPreview build service</title>
      <link>https://blog.richardfennell.net/posts/unit-testing-in-vs11beta-and-getting-your-tests-to-run-on-the-new-tfspreview-build-service/</link>
      <pubDate>Tue, 27 Mar 2012 21:23:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/unit-testing-in-vs11beta-and-getting-your-tests-to-run-on-the-new-tfspreview-build-service/</guid>
      <description>&lt;p&gt;One of my favourite new features in VS11 is that the unit testing is pluggable. You don’t have to use MSTest, you can use any test framework that an adaptor is available for (at the release of the beta this meant &lt;a href=&#34;http://www.peterprovost.org/blog/post/Visual-Studio-11-Beta-Unit-Testing-Plugins-List.aspx&#34;&gt;the list of framworks on Peter Provost’s blog&lt;/a&gt;, but I am sure this will grow).&lt;/p&gt;
&lt;p&gt;So what does this mean and how do you use it?&lt;/p&gt;
&lt;h2 id=&#34;add-some-tests&#34;&gt;Add some tests&lt;/h2&gt;
&lt;p&gt;First, it is worth noting that you no longer need a test project to contain your MSTest tests; you can use one if you want, but you don’t have to. So you can:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One of my favourite new features in VS11 is that the unit testing is pluggable. You don’t have to use MSTest; you can use any test framework that an adaptor is available for (at the release of the beta this meant <a href="http://www.peterprovost.org/blog/post/Visual-Studio-11-Beta-Unit-Testing-Plugins-List.aspx">the list of frameworks on Peter Provost’s blog</a>, but I am sure this will grow).</p>
<p>So what does this mean and how do you use it?</p>
<h2 id="add-some-tests">Add some tests</h2>
<p>First, it is worth noting that you no longer need a test project to contain your MSTest tests; you can use one if you want, but you don’t have to. So you can:</p>
<ol>
<li>Add a new class library to your solution</li>
<li>Add a reference to Microsoft.VisualStudio.TestTools.UnitTesting and create an MSTest test</li>
<li>Add a reference to xUnit (I used NuGet to add the reference) and create an xUnit test</li>
<li>Add a reference to the xUnit extensions (NuGet again) and add a row-based xUnit test</li>
<li>Add a reference to NUnit (you guessed it - via NuGet) and create an NUnit test</li>
</ol>
<p>All these test frameworks can live in the same assembly.</p>
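<p>As an illustrative sketch (the class and method names are mine, not from a real project; in the beta-era xUnit 1.x the row-based [Theory] support lives in the xunit.extensions package), a single class library mixing all three frameworks might look like this:</p>

```csharp
// One ordinary class library referencing MSTest, xUnit (1.x) and NUnit.
// All identifiers below are illustrative.
using Microsoft.VisualStudio.TestTools.UnitTesting; // MSTest attributes

[TestClass]
public class MixedFrameworkTests
{
    [TestMethod] // MSTest
    public void MsTest_Addition()
    {
        Assert.AreEqual(4, 2 + 2);
    }

    [Xunit.Fact] // xUnit
    public void XUnit_Addition()
    {
        Xunit.Assert.Equal(4, 2 + 2);
    }

    [Xunit.Extensions.Theory] // xUnit row-based test
    [Xunit.Extensions.InlineData(1, 2, 3)]
    [Xunit.Extensions.InlineData(2, 3, 5)]
    public void XUnit_RowBasedAddition(int a, int b, int expected)
    {
        Xunit.Assert.Equal(expected, a + b);
    }

    [NUnit.Framework.Test] // NUnit
    public void NUnit_Addition()
    {
        NUnit.Framework.Assert.AreEqual(4, 2 + 2);
    }
}
```

<p>Each installed adapter discovers only the tests written with its own framework&rsquo;s attributes, which is what lets them coexist in one assembly.</p>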
<h2 id="add-extra-frameworks-to-the-test-runner">Add extra frameworks to the test runner</h2>
<p>By default the VS11 test runner will only run the MSTest tests, but by installing the <a href="http://visualstudiogallery.msdn.microsoft.com/463c5987-f82b-46c8-a97e-b1cde42b9099">xUnit.net runner for Visual Studio 11 Beta</a> and the <a href="http://visualstudiogallery.msdn.microsoft.com/6ab922d0-21c0-4f06-ab5f-4ecd1fe7175d">NUnit Test Adapter (Beta)</a>, either from the Visual Studio Gallery or via Tools –&gt; Extension Manager (and restarting VS), you can see that all the tests are run</p>
<h3 id="image"><a href="/wp-content/uploads/sites/2/historic/image_32.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_32.png" title="image"></a></h3>
<p>You can, if you want, set the test runner to trigger every time you compile (Unit Testing –&gt; Unit Test Settings –&gt; Run Test After Build). All very nice.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_33.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_33.png" title="image"></a></p>
<h2 id="running-the-tests-in-an-automated-build">Running the tests in an automated build</h2>
<p>However, what happens when you want to run these tests as part of your automated build?</p>
<p>The build box needs to have a reference to the extensions. This can be done in <a href="http://blogs.msdn.com/b/aseemb/archive/2012/03/03/how-to-make-your-discoverer-executor-extension-visible-to-ute.aspx">three ways</a>. However, if you are using the new <a href="http://blogs.msdn.com/b/bharry/archive/2012/03/27/announcing-a-build-service-for-team-foundation-service.aspx">TFSPreview hosted build services, as announced at VS Live</a>, only one method, the third, is open to you, as you have no access to the VM running the build to upload files other than by source control.</p>
<p>By default, if you create a build and run it on the hosted build service you will see that it all compiles, but only the MSTest test is run</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_34.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_34.png" title="image"></a></p>
<p>The fix is actually simple.</p>
<ol>
<li>
<p>First you need to download the <a href="http://visualstudiogallery.msdn.microsoft.com/463c5987-f82b-46c8-a97e-b1cde42b9099">xUnit.net runner for Visual Studio 11 Beta</a> and <a href="http://visualstudiogallery.msdn.microsoft.com/6ab922d0-21c0-4f06-ab5f-4ecd1fe7175d">NUnit Test Adapter (Beta)</a> .VSIX packages from Visual Studio Gallery.</p>
</li>
<li>
<p>Rename the downloaded files to .ZIP and unpack them</p>
</li>
<li>
<p>In TFSPreview source control create a folder under the <strong>BuildProcessTemplates</strong> for your team project. I called mine <strong>CustomActivities</strong> (the same folder can be used for custom build extensions hence the name, <a href="http://tfsbuildextensions.codeplex.com/documentation">see Custom Build Extensions for more details</a>)</p>
</li>
<li>
<p>Copy the .DLLs from the renamed .VSIX files into this folder and check them in. You should have a list as below</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_35.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_35.png" title="image"></a></p>
</li>
<li>
<p>In the Team Explorer –&gt; Build hub, select the Actions menu option –&gt; Manage Build Controllers. Set the <strong>Version control path for custom assemblies</strong> to the new folder.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_36.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_36.png" title="image"></a></p>
</li>
</ol>
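<p>For steps 3 and 4 the check-in can also be scripted with tf.exe, which ships with Team Explorer (the local folder and unpack locations below are illustrative, not my actual paths):</p>

```bat
:: From a local workspace folder mapped to the team project (paths are examples).
cd C:\ws\MyTeamProject\BuildProcessTemplates\CustomActivities

:: Copy in the DLLs unpacked from the renamed .VSIX files, then check them in.
copy C:\unpacked\xunit-runner\*.dll .
copy C:\unpacked\nunit-adapter\*.dll .
tf add *.dll
tf checkin /comment:"Add xUnit and NUnit VS11 test adapters" /noprompt
```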
<p>You do not need to add any extra files to enable the xUnit or NUnit tests, as long as you checked in the runtime xUnit and NUnit assemblies from the NuGet packages at the solution level. This should have happened by default with NuGet in VS11 (i.e. there should be a packages folder structure in source control, as shown in the Source Control Explorer graphic above)</p>
<p>You can now queue a build and you should see that all the tests are run (in my case MSTest, xUnit and NUnit). The only difference from a local run is that the xUnit row-based tests appear as separate lines in the report</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_37.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_37.png" title="image"></a></p>
<p>So now you can run tests of any type on a standard TFSPreview hosted build box; a great solution for the many projects where just a build and test run is all that is required.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How I try to keep up to date</title>
      <link>https://blog.richardfennell.net/posts/how-i-try-to-keep-up-to-date/</link>
      <pubDate>Mon, 26 Mar 2012 12:16:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-i-try-to-keep-up-to-date/</guid>
      <description>&lt;p&gt;I have just added a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/page/How-I-try-to-keep-up-to-date.aspx&#34;&gt;page to the blog&lt;/a&gt; that lists some of the podcasts I try to listen to, in an attempt to keep up to date.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just added a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/page/How-I-try-to-keep-up-to-date.aspx">page to the blog</a> that lists some of the podcasts I try to listen to, in an attempt to keep up to date.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Keeping up to date</title>
      <link>https://blog.richardfennell.net/posts/keeping-up-to-date/</link>
      <pubDate>Mon, 26 Mar 2012 09:25:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/keeping-up-to-date/</guid>
      <description>&lt;p&gt;Ours is a fast moving industry and as times goes on I find the need to specialise more and more. However, I believe there is a great value in at least having a passing knowledge of as much of our field as possible, IT pros can learn from developers and .NET team from Java and of course the other way around. &lt;/p&gt;
&lt;p&gt;I find the best way to try to keep abreast of new ideas is to listen to podcasts; I personally find it hard read enough blogs, but I can find time to listen to a podcast whist travelling or working around the house. Here is a list of a few I find give me at least a feel for what is going on, I still feel that they are a bit Microsoft focused, so interested to hear any other suggestions&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Ours is a fast-moving industry, and as time goes on I find the need to specialise more and more. However, I believe there is great value in having at least a passing knowledge of as much of our field as possible: IT pros can learn from developers, .NET teams from Java teams, and of course the other way around.</p>
<p>I find the best way to keep abreast of new ideas is to listen to podcasts; I personally find it hard to read enough blogs, but I can find time to listen to a podcast whilst travelling or working around the house. Here is a list of a few that give me at least a feel for what is going on. They are admittedly a bit Microsoft-focused, so I am interested to hear any other suggestions.</p>
<table>
<thead>
<tr><th>Podcast</th><th>RSS</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td><a href="http://www.dotnetrocks.com/">.NET Rocks</a></td><td><a href="http://www.pwop.com/feed.aspx?show=dotnetrocks&amp;filetype=master&amp;tags="><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>Development on the .NET Platform (and the occasional random &lsquo;geek out&rsquo; show)</td></tr>
<tr><td><a href="http://www.runasradio.com/">RunAs Radio</a></td><td><a href="http://feeds.feedburner.com/RunasRadio"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>IT Pro discussion for Microsoft platforms</td></tr>
<tr><td><a href="http://herdingcode.com/">Herding Code</a></td><td><a href="http://feeds.feedburner.com/herdingcode"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>General development topics on all platforms and languages</td></tr>
<tr><td><a href="http://radiotfs.com">RadioTFS</a></td><td><a href="http://feeds.feedburner.com/radiotfs"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>OK, TFS-specific, but a good discussion of new features and issues</td></tr>
<tr><td><a href="http://hanselminutes.com/">Hanselminutes</a></td><td><a href="http://feeds.feedburner.com/HanselminutesCompleteMP3"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>A diverse selection of subjects with a bit of a web focus</td></tr>
<tr><td><a href="http://www.thetabletshow.com/">Tablet Show</a></td><td><a href="http://www.pwop.com/feed.aspx?show=thetabletshow"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>All things to do with tablet and mobile development</td></tr>
<tr><td><a href="http://thisdeveloperslife.com/">This Developer&rsquo;s Life</a></td><td><a href="http://feeds.feedburner.com/thisdeveloperslife"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>More of a magazine show on all things about being a developer, not just technology</td></tr>
<tr><td><a href="http://www.ted.com/">TED</a></td><td><a href="http://feeds.feedburner.com/tedtalks_video"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>OK, TED is not IT, but it often gets you thinking about something new that relates to work; always eye-opening</td></tr>
<tr><td><a href="http://thebuglepodcast.com/">The Bugle</a></td><td><a href="http://feeds.feedburner.com/thebuglefeed"><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/pics/rssButton.png"></a></td><td>The audio newspaper for the visual age</td></tr>
</tbody>
</table>
]]></content:encoded>
    </item>
    <item>
      <title>Problems editing TFS11 build templates in VS11Beta</title>
      <link>https://blog.richardfennell.net/posts/problems-editing-tfs11-build-templates-in-vs11beta/</link>
      <pubDate>Sat, 24 Mar 2012 15:12:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-editing-tfs11-build-templates-in-vs11beta/</guid>
      <description>&lt;p&gt;Whilst writing documentation for &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/&#34;&gt;TFS community build extensions&lt;/a&gt; (just published the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=Getting%20started%20with%20the%20Zip%20activity&amp;amp;referringTitle=Documentation&#34;&gt;Zip activity documentation&lt;/a&gt;) I hit upon a problem working with TFS11. The TFS community build extensions support both TFS2010 and TFS11beta, unfortunately the two versions need to be built separately (once against TFS2010 DLLs and once against TFS11 ones). As of version &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/releases/view/79301&#34;&gt;1.3 of the extensions both versions are shipped in the download&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In the past I have tended to work in TFS2010 on this community project, but since the VS/TFS11 beta release I am trying to move over to the new build, so to write the documentation for the ZIP activity I started in TFS11. I followed the usual method to use a custom activity (there is no improvement over this frankly &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;amp;referringTitle=Documentation&#34;&gt;horrible process&lt;/a&gt; in VS11), so within VS11 I added the ZIP activity to a copy of the defaultprocesstemplate.xaml. All appeared OK, but when I ran a build with this new template I got the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst writing documentation for the <a href="http://tfsbuildextensions.codeplex.com/">TFS community build extensions</a> (just published the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=Getting%20started%20with%20the%20Zip%20activity&amp;referringTitle=Documentation">Zip activity documentation</a>) I hit upon a problem working with TFS11. The TFS community build extensions support both TFS2010 and the TFS11 Beta; unfortunately the two versions need to be built separately (once against the TFS2010 DLLs and once against the TFS11 ones). As of version <a href="http://tfsbuildextensions.codeplex.com/releases/view/79301">1.3 of the extensions both versions are shipped in the download</a>.</p>
<p>In the past I have tended to work in TFS2010 on this community project, but since the VS/TFS11 Beta release I have been trying to move over to the new build. So to write the documentation for the Zip activity I started in TFS11. I followed the usual method to use a custom activity (there is no improvement in VS11 over this frankly <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;referringTitle=Documentation">horrible process</a>), so within VS11 I added the Zip activity to a copy of the defaultprocesstemplate.xaml. All appeared OK, but when I ran a build with this new template I got the error</p>
<blockquote>
<p><em>The build process failed validation. Details:</em></p>
<p><em>Validation Error: The private implementation of activity &lsquo;1: DynamicActivity&rsquo; has the following validation error:   Compiler error(s) encountered processing expression &ldquo;BuildDetail.BuildNumber&rdquo;.</em></p>
<p><em>Type &lsquo;IBuildDetail&rsquo; is not defined.</em></p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_30.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_30.png" title="image"></a></p>
<p>On checking the .XAML file you can see there is duplication in the namespaces</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_31.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_31.png" title="image"></a></p>
<p>[Note the greatly improved compare tooling in VS11]</p>
<p>This, it turns out, is a known issue logged on Microsoft Connect. The answer, at this time, is to do your build process .XAML editing in a text editor such as Notepad. Not a good story, but a workaround.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More thoughts on Typemock Isolator, Microsoft Fakes and Sharepoint</title>
      <link>https://blog.richardfennell.net/posts/more-thoughts-on-typemock-isolator-microsoft-fakes-and-sharepoint/</link>
      <pubDate>Sat, 24 Mar 2012 10:01:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-thoughts-on-typemock-isolator-microsoft-fakes-and-sharepoint/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx&#34;&gt;I posted yesterday on using Typemock and Microsoft Fakes with SharePoint&lt;/a&gt;. After a bit more thought I realised that the key thing I found easier in Typemock was the construction of my SPListItem dataset. Typemock allowed me to fake SPListItems and put them in a generic List&lt;SPListItem&gt;, then just make this the return value for the Item collection using the magic &lt;strong&gt;.WillReturnCollectionValuesOf()&lt;/strong&gt; method that converts my List to the required collection type. With Microsoft Fakes I had to think about a delegate that constructed my test data at runtime. This is not a problem, just a different way of working.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/23/Now-that-VS11-has-a-fake-library-do-I-still-need-Typemock-Isolator-to-fake-out-SharePoint.aspx">I posted yesterday on using Typemock and Microsoft Fakes with SharePoint</a>. After a bit more thought I realised that the key thing I found easier in Typemock was the construction of my SPListItem dataset. Typemock allowed me to fake SPListItems and put them in a generic List&lt;SPListItem&gt;, then just make this the return value for the Item collection using the magic <strong>.WillReturnCollectionValuesOf()</strong> method that converts my List to the required collection type. With Microsoft Fakes I had to think about a delegate that constructed my test data at runtime. This is not a problem, just a different way of working.</p>
<p>A side effect of using the Typemock <strong>.WillReturnCollectionValuesOf()</strong> method is that if I check the number of SPListItems in the returned SPListCollection I have a real collection, so I can use the collection’s own <strong>.Count</strong> method; I don’t have to fake it out. With Microsoft Fakes no real collection is returned, so I must fake the Count value as well.</p>
<p>This is a trend common across <a href="http://www.typemock.com/">Typemock Isolator</a>: it does much of the work for you. Microsoft Fakes, like Moles, requires you to do the work yourself. In <a href="http://research.microsoft.com/en-us/projects/pex/">Moles</a> this was addressed by the use of behaviour packs to get you started with the standard items you need in SharePoint.</p>
<p>I would say again that there may be other ways of using the Microsoft Fakes library, so there may be ways to address these initial comments of mine; I am keen to see if that is the case.</p>
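<p>To make the difference concrete, here is a minimal sketch (mine, not from the original post) of the two approaches to the item count; it assumes the Typemock, Microsoft Fakes and SharePoint assemblies are referenced and reuses the names from my earlier examples:</p>
<pre tabindex="0"><code>// Typemock: WillReturnCollectionValuesOf converts a real List
// into the collection type, so the collection&#39;s own Count just works
var fakeItemList = new List&lt;SPListItem&gt;();
var fakeWeb = Isolate.Fake.Instance&lt;SPWeb&gt;();
Isolate.WhenCalled(() =&gt; fakeWeb.Lists[&#34;fakelistname&#34;].Items)
    .WillReturnCollectionValuesOf(fakeItemList);
Assert.AreEqual(0, fakeWeb.Lists[&#34;fakelistname&#34;].Items.Count);

// Microsoft Fakes: no real collection sits behind the shim,
// so the Count property itself has to be shimmed
var itemsShim = new Microsoft.SharePoint.Fakes.ShimSPListItemCollection()
{
    CountGet = () =&gt; 0
};
</code></pre>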
]]></content:encoded>
    </item>
    <item>
      <title>Now that VS11 has a fake library do I still need Typemock Isolator to fake out SharePoint?</title>
      <link>https://blog.richardfennell.net/posts/now-that-vs11-has-a-fake-library-do-i-still-need-typemock-isolator-to-fake-out-sharepoint/</link>
      <pubDate>Fri, 23 Mar 2012 17:16:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/now-that-vs11-has-a-fake-library-do-i-still-need-typemock-isolator-to-fake-out-sharepoint/</guid>
      <description>&lt;p&gt;Updated 5 May 2012: also see my follow-up &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/05/05/More-on-using-the-VS11-fake-library-to-fake-out-SharePoint.aspx&#34;&gt;post&lt;/a&gt;, which corrects some of the information on faking SharePoint.&lt;/p&gt;
&lt;p&gt;I have done posts in the past about how you can use &lt;a href=&#34;http://www.typemock.com/&#34;&gt;Typemock Isolator&lt;/a&gt; to fake out SharePoint to speed &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx&#34;&gt;design&lt;/a&gt; and &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2008/12/04/developer-testing-of-sharepoint-webparts-using-typemock-isolator-and-ivonna.aspx&#34;&gt;testing&lt;/a&gt;. The reason you need special tooling, beyond standard mocking frameworks like Rhino Mocks or Moq, is that SharePoint has many sealed private classes with no public constructors. So in the past you only had two options: Typemock Isolator and &lt;a href=&#34;http://research.microsoft.com/en-us/projects/pex/documentation.aspx&#34;&gt;Moles from Microsoft research&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Updated 5 May 2012: also see my follow-up <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/05/05/More-on-using-the-VS11-fake-library-to-fake-out-SharePoint.aspx">post</a>, which corrects some of the information on faking SharePoint.</p>
<p>I have done posts in the past about how you can use <a href="http://www.typemock.com/">Typemock Isolator</a> to fake out SharePoint to speed <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">design</a> and <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2008/12/04/developer-testing-of-sharepoint-webparts-using-typemock-isolator-and-ivonna.aspx">testing</a>. The reason you need special tooling, beyond standard mocking frameworks like Rhino Mocks or Moq, is that SharePoint has many sealed private classes with no public constructors. So in the past you only had two options: Typemock Isolator and <a href="http://research.microsoft.com/en-us/projects/pex/documentation.aspx">Moles from Microsoft research</a>.</p>
<p>With the release of the Visual Studio 11 beta we now have a means to fake out ‘non-mockable’ classes via <a href="http://msdn.microsoft.com/en-us/library/hh549176%28v=vs.110%29.aspx">shim classes that ship in the box</a>. This tooling, I understand, has its roots in Moles, but is all new. So with the advent of fakes in VS11 you have to ask ‘do I still need Typemock Isolator?’</p>
<p>To answer this question I have tried to perform the same basic mocking exercise as I did in my previous posts: create a fake SharePoint list and make sure I can access the faked-out content in test asserts.</p>
<h2 id="in-typemock-isolator">In Typemock Isolator</h2>
<p>To perform the test with Isolator (assuming Isolator is installed on your PC), add a reference to Typemock.dll and Typemock.ArrangeActAssert.dll and use the following code:</p>
<pre tabindex="0"><code>[TestMethod]
public void FakeSharePointWithIsolator()
{
    // arrange
    // build the dataset
    var fakeItemList = new List&lt;SPListItem&gt;();
    for (int i = 0; i &lt; 3; i++)
    {
        var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();
        Isolate.WhenCalled(() =&gt; fakeItem.Title).WillReturn(String.Format(&#34;The Title {0}&#34;, i));
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Email&#34;]).WillReturn(String.Format(&#34;email{0}@fake.url&#34;, i));
        fakeItemList.Add(fakeItem);
    }

    // fake the SPWeb and attach the data
    var fakeWeb = Isolate.Fake.Instance&lt;SPWeb&gt;();
    Isolate.WhenCalled(() =&gt; fakeWeb.Url).WillReturn(&#34;http://fake.url&#34;);
    Isolate.WhenCalled(() =&gt; fakeWeb.Lists[&#34;fakelistname&#34;].Items).WillReturnCollectionValuesOf(fakeItemList);

    // act
    // not actually doing an operation

    // assert
    Assert.AreEqual(&#34;http://fake.url&#34;, fakeWeb.Url);
    Assert.AreEqual(3, fakeWeb.Lists[&#34;fakelistname&#34;].Items.Count);
    Assert.AreEqual(&#34;The Title 0&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[0].Title);
    Assert.AreEqual(&#34;email0@fake.url&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[0][&#34;Email&#34;]);
    Assert.AreEqual(&#34;The Title 1&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[1].Title);
    Assert.AreEqual(&#34;email1@fake.url&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[1][&#34;Email&#34;]);
}
</code></pre><h2 id="using-microsoft-faking">Using Microsoft Faking</h2>
<h3 id="adding-the-fake">Adding the Fake</h3>
<p>The process to add a fake in VS11 is to right-click on an assembly reference (in our case Microsoft.SharePoint) and select the ‘add fake assembly’ option.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_28.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_28.png" title="image"></a></p>
<p>You should see a Fake reference created and an entry in the fakes folder</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_29.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_29.png" title="image"></a></p>
<p><strong>Gotcha Warning ‘Can’t generate the fake reference’ –</strong> When I tried to generate a fake for the Microsoft.SharePoint assembly within a class library project that referenced only the Microsoft.SharePoint assembly (plus the default references added for any class library project), the entry was made in the Fakes folder but no .Fakes assembly was created. After a delay (30 seconds?) you see an error message in the Visual Studio output window telling you a reference cannot be resolved. If you delete the entry in the Fakes folder, add the missing reference listed in the output window and repeat the process, you get the same problem but another assembly is named as missing; add this and repeat. Eventually the .Fakes assembly is created.</p>
<p>In the case of this SharePoint sample I had to manually add Microsoft.SharePoint.Dsp, Microsoft.SharePoint.Library, Microsoft.SharePoint.Search, System.Web, System.Web.ApplicationServices. Remember these entries are ONLY required to allow the fake creation/registration, they are not needed for your assembly to work in production or for Typemock.  [5 May 2012 See my follow up <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/05/05/More-on-using-the-VS11-fake-library-to-fake-out-SharePoint.aspx">post</a>] </p>
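<p>For reference, the entry created in the Fakes folder is just a small XML file; a minimal one for this sample would look roughly like this (written from memory of the VS11 beta tooling, so treat the exact schema URL as indicative):</p>
<pre tabindex="0"><code>&lt;Fakes xmlns=&#34;http://schemas.microsoft.com/fakes/2011/&#34;&gt;
  &lt;Assembly Name=&#34;Microsoft.SharePoint&#34;/&gt;
&lt;/Fakes&gt;
</code></pre>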
<p>You should now have a generated assembly that you can use to create your fake shims.</p>
<h3 id="writing-the-fakes-logic">Writing the fakes logic</h3>
<p>The logic to create the fake behaviour is as follows. Now there might be easier ways to do this, but it does work and is reasonably readable:</p>
<pre tabindex="0"><code>[TestMethod]
public void FakeSharePointWithShims()
{
    using (ShimsContext.Create()) // required to tidy up the shim system
    {
        // arrange
        var fakeWebShim = new Microsoft.SharePoint.Fakes.ShimSPWeb()
        {
            UrlGet = () =&gt; &#34;http://fake.url&#34;,
            ListsGet = () =&gt; new Microsoft.SharePoint.Fakes.ShimSPListCollection()
            {
                ItemGetString = (listname) =&gt; new Microsoft.SharePoint.Fakes.ShimSPList()
                {
                    ItemsGet = () =&gt; new Microsoft.SharePoint.Fakes.ShimSPListItemCollection()
                    {
                        // we have to fake the count, as we are not returning a real SPListItemCollection
                        CountGet = () =&gt; 3,
                        ItemGetInt32 = (index) =&gt; new Microsoft.SharePoint.Fakes.ShimSPListItem()
                        {
                            TitleGet = () =&gt; string.Format(&#34;The Title {0}&#34;, index),
                            ItemGetString = (fieldname) =&gt; string.Format(&#34;email{0}@fake.url&#34;, index) // note we don&#39;t check the field name
                        }
                    }
                }
            }
        };

        // act
        // not actually doing an operation

        // assert
        var fakeWeb = fakeWebShim.Instance;
        Assert.AreEqual(&#34;http://fake.url&#34;, fakeWeb.Url);
        Assert.AreEqual(3, fakeWeb.Lists[&#34;fakelistname&#34;].Items.Count);
        Assert.AreEqual(&#34;The Title 0&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[0].Title);
        Assert.AreEqual(&#34;email0@fake.url&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[0][&#34;Email&#34;]);
        Assert.AreEqual(&#34;The Title 1&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[1].Title);
        Assert.AreEqual(&#34;email1@fake.url&#34;, fakeWeb.Lists[&#34;fakelistname&#34;].Items[1][&#34;Email&#34;]);
    }
}
</code></pre><h2 id="comments">Comments</h2>
<p>Which do you find the most readable?</p>
<p>I guess it is down to familiarity really. You can see I end up using the same test asserts, so the logic I am testing is the same.</p>
<p>I do think Typemock remains the easier to use. There is the gotcha I found with having to add extra references to allow the fakes to be created in the VS11 faking system, and the simple fact of having to manually create the fakes in Visual Studio, as opposed to it all being handled behind the scenes by Typemock. There is also the need to refer to the ShimSPWeb class and use its .Instance property, as opposed to just using SPWeb in Typemock.</p>
<p>The downside of Typemock is the cost and the need to have it installed on all development and build machines, neither of which is an issue for the Microsoft fake system, the tests being standard .NET code that can be wrapped in any unit testing framework (and remember VS11’s Unit Test Explorer and TFS11 build allow you to use any testing framework, not just MSTest).</p>
<p>So which tool am I going to use? I think for now I will be staying with Typemock, but the VS11 faking system is well worth keeping an eye on.</p>
<p>[Updates 24 March 12 - <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2012/03/24/More-thoughts-on-Typemock-Isolator-Microsoft-Fakes-and-Sharepoint.aspx">More comments added in a second post</a>]</p>
]]></content:encoded>
    </item>
    <item>
      <title>Typemock Isolator Version 7</title>
      <link>https://blog.richardfennell.net/posts/typemock-isolator-version-7/</link>
      <pubDate>Thu, 22 Mar 2012 10:54:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/typemock-isolator-version-7/</guid>
      <description>&lt;p&gt;Whilst travelling I had neglected to post about the new release of &lt;a href=&#34;http://www.typemock.com/&#34;&gt;Typemock Isolator Version 7&lt;/a&gt;. This new release extends the range of the product to provide a variety of ‘test support tools’ that help you track down bugs sooner.&lt;/p&gt;
&lt;p&gt;However, the best new feature for me is that it allows support of historic versions of Isolator; this means you don’t have to upgrade all projects to V7 at the same time. The V7 engine will happily use the older assemblies, e.g. V6, making managing build boxes far easier.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst travelling I had neglected to post about the new release of <a href="http://www.typemock.com/">Typemock Isolator Version 7</a>. This new release extends the range of the product to provide a variety of ‘test support tools’ that help you track down bugs sooner.</p>
<p>However, the best new feature for me is that it allows support of historic versions of Isolator; this means you don’t have to upgrade all projects to V7 at the same time. The V7 engine will happily use the older assemblies, e.g. V6, making managing build boxes far easier.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Dropping build output to source control in TFS11</title>
      <link>https://blog.richardfennell.net/posts/dropping-build-output-to-source-control-in-tfs11/</link>
      <pubDate>Tue, 13 Mar 2012 15:18:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dropping-build-output-to-source-control-in-tfs11/</guid>
      <description>&lt;p&gt;One of the nice new features of TFS11 is that you get a third option for what to do with your build output&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Don’t copy output anywhere – good for CI builds that nobody will ever consume, just used to run tests&lt;/li&gt;
&lt;li&gt;Drop to a UNC share e.g. &lt;em&gt;\\server1\drops&lt;/em&gt; – the default and used 9 times out of 10&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The new one&lt;/strong&gt; - drop to source control e.g. &lt;em&gt;$/myproject/drops.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The advantage of this new third option is your build agents can place the files they create in a location that can be accessed by any TFS client i.e. in the source control repository. A user no longer needs to be on a VPN or corporate LAN to be able to see a UNC share.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One of the nice new features of TFS11 is that you get a third option for what to do with your build output</p>
<ul>
<li>Don’t copy output anywhere – good for CI builds that nobody will ever consume, just used to run tests</li>
<li>Drop to a UNC share e.g. <em>\\server1\drops</em> – the default and used 9 times out of 10</li>
<li><strong>The new one</strong> - drop to source control e.g. <em>$/myproject/drops.</em></li>
</ul>
<p>The advantage of this new third option is your build agents can place the files they create in a location that can be accessed by any TFS client i.e. in the source control repository. A user no longer needs to be on a VPN or corporate LAN to be able to see a UNC share.</p>
<p>But remember, just because the builds are in source control does not mean that the builds don’t still follow the normal build retention policies; they will not accumulate forever unless you want them to.</p>
<p>Now some teams will have good reasons as to why they don’t want builds going into source control. Deployments to a NuGet server and the like will be a far better system for them. This is still all possible; it is just down to build process template customisation. You have not lost any options, just gained another one out of the box.</p>
<p>But what about building Java via Ant or Maven within TFS using the Build Extensions? Well, at this time the process template used to create this type of build from within Eclipse has not caught up with this new feature. However, if you really want it, you can do the following:</p>
<ol>
<li>
<p>Create a TFS build in Eclipse that drops to a UNC share</p>
</li>
<li>
<p>Open the build definition in VS11</p>
</li>
<li>
<p>Edit the drops location to point to a location in source control and save the build</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_26.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_26.png" title="image"></a></p>
</li>
<li>
<p>When you trigger a new build you should get your drops in source control. Note that in the confirmation dialog you can see the source-control-based path but you can’t edit it (if you try you get an invalid path error)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_27.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_27.png" title="image"></a></p>
</li>
</ol>
]]></content:encoded>
    </item>
    <item>
      <title>Team Explorer Everywhere is now free</title>
      <link>https://blog.richardfennell.net/posts/team-explorer-everywhere-is-now-free/</link>
      <pubDate>Fri, 09 Mar 2012 09:24:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/team-explorer-everywhere-is-now-free/</guid>
      <description>&lt;p&gt;It was &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2012/03/08/even-better-access-to-team-foundation-server.aspx&#34;&gt;announced overnight that TEE is now free&lt;/a&gt;. What does this mean?&lt;/p&gt;
&lt;p&gt;It means you do not have to buy TEE as extra software if you already have a TFS CAL. This removes a barrier to adoption for developers who work in heterogeneous systems; there is no extra cost to integrate Eclipse, as well as Visual Studio, with TFS.&lt;/p&gt;
&lt;p&gt;If you want to find out more about TEE why not come to &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Visual%20Studio%20Team%20Foundation%20for%20Everyone%20%28Online%29&#34;&gt;Black Marble’s free webinar I am delivering on the 19th&lt;/a&gt;?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It was <a href="http://blogs.msdn.com/b/bharry/archive/2012/03/08/even-better-access-to-team-foundation-server.aspx">announced overnight that TEE is now free</a>. What does this mean?</p>
<p>It means you do not have to buy TEE as extra software if you already have a TFS CAL. This removes a barrier to adoption for developers who work in heterogeneous systems; there is no extra cost to integrate Eclipse, as well as Visual Studio, with TFS.</p>
<p>If you want to find out more about TEE why not come to <a href="http://www.blackmarble.co.uk/events.aspx?event=Visual%20Studio%20Team%20Foundation%20for%20Everyone%20%28Online%29">Black Marble’s free webinar I am delivering on the 19th</a>?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Free webinar on Team Explorer Everywhere on the 19th of March</title>
      <link>https://blog.richardfennell.net/posts/free-webinar-on-team-explorer-everywhere-on-the-19th-of-march/</link>
      <pubDate>Mon, 05 Mar 2012 20:09:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/free-webinar-on-team-explorer-everywhere-on-the-19th-of-march/</guid>
      <description>&lt;p&gt;I recently did a &lt;a href=&#34;http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx&#34;&gt;guest post for the VSTS UK Team Blog on Team Explorer Everywhere&lt;/a&gt;. On the 19th of March I will be doing a free webinar on the same subject. &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Visual%20Studio%20Team%20Foundation%20for%20Everyone%20%28Online%29&#34;&gt;To register for this event&lt;/a&gt; please see the Black Marble web site.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently did a <a href="http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx">guest post for the VSTS UK Team Blog on Team Explorer Everywhere</a>. On the 19th of March I will be doing a free webinar on the same subject. <a href="http://www.blackmarble.co.uk/events.aspx?event=Visual%20Studio%20Team%20Foundation%20for%20Everyone%20%28Online%29">To register for this event</a> please see the Black Marble web site.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Propose a session for DDD South West</title>
      <link>https://blog.richardfennell.net/posts/propose-a-session-for-ddd-south-west/</link>
      <pubDate>Fri, 02 Mar 2012 14:25:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/propose-a-session-for-ddd-south-west/</guid>
      <description>&lt;p&gt;I have &lt;a href=&#34;http://www.dddsouthwest.com/&#34;&gt;proposed a session for DDD South West&lt;/a&gt;, why don’t you?&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.dddsouthwest.com/&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://www.dddsouthwest.com/Portals/0/DDDSW4/DDDSouthWest4BadgeSmall.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have <a href="http://www.dddsouthwest.com/">proposed a session for DDD South West</a>, why don’t you?</p>
<p><a href="http://www.dddsouthwest.com/"><img loading="lazy" src="http://www.dddsouthwest.com/Portals/0/DDDSW4/DDDSouthWest4BadgeSmall.png"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Two Team Explorers in TEE 11 Beta – twice as good?</title>
      <link>https://blog.richardfennell.net/posts/two-team-explorers-in-tee-11-beta-twice-as-good/</link>
      <pubDate>Thu, 01 Mar 2012 16:41:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/two-team-explorers-in-tee-11-beta-twice-as-good/</guid>
      <description>&lt;p&gt;When you install TEE11 Beta in Eclipse you will notice there are two Team Explorer windows&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Team Explorer 2010 – The old style window&lt;/li&gt;
&lt;li&gt;Team Explorer – The new VS11 style window&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_25.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_25.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is an artefact of the beta as TEE transitions to the new VS UI.&lt;/p&gt;
&lt;p&gt;I would recommend you use the new one as this will be the experience going forward. I certainly &lt;strong&gt;would not&lt;/strong&gt; recommend having both open as I have shown in this screenshot&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you install TEE11 Beta in Eclipse you will notice there are two Team Explorer windows</p>
<ul>
<li>Team Explorer 2010 – The old style window</li>
<li>Team Explorer – The new VS11 style window</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_25.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_25.png" title="image"></a></p>
<p>This is an artefact of the beta as TEE transitions to the new VS UI.</p>
<p>I would recommend you use the new one as this will be the experience going forward. I certainly <strong>would not</strong> recommend having both open as I have shown in this screenshot</p>
]]></content:encoded>
    </item>
    <item>
      <title>A bit of an edge case – Using Git-TFS to get the best (or worst?) of both worlds</title>
      <link>https://blog.richardfennell.net/posts/a-bit-of-an-edge-case-using-git-tfs-to-get-the-best-or-worst-of-both-worlds/</link>
      <pubDate>Wed, 29 Feb 2012 21:24:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-bit-of-an-edge-case-using-git-tfs-to-get-the-best-or-worst-of-both-worlds/</guid>
      <description>&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;
&lt;p&gt;Whilst at the Microsoft MVP summit there are a number of MVP2MVP sessions, these are similar to DDD style sessions with MVPs presenting as opposed to Microsoft staff. One I found really interesting was one by &lt;a href=&#34;http://www.richard-banks.org/2010/04/git-tfs-working-together-version-2.html&#34;&gt;Richard Banks based on his post on using GIT with TFS&lt;/a&gt;. Now this was a usage of source control tools I had not considered, a mixture of Git and TFS (or could be Git to SVN, similar tools are available)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="background">Background</h1>
<p>Whilst at the Microsoft MVP summit there are a number of MVP2MVP sessions, these are similar to DDD style sessions with MVPs presenting as opposed to Microsoft staff. One I found really interesting was one by <a href="http://www.richard-banks.org/2010/04/git-tfs-working-together-version-2.html">Richard Banks based on his post on using GIT with TFS</a>. Now this was a usage of source control tools I had not considered, a mixture of Git and TFS (or could be Git to SVN, similar tools are available)</p>
<p>Why do you want this usage? Especially with <a href="http://blogs.msdn.com/b/bharry/archive/2011/08/02/version-control-model-enhancements-in-tfs-11.aspx">local workspaces coming in TFS11</a>?</p>
<p>The simple answer is it allows a developer to have the advantage of Git’s multiple local versions of a given file, that they can branch, merge and rollback to as required. All prior to pushing all the changes up to a central TFS server (as opposed to GitHub or a company central Git repository).</p>
<p>OK, let’s face it, this is an edge case, and it is not helped by the usage being command-line driven, as opposed to being integrated with the IDE (real developers don’t use a UI or mouse, so that is OK – right?). So to try to make life a bit easier I would suggest also installing Posh Git.</p>
<h1 id="setup">Setup</h1>
<p>So what is required to get this running? If, like me, you are fairly new to Git there are a couple of gotchas. Here is the process I followed.</p>
<p>I used <a href="http://chocolatey.org/">Chocolatey</a> (think NuGet for applications) to install <a href="https://github.com/git-tfs/git-tfs">tfsgit</a>, which handles the dependency on the Git client</p>
<blockquote>
<p>cinst tfsgit</p></blockquote>
<p>Next I install <a href="https://github.com/dahlbyk/posh-git">poshgit</a></p>
<blockquote>
<p>cinst poshgit</p></blockquote>
<p>It is essential that you edit your Windows PATH environment variable to point to both the Git and the TFSGit folders, as this is how Git picks up the extra TFS commands; it should be something similar to this</p>
<blockquote>
<p>PATH=$PATH;C:\Program Files (x86)\Git\cmd;C:\tools\gittfs</p></blockquote>
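<p>If you prefer not to edit the variable by hand, the same change can be scripted from an elevated PowerShell prompt (a sketch only; the folders are the ones from my machine, so adjust to your install locations):</p>
<pre tabindex="0"><code># append the Git and git-tfs folders to the machine-level PATH
[Environment]::SetEnvironmentVariable(
    &#34;PATH&#34;,
    $env:PATH + &#34;;C:\Program Files (x86)\Git\cmd;C:\tools\gittfs&#34;,
    &#34;Machine&#34;)
</code></pre>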
<p>Finally, for poshgit you need to run its install script (in a PowerShell window with elevated privileges) so it can report the number of file changes in the command prompt (note the prompt only changes when you are in a Git folder)</p>
<blockquote>
<p>c:\tools\poshgit\..some version..\install.ps1</p></blockquote>
<p>So hopefully this will get you going and you can try this interesting edge case.</p>
<p><a href="http://herdingcode.com/?p=384">For more general chat on Git and distributed source control try this recent Herding Code podcast</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>ALM Rangers VS11 Beta Guidance</title>
      <link>https://blog.richardfennell.net/posts/alm-rangers-vs11-beta-guidance/</link>
      <pubDate>Wed, 29 Feb 2012 20:22:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-rangers-vs11-beta-guidance/</guid>
      <description>&lt;p&gt;Today, as well as the new &lt;a href=&#34;http://blogs.msdn.com/b/jasonz/archive/2012/02/29/welcome-to-the-beta-of-visual-studio-11-and-net-framework-4-5.aspx&#34;&gt;VS11 Beta bits from Microsoft&lt;/a&gt;, the ALM Rangers also shipped best practice guidance to get you started with the beta. This is a project I am very proud to have been involved with.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/visualstudioalm/archive/2012/02/29/welcome-to-visual-studio-11-alm-rangers-readiness-beta-wave.aspx&#34;&gt;The full details of the supporting guidance shipped can be found here&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today, as well as the new <a href="http://blogs.msdn.com/b/jasonz/archive/2012/02/29/welcome-to-the-beta-of-visual-studio-11-and-net-framework-4-5.aspx">VS11 Beta bits from Microsoft</a>, the ALM Rangers also shipped best practice guidance to get you started with the beta. This is a project I am very proud to have been involved with.</p>
<p><a href="http://blogs.msdn.com/b/visualstudioalm/archive/2012/02/29/welcome-to-visual-studio-11-alm-rangers-readiness-beta-wave.aspx">The full details of the supporting guidance shipped can be found here</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Installing the TEE11 Beta as an upgrade to the plug-in in Eclipse</title>
      <link>https://blog.richardfennell.net/posts/installing-the-tee11-beta-as-an-upgrade-to-the-plug-in-in-eclipse/</link>
      <pubDate>Wed, 29 Feb 2012 19:26:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/installing-the-tee11-beta-as-an-upgrade-to-the-plug-in-in-eclipse/</guid>
      <description>&lt;p&gt;The big news today is is that &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2012/02/29/vs-11-beta-and-windows-8-consumer-previews-available.aspx&#34;&gt;Microsoft released the VS11 Beta&lt;/a&gt;, part of which is Team Explorer Everywhere (TEE). &lt;em&gt;(Oh they also release something called Windows 8 too – whatever that is)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Whilst upgrading my TEE instance in Eclipse (Indigo) &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/07/18/solution-to-missing-requirement-shared-profile-1-0-0-1308118925849-error-when-installation-tee-sp1-on-eclipse-indigo.aspx&#34;&gt;I hit the same gotcha as I had when I originally installed TEE (when Eclipse is in your ‘C:\Program Files’ folder)&lt;/a&gt;. On Windows, if UAC is enabled you have to run Eclipse as administrator to install the plug-in, else you get an error message.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The big news today is is that <a href="http://blogs.msdn.com/b/bharry/archive/2012/02/29/vs-11-beta-and-windows-8-consumer-previews-available.aspx">Microsoft released the VS11 Beta</a>, part of which is Team Explorer Everywhere (TEE). <em>(Oh they also release something called Windows 8 too – whatever that is)</em></p>
<p>Whilst upgrading my TEE instance in Eclipse (Indigo) <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/07/18/solution-to-missing-requirement-shared-profile-1-0-0-1308118925849-error-when-installation-tee-sp1-on-eclipse-indigo.aspx">I hit the same gotcha as I had when I originally installed TEE (when Eclipse is in your ‘C:\Program Files’ folder)</a>. On Windows, if UAC is enabled you have to run Eclipse as administrator to install the plug-in, else you get the error message below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_24.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_24.png" title="image"></a></p>
<p>As soon as you start Eclipse as administrator the upgrade works perfectly, you can then restart Eclipse as normal and all is OK</p>
]]></content:encoded>
    </item>
    <item>
      <title>(Not) Using a Huawei E585 MIFI in the USA</title>
      <link>https://blog.richardfennell.net/posts/not-using-a-huawei-e585-mifi-in-the-usa/</link>
      <pubDate>Mon, 27 Feb 2012 00:46:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/not-using-a-huawei-e585-mifi-in-the-usa/</guid>
      <description>&lt;p&gt;I have an unlocked Hauwei E585 MIFI that I use around europe, avoiding roaming charges for my UK mobile contract. I buy a local pay as you go SIM for the appropriate country and off I go.&lt;/p&gt;
&lt;p&gt;I thought I would try the same here in the USA, where I am for the MVP Summit. I bought a T-Mobile data SIM, but it did not work so well.&lt;/p&gt;
&lt;p&gt;Basically the issue seems to be one of aerials. The E585 does not have the aerials it needs to connect for data in the USA; the best it can do is a 2G connection, and even this seems to have issues: as &lt;a href=&#34;http://www.howardforums.com/showthread.php/1672611-Huawei-E585-%28MIFI%29-from-UK-s-three-co-uk&#34;&gt;mentioned in this post&lt;/a&gt;, you need a phone SIM and not a data SIM. The bottom line seems to be that the E585 is 2100MHz/900MHz UMTS only; AT&amp;amp;T are 850MHz UMTS, T-Mobile will work on EDGE (2G) only, and Verizon is CDMA. So it is just not going to work.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have an unlocked Hauwei E585 MIFI that I use around europe, avoiding roaming charges for my UK mobile contract. I buy a local pay as you go SIM for the appropriate country and off I go.</p>
<p>I thought I would try the same here in the USA, where I am for the MVP Summit. I bought a T-Mobile data SIM, but it did not work so well.</p>
<p>Basically the issue seems to be one of aerials. The E585 does not have the aerials it needs to connect for data in the USA; the best it can do is a 2G connection, and even this seems to have issues: as <a href="http://www.howardforums.com/showthread.php/1672611-Huawei-E585-%28MIFI%29-from-UK-s-three-co-uk">mentioned in this post</a>, you need a phone SIM and not a data SIM. The bottom line seems to be that the E585 is 2100MHz/900MHz UMTS only; AT&amp;T are 850MHz UMTS, T-Mobile will work on EDGE (2G) only, and Verizon is CDMA. So it is just not going to work.</p>
<p>So next we tried it in other devices</p>
<ul>
<li>In an LG E900 Windows Phone 7 it worked fine as a MIFI, but again only 2G/EDGE so a bit sloooow; OK for email, but that was all.</li>
<li>Next it was a Samsung Windows 8 tablet from the Build conference; this was better, seeming to give 3G speeds (the icons did not mention the network type), but it could not share its network connection</li>
</ul>
<p>So the top tip? I think I need a US Mifi</p>
]]></content:encoded>
    </item>
    <item>
      <title>More on new TFS 11 announcements</title>
      <link>https://blog.richardfennell.net/posts/more-on-new-tfs-11-announcements/</link>
      <pubDate>Fri, 24 Feb 2012 15:19:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-new-tfs-11-announcements/</guid>
      <description>&lt;p&gt;There is a good discussion of the new TFS 11 announcements in the new &lt;a href=&#34;http://www.radiotfs.com/Show/38/ChristmasinFebruary&#34;&gt;Radio TFS podcast&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a good discussion of the new TFS 11 announcements in the new <a href="http://www.radiotfs.com/Show/38/ChristmasinFebruary">Radio TFS podcast</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 11 and VS 11 Announcements</title>
      <link>https://blog.richardfennell.net/posts/tfs-11-and-vs-11-announcements/</link>
      <pubDate>Thu, 23 Feb 2012 19:54:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-11-and-vs-11-announcements/</guid>
      <description>&lt;p&gt;Microsoft have made a few announcements today&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2012/02/23/coming-soon-tfs-express.aspx&#34;&gt;On Brian Harry’s Blog&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In the TFS 11 range there will be a new download of TFS, called Team Foundation Server Express, that includes core developer features:&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Source Code Control&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Work Item Tracking&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build Automation&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Agile Taskboard&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;and is free for 5 users (you buy CALs to add more)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Visual Studio Express will support TFS&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/jasonz/archive/2012/02/23/sneak-preview-of-visual-studio-11-and-net-framework-4-5-beta.aspx&#34;&gt;On Jason Zander’s Blog&lt;/a&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A sneak peek of the upcoming VS11 and TFS 11 beta&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more details read the full posts I have linked to, and look out for the beta that will be out on the 29th&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Microsoft have made a few announcements today</p>
<p><a href="http://blogs.msdn.com/b/bharry/archive/2012/02/23/coming-soon-tfs-express.aspx">On Brian Harry’s Blog</a></p>
<ul>
<li>
<p>In the TFS 11 range there will be a new download of TFS, called Team Foundation Server Express, that includes core developer features:</p>
</li>
<li>
<p>Source Code Control</p>
</li>
<li>
<p>Work Item Tracking</p>
</li>
<li>
<p>Build Automation</p>
</li>
<li>
<p>Agile Taskboard</p>
</li>
<li>
<p>and is free for 5 users (you buy CALs to add more)</p>
</li>
<li>
<p>Visual Studio Express will support TFS</p>
</li>
</ul>
<p><a href="http://blogs.msdn.com/b/jasonz/archive/2012/02/23/sneak-preview-of-visual-studio-11-and-net-framework-4-5-beta.aspx">On Jason Zander’s Blog</a></p>
<ul>
<li>A sneak peek of the upcoming VS11 and TFS 11 beta</li>
</ul>
<p>For more details read the full posts I have linked to, and look out for the beta that will be out on the 29th</p>
]]></content:encoded>
    </item>
    <item>
      <title>Free webinar on Typemock Isolator V7</title>
      <link>https://blog.richardfennell.net/posts/free-webinar-on-typemock-isolator-v7/</link>
      <pubDate>Mon, 20 Feb 2012 13:54:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/free-webinar-on-typemock-isolator-v7/</guid>
      <description>&lt;p&gt;Next Wednesday (the 22nd of February) Typemock are running a &lt;a href=&#34;https://www2.gotomeeting.com/register/723421498&#34;&gt;free webinar ‘Isolator V7 Preview: A New Perspective on Unit Testing’&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It will be showcasing&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Immediate feedback of newly introduced bugs with a new autorunner.&lt;/li&gt;
&lt;li&gt;Pinpoint identification of the bug&amp;rsquo;s location with the failed-test analyzer.&lt;/li&gt;
&lt;li&gt;Visual coverage of which part of your code is covered&lt;/li&gt;
&lt;li&gt;Powerful mocking, guaranteeing that you can write tests for any code, whether new code or legacy code&lt;/li&gt;
&lt;li&gt;Industry integration with major development tools&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All attendees get a free beta license for the new V7 product.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Next Wednesday (the 22nd of February) Typemock are running a <a href="https://www2.gotomeeting.com/register/723421498">free webinar ‘Isolator V7 Preview: A New Perspective on Unit Testing’</a>.</p>
<p>It will be showcasing</p>
<ul>
<li>Immediate feedback of newly introduced bugs with a new autorunner.</li>
<li>Pinpoint identification of the bug&rsquo;s location with the failed-test analyzer.</li>
<li>Visual coverage of which part of your code is covered</li>
<li>Powerful mocking, guaranteeing that you can write tests for any code, whether new code or legacy code</li>
<li>Industry integration with major development tools</li>
</ul>
<p>All attendees get a free beta license for the new V7 product.</p>
<p><strong>Updated 23 Feb 2012</strong> You can view a recording of the webinar at <a href="http://www.typemock.com/isolator-v7-preview/">http://www.typemock.com/isolator-v7-preview/</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Follow up to the North East Imagine Cup event</title>
      <link>https://blog.richardfennell.net/posts/follow-up-to-the-north-east-imagine-cup-event/</link>
      <pubDate>Sun, 19 Feb 2012 11:15:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-up-to-the-north-east-imagine-cup-event/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/sspencer/post/2012/02/19/Imagine-Cup-NE.aspx&#34;&gt;Steve did a nice post&lt;/a&gt; on the fun we had at the event for Universities and Colleges in the North East of England to help them to develop their ideas for their &lt;a href=&#34;http://www.imaginecup.com/&#34;&gt;Imagine Cup&lt;/a&gt; entries.&lt;/p&gt;
&lt;p&gt;I won’t bother to just repeat what he wrote, other than it was very nice to see students so engaged with our industry. I hope some of the attendees will be able to make it to other &lt;a href=&#34;http://www.nebytes.net/&#34;&gt;community events&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/sspencer/post/2012/02/19/Imagine-Cup-NE.aspx">Steve did a nice post</a> on the fun we had at the event for Universities and Colleges in the North East of England to help them to develop their ideas for their <a href="http://www.imaginecup.com/">Imagine Cup</a> entries.</p>
<p>I won’t bother to just repeat what he wrote, other than it was very nice to see students so engaged with our industry. I hope some of the attendees will be able to make it to other <a href="http://www.nebytes.net/">community events</a>.</p>
<p>I mentioned a good number of books to various people that provide background on best practice. You can find links to all I mentioned, and a good few more, on the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/page/Reading-List.aspx">reading list section</a> of this blog</p>
]]></content:encoded>
    </item>
    <item>
      <title>You don’t need to keep that old TFS server to support older versions of Visual Studio</title>
      <link>https://blog.richardfennell.net/posts/you-dont-need-to-keep-that-old-tfs-server-to-support-older-versions-of-visual-studio/</link>
      <pubDate>Thu, 16 Feb 2012 17:57:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-dont-need-to-keep-that-old-tfs-server-to-support-older-versions-of-visual-studio/</guid>
      <description>&lt;p&gt;I have recently spoken to a number of people who were under the impression that older versions of Visual Studio could not connect to TFS2010. This is not the case. So for example you do not need to keep a TFS2005 running for your VS2005 clients.&lt;/p&gt;
&lt;p&gt;Why, you might ask, does this question even come up? VS2010 can build any .NET 2.0 –&amp;gt; 4.0 project, so why not upgrade all your projects to VS2010? The answer is that products such as BizTalk and SQL Business Intelligence use older versions of the Visual Studio Shell, e.g. for SQL 2008 BI you are in effect using VS/Team Explorer 2008. Though it must be said this issue is getting better, currently a BI developer still ends up having to use VS 2008 (until &lt;a href=&#34;http://blogs.msdn.com/b/ssdt/&#34;&gt;SSDT arrives&lt;/a&gt; with SQL 2012)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently spoken to a number of people who were under the impression that older versions of Visual Studio could not connect to TFS2010. This is not the case. So for example you do not need to keep a TFS2005 running for your VS2005 clients.</p>
<p>Why, you might ask, does this question even come up? VS2010 can build any .NET 2.0 –&gt; 4.0 project, so why not upgrade all your projects to VS2010? The answer is that products such as BizTalk and SQL Business Intelligence use older versions of the Visual Studio Shell, e.g. for SQL 2008 BI you are in effect using VS/Team Explorer 2008. Though it must be said this issue is getting better, currently a BI developer still ends up having to use VS 2008 (until <a href="http://blogs.msdn.com/b/ssdt/">SSDT arrives</a> with SQL 2012)</p>
<p>Also some companies may just have a policy to stay on a version of VS for their own reasons.</p>
<p>Either way this is not a barrier to using TFS 2010. The key to getting these older versions of Visual Studio to talk to TFS2010 is only a matter of applying the correct patch sets, so for</p>
<ul>
<li>2005 use <a href="http://www.microsoft.com/download/en/details.aspx?id=3263">Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010</a></li>
<li>2008 use <a href="http://www.microsoft.com/download/en/details.aspx?id=10834">Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010</a></li>
<li>2010 obviously this just works, but remember to keep it patched up to date.</li>
</ul>
<p>All the products can be installed side by side.</p>
<p>Another point to note is that if you are using any of the TFS 2010 PowerTools and want the same features in old versions of Team Explorer you must <strong>also</strong> install the 2005 and/or 2008 PowerTools versions. Even if the 2010 PowerTools are installed, they will not be found by the 2005 or 2008 clients. The most common time you see this issue is when using check in policies.</p>
<p>For those of you working with VS2003 or VB6 all is not lost, you too can use TFS 2010; you just need Team Explorer 2010 installed and the <a href="http://visualstudiogallery.msdn.microsoft.com/bce06506-be38-47a1-9f29-d3937d3d88d6">MSSCCI provider</a></p>
<p>Hope this post clears up a bit of confusion.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Thanks to everyone who came to my session at NEBytes last night</title>
      <link>https://blog.richardfennell.net/posts/thanks-to-everyone-who-came-to-my-session-at-nebytes-last-night/</link>
      <pubDate>Thu, 16 Feb 2012 10:57:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thanks-to-everyone-who-came-to-my-session-at-nebytes-last-night/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session at &lt;a href=&#34;http://www.nebytes.net/&#34;&gt;NEBytes&lt;/a&gt; last night, sorry I had to rush away. As my session was demo based I don’t have much in the way of slides to upload, but if you want to find out more have a look at my &lt;a href=&#34;http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx&#34; title=&#34;guest blog post on the UK Visual Studio blog I did&#34;&gt;guest blog post on the UK Visual Studio blog on ‘TFS for Everyone’&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Also keep an eye on the &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=Visual%20Studio%20Team%20Foundation%20for%20Everyone%20(Online)&#34;&gt;Black Marble site for upcoming free webinar sessions&lt;/a&gt; on the same subject&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session at <a href="http://www.nebytes.net/">NEBytes</a> last night, sorry I had to rush away. As my session was demo based I don’t have much in the way of slides to upload, but if you want to find out more have a look at my <a href="http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx" title="guest blog post on the UK Visual Studio blog I did">guest blog post on the UK Visual Studio blog on ‘TFS for Everyone’</a>.</p>
<p>Also keep an eye on the <a href="http://www.blackmarble.com/events.aspx?event=Visual%20Studio%20Team%20Foundation%20for%20Everyone%20(Online)">Black Marble site for upcoming free webinar sessions</a> on the same subject</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking on the 15th Feb at the NE Bytes user group on “TFS for Everyone”</title>
      <link>https://blog.richardfennell.net/posts/speaking-on-the-15th-feb-at-the-ne-bytes-user-group-on-tfs-for-everyone/</link>
      <pubDate>Wed, 08 Feb 2012 10:23:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-on-the-15th-feb-at-the-ne-bytes-user-group-on-tfs-for-everyone/</guid>
      <description>&lt;p&gt;I will be speaking at the &lt;a href=&#34;http://www.nebytes.net/category/events.aspx&#34;&gt;NE Bytes user group&lt;/a&gt; next Wednesday (the 15th of Feb) on ‘TFS for Everyone’.&lt;/p&gt;
&lt;p&gt;This a session based on the &lt;a href=&#34;http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx&#34;&gt;guest blog post on the UK Visual Studio blog  I did&lt;/a&gt; on how TFS is not just for .NET developers. It can be used from a whole range of development platforms and operating systems. I will be including demos using Eclipse and Ubuntu.&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking at the <a href="http://www.nebytes.net/category/events.aspx">NE Bytes user group</a> next Wednesday (the 15th of Feb) on ‘TFS for Everyone’.</p>
<p>This a session based on the <a href="http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx">guest blog post on the UK Visual Studio blog  I did</a> on how TFS is not just for .NET developers. It can be used from a whole range of development platforms and operating systems. I will be including demos using Eclipse and Ubuntu.</p>
<p>Hope to see you there.</p>
<p><strong>Updated 9 Feb 2012</strong> – here is the registration link <a href="http://nebytesfeb2012.eventbrite.co.uk">http://nebytesfeb2012.eventbrite.co.uk</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Webinar on migration from VSS to TFS</title>
      <link>https://blog.richardfennell.net/posts/webinar-on-migration-from-vss-to-tfs/</link>
      <pubDate>Tue, 07 Feb 2012 20:14:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/webinar-on-migration-from-vss-to-tfs/</guid>
      <description>&lt;p&gt;I will be giving a free webinar presentation on the morning of Monday  the 20th February on ‘Why migrate from Visual SourceSafe to Visual Studio Team Foundation Server’.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blackmarble.co.uk/events.aspx?event=Why%20migrate%20from%20Visual%20SourceSafe%20to%20Visual%20Studio%20Team%20Foundation%20Server%20(Online)&#34;&gt;To register for this event please check out the Black Marble site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be giving a free webinar presentation on the morning of Monday  the 20th February on ‘Why migrate from Visual SourceSafe to Visual Studio Team Foundation Server’.</p>
<p><a href="http://blackmarble.co.uk/events.aspx?event=Why%20migrate%20from%20Visual%20SourceSafe%20to%20Visual%20Studio%20Team%20Foundation%20Server%20(Online)">To register for this event please check out the Black Marble site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>More Community TFS Build Extension Documentation</title>
      <link>https://blog.richardfennell.net/posts/more-community-tfs-build-extension-documentation/</link>
      <pubDate>Mon, 06 Feb 2012 20:49:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-community-tfs-build-extension-documentation/</guid>
      <description>&lt;p&gt;Just published &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20VB6%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;documentation of the VB6 activity&lt;/a&gt; on the Community TFS Build extensions site&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just published <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20VB6%20build%20activity&amp;referringTitle=Documentation">documentation of the VB6 activity</a> on the Community TFS Build extensions site</p>
]]></content:encoded>
    </item>
    <item>
      <title>Filtering in MDX Calculated Members</title>
      <link>https://blog.richardfennell.net/posts/filtering-in-mdx-calculated-members/</link>
      <pubDate>Sat, 04 Feb 2012 10:28:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/filtering-in-mdx-calculated-members/</guid>
      <description>&lt;p&gt;BI development is not something I do that often, but from time to time you need to develop a custom report in TFS. I recently had to battle a MDX problem that someone who does more BI development I am sure why have had no issue with; but as with most of these blog posts (or my long term memory as I think of it) I thought it worth a post in case it helps anyone else&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>BI development is not something I do that often, but from time to time you need to develop a custom report in TFS. I recently had to battle a MDX problem that someone who does more BI development I am sure why have had no issue with; but as with most of these blog posts (or my long term memory as I think of it) I thought it worth a post in case it helps anyone else</p>
<p>What I was trying to do was produce a table, as shown below, that would allow me to report for a project on the total estimated time in the requirements work items (an estimate made in the requirements planning phase of the project), the total original estimate in the tasks work items (the estimate made by the developers during iteration planning) and compare both with the actual completed time from the task work items.</p>
<table>
<thead>
<tr><th>Project</th><th>Estimated Effort (from Requirements)</th><th>Estimated Effort (from Tasks)</th><th>Actual Effort (from Tasks)</th></tr>
</thead>
<tbody>
<tr><td>Proj A</td><td>10</td><td>11</td><td>12</td></tr>
<tr><td>Proj B</td><td>15</td><td>14</td><td>21</td></tr>
<tr><td>Proj C</td><td>23</td><td>20</td><td>24</td></tr>
<tr><td>Proj D</td><td>9</td><td>10</td><td>10</td></tr>
</tbody>
</table>
<p>The problem is that in the TFS warehouse both the requirement and task work item estimates are stored in the</p>
<blockquote>
<p>[Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate]</p></blockquote>
<p>measure. In the MDX query I needed to add a pair of calculated measures that would filter for the two work item types.</p>
<p>This is where I stumbled: should I use <a href="http://msdn.microsoft.com/en-us/library/ms145994.aspx">IIF()</a> or <a href="http://msdn.microsoft.com/en-us/library/ms146037.aspx">FILTER()</a>? So I tried both. However, working in Report Builder 3 neither seemed to work; I ended up with either an empty column, or no filtering at all and the sum of all the work items' estimates irrespective of the filter.</p>
<p>The first tip is to stop working in Report Builder; it is great for making the report look good, but not the best tool for resolving MDX issues. Use the query tool within SQL Management Studio instead. As soon as I did this I saw some of my efforts were returning #Err, which explained my empty columns: Report Builder seemed to just swallow the #Err and give me an empty column.</p>
<p>After a bit more digging I found the form that did what I needed, ending up with the following MDX for the calculated measures</p>
<blockquote>
<p>MEMBER [Measures].[EstimatedWorkForTasks] AS '([Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate], [Work Item].[System_WorkItemType].[Task])'</p></blockquote>
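<p>To put that calculated member in context, the overall query ended up shaped roughly like the sketch below. Only the measure and [Work Item] dimension names come from this post; the cube name ([Team System]), the [Team Project] dimension and the Completed Work measure name are assumptions based on a typical TFS 2010 warehouse, so adjust them to match your own Tfs_Analysis cube.</p>
<pre><code>WITH
  -- the tuple with a work item type member does the filtering,
  -- no IIF() or FILTER() required
  MEMBER [Measures].[EstimatedWorkForRequirements] AS
    '([Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate],
      [Work Item].[System_WorkItemType].[Requirement])'
  MEMBER [Measures].[EstimatedWorkForTasks] AS
    '([Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate],
      [Work Item].[System_WorkItemType].[Task])'
SELECT
  { [Measures].[EstimatedWorkForRequirements],
    [Measures].[EstimatedWorkForTasks],
    [Measures].[Microsoft_VSTS_Scheduling_CompletedWork] } ON COLUMNS,
  [Team Project].[Team Project].Members ON ROWS
FROM [Team System]
</code></pre>
<p>Running a query of this shape in SQL Management Studio gives one row per project, matching the table layout above, before any Report Builder formatting is applied.</p>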
<p>By editing the MDX in SQL Management Studio it was far quicker to develop and debug</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_23.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_23.png" title="image"></a></p>
<p>Once I was happy with the MDX, I could cut and paste it back into Report Builder and fix the layout of the report. And all without using either IIF() or FILTER().</p>
]]></content:encoded>
    </item>
    <item>
      <title>Enjoyed our SDL event last week?</title>
      <link>https://blog.richardfennell.net/posts/enjoyed-our-sdl-event-last-week/</link>
      <pubDate>Fri, 03 Feb 2012 20:56:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/enjoyed-our-sdl-event-last-week/</guid>
      <description>&lt;p&gt;If you enjoyed Black Marble’s SDL event last week there is more info on the subject of &lt;a href=&#34;http://www.dotnetrocks.com/default.aspx?showNum=738&#34;&gt;SDL on this weeks .NET Rocks podcast&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you enjoyed Black Marble’s SDL event last week there is more info on the subject of <a href="http://www.dotnetrocks.com/default.aspx?showNum=738">SDL on this weeks .NET Rocks podcast</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>My guest post on Microsoft Visual Studio UK Team Blog published</title>
      <link>https://blog.richardfennell.net/posts/my-guest-post-on-microsoft-visual-studio-uk-team-blog-published/</link>
      <pubDate>Thu, 02 Feb 2012 22:48:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-guest-post-on-microsoft-visual-studio-uk-team-blog-published/</guid>
      <description>&lt;p&gt;A &lt;a href=&#34;http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx&#34;&gt;guest post&lt;/a&gt; I wrote on TFS for the non-.NET developer has just been published on the &lt;a href=&#34;http://blogs.msdn.com/b/visualstudiouk/&#34;&gt;Microsoft Visual Studio UK Team Blog&lt;/a&gt;. Why not pop over and have a look.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A <a href="http://blogs.msdn.com/b/visualstudiouk/archive/2012/02/02/an-introduction-to-agile-development-with-team-foundation-server-but-i-m-not-a-net-developer.aspx">guest post</a> I wrote on TFS for the non-.NET developer has just been published on the <a href="http://blogs.msdn.com/b/visualstudiouk/">Microsoft Visual Studio UK Team Blog</a>. Why not pop over and have a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Update released to fix issue for TFS 2010 with SQL Enterprise Page Compression</title>
      <link>https://blog.richardfennell.net/posts/update-released-to-fix-issue-for-tfs-2010-with-sql-enterprise-page-compression/</link>
      <pubDate>Thu, 02 Feb 2012 09:51:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-released-to-fix-issue-for-tfs-2010-with-sql-enterprise-page-compression/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2012/02/01/make-sure-your-sql-server-enterprise-edition-is-up-to-date.aspx&#34;&gt;Brian Harry has just posted&lt;/a&gt; on his blog about a bug in SQL page compression (found in SQL Enterprise) that can cause problems with TFS 2010 or TFS1(preview), previous version of TFS did not support page compression.&lt;/p&gt;
&lt;p&gt;So if you are using TFS and SQL Enterprise have a read and make sure the correct updates are in place.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.msdn.com/b/bharry/archive/2012/02/01/make-sure-your-sql-server-enterprise-edition-is-up-to-date.aspx">Brian Harry has just posted</a> on his blog about a bug in SQL page compression (found in SQL Enterprise) that can cause problems with TFS 2010 or TFS1(preview), previous version of TFS did not support page compression.</p>
<p>So if you are using TFS and SQL Enterprise have a read and make sure the correct updates are in place.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Have you tried switching it on and off again? Go on be aggressive!</title>
      <link>https://blog.richardfennell.net/posts/have-you-tried-switching-it-on-and-off-again-go-on-be-aggressive/</link>
      <pubDate>Tue, 31 Jan 2012 12:43:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/have-you-tried-switching-it-on-and-off-again-go-on-be-aggressive/</guid>
      <description>&lt;p&gt;We have been building ‘standard’ environments for our TFS Lab Management system. Environments that can be used for most of the projects we are involved in without too much extra setup e.g. a small domain controller VM and a Server VM with SQL and SharePoint. These environments have a series of snapshots so it can be used in a number of ways e.g if we just want SQL and IIS we just go back to a snapshot prior to SharePoint being installed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have been building ‘standard’ environments for our TFS Lab Management system: environments that can be used for most of the projects we are involved in without too much extra setup, e.g. a small domain controller VM and a server VM with SQL and SharePoint. These environments have a series of snapshots so they can be used in a number of ways, e.g. if we just want SQL and IIS we just go back to a snapshot taken prior to SharePoint being installed.</p>
<p>When trying to deploy one of these environments we saw a couple of issues.</p>
<h2 id="capacity">Capacity</h2>
<p>First we got an error saying there was not a suitable host with enough capacity to host the environment (remember all the VMs in a network isolated environment need to be on the same Hyper-V host). This can be a bit of a red herring, as with dynamic memory and other new Hyper-V features there is often capacity available (<a href="http://pascoal.net/tag/labmanagement/">see Tiago’s post on this for more details</a>). The fix here was to set TFS to allow aggressive deployment using the command</p>
<blockquote>
<p>C:\Program Files\Microsoft Team Foundation Server 2010\Tools&gt;tfsconfig lab /hostgroup /collectionName:myTpc /labenvironmentplacementpolicy:aggressive /edit /Name:&ldquo;My hosts group&rdquo;</p></blockquote>
<h2 id="initial-startup">Initial Startup</h2>
<p>The next problem I saw was that when the new environment was deployed it did not start cleanly. The first time an environment is started it seems to take longer than subsequent starts (I assume some initial configuration is done). Basically, in this case network isolation did not start correctly, hence the build and testing capabilities also failed.</p>
<p>The fix was simple, shut down the environment and start it again. The tried and trusted IT answer to all problems. This time it started fine, and was faster to start.</p>
<p>Now, I have not seen this issue every time I deploy; when I deployed the same environment again it worked perfectly first time. I suspect it was really a capacity issue on the underlying Hyper-V server causing some delay, but as I am running in aggressive mode I should expect this.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More links from our SDL event</title>
      <link>https://blog.richardfennell.net/posts/more-links-from-our-sdl-event/</link>
      <pubDate>Fri, 27 Jan 2012 20:16:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-links-from-our-sdl-event/</guid>
      <description>&lt;p&gt;If you want to hear a bit more detail on some of the common security issues we discussed at yesterday’s SDL event, why not listen to the recent .NET Rocks show &lt;a href=&#34;http://www.dotnetrocks.com/default.aspx?showNum=735&#34;&gt;Troy Hunt Secures ASP.NET&lt;/a&gt; or download &lt;a href=&#34;http://asafaweb.com/OWASP%20Top%2010%20for%20.NET%20developers.pdf&#34;&gt;his PDF ‘OWASP Top 10 for .NET developers’&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you want to hear a bit more detail on some of the common security issues we discussed at yesterday’s SDL event, why not listen to the recent .NET Rocks show <a href="http://www.dotnetrocks.com/default.aspx?showNum=735">Troy Hunt Secures ASP.NET</a> or download <a href="http://asafaweb.com/OWASP%20Top%2010%20for%20.NET%20developers.pdf">his PDF ‘OWASP Top 10 for .NET developers’</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Links from our SDL event today</title>
      <link>https://blog.richardfennell.net/posts/links-from-our-sdl-event-today/</link>
      <pubDate>Thu, 26 Jan 2012 19:30:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/links-from-our-sdl-event-today/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended our SDL event today. Here are a few links to the information we mentioned:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.microsoft.com/security/sdl/default.aspx&#34;&gt;Microsoft SDL home page&lt;/a&gt;, full of downloads, links and training videos&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20cat.net%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;TFS Build extensions for CAT.NET&lt;/a&gt;, showing how to add SDL validation to a TFS build&lt;/li&gt;
&lt;li&gt;And there is a selection of links to books on SDL on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/page/Reading-List.aspx&#34;&gt;my blog’s reading list page&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended our SDL event today. Here are a few links to the information we mentioned:</p>
<ul>
<li><a href="http://www.microsoft.com/security/sdl/default.aspx">Microsoft SDL home page</a>, full of downloads, links and training videos</li>
<li><a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20cat.net%20build%20activity&amp;referringTitle=Documentation">TFS Build extensions for CAT.NET</a>, showing how to add SDL validation to a TFS build</li>
<li>And there is a selection of links to books on SDL on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/page/Reading-List.aspx">my blog’s reading list page</a></li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>New 1.3.0.0 release of the Community TFS Build Extensions for TFS 2010</title>
      <link>https://blog.richardfennell.net/posts/new-1-3-0-0-release-of-the-community-tfs-build-extensions-for-tfs-2010/</link>
      <pubDate>Wed, 25 Jan 2012 21:53:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-1-3-0-0-release-of-the-community-tfs-build-extensions-for-tfs-2010/</guid>
      <description>&lt;p&gt;A new &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/releases/view/79301&#34;&gt;1.3.0.0 release of the Community TFS Build Extensions&lt;/a&gt; has been published today. This contains some fixes and two new activities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20SPDispose%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;SPDisposeChecker&lt;/a&gt; – checks for SharePoint best practice&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20cat.net%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;CatNetScan&lt;/a&gt; – checks for common security issues&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>A new <a href="http://tfsbuildextensions.codeplex.com/releases/view/79301">1.3.0.0 release of the Community TFS Build Extensions</a> has been published today. This contains some fixes and two new activities:</p>
<ul>
<li><a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20SPDispose%20build%20activity&amp;referringTitle=Documentation">SPDisposeChecker</a> – checks for SharePoint best practice</li>
<li><a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20cat.net%20build%20activity&amp;referringTitle=Documentation">CatNetScan</a> – checks for common security issues</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble launches new series of free online events</title>
      <link>https://blog.richardfennell.net/posts/black-marble-launches-new-series-of-free-online-event/</link>
      <pubDate>Thu, 19 Jan 2012 13:31:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marble-launches-new-series-of-free-online-event/</guid>
      <description>&lt;p&gt;For many years &lt;a href=&#34;http://www.blackmarble.com/SectionDisplay.aspx?name=Events&#34;&gt;Black Marble has run free events&lt;/a&gt; on a wide range of technologies. Originally we only ran these locally in Yorkshire, but as time progressed we have run them across the UK and Eire. They are a great way to help your staff, whether technical or managerial, get up to speed with new developments in our industry.&lt;/p&gt;
&lt;p&gt;For 2012 we have decided to &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=News&amp;amp;title=A%20Series%20of%20Online%20Events&#34;&gt;try running some events online&lt;/a&gt;. The format will be short 30-minute interactive sessions in which you will see a short presentation and/or demonstration of a technology, followed by a Q&amp;amp;A session with the presenter.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For many years <a href="http://www.blackmarble.com/SectionDisplay.aspx?name=Events">Black Marble has run free events</a> on a wide range of technologies. Originally we only ran these locally in Yorkshire, but as time progressed we have run them across the UK and Eire. They are a great way to help your staff, whether technical or managerial, get up to speed with new developments in our industry.</p>
<p>For 2012 we have decided to <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=News&amp;title=A%20Series%20of%20Online%20Events">try running some events online</a>. The format will be short 30-minute interactive sessions in which you will see a short presentation and/or demonstration of a technology, followed by a Q&amp;A session with the presenter.</p>
<p>Hopefully these will prove to be as successful as our other events.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF266026 error when a workflow will not start in a lab environment</title>
      <link>https://blog.richardfennell.net/posts/tf266026-error-when-a-workflow-will-not-start-in-a-lab-environment/</link>
      <pubDate>Wed, 18 Jan 2012 17:05:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf266026-error-when-a-workflow-will-not-start-in-a-lab-environment/</guid>
      <description>&lt;p&gt;A common cause of the TF266026 error is that when the build agent tries to start (it is the build agent that runs the workflows in Lab Management) it cannot access the custom assemblies folder defined for its parent build controller. Obviously this problem only occurs if you have set a custom assemblies path for the parent build controller.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_21.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_21.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The reason for the error is that the agent is running as the Lab Management service account, in my case &lt;strong&gt;tfs2010lab&lt;/strong&gt;, as defined for the TPC in the TFS Administration Console. This account by default has no rights to the source folder assigned for the custom assemblies. This is not usually an issue until the agent needs to access source control to load the custom assemblies (which it probably never actually uses, as it is not building code!).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A common cause of the TF266026 error is that when the build agent tries to start (it is the build agent that runs the workflows in Lab Management) it cannot access the custom assemblies folder defined for its parent build controller. Obviously this problem only occurs if you have set a custom assemblies path for the parent build controller.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_21.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_21.png" title="image"></a></p>
<p>The reason for the error is that the agent is running as the Lab Management service account, in my case <strong>tfs2010lab</strong>, as defined for the TPC in the TFS Administration Console. This account by default has no rights to the source folder assigned for the custom assemblies. This is not usually an issue until the agent needs to access source control to load the custom assemblies (which it probably never actually uses, as it is not building code!).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_22.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_22.png" title="image"></a></p>
<p>As soon as this service account is granted access to this folder, by making it a reader, contributor or builder on the team project, the problem goes away.</p>
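<p>If you prefer scripting the permission change rather than using the UI, the TFS 2010 <em>TFSSecurity</em> command-line tool can add the service account to the team project&rsquo;s Readers group. This is just a sketch; the project, account and collection URL below are placeholders, not values from my setup:</p>
<blockquote>
<p><em>tfssecurity /g+ &ldquo;[MyTeamProject]\Readers&rdquo; n:DOMAIN\tfs2010lab /collection:http://myserver:8080/tfs/DefaultCollection</em></p></blockquote>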
]]></content:encoded>
    </item>
    <item>
      <title>‘Showing a modal dialog box or form when the application is not running in UserInteractive mode’ error upgraded to TFS build extensions 1.2.0.0</title>
      <link>https://blog.richardfennell.net/posts/showing-a-modal-dialog-box-or-form-when-the-application-is-not-running-in-userinteractive-mode-error-upgraded-to-tfs-build-extensions-1-2-0-0/</link>
      <pubDate>Tue, 17 Jan 2012 15:14:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/showing-a-modal-dialog-box-or-form-when-the-application-is-not-running-in-userinteractive-mode-error-upgraded-to-tfs-build-extensions-1-2-0-0/</guid>
      <description>&lt;p&gt;Whilst upgrading a TFS 2010 build today to the new &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/&#34;&gt;1.2 release of the Community TFS Build Extensions&lt;/a&gt; we hit an issue. All seemed to go OK until the build tried to use the StyleCop activity, which failed with the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Showing a modal dialog box or form when the application is not running in UserInteractive mode is not a valid operation. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst upgrading a TFS 2010 build today to the new <a href="http://tfsbuildextensions.codeplex.com/">1.2 release of the Community TFS Build Extensions</a> we hit an issue. All seemed to go OK until the build tried to use the StyleCop activity, which failed with the error</p>
<blockquote>
<p><em>Showing a modal dialog box or form when the application is not running in UserInteractive mode is not a valid operation. Specify the ServiceNotification or DefaultDesktopOnly style to display a notification from a service application.</em></p></blockquote>
<p>After a bit of pointless fiddling we decided the only option was to set the build service in question to run interactively (set via the build service properties in the TFS Administration Console on the build box). Once this was done, the following dialog popped up</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image001.png"><img alt="clip_image001" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image001_thumb.png" title="clip_image001"></a> </p>
<p>On checking the assemblies copied into the <em><strong>CustomAssemblies</strong></em> folder referenced by the build controller we found we had an older version of this file (from the previous release of the build extensions).</p>
<p>Once we replaced this file we got a bit further: we did not get a dialog, but the build failed with the following error in the log</p>
<blockquote>
<p><em>Error: Could not load file or assembly &lsquo;StyleCop, Version=4.6.3.0, Culture=neutral, PublicKeyToken=f904653c63bc2738&rsquo; or one of its dependencies. The system cannot find the file specified. Stack Trace: at TfsBuildExtensions.Activities.CodeQuality.StyleCop.Scan() at TfsBuildExtensions.Activities.CodeQuality.StyleCop.InternalExecute() in D:\Projects\teambuild2010contrib\CustomActivities\VS2010\MAIN\Source\Activities.StyleCop\Stylecop.cs:line 134 at TfsBuildExtensions.Activities.BaseCodeActivity.Execute(CodeActivityContext context) in D:\Projects\teambuild2010contrib\CustomActivities\VS2010\MAIN\Source\Common\BaseCodeActivity.cs:line 67.</em></p></blockquote>
<p>The issue was that we had not upgraded the StyleCop assemblies in the <em><strong>CustomAssemblies</strong></em> folder to match the ones the 1.2.0.0 release of the build extensions was built against (it needed 4.6.3.0, note not the latest 4.7.x.x). So we changed these files to the 4.6.3.0 release and the build worked.</p>
<p>Interestingly, note that the file names changed between the 4.4.x.x and 4.6.x.x releases of StyleCop, from <strong>Microsoft.StyleCop.*.dll</strong> to just <strong>StyleCop.*.dll</strong>, so make sure you delete the old files in the <strong>CustomActivities</strong> folder to avoid confusion.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_20.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_20.png" title="image"></a></p>
<p>So the top tip here is to make sure you update all of the assemblies involved in your build to avoid dependency issues.</p>
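<p>A quick way to confirm which StyleCop version is actually sitting in the custom assemblies folder is to ask .NET for the assembly name, e.g. from a PowerShell prompt (the path below is a placeholder for your own controller&rsquo;s custom assemblies location):</p>
<blockquote>
<p><em>[System.Reflection.AssemblyName]::GetAssemblyName(&ldquo;C:\CustomAssemblies\StyleCop.dll&rdquo;).Version</em></p></blockquote>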
]]></content:encoded>
    </item>
    <item>
      <title>Confused over the workflow to get an environment setup in TFS Lab Management?</title>
      <link>https://blog.richardfennell.net/posts/confused-over-the-workflow-to-get-an-environment-setup-in-tfs-lab-management/</link>
      <pubDate>Tue, 17 Jan 2012 11:48:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/confused-over-the-workflow-to-get-an-environment-setup-in-tfs-lab-management/</guid>
      <description>&lt;p&gt;It can be a bit confusing to work out which tools to use at each stage of getting a lab environment up and running in TFS. Here is a basic workflow showing what you need to do in System Center Virtual Machine Manager prior to starting in MTM Lab Center.&lt;/p&gt;
&lt;p&gt;Note: if you want to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/23/Moving-Environments-between-TPCs-when-using-TFS-Lab-Management.aspx&#34;&gt;copy environments between TFS Team Project Collections, have a look at this post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_19.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_19.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It can be a bit confusing to work out which tools to use at each stage of getting a lab environment up and running in TFS. Here is a basic workflow showing what you need to do in System Center Virtual Machine Manager prior to starting in MTM Lab Center.</p>
<p>Note: if you want to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/23/Moving-Environments-between-TPCs-when-using-TFS-Lab-Management.aspx">copy environments between TFS Team Project Collections, have a look at this post</a>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_19.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_19.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TF260073 incompatible architecture error when trying to deploy an environment in Lab Manager</title>
      <link>https://blog.richardfennell.net/posts/tf260073-incompatible-architecture-error-when-trying-to-deploy-an-environment-in-lab-manager/</link>
      <pubDate>Tue, 17 Jan 2012 09:21:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf260073-incompatible-architecture-error-when-trying-to-deploy-an-environment-in-lab-manager/</guid>
      <description>&lt;p&gt;I got a TF260073, incompatible architecture error when trying to deploy a new virtual lab environment using a newly created VM and template. I found the fix in a &lt;a href=&#34;http://www.google.co.uk/url?sa=t&amp;amp;rct=j&amp;amp;q=tf260073&amp;amp;source=web&amp;amp;cd=2&amp;amp;ved=0CCgQFjAB&amp;amp;url=http%3A%2F%2Fsocial.msdn.microsoft.com%2FForums%2Fen-US%2Fvslab%2Fthread%2F3d613941-e24f-4095-be66-c621a51d5230%2F&amp;amp;ei=hDkVT8OsHJC3hAeXxZi4Ag&amp;amp;usg=AFQjCNGodU6eU2gnhu_nMPcYu__DTLCJTw&amp;amp;sig2=b93gLm5SxAzC3oaQrszwsg&#34;&gt;forum post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The issue was that when I had built the VMs, I had installed the Lab Management agents using a &lt;a href=&#34;http://archive.msdn.microsoft.com/vslabmgmt&#34;&gt;VMprep DVD ISO&lt;/a&gt; mounted with the ‘share image instead of copying it’ option. This, as the name implies, means the ISO is mounted from a share rather than copied to the server running the VM, which saves time and disk resources. When I stored my VM into the SCVMM Library I had left this option selected, i.e. with the VMPrep.iso still mounted. All I had to do to fix the issue was open the settings of the VM stored in the SCVMM Library and dismount the ISO, as shown below.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I got a TF260073, incompatible architecture error when trying to deploy a new virtual lab environment using a newly created VM and template. I found the fix in a <a href="http://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=tf260073&amp;source=web&amp;cd=2&amp;ved=0CCgQFjAB&amp;url=http%3A%2F%2Fsocial.msdn.microsoft.com%2FForums%2Fen-US%2Fvslab%2Fthread%2F3d613941-e24f-4095-be66-c621a51d5230%2F&amp;ei=hDkVT8OsHJC3hAeXxZi4Ag&amp;usg=AFQjCNGodU6eU2gnhu_nMPcYu__DTLCJTw&amp;sig2=b93gLm5SxAzC3oaQrszwsg">forum post</a>.</p>
<p>The issue was that when I had built the VMs, I had installed the Lab Management agents using a <a href="http://archive.msdn.microsoft.com/vslabmgmt">VMprep DVD ISO</a> mounted with the ‘share image instead of copying it’ option. This, as the name implies, means the ISO is mounted from a share rather than copied to the server running the VM, which saves time and disk resources. When I stored my VM into the SCVMM Library I had left this option selected, i.e. with the VMPrep.iso still mounted. All I had to do to fix the issue was open the settings of the VM stored in the SCVMM Library and dismount the ISO, as shown below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_18.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_18.png" title="image"></a></p>
<p>Interestingly, the other VM I was using in my environment was stored as a template and did not suffer this problem. When creating the template I was warned that it could not be created if an ISO was mounted in this manner, so the fact that I had a problem with my VM image should not have been a surprise.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting a ‘File Download’ dialog when trying to view TFS build report in Eclipse 3.7 with TEE</title>
      <link>https://blog.richardfennell.net/posts/getting-a-file-download-dialog-when-trying-to-view-tfs-build-report-in-eclipse-3-7-with-tee/</link>
      <pubDate>Fri, 13 Jan 2012 11:17:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-a-file-download-dialog-when-trying-to-view-tfs-build-report-in-eclipse-3-7-with-tee/</guid>
      <description>&lt;p&gt;When using TEE in Eclipse 3.7 on Ubuntu 11.10 there is a problem trying to view a TFS build report. If you click on the report in the Build Explorer you would expect a new tab to open and the report be shown. This is what you see in Eclipse on Windows and on older versions of Eclipse on Linux. However on Ubuntu 11.10 with Eclipse 3.7 you get a File Download dialog.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When using TEE in Eclipse 3.7 on Ubuntu 11.10 there is a problem trying to view a TFS build report. If you click on the report in the Build Explorer you would expect a new tab to open and the report be shown. This is what you see in Eclipse on Windows and on older versions of Eclipse on Linux. However on Ubuntu 11.10 with Eclipse 3.7 you get a File Download dialog.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_14.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_14.png" title="image"></a></p>
<p>I understand from Microsoft that this is a known issue; thanks again to the team for helping get to the bottom of it.</p>
<p>The problem is due to how Eclipse manages its internal web browser. Until version 3.7 it used the Mozilla stack (which is still the stack used internally by TEE for all its calls), but Eclipse 3.7 on Linux uses WebKit as the stack to open requested URLs such as the build report. For some reason this causes the dialog to be shown.</p>
<p>There are two workarounds:</p>
<p><strong>Set Eclipse to use an external browser</strong></p>
<p>In Eclipse, under Window –&gt; Preferences, select ‘Use external web browser’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_15.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_15.png" title="image"></a></p>
<p>When you now click on the build details an external browser is launched showing the results you would expect.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_16.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_16.png" title="image"></a></p>
<p><strong>Switch Eclipse back to using Mozilla as its default</strong></p>
<p><a href="http://www.eclipse.org/swt/faq.php#browserspecifydefault">You can switch Eclipse back to using mozilla as its default</a>. In your <strong>eclipse.ini</strong> set</p>
<blockquote>
<p><em>-Dorg.eclipse.swt.browser.DefaultType=mozilla</em></p></blockquote>
<p>Once this is done Eclipse should behave as expected, opening a tab to show the build report within Eclipse.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_17.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_17.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Changed my phone to a Nokia</title>
      <link>https://blog.richardfennell.net/posts/changed-my-phone-to-a-nokia/</link>
      <pubDate>Tue, 10 Jan 2012 12:52:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/changed-my-phone-to-a-nokia/</guid>
      <description>&lt;p&gt;I swapped to a Nokia Lumia 800 yesterday from my LG E900, all very quick an easy after &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/10/Error-0x80070490-when-trying-to-make-any-purchase-on-WP7-MarketPlace.aspx&#34;&gt;my experience last month&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;My first impressions&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;the on/off/volume buttons were better placed for a left hander on the LG, but I expect I will get used to that.&lt;/li&gt;
&lt;li&gt;the poor reception in my house was not the LGs fault – just a bad reception area&lt;/li&gt;
&lt;li&gt;the Nokia does seem faster&lt;/li&gt;
&lt;/ol&gt;</description>
      <content:encoded><![CDATA[<p>I swapped to a Nokia Lumia 800 yesterday from my LG E900, all very quick an easy after <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/12/10/Error-0x80070490-when-trying-to-make-any-purchase-on-WP7-MarketPlace.aspx">my experience last month</a>.</p>
<p>My first impressions</p>
<ol>
<li>the on/off/volume buttons were better placed for a left hander on the LG, but I expect I will get used to that.</li>
<li>the poor reception in my house was not the LGs fault – just a bad reception area</li>
<li>the Nokia does seem faster</li>
</ol>
]]></content:encoded>
    </item>
    <item>
      <title>Problems finding XULRunner when running TEE11 CTP1 on Ubuntu and connecting to TFS Azure – a solution</title>
      <link>https://blog.richardfennell.net/posts/problems-finding-xulrunner-when-running-tee11-ctp1-on-ubuntu-and-connecting-to-tfs-azure-a-solution/</link>
      <pubDate>Sun, 08 Jan 2012 21:41:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-finding-xulrunner-when-running-tee11-ctp1-on-ubuntu-and-connecting-to-tfs-azure-a-solution/</guid>
      <description>&lt;p&gt;I recently got round to taking a look at &lt;a href=&#34;http://www.google.co.uk/url?sa=t&amp;amp;rct=j&amp;amp;q=tee%20ctp1&amp;amp;source=web&amp;amp;cd=1&amp;amp;ved=0CB4QFjAA&amp;amp;url=http%3A%2F%2Fwww.microsoft.com%2Fdownload%2Fen%2Fdetails.aspx%3Fid%3D27544&amp;amp;ei=5-4JT-elLcjQ4QTTneiNCA&amp;amp;usg=AFQjCNGfUBcFTHG_hre2XsaSXTXbil9cgg&amp;amp;sig2=PhKiipceucSDUg3HTbhZkQ&#34;&gt;Team Explorer Everywhere 11 CTP1&lt;/a&gt;. This is the version of TEE that allows you to access the &lt;a href=&#34;http://tfspreview.com&#34;&gt;Azure hosted preview of the next version of TFS&lt;/a&gt; using Eclipse as a client. I decided to start with a clean OS, so:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Downloaded the &lt;a href=&#34;http://www.ubuntu.com/download/ubuntu/download&#34;&gt;Ubuntu 32bit ISO&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Used this ISO to create a test VM on my copy of &lt;a href=&#34;https://www.virtualbox.org/&#34;&gt;VirtualBox&lt;/a&gt; (currently using VirtualBox as this allows me to create 64bit and 32bit guest VMs on my Windows 7 laptop without having to reboot to my dual-boot Windows 2008 partition to access Hyper-V)&lt;/li&gt;
&lt;li&gt;Selected default installation options for Ubuntu&lt;/li&gt;
&lt;li&gt;When completed used the Ubuntu Software Centre tool to install Eclipse 3.7&lt;/li&gt;
&lt;li&gt;Downloaded the &lt;a href=&#34;http://www.google.co.uk/url?sa=t&amp;amp;rct=j&amp;amp;q=tee%20ctp1&amp;amp;source=web&amp;amp;cd=1&amp;amp;ved=0CB4QFjAA&amp;amp;url=http%3A%2F%2Fwww.microsoft.com%2Fdownload%2Fen%2Fdetails.aspx%3Fid%3D27544&amp;amp;ei=5-4JT-elLcjQ4QTTneiNCA&amp;amp;usg=AFQjCNGfUBcFTHG_hre2XsaSXTXbil9cgg&amp;amp;sig2=PhKiipceucSDUg3HTbhZkQ&#34;&gt;Team Explorer Everywhere 11 CTP1&lt;/a&gt; and installed the Eclipse plug-in as detailed on the download page.&lt;/li&gt;
&lt;li&gt;Once installed, I tried to connect to our in-house TFS 2010 server from within Eclipse – it all worked fine&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I next tried to connect to my project collection on &lt;a href=&#34;https://tfspreview.com&#34;&gt;https://tfspreview.com&lt;/a&gt; and this is where I hit a problem….&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently got round to taking a look at <a href="http://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=tee%20ctp1&amp;source=web&amp;cd=1&amp;ved=0CB4QFjAA&amp;url=http%3A%2F%2Fwww.microsoft.com%2Fdownload%2Fen%2Fdetails.aspx%3Fid%3D27544&amp;ei=5-4JT-elLcjQ4QTTneiNCA&amp;usg=AFQjCNGfUBcFTHG_hre2XsaSXTXbil9cgg&amp;sig2=PhKiipceucSDUg3HTbhZkQ">Team Explorer Everywhere 11 CTP1</a>. This is the version of TEE that allows you to access the <a href="http://tfspreview.com">Azure hosted preview of the next version of TFS</a> using Eclipse as a client. I decided to start with a clean OS, so:</p>
<ol>
<li>Downloaded the <a href="http://www.ubuntu.com/download/ubuntu/download">Ubuntu 32bit ISO</a></li>
<li>Used this ISO to create a test VM on my copy of <a href="https://www.virtualbox.org/">VirtualBox</a> (currently using VirtualBox as this allows me to create 64bit and 32bit guest VMs on my Windows 7 laptop without having to reboot to my dual-boot Windows 2008 partition to access Hyper-V)</li>
<li>Selected default installation options for Ubuntu</li>
<li>When completed used the Ubuntu Software Centre tool to install Eclipse 3.7</li>
<li>Downloaded the <a href="http://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=tee%20ctp1&amp;source=web&amp;cd=1&amp;ved=0CB4QFjAA&amp;url=http%3A%2F%2Fwww.microsoft.com%2Fdownload%2Fen%2Fdetails.aspx%3Fid%3D27544&amp;ei=5-4JT-elLcjQ4QTTneiNCA&amp;usg=AFQjCNGfUBcFTHG_hre2XsaSXTXbil9cgg&amp;sig2=PhKiipceucSDUg3HTbhZkQ">Team Explorer Everywhere 11 CTP1</a> and installed the Eclipse plug-in as detailed on the download page.</li>
<li>Once installed, I tried to connect to our in-house TFS 2010 server from within Eclipse – it all worked fine</li>
</ol>
<p>I next tried to connect to my project collection on <a href="https://tfspreview.com">https://tfspreview.com</a> and this is where I hit a problem….</p>
<p>Instead of getting the expected LiveID login screen I got an error dialog saying ‘<em>No more handles [Could not detect registered XULRunner to use]</em>’</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002.jpg"><img alt="clip_image002" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002_thumb.jpg" title="clip_image002"></a></p>
<p>A quick search showed this is a <a href="http://vanwynsberghea.posterous.com/team-explorer-everywhere-11-with-eclipse-on-l">known issue</a>; basically, Ubuntu has stopped distributing XULRunner, so it needs to be installed manually as detailed <a href="http://vanwynsberghea.posterous.com/team-explorer-everywhere-11-with-eclipse-on-l">in the post</a>. The problem was that, unlike in the post, when I followed this process it had no effect, so it was time for more digging, with the excellent assistance of Shaw from the TEE team at Microsoft.</p>
<p>The first suspect was the environment variable MOZILLA_FIVE_HOME which, <a href="http://www.eclipse.org/swt/faq.php">according to the SWT FAQ</a>, needs to be set to let Eclipse know where to find XULRunner. Checking the Eclipse <em>Help-&gt;Team Explorer Support…</em> dialog</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002%5B5%5D.jpg"><img alt="clip_image002[5]" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002%5B5%5D_thumb.jpg" title="clip_image002[5]"></a></p>
<p>seemed to show the correct setting had been picked up automatically. So as expected, on setting the environment variable it had no effect on the problem. So just to make sure I set the variable in <em>eclipse.ini</em> file using the setting</p>
<blockquote>
<p><em>-Dorg.eclipse.swt.browser.XULRunnerPath=/usr/lib/xulrunner-1.9.2.24</em></p></blockquote>
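<p>For reference, a minimal sketch of how that line sits in an eclipse.ini file (the memory setting is illustrative; the key point is that the -D system property must come after the -vmargs line, since everything below -vmargs is passed to the JVM rather than the launcher):</p>

```ini
-vmargs
-Xmx512m
-Dorg.eclipse.swt.browser.XULRunnerPath=/usr/lib/xulrunner-1.9.2.24
```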
<p>This changed the error message and gave a hint to the real problem.</p>
<p><a href="/wp-content/uploads/sites/2/historic/clip_image002%5B7%5D.jpg"><img alt="clip_image002[7]" loading="lazy" src="/wp-content/uploads/sites/2/historic/clip_image002%5B7%5D_thumb.jpg" title="clip_image002[7]"></a></p>
<p>XULRunner was failing to load as other dependencies were missing.</p>
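<p>The missing dependencies can be listed directly by asking the loader; a sketch, assuming the XULRunner path used above (adjust XUL_LIB to your installed version):</p>

```shell
# Path to the XULRunner shared library; the default below is the
# version from this post -- adjust to match your install.
XUL_LIB="${XUL_LIB:-/usr/lib/xulrunner-1.9.2.24/libxul.so}"

if [ -f "$XUL_LIB" ]; then
  # ldd prints every shared-object dependency; lines containing
  # "not found" are the libraries stopping XULRunner from loading.
  ldd "$XUL_LIB" | grep "not found" || echo "no missing dependencies"
else
  echo "library not found: $XUL_LIB"
fi
```

<p>On the broken Ubuntu Eclipse install this lists the libraries to chase down; on a healthy one it prints ‘no missing dependencies’.</p>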
<p>At this point I could have started to chase down all these dependencies. However, I realised the issue was the Ubuntu distribution of Eclipse; it just had too many of the bits missing that you need to log in to TFS Azure. So I removed the Ubuntu-sourced Eclipse installation and downloaded the current version of Eclipse <a href="http://www.eclipse.org/downloads/">direct from the Eclipse home site</a>.</p>
<ol>
<li>I unzipped this distribution</li>
<li>Installed TEE CTP1 as before</li>
<li>Checked I could access our TFS 2010 server</li>
<li>And checked I could login via <a href="http://tfspreview.com">http://tfspreview.com</a></li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_13.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_13.png" title="image"></a></p>
<p>So, success. The tip: use the official Eclipse distribution, as you never know what another distribution might have removed.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Jan 2012 Agile Yorkshire meeting – An Extreme Hour</title>
      <link>https://blog.richardfennell.net/posts/jan-2012-agile-yorkshire-meeting-an-extreme-hour/</link>
      <pubDate>Fri, 06 Jan 2012 10:27:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/jan-2012-agile-yorkshire-meeting-an-extreme-hour/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.agileyorkshire.org/event-announcements/10thjanuaryanextremehour&#34;&gt;Agile Yorkshire is kicking off the New Year on Tuesday Jan 10 at Old Broadcasting House&lt;/a&gt;, Leeds, where the subject will be &lt;a href=&#34;http://www.extremeprogramming.org/&#34;&gt;Extreme Programming (XP)&lt;/a&gt;. The session will be based around an Extreme Hour: a hands-on XP project miniature with no coding experience required. Then off to the pub to continue the chat.&lt;/p&gt;
&lt;p&gt;Unfortunately I cannot make the session, but I am sure it will be interesting, especially if you have not tried an Extreme Hour.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.agileyorkshire.org/event-announcements/10thjanuaryanextremehour">Agile Yorkshire is kicking off the New Year on Tuesday Jan 10 at Old Broadcasting House</a>, Leeds, where the subject will be <a href="http://www.extremeprogramming.org/">Extreme Programming (XP)</a>. The session will be based around an Extreme Hour: a hands-on XP project miniature with no coding experience required. Then off to the pub to continue the chat.</p>
<p>Unfortunately I cannot make the session, but I am sure it will be interesting, especially if you have not tried an Extreme Hour.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Radio TFS is back</title>
      <link>https://blog.richardfennell.net/posts/radio-tfs-is-back-2/</link>
      <pubDate>Tue, 03 Jan 2012 10:21:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/radio-tfs-is-back-2/</guid>
      <description>&lt;p&gt;In case you had not noticed, the &lt;a href=&#34;http://www.radiotfs.com/&#34;&gt;Radio TFS podcast&lt;/a&gt; is back after an eight-month hiatus. A good place to keep in touch with the new announcements related to VS11.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In case you had not noticed, the <a href="http://www.radiotfs.com/">Radio TFS podcast</a> is back after an eight-month hiatus. A good place to keep in touch with the new announcements related to VS11.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Radio TFS is back</title>
      <link>https://blog.richardfennell.net/posts/radio-tfs-is-back/</link>
      <pubDate>Tue, 03 Jan 2012 10:21:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/radio-tfs-is-back/</guid>
      <description>&lt;p&gt;In case you had not noticed, the &lt;a href=&#34;http://www.radiotfs.com/&#34;&gt;Radio TFS podcast&lt;/a&gt; is back after an eight-month hiatus. A good place to keep in touch with the new announcements related to VS11.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In case you had not noticed, the <a href="http://www.radiotfs.com/">Radio TFS podcast</a> is back after an eight-month hiatus. A good place to keep in touch with the new announcements related to VS11.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving Environments between TPCs when using TFS Lab Management</title>
      <link>https://blog.richardfennell.net/posts/moving-environments-between-tpcs-when-using-tfs-lab-management-2/</link>
      <pubDate>Fri, 23 Dec 2011 13:09:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-environments-between-tpcs-when-using-tfs-lab-management-2/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;One area of &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/ee712698&#34;&gt;TFS Lab Management&lt;/a&gt; that I think can be confusing is that all environments are associated with specific Team Projects (TP) within Team Project Collections (TPC). This is not what you might first expect if you think of Lab Management as just a big Hyper-V server. When configured, you end up with a number of TPC/TP-related silos, as shown in the diagram below.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_7.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_8.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs and hence TPCs. So it is hard to re-use environments without recreating them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>One area of <a href="http://msdn.microsoft.com/en-us/vstudio/ee712698">TFS Lab Management</a> that I think can be confusing is that all environments are associated with specific Team Projects (TP) within Team Project Collections (TPC). This is not what you might first expect if you think of Lab Management as just a big Hyper-V server. When configured, you end up with a number of TPC/TP-related silos, as shown in the diagram below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_8.png" title="image"></a></p>
<p>This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs and hence TPCs. So it is hard to re-use environments without recreating them.</p>
<p>This problem affects companies like ourselves, as we have many TPCs because we tend to have one per client, an arrangement not that uncommon for consultancies.</p>
<p>It is not just in Lab Management that this is an issue for us. The isolated nature of TPCs, a great advantage for client security, has caused us to have an ever-growing number of Build Controllers and Test Controllers, which we are regularly reassigning to whichever of our TPCs are active. Luckily, multiple Build Controllers can be run on the same VM (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/07/25/more-on-running-multiple-tfs-build-controllers-on-a-single-vm.aspx">I discussed this unsupported hack here</a>), but unfortunately there is no similar workaround for Test Controllers.</p>
<h2 id="mtm-is-not-your-friend-when-storing-environments-for-use-beyond-the-current-tp">MTM is not your friend when  storing environments for use beyond the current TP</h2>
<p>What I want to discuss in this post is how, when you have a working environment in one TP you can get it into another TP with as little fuss as possible.</p>
<p>Naively you would think that you use the <strong>Store in Library</strong> option within MTM that is available for a stopped environment.</p>
<p> <a href="/wp-content/uploads/sites/2/historic/image_9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_9.png" title="image"></a></p>
<p>This does store the environment in the SCVMM Library, but it is only available to the TP it was stored from; it is stored in the A1 silo in the SCVMM Library. Now you might ask why: the SCVMM Library is just a share, so shouldn’t anything in it be available to all? But it turns out it is not just a share. It is true the files are on a UNC share, where you can see the stored environments as a number of Lab_[guid] folders, but there is also a database that stores metadata, and this is the problem: this metadata associates the stored environment with a given TP.</p>
<p>The same is true if you choose to just store a single VM from within MTM whether you choose to store it as a VM or a template.</p>
<p>Why is this important, you might ask? Well, it is all well and good that you can build your environment from VMs and templates in the SCVMM Library, but these will not be fully configured for your needs. You will build the environment, making sure TFS agents are in place, maybe putting extra applications, tools or test data on the system. It is all work you don’t want to have to repeat for what is in effect the same environment in another TP or TPC. This is a problem we see all the time. We do SharePoint development, so want a standard environment (a couple of load-balanced servers and a client) we can use for many client projects in different TPCs (OK, <a href="http://rangersvsvmfactory.codeplex.com/">VM factory can help</a>, but this is not my point here).</p>
<h2 id="a-workaround-of-sorts">A workaround of sorts</h2>
<p>The only way I have found to ease this problem is, when I have a fully configured environment, to clone the key VMs (the servers) into the SCVMM Library using SCVMM, <strong>NOT</strong> MTM:</p>
<ol>
<li>
<p>Using MTM stop the environment you wish to work with.</p>
</li>
<li>
<p>Identify the VM you wish to store; you need its Lab name. This can be found in MTM if you connect to the lab and check the system info for the VM.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_10.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_10.png" title="image"></a></p>
</li>
<li>
<p>Load SCVMM admin console, select Virtual Machines tab and find the correct VM</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_11.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_11.png" title="image"></a></p>
</li>
<li>
<p>Right click on the VM and select Clone</p>
</li>
<li>
<p>Give the VM a new meaningful name, e.g. ‘Fully configured SP2010 Server’</p>
</li>
<li>
<p>Accept the hardware configuration (unless you wish to change it for some reason)</p>
</li>
<li>
<p><strong>IMPORTANT</strong> On the destination tab select the option to ‘store the virtual machine in the library’. This appears to be the only means to get a VM into the library such that it can be imported into any TPC/TP.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_12.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_12.png" title="image"></a></p>
</li>
<li>
<p>Next select the library share to use</p>
</li>
<li>
<p>And let the wizard complete.</p>
</li>
<li>
<p>You should now have a VM in the SCVMM Library that can be imported into new environments.</p>
</li>
</ol>
<p>You do have to at this point recreate the environment in your new TP, but at least the servers you import into this environment are configured OK. If, for example, you have a pair of SP2010 servers, a DC and an NLB, as long as you drop them into a new isolated environment they should just leap into life as they did before. You should not have to do any extra re-configuration.</p>
<p>The same technique could be used for workstation VMs, but it might be as quick to just use template (sysprep’d) clients. You just need to take a view on this for your environment requirements.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving Environments between TPCs when using TFS Lab Management</title>
      <link>https://blog.richardfennell.net/posts/moving-environments-between-tpcs-when-using-tfs-lab-management/</link>
      <pubDate>Fri, 23 Dec 2011 13:09:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-environments-between-tpcs-when-using-tfs-lab-management/</guid>
      <description>&lt;h2 id=&#34;background&#34;&gt;Background&lt;/h2&gt;
&lt;p&gt;One area of &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/ee712698&#34;&gt;TFS Lab Management&lt;/a&gt; that I think can be confusing is that all environments are associated with specific Team Projects (TP) within Team Project Collections (TPC). This is not what you might first expect if you think of Lab Management as just a big Hyper-V server. When configured, you end up with a number of TPC/TP-related silos, as shown in the diagram below.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_7.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_8.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs and hence TPCs. So it is hard to re-use environments without recreating them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h2 id="background">Background</h2>
<p>One area of <a href="http://msdn.microsoft.com/en-us/vstudio/ee712698">TFS Lab Management</a> that I think can be confusing is that all environments are associated with specific Team Projects (TP) within Team Project Collections (TPC). This is not what you might first expect if you think of Lab Management as just a big Hyper-V server. When configured, you end up with a number of TPC/TP-related silos, as shown in the diagram below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_8.png" title="image"></a></p>
<p>This becomes a major issue for us as each TP stores its own environment definitions in its own silo; they cannot be shared between TPs and hence TPCs. So it is hard to re-use environments without recreating them.</p>
<p>This problem affects companies like ourselves, as we have many TPCs because we tend to have one per client, an arrangement not that uncommon for consultancies.</p>
<p>It is not just in Lab Management that this is an issue for us. The isolated nature of TPCs, a great advantage for client security, has caused us to have an ever-growing number of Build Controllers and Test Controllers, which we are regularly reassigning to whichever of our TPCs are active. Luckily, multiple Build Controllers can be run on the same VM (<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/07/25/more-on-running-multiple-tfs-build-controllers-on-a-single-vm.aspx">I discussed this unsupported hack here</a>), but unfortunately there is no similar workaround for Test Controllers.</p>
<h2 id="mtm-is-not-your-friend-when-storing-environments-for-use-beyond-the-current-tp">MTM is not your friend when  storing environments for use beyond the current TP</h2>
<p>What I want to discuss in this post is how, when you have a working environment in one TP you can get it into another TP with as little fuss as possible.</p>
<p>Naively you would think that you use the <strong>Store in Library</strong> option within MTM that is available for a stopped environment.</p>
<p> <a href="/wp-content/uploads/sites/2/historic/image_9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_9.png" title="image"></a></p>
<p>This does store the environment in the SCVMM Library, but it is only available to the TP it was stored from; it is stored in the A1 silo in the SCVMM Library. Now you might ask why: the SCVMM Library is just a share, so shouldn’t anything in it be available to all? But it turns out it is not just a share. It is true the files are on a UNC share, where you can see the stored environments as a number of Lab_[guid] folders, but there is also a database that stores metadata, and this is the problem: this metadata associates the stored environment with a given TP.</p>
<p>The same is true if you choose to just store a single VM from within MTM whether you choose to store it as a VM or a template.</p>
<p>Why is this important, you might ask? Well, it is all well and good that you can build your environment from VMs and templates in the SCVMM Library, but these will not be fully configured for your needs. You will build the environment, making sure TFS agents are in place, maybe putting extra applications, tools or test data on the system. It is all work you don’t want to have to repeat for what is in effect the same environment in another TP or TPC. This is a problem we see all the time. We do SharePoint development, so want a standard environment (a couple of load-balanced servers and a client) we can use for many client projects in different TPCs (OK, <a href="http://rangersvsvmfactory.codeplex.com/">VM factory can help</a>, but this is not my point here).</p>
<h2 id="a-workaround-of-sorts">A workaround of sorts</h2>
<p>The only way I have found to ease this problem is, when I have a fully configured environment, to clone the key VMs (the servers) into the SCVMM Library using SCVMM, <strong>NOT</strong> MTM:</p>
<ol>
<li>
<p>Using MTM stop the environment you wish to work with.</p>
</li>
<li>
<p>Identify the VM you wish to store; you need its Lab name. This can be found in MTM if you connect to the lab and check the system info for the VM.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_10.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_10.png" title="image"></a></p>
</li>
<li>
<p>Load SCVMM admin console, select Virtual Machines tab and find the correct VM</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_11.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_11.png" title="image"></a></p>
</li>
<li>
<p>Right click on the VM and select Clone</p>
</li>
<li>
<p>Give the VM a new meaningful name, e.g. ‘Fully configured SP2010 Server’</p>
</li>
<li>
<p>Accept the hardware configuration (unless you wish to change it for some reason)</p>
</li>
<li>
<p><strong>IMPORTANT</strong> On the destination tab select the option to ‘store the virtual machine in the library’. This appears to be the only means to get a VM into the library such that it can be imported into any TPC/TP.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_12.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_12.png" title="image"></a></p>
</li>
<li>
<p>Next select the library share to use</p>
</li>
<li>
<p>And let the wizard complete.</p>
</li>
<li>
<p>You should now have a VM in the SCVMM Library that can be imported into new environments.</p>
</li>
</ol>
<p>You do have to at this point recreate the environment in your new TP, but at least the servers you import into this environment are configured OK. If, for example, you have a pair of SP2010 servers, a DC and an NLB, as long as you drop them into a new isolated environment they should just leap into life as they did before. You should not have to do any extra re-configuration.</p>
<p>The same technique could be used for workstation VMs, but it might be as quick to just use template (sysprep’d) clients. You just need to take a view on this for your environment requirements.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Community TFS Build Extensions – December 2011 released</title>
      <link>https://blog.richardfennell.net/posts/community-tfs-build-extensions-december-2011-released/</link>
      <pubDate>Fri, 23 Dec 2011 12:51:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/community-tfs-build-extensions-december-2011-released/</guid>
      <description>&lt;p&gt;A new release of the Community TFS Build Extensions has shipped. You can &lt;a href=&#34;http://mikefourie.wordpress.com/2011/12/22/community-tfs-build-extensions-december-2011/&#34;&gt;find more details on Mike Fourie’s blog&lt;/a&gt; or at the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/releases/view/75159&#34;&gt;project home on Codeplex&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A new release of the Community TFS Build Extensions has shipped. You can <a href="http://mikefourie.wordpress.com/2011/12/22/community-tfs-build-extensions-december-2011/">find more details on Mike Fourie’s blog</a> or at the <a href="http://tfsbuildextensions.codeplex.com/releases/view/75159">project home on Codeplex</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Don’t need to go into work at all</title>
      <link>https://blog.richardfennell.net/posts/dont-need-to-go-into-work-at-all/</link>
      <pubDate>Thu, 22 Dec 2011 20:11:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dont-need-to-go-into-work-at-all/</guid>
      <description>&lt;p&gt;Today we got &lt;a href=&#34;http://technet.microsoft.com/en-us/network/dd420463&#34;&gt;Direct Access&lt;/a&gt; working; I can now access all the features of our &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/ee712698&#34;&gt;TFS Lab Management&lt;/a&gt; instance from home. No more remoting onto one box to hop to another. I never need to go to the office again!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today we got <a href="http://technet.microsoft.com/en-us/network/dd420463">Direct Access</a> working; I can now access all the features of our <a href="http://msdn.microsoft.com/en-us/vstudio/ee712698">TFS Lab Management</a> instance from home. No more remoting onto one box to hop to another. I never need to go to the office again!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows update now includes Microsoft Visual Studio 2010 Service Pack 1, so expect a slow shutdown when it applies</title>
      <link>https://blog.richardfennell.net/posts/windows-update-now-includes-microsoft-visual-studio-2010-service-pack-1-so-expect-a-slow-shutdown-when-it-applies/</link>
      <pubDate>Thu, 22 Dec 2011 14:36:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-update-now-includes-microsoft-visual-studio-2010-service-pack-1-so-expect-a-slow-shutdown-when-it-applies/</guid>
      <description>&lt;p&gt;Microsoft Visual Studio 2010 Service Pack 1 is now coming down as part of Windows Update. Watch out for this; my PC has just taken the best part of two hours to reboot as it decided to apply this service pack, even though I had already applied it manually when I built the PC.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Microsoft Visual Studio 2010 Service Pack 1 is now coming down as part of Windows Update. Watch out for this; my PC has just taken the best part of two hours to reboot as it decided to apply this service pack, even though I had already applied it manually when I built the PC.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Debugging CodedUi Tests when launching test as a different user</title>
      <link>https://blog.richardfennell.net/posts/debugging-codedui-tests-when-launching-test-as-a-different-user/</link>
      <pubDate>Wed, 21 Dec 2011 14:17:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/debugging-codedui-tests-when-launching-test-as-a-different-user/</guid>
      <description>&lt;p&gt;If you are working with CodedUI tests in Visual Studio you sometimes get unexpected results, such as the wrong field being selected in replays. When trying to work out what has happened the logging features are really useful. These are probably already switched on, but you can check by following the &lt;a href=&#34;http://blogs.msdn.com/b/gautamg/archive/2009/11/29/how-to-enable-tracing-for-ui-test-components.aspx&#34;&gt;details in this post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Assuming you make no logging level changes from the default, if you look in the&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;%Temp%\UITestLogs\LastRun&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are working with CodedUI tests in Visual Studio you sometimes get unexpected results, such as the wrong field be selected in replays. When trying to work out what has happened the logging features are really useful. These are probably already switched on, but you can check by following the <a href="http://blogs.msdn.com/b/gautamg/archive/2009/11/29/how-to-enable-tracing-for-ui-test-components.aspx">details in this post</a>.</p>
<p>Assuming you make no logging level changes from the default, if you look in the</p>
<blockquote>
<p>%Temp%\UITestLogs\LastRun</p></blockquote>
<p>folder, you should see a log file containing warning-level messages in the form</p>
<p>Playback - {1} [SUCCESS] SendKeys &ldquo;^{HOME}&rdquo; - &ldquo;[MSAA, VisibleOnly]ControlType=&lsquo;Edit&rsquo;&rdquo;</p>
<p>E, 11576, 113, 2011/12/20, 09:55:00.344, 717559875878, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window</p>
<p>W, 11576, 113, 2011/12/20, 09:55:00.439, 717560081047, QTAgent32.exe, Playback - {2} [SUCCESS] SendKeys &ldquo;^+{END}&rdquo; - &ldquo;[MSAA, VisibleOnly]ControlType=&lsquo;Edit&rsquo;&rdquo;</p>
<p>E, 11576, 113, 2011/12/20, 09:55:00.440, 717560081487, QTAgent32.exe, Msaa.GetFocusedElement: could not find accessible object of foreground window</p>
<p>W, 11576, 113, 2011/12/20, 09:55:00.485, 717560179336, QTAgent32.exe, Playback - {3} [SUCCESS] SendKeys &ldquo;{DELETE}&rdquo; - &ldquo;[MSAA, VisibleOnly]ControlType=&lsquo;Edit&rsquo;&rdquo;</p>
<p>A common problem with CodedUI tests can be who you are running the test as. It is possible to launch the application under test as a different user using the following command at the start of a test</p>
<blockquote>
<p>ApplicationUnderTest.Launch("c:\my.exe", <br>
                            "c:\my.exe", <br>
                            "", <br>
                            "username", <br>
                            securepassword, <br>
                            "domain")</p></blockquote>
<p>I have found that this launch mechanism can cause problems with fields not being found in the CodedUI test unless you run Visual Studio as administrator (using the right-click Run as Administrator option in Windows). This is down to who is allowed to access whose UI thread in Windows when a user is not an administrator.</p>
<p>So if you want to use ApplicationUnderTest.Launch to change the user for a CodedUI test, it is best that the process running the test is an administrator.</p>
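<p>The snippet above can be sketched as a small CodedUI test method. This is illustrative only, not my exact code: the file path, user name, password and domain are placeholder values, and it assumes a VS 2010 CodedUI test project referencing the Microsoft.VisualStudio.TestTools.UITesting assemblies.</p>

```csharp
using System.Security;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class LaunchAsUserTest
{
    [TestMethod]
    public void LaunchAppAsDifferentUser()
    {
        // The Launch overload takes the password as a SecureString,
        // so build it character by character (placeholder value here).
        SecureString securepassword = new SecureString();
        foreach (char c in "P@ssw0rd!")
        {
            securepassword.AppendChar(c);
        }

        // Launch the application under test as the given user.
        // Remember: the test process should itself be running as
        // administrator, or playback may fail to find fields.
        ApplicationUnderTest app = ApplicationUnderTest.Launch(
            @"c:\my.exe",   // fileName (placeholder path)
            @"c:\my.exe",   // alternateFileName
            "",             // command-line arguments
            "username",     // placeholder user name
            securepassword,
            "domain");      // placeholder domain
    }
}
```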
]]></content:encoded>
    </item>
    <item>
      <title>DevOps are testers best placed to fill this role?</title>
      <link>https://blog.richardfennell.net/posts/devops-are-testers-best-placed-to-fill-this-role/</link>
      <pubDate>Fri, 16 Dec 2011 17:51:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/devops-are-testers-best-placed-to-fill-this-role/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://en.wikipedia.org/wiki/DevOps&#34;&gt;DevOps&lt;/a&gt; seems to be the new buzz role in the industry at present. People who can bridge the gap between the worlds of development and IT pros. Given my career history this could be a description of  the path I took. I have done both, and now sit in the middle covering ALM consultancy where I work with both roles. You can’t avoid a bit of development and a bit of IT pro work when installing and configuring TFS with some automated build and deployment.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://en.wikipedia.org/wiki/DevOps">DevOps</a> seems to be the new buzz role in the industry at present. People who can bridge the gap between the worlds of development and IT pros. Given my career history this could be a description of  the path I took. I have done both, and now sit in the middle covering ALM consultancy where I work with both roles. You can’t avoid a bit of development and a bit of IT pro work when installing and configuring TFS with some automated build and deployment.</p>
<p>The growth of DevOps is an interesting move because of late I have seen the gap between IT Pros and developers grow. Many developers seem to have less and less understanding of operational issues as time goes on. I fear this is due to the greater levels of abstraction that new development tools cause. This is only going to get worse as we move into the cloud; why does a developer need to care about Ops issues? AppFabric does that for them – doesn’t it?</p>
<p>In my view this is dangerous; we all need at least a working knowledge of what underpins the technology we use. Maybe this should hint at good subjects for informal in-house training: why not get your developers to give intro training to the IT pros and vice versa? Or encourage people to listen to podcasts on the other role’s subjects, such as <a href="http://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=dot%20net%20rocks&amp;source=web&amp;cd=1&amp;ved=0CCoQFjAA&amp;url=http%3A%2F%2Fwww.dotnetrocks.com%2F&amp;ei=YoPrTvGHLo23hAf4g-3BCA&amp;usg=AFQjCNHjc3REhsOkV0J1kRuO4Mcu_cg53Q">Dot Net Rocks</a> (a dev podcast) and <a href="http://www.google.co.uk/url?sa=t&amp;rct=j&amp;q=run%20as%20radio&amp;source=web&amp;cd=1&amp;ved=0CCUQFjAA&amp;url=http%3A%2F%2Fwww.runasradio.com%2F&amp;ei=gYPrTv6yGom7hAf6pvHMCA&amp;usg=AFQjCNFz-OYAN14zng8NZg1CwHUf5Ibf5Q">Run As Radio</a> (an IT pro podcast). It was always a nice feature of the TechEd conference that it had a dev and an IT pro track, so if the fancy took you, you could hear about technology from the view of the other role.</p>
<p>However, these are longer term solutions, it is all well and good promoting these but in the short term who is best placed to bridge this gap now?</p>
<p>I think the answer could be testers. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2008/02/03/oh-to-be-a-tester.aspx">I wrote a post a while ago</a> saying it was great to be a tester as you got to work with a wide range of technologies; isn’t DevOps just an extension of that role? DevOps needs a working understanding of development and operations, as well as a good knowledge of deployment and build technologies. All aspects of the tester role, assuming your organisation considers a tester not to be a person who just ticks boxes on a checklist, but a software development engineer working in test.</p>
<p>This is not to say that DevOps and testers are the same, just that there is some commonality, and so you may have more skills in house than you thought you did. DevOps is not new; someone was doing the work already, they just did not historically give it that name (or probably any name).</p>
]]></content:encoded>
    </item>
    <item>
      <title>When you try to run a test in MTM you get a dialog ‘Object reference not set to an instance of an object’</title>
      <link>https://blog.richardfennell.net/posts/when-you-try-to-run-a-test-in-mtm-you-get-a-dialog-object-reference-not-set-to-an-instance-of-an-object/</link>
      <pubDate>Fri, 16 Dec 2011 16:41:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/when-you-try-to-run-a-test-in-mtm-you-get-a-dialog-object-reference-not-set-to-an-instance-of-an-object/</guid>
      <description>&lt;p&gt;When trying to run a newly created manual test in MTM I got the error dialog&lt;/p&gt;
&lt;p&gt;‘You cannot run the selected tests, Object reference not set to an instance of an object’.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_5.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_7.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;On checking the Windows event log I saw&lt;/p&gt;
&lt;p&gt;Detailed Message: TF30065: An unhandled exception occurred.&lt;/p&gt;
&lt;p&gt;Web Request Details Url: &lt;a href=&#34;http://%E2%80%A6%E2%80%A6/TestManagement/v1.0/TestResultsEx.asmx&#34;&gt;http://……/TestManagement/v1.0/TestResultsEx.asmx&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;So not really that much help in diagnosing the problem!&lt;/p&gt;
&lt;p&gt;Turns out the problem was that I had been editing the test case work item type. Though it had saved/imported without any errors (it is validated during both these processes), something was wrong with it. I suspect it was to do with filtering the list of users in the ‘assigned to’ field, as this is what I last remember editing, but I might be wrong; it was on a demo TFS instance I have not used for a while.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When trying to run a newly created manual test in MTM I got the error dialog</p>
<p>‘You cannot run the selected tests, Object reference not set to an instance of an object’.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7.png" title="image"></a></p>
<p>On checking the Windows event log I saw</p>
<p>Detailed Message: TF30065: An unhandled exception occurred.</p>
<p>Web Request Details Url: <a href="http://%E2%80%A6%E2%80%A6/TestManagement/v1.0/TestResultsEx.asmx">http://……/TestManagement/v1.0/TestResultsEx.asmx</a></p>
<p>So not really that much help in diagnosing the problem!</p>
<p>Turns out the problem was that I had been editing the test case work item type. Though it had saved/imported without any errors (it is validated during both these processes), something was wrong with it. I suspect it was to do with filtering the list of users in the ‘assigned to’ field, as this is what I last remember editing, but I might be wrong; it was on a demo TFS instance I have not used for a while.</p>
<p>The solution was to revert the test case work item type back to a known good version and recreate the failing test(s). It seems once a test was created from the bad definition there was nothing that could be done to fix it.</p>
<p>Once this was done MTM ran the tests without any issues.</p>
<p>When I have some time I will do an XML compare of the exported good and bad work item types to see what the problem really was.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New community build extensions documentation for the RoboCopy activity</title>
      <link>https://blog.richardfennell.net/posts/new-community-build-extensions-documentation-for-the-robocopy-activity/</link>
      <pubDate>Thu, 15 Dec 2011 22:24:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-community-build-extensions-documentation-for-the-robocopy-activity/</guid>
      <description>&lt;p&gt;I have just published some documentation on the TFS 2010 community build extensions &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20robocopy%20activity&amp;amp;referringTitle=Documentation&#34;&gt;RoboCopy activity&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just published some documentation on the TFS 2010 community build extensions <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20robocopy%20activity&amp;referringTitle=Documentation">RoboCopy activity</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updated SharePoint TFS Team Build Activity documentation</title>
      <link>https://blog.richardfennell.net/posts/updated-sharepoint-tfs-team-build-activity-documentation/</link>
      <pubDate>Wed, 14 Dec 2011 16:38:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updated-sharepoint-tfs-team-build-activity-documentation/</guid>
      <description>&lt;p&gt;I have just published &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20SharePointDeployment%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;updated documentation on the SharePoint build activity&lt;/a&gt; for the Community TFS build extensions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just published <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20SharePointDeployment%20build%20activity&amp;referringTitle=Documentation">updated documentation on the SharePoint build activity</a> for the Community TFS build extensions.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The battle of the Lenovo W520 and projectors</title>
      <link>https://blog.richardfennell.net/posts/the-battle-of-the-lenovo-w520-and-projectors/</link>
      <pubDate>Mon, 12 Dec 2011 10:02:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-battle-of-the-lenovo-w520-and-projectors/</guid>
      <description>&lt;p&gt;My Lenovo W520 is the best laptop I have owned, but I have had one major issue with it: external projectors. The problem is it does not like to duplicate the laptop screen output to a projector; it works fine when extending the desktop, just not when duplicating.&lt;/p&gt;
&lt;p&gt;Every time I have tried to use it with a projector I either end up only showing on the projector and looking over my shoulder, or fiddling for ages until it suddenly works, usually at a low resolution; as I don’t know what I did to get to that point I don’t dare fiddle any more, so I use it anyway. A bit of a problem given the number of presentations I do. A quick &lt;a href=&#34;http://www.google.co.uk/search?gcx=c&amp;amp;sourceid=chrome&amp;amp;ie=UTF-8&amp;amp;q=w520&amp;#43;optimus&amp;#43;bios#sclient=psy-ab&amp;amp;hl=en&amp;amp;source=hp&amp;amp;q=w520&amp;#43;duplicate&amp;#43;external&amp;amp;pbx=1&amp;amp;oq=w520&amp;#43;duplicate&amp;#43;external&amp;amp;aq=f&amp;amp;aqi=&amp;amp;aql=&amp;amp;gs_sm=e&amp;amp;gs_upl=4183l8502l0l9222l20l13l1l4l4l3l1292l7334l4-3.8.0.1l16l0&amp;amp;bav=on.2,or.r_gc.r_pw.,cf.osb&amp;amp;fp=e5e41118a534a8ca&amp;amp;biw=1920&amp;amp;bih=1019&#34;&gt;search&lt;/a&gt; shows I am not alone in this problem.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My Lenovo W520 is the best laptop I have owned, but I have had one major issue with it: external projectors. The problem is it does not like to duplicate the laptop screen output to a projector; it works fine when extending the desktop, just not when duplicating.</p>
<p>Every time I have tried to use it with a projector I either end up only showing on the projector and looking over my shoulder, or fiddling for ages until it suddenly works, usually at a low resolution; as I don’t know what I did to get to that point I don’t dare fiddle any more, so I use it anyway. A bit of a problem given the number of presentations I do. A quick <a href="http://www.google.co.uk/search?gcx=c&amp;sourceid=chrome&amp;ie=UTF-8&amp;q=w520&#43;optimus&#43;bios#sclient=psy-ab&amp;hl=en&amp;source=hp&amp;q=w520&#43;duplicate&#43;external&amp;pbx=1&amp;oq=w520&#43;duplicate&#43;external&amp;aq=f&amp;aqi=&amp;aql=&amp;gs_sm=e&amp;gs_upl=4183l8502l0l9222l20l13l1l4l4l3l1292l7334l4-3.8.0.1l16l0&amp;bav=on.2,or.r_gc.r_pw.,cf.osb&amp;fp=e5e41118a534a8ca&amp;biw=1920&amp;bih=1019">search</a> shows I am not alone in this problem.</p>
<p>The issue it seems is down to the fact the Lenovo has two graphics systems, an integrated (Intel) one and a discrete (Nvidia) one. The drivers in Windows 7 allow it to switch dynamically between the two to save power. This is called Nvidia Optimus Switching.</p>
<p>The answer to the problem is to disable this Optimus feature in the BIOS. This is at the cost of some battery life, but it is better to have a system that works as I need, even if it has to be plugged in, than one that does not work at most client sites.</p>
<p>So to make the change</p>
<ol>
<li>Reboot into BIOS (press the ThinkVantage button)</li>
<li>Select the Discrete graphics option (the Nvidia 1000M)</li>
<li>Disable the Optimus features</li>
<li>Save and Reboot</li>
<li>Windows 7 re-detects all the graphics drivers and then all seems OK (so far…)</li>
</ol>
<p>One more point worth noting: I again fell for the problem that, as my Windows 7 partition is BitLockered, you have to enter your recovery key if you change anything in the BIOS; <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2011/09/14/bitlocker-keeps-asking-for-my-recovery-key-after-a-change-in-my-disk-s-mbr.aspx">see my past post for details of how to fix this issue</a>. I was a bit surprised by this as I thought BitLocker would only care about changes to the master boot record, but you live and learn.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Error 0x80070490 when trying to make any purchase on WP7 MarketPlace</title>
      <link>https://blog.richardfennell.net/posts/error-0x80070490-when-trying-to-make-any-purchase-on-wp7-marketplace/</link>
      <pubDate>Sat, 10 Dec 2011 16:37:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/error-0x80070490-when-trying-to-make-any-purchase-on-wp7-marketplace/</guid>
      <description>&lt;p&gt;Recently I have had a problem with my LG E900 Windows Phone 7 running Mango. Whenever I tried to make a purchase on Marketplace I got the error “There has been a problem completing your request. Try again later” and saw the error code 0x80070490. A search on the web, asking around everyone I thought might have an answer, and placing a question on &lt;a href=&#34;http://answers.microsoft.com/en-us/winphone/forum/wp7-wpapps/error-0x80070490-when-trying-to-make-any-purchase/e33ce119-120d-4863-bfda-aa709cbd0956?page=1&amp;amp;tm=1323511313645#footer&#34;&gt;Microsoft Answers&lt;/a&gt; got me nowhere.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I have had a problem with my LG E900 Windows Phone 7 running Mango. Whenever I tried to make a purchase on Marketplace I got the error “There has been a problem completing your request. Try again later” and saw the error code 0x80070490. A search on the web, asking around everyone I thought might have an answer, and placing a question on <a href="http://answers.microsoft.com/en-us/winphone/forum/wp7-wpapps/error-0x80070490-when-trying-to-make-any-purchase/e33ce119-120d-4863-bfda-aa709cbd0956?page=1&amp;tm=1323511313645#footer">Microsoft Answers</a> got me nowhere.</p>
<p>The phone had been working fine until a few days ago. The problem started when I tried to run the Amazon Kindle app (now my primary platform for reading; yes, I decided not to buy an actual Kindle at this point). It failed to start, just kept returning to the phone’s home page. A power cycle of the phone had no effect. I have seen this before and fixed it with a remove and re-install of the app. However, though the remove was fine, whenever I tried to reinstall I got the 0x80070490 error.</p>
<p>I tried installing another WP7 application (not a reinstall) but I got the same error.</p>
<p>As this is a development phone I was able to try to deploy an app XAP file I created from my PC. This worked without a problem.</p>
<p>I checked my account in Zune, I could login and see the applications I have purchased in the past, so I suspected the issue was corruption of the local catalogue on the phone, but I had no way to prove it.</p>
<p>At this point I was out of ideas so did a reset to factory settings on the phone. This was a bit of a pain as my phone is one of the ones from the PDC last year, which Microsoft sourced in Germany. So it was off to Google Translate to help me through enough German screens to set the language to English. But on the plus side I have learnt ‘Notruf’ is German for ‘emergency call’.</p>
<p>So I had to</p>
<ul>
<li>Sync with Zune to get my data off the phone</li>
<li>Factory reset (Settings|About)</li>
<li>Set to English</li>
<li>Reinstall apps I had previously purchased</li>
<li>Re-Sync with Zune and put back any music, podcasts etc.</li>
<li>Set the APN (Settings|Mobile Network), as with Vodafone UK the phone does not seem to pick this up automatically</li>
<li>Set things like ring tones, screen locks</li>
<li>And I am sure there are things I will notice I missed over the next few days…..</li>
</ul>
<p>So this took about 30 minutes to get my phone back to something like my settings. Not a great owner experience, but we repave our PCs regularly to get rid of the accumulated rubbish, so why not our phones?</p>
]]></content:encoded>
    </item>
    <item>
      <title>When you forget to save a word document</title>
      <link>https://blog.richardfennell.net/posts/when-you-forget-to-save-a-word-document/</link>
      <pubDate>Fri, 09 Dec 2011 13:33:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/when-you-forget-to-save-a-word-document/</guid>
      <description>&lt;p&gt;We have all done it: opened Word, typed all morning, not bothering to save the file as we went along, and then for some mad reason exited Word saying we did not want to save. So you lose the morning’s work.&lt;/p&gt;
&lt;p&gt;Now we know that Word does an auto save, but if you are stupid enough to say you don’t want to save on exit, how do you get the auto backup file? Does Word even keep a backup if you never saved the file in the first place?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have all done it: opened Word, typed all morning, not bothering to save the file as we went along, and then for some mad reason exited Word saying we did not want to save. So you lose the morning’s work.</p>
<p>Now we know that Word does an auto save, but if you are stupid enough to say you don’t want to save on exit, how do you get the auto backup file? Does Word even keep a backup if you never saved the file in the first place?</p>
<p>This is just the problem I had recently.</p>
<p>It used to be that to get back an auto recovery file you were hunting around in the</p>
<p>C:\Users\[user]\AppData\Roaming\Microsoft\Word</p>
<p>auto recover folder (or wherever it was set in the Word options). Hopefully Word would do this for you, but remember Word will not look for these files if it exited without error. It only tries to recover files if it crashed.</p>
<p>What I did not know was that there was a way to hunt for these files via the menus in Word 2010.</p>
<ol>
<li>
<p>In Word click the &ldquo;File&rdquo; menu, and select the option for &ldquo;Recent.&rdquo;</p>
</li>
<li>
<p>Click the option for &ldquo;Recover Unsaved Documents.&rdquo;</p>
</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_1.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5.png" title="image"></a></p>
<ol start="3">
<li>You should get the following dialog and your file(s) should be listed<br>
<a href="/wp-content/uploads/sites/2/historic/image_3.png"><br>
<img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6.png" title="image"></a></li>
</ol>
<p>Isn’t it amazing how many features there are in products you use every day that you don’t know about? This one saved me a good few hours this week!</p>
]]></content:encoded>
    </item>
    <item>
      <title>My experiences moving to BlogEngine.NET</title>
      <link>https://blog.richardfennell.net/posts/my-experiences-moving-to-blogengine-net/</link>
      <pubDate>Thu, 08 Dec 2011 13:16:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-experiences-moving-to-blogengine-net/</guid>
      <description>&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;
&lt;p&gt;I have recently moved this blog server from using &lt;a href=&#34;http://telligent.com/&#34;&gt;Community Server 2007 (CS2007)&lt;/a&gt; to &lt;a href=&#34;http://www.dotnetblogengine.net/&#34;&gt;BlogEngine.NET&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;We started blogging in 2004 using .Text, moving through the free early versions of Community Server, then purchased Community Server Small Business edition in 2007. This cost a few hundred pounds. We recently decided that we had to bring this service up to date, if for no other reason than to patch the underlying ASP.NET system. We checked how much it would cost to bring Community Server to the current version and were shocked by the cost: many thousands of dollars. Telligent, the developers, have moved to only servicing enterprise customers; they have no small business offering. So we needed to find a new platform.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<h1 id="background">Background</h1>
<p>I have recently moved this blog server from using <a href="http://telligent.com/">Community Server 2007 (CS2007)</a> to <a href="http://www.dotnetblogengine.net/">BlogEngine.NET</a>.</p>
<p>We started blogging in 2004 using .Text, moving through the free early versions of Community Server, then purchased Community Server Small Business edition in 2007. This cost a few hundred pounds. We recently decided that we had to bring this service up to date, if for no other reason than to patch the underlying ASP.NET system. We checked how much it would cost to bring Community Server to the current version and were shocked by the cost: many thousands of dollars. Telligent, the developers, have moved to only servicing enterprise customers; they have no small business offering. So we needed to find a new platform.</p>
<p>Being a <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Specialisations&amp;subsection=SharePoint">SharePoint 2010</a> house, we considered SharePoint as the blog host. However, we have always had the policy that systems which accept external content creation, i.e. where anyone can post a comment, should not be on our primary business servers. As we did not want to install a dedicated SharePoint farm for just the blogs, we decided to use another platform, remembering we needed one that could support multiple blogs which we could aggregate to provide a BM-Bloggers shared service.</p>
<p>We looked at what appears to be the market leader, <a href="http://wordpress.org/">Wordpress</a>, but to host this we needed a <a href="http://www.mysql.com/">MySQL</a> DB, which we did not want to install; we don’t need another DB technology to support on our LAN. So we settled on <a href="http://www.dotnetblogengine.net/">BlogEngine.NET</a>, the open source .NET 4 blogging platform that can use many different storage technologies; we chose SQL Server 2008 to make use of our existing SQL Server investment.</p>
<h1 id="installation">Installation</h1>
<p>So we did a default install of BlogEngine.NET. We did it manually, as I knew we were going to use a custom build of the code, but we could have used the <a href="http://www.microsoft.com/web/downloads/platform.aspx">Web Platform Installer</a>.</p>
<p>We customised a blog as a template and then used this to create all the child blogs we needed. If we were not bringing over old content we would have been finished here. It really would have been quick and simple.</p>
<h1 id="content-migration">Content Migration</h1>
<p>To migrate our data we used <a href="http://blogml.codeplex.com/">BlogML</a>. This allowed us to export CS2007 content as XML files which we then imported to BlogEngine.NET.</p>
<p>BlogEngine.NET provides support for BlogML out of the box, but we had to install a <a href="http://blogml.codeplex.com/releases/view/171">plug-in for CS2007</a>.</p>
<p>This was all fairly straightforward; we exported each blog and imported it to the new platform, but as you would expect we did find a few issues.</p>
<h2 id="fixing-image-path-do-this-prior-to-import">Fixing Image Path (Do this prior to import)</h2>
<p>The images within blog posts are hard coded as URLs in the export file. If you copied the image files (that are stored on the blog platform) over from the old platform to the new server at matching URLs, then there should be no problems.</p>
<p>However, I decided I wanted images in the location they are meant to be in, i.e. the [blog]\files folder, using BlogEngine.NET’s image.axd handler to load them. It was easiest to fix these in the BlogML XML file prior to importing it. The basic edit was to change</p>
<p>src=’http://blogs.blackmarble.co.uk/blogs/rfennell/image_file.png’</p>
<p>to</p>
<p>src=’/wp-content/uploads/sites/2/historic/image_file.png’</p>
<p>I did these edits with simple find and replace in a text editor, but you could use regular expressions.</p>
<p>Remember also that the images need to be copied from the old server (…\blogs\rfennell\image_file.png) to the new server (…\App_Data\blogs\rfennell\files\image_file.png).</p>
<p>We also had posts written with older versions of <a href="http://explore.live.com/windows-live-writer?os=other">LiveWriter</a>. This placed images in a folder structure (e.g. ..\blogs\rfennell\livewriter\posts\name\image_file.png). We also needed to move these to the new platform and fix the paths appropriately.</p>
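<p>Those find-and-replace edits could equally be scripted. A minimal sketch in Python (illustrative only; the prefixes are the example values from above, and the function name is my own):</p>

```python
import re

# Rewrite hard-coded image URLs in the exported BlogML XML so they point at
# the new server's relative path. The prefixes are the example values from
# the post; a real migration would substitute its own.
OLD_PREFIX = "http://blogs.blackmarble.co.uk/blogs/rfennell/"
NEW_PREFIX = "/wp-content/uploads/sites/2/historic/"

def fix_image_paths(blogml_xml: str) -> str:
    # Only rewrite src attributes, leaving ordinary hyperlinks untouched.
    pattern = r"(src=['\"])" + re.escape(OLD_PREFIX)
    return re.sub(pattern, lambda m: m.group(1) + NEW_PREFIX, blogml_xml)

sample = "<img src='http://blogs.blackmarble.co.uk/blogs/rfennell/image_file.png'>"
print(fix_image_paths(sample))
# → <img src='/wp-content/uploads/sites/2/historic/image_file.png'>
```

<p>Anchoring the match on the src attribute means ordinary hyperlinks back to the old server are left alone.</p>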
<h2 id="post-ownership">Post Ownership</h2>
<p>All the imported posts were shown as having an owner ID rather than the author’s name, e.g. 2103 as opposed to Richard. The simplest fix for this was a SQL update after import e.g.</p>
<p>update [BlogEngine].[dbo].[be_Posts] set [Author] = 'Richard' where [Author] = '2103'</p>
<p>The name set should match the name of a user account created on the blog.</p>
<h2 id="comment-ownership">Comment Ownership</h2>
<p>Due to the issues over spam we had forced all users to register on CS2007 to post a comment. These external accounts were not pulled over in the export; however, BlogEngine.NET did not seem that bothered by this.</p>
<p>No icons were shown for these users, though.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4.png" title="image"></a></p>
<p>These icons should be rendered using <a href="http://websnapr.com">websnapr.com</a> as an image of the commenter&rsquo;s homepage, but this was failing. This, it turned out, was due to their recent API changes: you now need to pass a key. As an immediate solution I just removed the code that calls websnapr so the default noavatar.jpg image is shown. I intend to revisit this when the next release of BlogEngine.NET appears, as I am sure it will have a solution to the websnapr API change.</p>
<p>There was also a problem with many of the comment author hyperlinks: they all seemed to be just <a href="http://g">http://</a>. To fix the worst of this I ran a SQL query:</p>
<p>update be_PostComment set author = 'Anon' where Author = 'http://'</p>
<p>I am sure I could have done a better job with a bit more SQL, but our blog has few comments, so I felt I could get away with this basic fix.</p>
<h2 id="tags">Tags</h2>
<p>CS2007 displays tag clouds that are based on categories. BlogEngine.NET does the more obvious thing and uses categories as categories and tags as tags.</p>
<p>To allow BlogEngine.NET to show tag clouds, the following SQL can be used to duplicate categories as tags:</p>
<p>insert into be_PostTag (BlogID, PostID, Tag)<br>
select be_PostCategory.BlogID, postID, categoryname from be_PostCategory, be_Categories where be_PostCategory.CategoryID = be_Categories.CategoryID and be_PostCategory.BlogID = '[a guid from be_blogs table]'</p>
<h2 id="a-workaround-for-what-could-not-be-exported">A workaround for what could not be exported</h2>
<p>Where we had a major problem was with the posts made to the original .Text site that was upgraded to Community Server; these were posts from 2004 to 2007.</p>
<p>Unlike all the other blogs these posts would not export via the CS BlogML exporter. We just got a zero byte XML file. I suspect the issue was some flag/property was missing on these posts so the CS2007 internal API was having problems, throwing an internal exception and stopping.</p>
<p>To get around this I had to use the BlogML SDK and some raw SQL queries into the CS2007 database. There was a good bit of trial and error here, but by looking at the source of the <a href="http://blogml.codeplex.com/releases/view/171">BlogML CS2007 exporter</a> and swapping API calls for my best guess at the SQL, I got the posts and comments out. It was a bit rough, but am I really that worried about posts more than five years old?</p>
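<p>For illustration, the shape of that workaround, sketched here in Python rather than the BlogML SDK I actually used; the namespace and element names follow the published BlogML schema as best I recall, and the rows are invented sample data standing in for the raw SQL results:</p>

```python
import xml.etree.ElementTree as ET

# Build a minimal BlogML-style document straight from database rows,
# bypassing the exporter that was failing. Treat this as a sketch:
# the schema details are from memory, not a definitive implementation.
BLOGML_NS = "http://www.blogml.com/2006/09/BlogML"

def rows_to_blogml(rows):
    ET.register_namespace("", BLOGML_NS)
    blog = ET.Element("{%s}blog" % BLOGML_NS)
    posts = ET.SubElement(blog, "{%s}posts" % BLOGML_NS)
    for row in rows:
        post = ET.SubElement(posts, "{%s}post" % BLOGML_NS,
                             {"id": row["id"], "date-created": row["date"]})
        ET.SubElement(post, "{%s}title" % BLOGML_NS).text = row["title"]
        ET.SubElement(post, "{%s}content" % BLOGML_NS).text = row["body"]
    return ET.tostring(blog, encoding="unicode")

# Invented sample rows; the real code populated these from SQL queries
# against the old CS2007 database.
sample_rows = [{"id": "1", "date": "2004-06-01T00:00:00",
                "title": "First post", "body": "<p>Hello</p>"}]
xml_out = rows_to_blogml(sample_rows)
print(xml_out)
```

<p>The resulting file only needs enough of the schema for the BlogEngine.NET importer to accept it, which is why a rough hand-rolled export was good enough for five-year-old posts.</p>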
<h2 id="blog-relationships">Blog Relationships</h2>
<h3 id="parentchild-relationship">Parent/Child Relationship</h3>
<p>When a child blog is created, an existing blog is copied as a template. This includes all its pages, posts and users. For this reason it is a really good idea to keep a ‘clean’ template blog that has as many of the settings correct as possible, so when a new child blog is created you basically only have to create new user accounts and set its name/template.</p>
<p><strong>Remember</strong> no user accounts are shared between blogs, so the admin on the parent is not the admin on the child; each blog has its own users.</p>
<h3 id="content-aggregation">Content Aggregation</h3>
<p>A major problem for Black Marble was the lack of aggregation of child blogs. At present BlogEngine.NET allows child blogs, but has no built-in way to roll up their content to the parent. This is a feature that I understand the developers plan to add in a future release.</p>
<p>To get around this problem, I looked to see if it was easy to modify the FillPosts method to return all posts irrespective of the blog. This would, in my opinion, have taken too much hacking/editing due to the reliance on the current context to identify the current blog, so I decided on a simpler fix:</p>
<ol>
<li>I created a custom template for the parent site that removes all the page/post lists and menu options</li>
<li>Replaced the link to the existing syndication.axd with a hand-crafted syndication.ashx</li>
<li>Added the <a href="http://Rssdotnet.com">Rssdotnet.com</a> open source project to the solution and used this to aggregate the RSS feeds of each child blog in the syndication.ashx page</li>
</ol>
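<p>The aggregation in step 3 boils down to merging each child feed’s items into one date-ordered list. A rough Python sketch of the idea (not the Rssdotnet.com code itself), using two made-up inline feeds in place of fetching the child blogs’ real feed URLs:</p>

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Merge the <item> elements from several RSS 2.0 feeds, newest first.
def merge_feeds(feed_xml_strings):
    items = []
    for xml_text in feed_xml_strings:
        channel = ET.fromstring(xml_text).find("channel")
        items.extend(channel.findall("item"))
    # RSS pubDate uses RFC 822 format, which parsedate_to_datetime handles.
    items.sort(key=lambda i: parsedate_to_datetime(i.findtext("pubDate")),
               reverse=True)
    return items

# Made-up sample feeds standing in for the child blogs' RSS output.
feed_a = ("<rss version='2.0'><channel><title>A</title>"
          "<item><title>Old post</title>"
          "<pubDate>Mon, 05 Dec 2011 10:00:00 +0000</pubDate></item>"
          "</channel></rss>")
feed_b = ("<rss version='2.0'><channel><title>B</title>"
          "<item><title>New post</title>"
          "<pubDate>Wed, 07 Dec 2011 09:00:00 +0000</pubDate></item>"
          "</channel></rss>")

merged = merge_feeds([feed_a, feed_b])
print([item.findtext("title") for item in merged])
# → ['New post', 'Old post']
```

<p>A real handler would wrap the merged items back into a single channel element and serve that as the aggregate feed.</p>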
<p>This solution will be reviewed on each new release of BlogEngine.NET in case it is no longer required.</p>
<h1 id="summary">Summary</h1>
<p>So how was the process? Not as bad as I expected; frankly, other than our pre-2007 content, it all moved without any major issues.</p>
<p>It is a good feeling to now be on a platform we can modify as we need, but which has the backing of an active community.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble’s blog server has changed platform to blogengine.net</title>
      <link>https://blog.richardfennell.net/posts/black-marbles-blog-server-has-changed-platform-to-blogengine-net/</link>
      <pubDate>Wed, 07 Dec 2011 09:45:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marbles-blog-server-has-changed-platform-to-blogengine-net/</guid>
      <description>&lt;p&gt;Hopefully you should not notice, but we have moved this Blog Server from using &lt;a href=&#34;http://telligent.com/&#34;&gt;Community Server 2007&lt;/a&gt; to &lt;a href=&#34;http://www.dotnetblogengine.net/&#34;&gt;BlogEngine.NET&lt;/a&gt;. We have been busy bringing over all the content, so any search should still return the same page, but it might take a few days.&lt;/p&gt;
&lt;p&gt;I will be posting on my experiences as soon as I have my notes sorted.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;THE MAIN CHANGE YOU MIGHT SEE IS THAT THE BM-BLOGGERS AGGREGATE FEED HAS MOVED TO&lt;/strong&gt; &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/syndication.axd&#34;&gt;http://blogs.blackmarble.co.uk/blogs/syndication.axd&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Hopefully you should not notice, but we have moved this Blog Server from using <a href="http://telligent.com/">Community Server 2007</a> to <a href="http://www.dotnetblogengine.net/">BlogEngine.NET</a>. We have been busy bringing over all the content, so any search should still return the same page, but it might take a few days.</p>
<p>I will be posting on my experiences as soon as I have my notes sorted.</p>
<p><strong>THE MAIN CHANGE YOU MIGHT SEE IS THAT THE BM-BLOGGERS AGGREGATE FEED HAS MOVED TO</strong> <a href="http://blogs.blackmarble.co.uk/blogs/syndication.axd">http://blogs.blackmarble.co.uk/blogs/syndication.axd</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>We need to teach children computer science</title>
      <link>https://blog.richardfennell.net/posts/we-need-to-teach-children-computer-science/</link>
      <pubDate>Mon, 28 Nov 2011 11:14:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/we-need-to-teach-children-computer-science/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://news.bbc.co.uk/today/hi/today/newsid_9649000/9649670.stm&#34;&gt;Today programme on BBC Radio 4 this morning had a section on children not being taught computer science&lt;/a&gt;, or so say a variety of employers who cannot find the staff they need. It seems entry to Computer Science degrees has been dropping over the years and the current ICT courses at school are to blame.&lt;/p&gt;
&lt;p&gt;I was at school in the first generation to get access to computers. I did Computer Science &lt;a href=&#34;http://en.wikipedia.org/wiki/GCE_Ordinary_Level&#34;&gt;O level&lt;/a&gt; (1982 I think; it is a while ago) and we learnt about flowcharts, CPUs and memory, and programmed in BASIC on a teletype, sending 5-hole punched paper tape off to a local Polytechnic for processing; a week or so later we got back a printout saying error on line 10 (and my staff today complain about their slow PCs!). During the course we did at least move onto a &lt;a href=&#34;http://en.wikipedia.org/wiki/TRS-80&#34;&gt;Tandy TRS80&lt;/a&gt; so it did get a bit more immediate.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://news.bbc.co.uk/today/hi/today/newsid_9649000/9649670.stm">Today programme on BBC Radio 4 this morning had a section on children not being taught computer science</a>, or so say a variety of employers who cannot find the staff they need. It seems entry to Computer Science degrees has been dropping over the years and the current ICT courses at school are to blame.</p>
<p>I was at school in the first generation to get access to computers. I did Computer Sciences <a href="http://en.wikipedia.org/wiki/GCE_Ordinary_Level">O level</a> (1982 I think, it is a while ago) and we learnt about flowcharts, CPU, memory and programmed in BASIC on a teletype and sent 5 hole punch paper tape off to a local Polytechnic for processing and a week or so later got back a printout saying error on line 10 (and my staff today complain about their slow PCs!).  During the course we did at least move onto a <a href="http://en.wikipedia.org/wiki/TRS-80">Tandy TRS80</a> so it did get a bit more immediate.</p>
<p>My sister is 4 years younger than me, and one of the first group to do <a href="http://en.wikipedia.org/wiki/General_Certificate_of_Secondary_Education">GSCE</a>, she also did Computer Science, but in those few short years the course was already moving towards using a computer as opposed to programming/how it works, though she did have access to a <a href="http://en.wikipedia.org/wiki/BBC_Micro">BBC Micro</a> I don’t remember her ever writing code beyond making a Turtle robot draw a box. This trend to consumption instead of creation seems to have continued onwards. My son who is 9 now uses a computer for many lessons, but only it seems to look things up.</p>
<p>On listening to the radio article, I agreed it is good to teach the ‘how it works’ for computing, just as it is reasonable to know roughly how my car works, though I have no real intention of trying to fix it. A basic understanding of any tools allow you to use it to its best advantage. However, the biggest advantage of Computer Science in schools to me, which they failed to mention, is it teaches logical thinking and fault finding in an unforgiving world. When coding if you miss that semi colon off nothing is going to work.</p>
<p>This sort of skill that is vital in most modern jobs. At Black Marble we have been involved in this area for years, such as being the corporate sponsor/advisors of  the 2007 UK Winners in <a href="http://www.imaginecup.com/default.aspx">Microsoft Imaging Cup</a> with ‘My First Programming Language’. This aimed to teach junior school children how to solve logical problems and program. It also actually addressed one of the issue mentioned in the radio article, the lack of teachers with programming skills (you can’t rely in a keen maths teacher who had built their own computers as my school did on the 80s). It incorporated some AI technology to do the first diagnostic to try to fix the code the student had written before interrupting the teacher, think a complier that can fix the code and explain why the student’s code has failed to the student. This is important when the teacher has 20+ children to help so can only give each one a few minutes.</p>
<p>During this project, and other research work we have done, it showed that many people can benefit from knowing just a little on how to program, not just school children. A good example is the office worker who if they can write an Excel macro can save themselves 20 minutes a day. This adds up so they can save themselves around 60 hours a year and probably reduces there number of errors in calculation too.</p>
<p>I agreed with a recent point on a <a href="http://herdingcode.com/?p=363">Herding Code podcast</a> that many people of my age got into computing because they were typing in games from the front of magazines in the 80s With a bit of application you could see how your favourite arcade games worked. This is less true of todays popular games, an XBox title is more like a 3D movie when all you can create is a holiday snap, it is just too big to comprehend. However, the games on phones and PDAs are more accessible, a kid in the garage can see how they work and create their own games and applications with relative ease, it is back like being the late 80s or early 90s.</p>
<p>So I do hope there is a move back to a real Computer Science in schools. It is not as if there are no jobs in this sector.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows Phone 7 not synchronising Outlook</title>
      <link>https://blog.richardfennell.net/posts/windows-phone-7-not-synchronising-outlook/</link>
      <pubDate>Mon, 28 Nov 2011 10:22:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-phone-7-not-synchronising-outlook/</guid>
      <description>&lt;p&gt;I had a problem with my LG E900 WP7 phone over the weekend: Outlook stopped synchronising with our office Exchange server.&lt;/p&gt;
&lt;p&gt;It started when I got back from a trip to Ireland. My phone switched back from roaming and started to use 3G for data again, as opposed to WIFI. Also over the weekend we had a connectivity problem from the office to the Internet, so for a while I could not connect to any of our services from any device. However, even after both these things were sorted my Outlook still failed to sync; it said it was in sync but showed no new email since Friday, when it was disconnected from my Irish ISP based MIFI. No errors were shown. I waited until I got back to the office and tried a sync via our internal WIFI, all to no effect.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had a problem with my LG E900 WP7 phone over the weekend: Outlook stopped synchronising with our office Exchange server.</p>
<p>It started when I got back from a trip to Ireland. My phone switched back from roaming and started to use 3G for data again, as opposed to WIFI. Also over the weekend we had a connectivity problem from the office to the Internet, so for a while I could not connect to any of our services from any device. However, even after both these things were sorted my Outlook still failed to sync; it said it was in sync but showed no new email since Friday, when it was disconnected from my Irish ISP based MIFI. No errors were shown. I waited until I got back to the office and tried a sync via our internal WIFI, all to no effect.</p>
<p>The fix was simple, and obvious: delete the Outlook account on the phone and recreate it when I was in the office. The problem is I still have no idea why this issue occurred.</p>
<p>So that is <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/06/16/my-first-serious-problem-with-my-wp7-device.aspx">2 issues</a> in about 6 months, much better than my previous few phones!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Working with Hyper-V, VLAN tags and TFS 2010 Lab Management</title>
      <link>https://blog.richardfennell.net/posts/working-with-hyper-v-vlan-tags-and-tfs-2010-lab-management/</link>
      <pubDate>Wed, 23 Nov 2011 15:45:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/working-with-hyper-v-vlan-tags-and-tfs-2010-lab-management/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/01/21/at-last-my-creature-it-lives-adventures-with-lab-management-and-vlan-tags.aspx&#34;&gt;I did a post&lt;/a&gt; at the start of the year about Lab management and VLAN tags, how they are not supported, but you can work around the problems. Over the past few months we have split our old Hyper-V cluster into one for production and one for test/lab development. This gave our IT team a chance to look at the VLAN problem again.&lt;/p&gt;
&lt;p&gt;So a quick reminder of the issue – the deployment tools in Lab management that create environments provide no means to set a VLAN tag for any network connections they create. Once an environment is created you can manually set a VLAN tag, but it is all a bit of a pain and certainly unsupported.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/01/21/at-last-my-creature-it-lives-adventures-with-lab-management-and-vlan-tags.aspx">I did a post</a> at the start of the year about Lab management and VLAN tags, how they are not supported, but you can work around the problems. Over the past few months we have split our old Hyper-V cluster into one for production and one for test/lab development. This gave our IT team a chance to look at the VLAN problem again.</p>
<p>So a quick reminder of the issue – the deployment tools in Lab management that create environments provide no means to set a VLAN tag for any network connections they create. Once an environment is created you can manually set a VLAN tag, but it is all a bit of a pain and certainly unsupported.</p>
<p>The solution our IT team have come up with to avoid the problem is to set the default VLAN tag on the physical port of the Ethernet switch. Hence any VMs/environments on the new test/lab Hyper-V don’t have to worry about VLANs at all; they are all automatically, in our case, on subnet 200. This works for TFS Lab Management and also means our developers need no knowledge of the IP routing setup to deploy a VM/environment. Our production Hyper-V box, which runs much of our business systems, still uses manually set VLAN tagging as before, but as there is no auto deployment involved on this system there are no problems.</p>
<p>There is one gotcha though…..</p>
<p>If you try to use a VM created on our old setup, which was previously set with a VLAN tag of 200, it cannot see the LAN, even though it has what you think is the correct VLAN tag. This is because setting the VLAN tag to 200 in Hyper-V is not the same as leaving the tag unset and letting the Ethernet switch default the port to VLAN 200. So you have to let the switch manage the VLAN tag; the VM needs to know nothing about it. As shown below</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_233601EE.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1C16C576.png" title="image"></a></p>
<p>So once this is all set you have your routed network, but also a fully supported Lab Management setup.</p>
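<p>The distinction between the VM tagging its own traffic and the switch defaulting the port can be sketched with a toy model of an access port. This is illustrative Python only, not real switch configuration, and the function name is made up for the sketch:</p>

```python
# Toy model of an 802.1Q access port, matching the behaviour described
# in the post (illustrative only - not real switch configuration).

def access_port_receive(frame_vlan_tag, port_default_vlan=200):
    """Return the VLAN a frame ends up on, or None if it is dropped."""
    if frame_vlan_tag is None:       # untagged frame, e.g. a VM with no VLAN set
        return port_default_vlan     # the switch applies the port's default tag
    return None                      # already-tagged frame: dropped by the access port

# A VM that knows nothing about VLANs lands on VLAN 200 as intended.
assert access_port_receive(None) == 200

# An old VM still tagging its own traffic with 200 never reaches the LAN.
assert access_port_receive(200) is None
```

The point the model makes is that the two configurations are not interchangeable: a frame the VM has already tagged does not match what the access port expects, even when the tag value is the same.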
]]></content:encoded>
    </item>
    <item>
      <title>Typemock Isolator 6.2 released–now with TFS2010 build support</title>
      <link>https://blog.richardfennell.net/posts/typemock-isolator-6-2-released-now-with-tfs2010-build-support/</link>
      <pubDate>Mon, 21 Nov 2011 15:00:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/typemock-isolator-6-2-released-now-with-tfs2010-build-support/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.typemock.com/isolator-product-page&#34;&gt;Typemock&lt;/a&gt; have recently released a new version of Isolator, 6.2. As well as the usual &lt;a href=&#34;http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/ReleaseNotes62.html&#34;&gt;fixes and enhancements&lt;/a&gt; you would expect, this is the first version to support TFS 2010 team build out of the box.&lt;/p&gt;
&lt;p&gt;Prior to this release Typemock supported MSBuild based TFS builds (2005/2008) but not the Windows Workflow based version in TFS 2010. The issue was that in TFS 2010 builds it was possible for build steps to run in parallel, maybe on different build agent PCs, so you could not guarantee that Typemock Isolator was intercepting the correct threads. You could call the MSBuild tasks to enable the mocking interception, as in 2005/2008; it was just that they did not work in practice on 2010.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.typemock.com/isolator-product-page">Typemock</a> have recently released a new version of Isolator, 6.2. As well as the usual <a href="http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/ReleaseNotes62.html">fixes and enhancements</a> you would expect, this is the first version to support TFS 2010 team build out of the box.</p>
<p>Prior to this release Typemock supported MSBuild based TFS builds (2005/2008) but not the Windows Workflow based version in TFS 2010. The issue was that in TFS 2010 builds it was possible for build steps to run in parallel, maybe on different build agent PCs, so you could not guarantee that Typemock Isolator was intercepting the correct threads. You could call the MSBuild tasks to enable the mocking interception, as in 2005/2008; it was just that they did not work in practice on 2010.</p>
<p>To address this problem I wrote a TFS 2010 build activity to wrap the TMOCKRUNNER program to make sure all the testing and isolation was run in a single thread. This was (and still is) available from the <a href="http://www.typemock.com/files/Addons/VS2010%20TypemockBuildActivity%201.0.0.0.zip">Typemock add-ons site</a> and there is documentation on the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=TFS%20Build%202010%20Activity%20to%20run%20Typemock%20Isolator%20based%20tests&amp;referringTitle=Home&amp;ProjectName=tfsbuildextensions">Community TFS Build Extensions</a> site and <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx">this blog</a>. In effect you use this custom activity everywhere you would have called MSTest in the workflow (the areas circled below)</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/historic/image_3744061A.png"></p>
<p>This is not the way that the new Typemock Isolator 6.2 tooling works. This goes back to the basic technique used for their MSBuild integration, i.e. calling an activity to start mocking interception, calling other activities to run tests as normal, then calling a final activity to switch off mocking interception. The way this signalling occurs has been modified so that it will work under a TFS 2010 build process template. You end up with a workflow similar to the following</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_19DCADE5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_12BD716D.png" title="image"></a></p>
<p>The advantage of this model is that you don’t have to fiddle with the code that runs the tests, unlike my activity, which has to wrap MSTest and so needs to be able to handle loads of optional command line parameters.</p>
<p>So this should make the integration of Typemock Isolator into TFS 2010 builds easier. For more detail have a look at the <a href="http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/TFSBuild.html">Typemock online documentation</a></p>
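<p>The start/run/stop shape of the new integration can be sketched generically. This is a minimal Python illustration of the pattern only; every name here is hypothetical and this is not the Typemock API:</p>

```python
from contextlib import contextmanager

# Generic sketch of the start/run/stop pattern described above.
# All names are hypothetical - this is NOT the real Typemock API.

@contextmanager
def mocking_interception():
    print("start interception")     # analogous to the 'start' build activity
    try:
        yield
    finally:
        print("stop interception")  # the 'stop' activity must always run,
                                    # even if the tests fail

def run_tests():
    print("running tests")          # stand-in for the unchanged MSTest step

# The normal test run is simply wrapped, rather than replaced,
# by the interception activities.
with mocking_interception():
    run_tests()
```

The design point is that the test-running step itself stays untouched; only the bracketing activities are added, which is why this approach avoids wrapping the test runner and its many command line options.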
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble at the Abbey Dash in Leeds</title>
      <link>https://blog.richardfennell.net/posts/black-marble-at-the-abbey-dash-in-leeds/</link>
      <pubDate>Mon, 21 Nov 2011 10:36:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marble-at-the-abbey-dash-in-leeds/</guid>
      <description>&lt;p&gt;We got a good turn out of Black Marble staff at this year&amp;rsquo;s &lt;a href=&#34;http://www.ageuk.org.uk/get-involved/events-and-challenges/leeds-abbey-dash-in-aid-of-age-uk/&#34;&gt;Age UK Abbey Dash 10K in Leeds&lt;/a&gt;. We were amongst over 8000 runners who turned out for a cool and foggy Yorkshire Sunday morning run.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_7B59B8F5.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_6D1B4005.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We had a handicap race amongst ourselves, using a &lt;a href=&#34;http://en.wikipedia.org/wiki/The_Wisdom_of_Crowds&#34;&gt;wisdom of crowds&lt;/a&gt; poll to estimate everyone&amp;rsquo;s target time. This was won by &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/sspencer/default.aspx&#34;&gt;Steve Spencer our Development Director&lt;/a&gt;. He beat his target by 20 minutes. I am not sure whether this says more about his fitness or our staff&amp;rsquo;s opinion of him!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We got a good turn out of Black Marble staff at this year’s <a href="http://www.ageuk.org.uk/get-involved/events-and-challenges/leeds-abbey-dash-in-aid-of-age-uk/">Age UK Abbey Dash 10K in Leeds</a>. We were amongst over 8000 runners who turned out for a cool and foggy Yorkshire Sunday morning run.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7B59B8F5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6D1B4005.png" title="image"></a></p>
<p>We had a handicap race amongst ourselves, using a <a href="http://en.wikipedia.org/wiki/The_Wisdom_of_Crowds">wisdom of crowds</a> poll to estimate everyone’s target time. This was won by <a href="http://blogs.blackmarble.co.uk/blogs/sspencer/default.aspx">Steve Spencer our Development Director</a>. He beat his target by 20 minutes. I am not sure whether this says more about his fitness or our staff’s opinion of him!</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_28DAD8B9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_01A08F84.png" title="image"></a></p>
<p>Congratulations to all who took part. I am sure plenty of good causes benefited from the efforts of everyone who ran.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More experiences upgrading my Media Center to receive Freeview HD</title>
      <link>https://blog.richardfennell.net/posts/more-experiences-upgrading-my-media-center-to-receive-freeview-hd/</link>
      <pubDate>Sat, 19 Nov 2011 11:47:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-experiences-upgrading-my-media-center-to-receive-freeview-hd/</guid>
      <description>&lt;p&gt;In my post &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/11/14/experiences-upgrading-my-media-center-to-receive-freeview-hd.aspx&#34;&gt;experiences upgrading my Media Center to receive Freeview HD&lt;/a&gt; I said I thought the reason my Windows 7 Media Center was hanging at the &amp;ldquo;TV signal configuration&amp;rdquo; step was down to using mixed tuner cards. Well, my second &lt;a href=&#34;http://www.pctvsystems.com/Products/ProductsEuropeAsia/Digitalproducts/PCTVnanoStickT2/tabid/248/language/en-GB/Default.aspx&#34;&gt;PCTV nanoStick T2&lt;/a&gt; arrived yesterday so I was able to try the same process with a pair of identical USB T2 tuners.&lt;/p&gt;
&lt;p&gt;Guess what? I got the same problem!&lt;/p&gt;
&lt;p&gt;However, being USB devices meant I could test the tuners on my laptop, a Lenovo W520 (Core i7, 16Gb, Windows 7). So I plugged them both in, they found drivers from the web automatically, I ran Media Center, selected to set up the TV signal and……. it worked! A few worrying pauses here and there, but it got there in about an hour.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In my post <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/11/14/experiences-upgrading-my-media-center-to-receive-freeview-hd.aspx">experiences upgrading my Media Center to receive Freeview HD</a> I said I thought the reason my Windows 7 Media Center was hanging at the &ldquo;TV signal configuration&rdquo; step was down to using mixed tuner cards. Well, my second <a href="http://www.pctvsystems.com/Products/ProductsEuropeAsia/Digitalproducts/PCTVnanoStickT2/tabid/248/language/en-GB/Default.aspx">PCTV nanoStick T2</a> arrived yesterday so I was able to try the same process with a pair of identical USB T2 tuners.</p>
<p>Guess what? I got the same problem!</p>
<p>However, being USB devices meant I could test the tuners on my laptop, a Lenovo W520 (Core i7, 16Gb, Windows 7). So I plugged them both in, they found drivers from the web automatically, I ran Media Center, selected to set up the TV signal and……. it worked! A few worrying pauses here and there, but it got there in about an hour.</p>
<p>So why did it work on a laptop and not on my Media Center PC?</p>
<p>I considered performance, but it seemed unlikely; the Media Center is a Core 2 Duo based system about 3 years old and has had no performance problems to date. So the only difference was that the laptop had never seen a TV tuner before, and the Media Center had.</p>
<p><strong>Unused drivers</strong></p>
<p>So I wondered if the old Hauppauge drivers were causing the problem. Remember, in Windows if you remove an adaptor card the drivers are not removed automatically, and if the driver was automatically added (as opposed to you running a setup.exe) then there is no obvious way to remove it. The way to do it is detailed in <a href="http://answers.microsoft.com/en-us/windows/forum/windows_7-tv/hauppauge-win-tv-hvr-1250-will-not-complete-live/eeb11693-91ee-481a-8e42-d4f7b4ca900e">this Microsoft Answers post</a>. When you load Device Manager this way you see the Hauppauge devices and you can uninstall their drivers.</p>
<p>And it made no difference to the problem.</p>
<p><strong>Media Center Guide Data and Tuner setup</strong></p>
<p>Using Task Manager I could see that when the Media Center TV setup appeared to hang, the <strong>mcupdate.exe</strong> program was running and using a lot of CPU. I had seen this on the Lenovo, but there it passed within 30 seconds or so; on my 3 year old Intel based Media Center PC I would expect it to be a bit slower, but I left it overnight and it did not move on. So it is not just performance.</p>
<p><strong>mcupdate.exe</strong> is the tool that updates the TV guide data for Media Center. It is run on a regular basis and also during the setup. So as far as I can see the issue is one of the following:</p>
<ol>
<li>There is corrupt guide data so that it cannot update the channel guide</li>
<li>There is data about a non-existent tuner that locks the process</li>
<li>There is just too much data to update in the time allowed (but you would expect leaving it overnight would fix this)</li>
<li>There is an internet problem getting the guide (which I doubt, too much of a coincidence that it happens only when I upgrade a tuner)</li>
</ol>
<p>Simply put, I think that when the TV setup gets to the point where it needs to access this data, it gets into a race condition with the <strong>mcupdate.exe</strong> process, which is trying to update the guide.</p>
<p>The <a href="http://www.hack7mc.com/2009/09/clearing-guide-data-and-tuner-setup-from-windows-7-media-center.html">Hack7MC blog post</a> suggests the problem can be addressed by clearing down the guide data and tuner setup, and provides a process to do this. However, I thought I would try to avoid this, as I did not really want to lose the series recording settings I had if I could avoid it.</p>
<p>So I loaded Media Center and selected update guide from the Tasks menu. This started the <strong>mcupdate</strong> process, which caused a 50% CPU load and showed no sign of stopping, again pointing to probably one of the issues listed above. So I unloaded Media Center, but <strong>mcupdate.exe</strong> was still running, as was the tool tray notification application. Again I left this a while to no effect. So I used Task Manager to kill <strong>mcupdate</strong> and the <strong>ectray.exe</strong> application.</p>
<p>I had at this point intended to run the process from the <a href="http://www.hack7mc.com/2009/09/clearing-guide-data-and-tuner-setup-from-windows-7-media-center.html">Hack7MC post</a>, so stopped all the Media Center services, but thought I would give the setup one more try. When I ran the set up TV signal step I got a message along the lines of ‘guide data corrupt, will reload’ and then the setup proceeded exactly as it should have done in the first place. I ended up with all my channels, both HD and non-HD, accessible from both tuners, and all my series recording settings intact.</p>
<p>So a success. I am still not clear which step fixed the issue, but I am sure it was down to needing to clear down the guide data and tuner settings fully.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Access denied when running a command with InvokeProcess in a TFS team build</title>
      <link>https://blog.richardfennell.net/posts/access-denied-when-running-a-command-with-invokeprocess-in-a-tfs-team-build/</link>
      <pubDate>Fri, 18 Nov 2011 16:33:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/access-denied-when-running-a-command-with-invokeprocess-in-a-tfs-team-build/</guid>
      <description>&lt;p&gt;When you are trying to run a command line tool via the InvokeProcess activity in a TFS 2010 Team build you might see the somewhat confusing ‘Access denied’ error. There appears to be no more detail in the log.&lt;/p&gt;
&lt;p&gt;I have found that this is usually down to a typo in the filename property of the activity.&lt;/p&gt;
&lt;p&gt;It should be set to something like&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“c:\my tools\tool.exe”&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;but is actually set to&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you are trying to run a command line tool via the InvokeProcess activity in a TFS 2010 Team build you might see the somewhat confusing ‘Access denied’ error. There appears to be no more detail in the log.</p>
<p>I have found that this is usually down to a typo in the filename property of the activity.</p>
<p>It should be set to something like</p>
<blockquote>
<p>“c:\my tools\tool.exe”</p></blockquote>
<p>but is actually set to</p>
<blockquote>
<p>“c:\my tools”</p></blockquote>
<p>i.e. it is set to the folder, not the filename. An easy mistake to make when cutting and pasting paths in from batch files.</p>
<p>You cannot execute a folder, hence the access denied error. Simple but easy to miss.</p>
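<p>The same confusing symptom is easy to reproduce outside TFS. A minimal Python sketch (illustrative only, not the build activity itself; the folder here just stands in for a path with the executable name left off):</p>

```python
import os
import subprocess
import tempfile

# A folder standing in for a path like "c:\my tools" - the value
# accidentally set on the filename property when the executable
# name is left off the end.
folder = tempfile.mkdtemp()

# Trying to execute a directory fails with an access-denied style
# OSError, the same unhelpful symptom the build log shows.
try:
    subprocess.run([folder])
except OSError as error:
    print("Failed to execute:", error)

# A simple pre-flight check catches the mistake before the build does.
if not os.path.isfile(folder):
    print("filename property does not point at an executable file")
```

Checking that the configured path is a file, not a directory, before launching the process turns the opaque error into an obvious one.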
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble events for the remainder of 2011 and into 2012</title>
      <link>https://blog.richardfennell.net/posts/black-marble-events-for-the-remainder-of-2011-and-into-2012/</link>
      <pubDate>Mon, 14 Nov 2011 16:55:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marble-events-for-the-remainder-of-2011-and-into-2012/</guid>
      <description>&lt;p&gt;Whilst I have been on holiday &lt;a href=&#34;http://www.blackmarble.com/SectionDisplay.aspx?name=Events&#34;&gt;Black Marble have announced some new events&lt;/a&gt; and this time they are not just in the UK, we have a couple in our &lt;a href=&#34;http://www.glandore.ie/fitzwilliamHall.asp&#34;&gt;new Dublin office location&lt;/a&gt;. The programme up to new year is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;17 Nov 2011 (Holiday Inn Leeds/Bradford)
&lt;ul&gt;
&lt;li&gt;Am - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=Delivering%20High%20Impact%20WebSites%20with%20SharePoint&#34;&gt;Delivering High Impact WebSites with SharePoint&lt;/a&gt;: SharePoint 2010 creates a dynamic and striking internet experience for your users.&lt;/li&gt;
&lt;li&gt;Pm - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=A%20Guide%20to%20Successfully%20Adopting%20the%20Cloud%20for%20IT%20Managers&#34;&gt;A Guide to Successfully Adopting the Cloud for IT Managers&lt;/a&gt;: This session will be exploring the business value of cloud computing with Microsoft Windows Azure.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;30 Nov 2011 (Black Marble office Cleckheaton)
&lt;ul&gt;
&lt;li&gt;Am - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=TFS%20for%20the%20small%20team%20–%20hosted%20in%20the%20cloud&#34;&gt;TFS for the small team – hosted in the cloud&lt;/a&gt;: I’m doing this one and I will be talking about the hosted version of TFS, its UI, and the new features it exposes from TFS vNext.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;6 Dec 2011 (Dublin)
&lt;ul&gt;
&lt;li&gt;Am - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=The%20Right%20Tools%20for%20your%20Application%20Lifecycle&#34;&gt;The Right Tools for your Application Lifecycle&lt;/a&gt;: This session will discuss how the Application Lifecycle Management process can be managed, and what the right tools are to do that – Visual Studio Team Foundation Server.&lt;/li&gt;
&lt;li&gt;Pm - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=An%20Afternoon%20of%20Windows%208&#34;&gt;An Afternoon of Windows 8&lt;/a&gt;: Following the fanfare, what is there in Windows 8 for developers and the IT professional?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;8 Dec 2011 (Holiday Inn Leeds/Bradford)
&lt;ul&gt;
&lt;li&gt;All day - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=Architecture%20Forum%20in%20the%20North%204&#34;&gt;Architecture Forum in the North 4&lt;/a&gt;: The top Microsoft Architecture Forum in the UK returns for its fourth year.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;26 Jan 2012 (Holiday Inn Leeds/Bradford)
&lt;ul&gt;
&lt;li&gt;Am - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=Black%20Marbles%20Annual%20Technical%20Update%20for%20Microsoft%20Technologies%20-%202012&#34;&gt;Black Marbles Annual Technical Update for Microsoft Technologies – 2012&lt;/a&gt;: The return of our roadmap for all things Microsoft!&lt;/li&gt;
&lt;li&gt;Pm - &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=Implementing%20the%20Secure%20Development%20Lifecycle%20in%20your%20ALM%20Process&#34;&gt;Implementing the Secure Development Lifecycle in your ALM Process&lt;/a&gt;: A look at how implementing Microsoft&amp;rsquo;s Security Development Lifecycle (SDL) into your development process can improve quality, reliability and long-term maintainability.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hope to see you at some of these events. I won’t be at all of them due to client commitments, which is a shame as I enjoy doing these.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst I have been on holiday <a href="http://www.blackmarble.com/SectionDisplay.aspx?name=Events">Black Marble have announced some new events</a> and this time they are not just in the UK, we have a couple in our <a href="http://www.glandore.ie/fitzwilliamHall.asp">new Dublin office location</a>. The programme up to new year is:</p>
<ul>
<li>17 Nov 2011 (Holiday Inn Leeds/Bradford)
<ul>
<li>Am - <a href="http://www.blackmarble.com/events.aspx?event=Delivering%20High%20Impact%20WebSites%20with%20SharePoint">Delivering High Impact WebSites with SharePoint</a>: SharePoint 2010 creates a dynamic and striking internet experience for your users.</li>
<li>Pm - <a href="http://www.blackmarble.com/events.aspx?event=A%20Guide%20to%20Successfully%20Adopting%20the%20Cloud%20for%20IT%20Managers">A Guide to Successfully Adopting the Cloud for IT Managers</a>: This session will be exploring the business value of cloud computing with Microsoft Windows Azure.</li>
</ul>
</li>
<li>30 Nov 2011 (Black Marble office Cleckheaton)
<ul>
<li>Am - <a href="http://www.blackmarble.com/events.aspx?event=TFS%20for%20the%20small%20team%20–%20hosted%20in%20the%20cloud">TFS for the small team – hosted in the cloud</a>: I’m doing this one and I will be talking about the hosted version of TFS, its UI, and the new features it exposes from TFS vNext.</li>
</ul>
</li>
<li>6 Dec 2011 (Dublin)
<ul>
<li>Am - [The Right Tools for your Application Lifecycle](<a href="http://www.blackmarble.com/events.aspx?event=The">http://www.blackmarble.com/events.aspx?event=The</a> Right Tools for your Application Lifecycle) This session will discuss how this Application Lifecycle Management process can be managed, and what are the right tools to do that – Visual Studio Team Foundation Server.</li>
<li>Pm - <a href="http://www.blackmarble.com/events.aspx?event=An%20Afternoon%20of%20Windows%208">An Afternoon of Windows 8</a> – Following the fanfare, what is there in Windows 8 for developers and the IT professional?</li>
</ul>
</li>
<li>8 Dec 2011 (Holiday Inn Leeds/Bradford)
<ul>
<li>All day - <a href="http://www.blackmarble.com/events.aspx?event=Architecture%20Forum%20in%20the%20North%204">Architecture Forum in the North 4</a> – The top Microsoft Architecture Forum in the UK returns for its fourth year.</li>
</ul>
</li>
<li>26 Jan 2012 (Holiday Inn Leeds/Bradford)
<ul>
<li>Am - <a href="http://www.blackmarble.com/events.aspx?event=Black%20Marbles%20Annual%20Technical%20Update%20for%20Microsoft%20Technologies%20-%202012">Black Marbles Annual Technical Update for Microsoft Technologies – 2012</a> – Return of our roadmap for all things Microsoft!</li>
<li>Pm - <a href="http://www.blackmarble.com/events.aspx?event=Implementing%20the%20Secure%20Development%20Lifecycle%20in%20your%20ALM%20Process">Implementing the Secure Development Lifecycle in your ALM Process</a> – A look at how implementing Microsoft&rsquo;s Security Development Lifecycle (SDL) into your development process can improve quality, reliability and long-term maintainability.</li>
</ul>
</li>
</ul>
<p>Hope to see you at some of these events; I won’t be at all of them due to client commitments, which is a shame as I enjoy doing these.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences upgrading my Media Center to receive Freeview HD</title>
      <link>https://blog.richardfennell.net/posts/experiences-upgrading-my-media-center-to-receive-freeview-hd/</link>
      <pubDate>Mon, 14 Nov 2011 16:23:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-upgrading-my-media-center-to-receive-freeview-hd/</guid>
      <description>&lt;p&gt;[Update: Also see &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/11/19/more-experiences-upgrading-my-media-center-to-receive-freeview-hd.aspx&#34;&gt;More experiences upgrading my Media Center to receive Freeview HD&lt;/a&gt;]&lt;/p&gt;
&lt;p&gt;I have used a &lt;a href=&#34;http://windows.microsoft.com/en-US/windows/help/windows-media-center&#34;&gt;Windows Media Center&lt;/a&gt; as my only means to watch TV for about 5 years; upgrading over the years from XP to Vista and onto Windows 7.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Picture of Windows Media Center&#34; src=&#34;http://res1.windows.microsoft.com/resbox/en/Windows%207/main/9b3f877b-eaad-494e-a4ab-938576edf074_60.jpg&#34; title=&#34;Picture of Windows Media Center&#34;&gt;&lt;/p&gt;
&lt;p&gt;I am completely hooked on the ease of use, especially if you just want a means to watch and record &lt;a href=&#34;http://freeview.co.uk/&#34;&gt;Freeview&lt;/a&gt; (the UK free-to-air terrestrial service). It has a far easier UI than any Freeview PVR I have seen, and I personally think it is easier than a Sky+ box (satellite) or Virgin’s Tivo (cable), though I don’t have much experience of these devices (and they do look a bit of a pain to integrate with Media Center, but they are not really designed for that)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>[Update: Also see <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/11/19/more-experiences-upgrading-my-media-center-to-receive-freeview-hd.aspx">More experiences upgrading my Media Center to receive Freeview HD</a>]</p>
<p>I have used a <a href="http://windows.microsoft.com/en-US/windows/help/windows-media-center">Windows Media Center</a> as my only means to watch TV for about 5 years; upgrading over the years from XP to Vista and onto Windows 7.</p>
<p><img alt="Picture of Windows Media Center" src="http://res1.windows.microsoft.com/resbox/en/Windows%207/main/9b3f877b-eaad-494e-a4ab-938576edf074_60.jpg" title="Picture of Windows Media Center"></p>
<p>I am completely hooked on the ease of use, especially if you just want a means to watch and record <a href="http://freeview.co.uk/">Freeview</a> (the UK free-to-air terrestrial service). It has a far easier UI than any Freeview PVR I have seen, and I personally think it is easier than a Sky+ box (satellite) or Virgin’s Tivo (cable), though I don’t have much experience of these devices (and they do look a bit of a pain to integrate with Media Center, but they are not really designed for that)</p>
<p>The key reason Freeview is easy to get going with Media Center is the ease of adding a TV Tuner to Windows 7 (this was more awkward in previous versions but the drivers and install process are good now), it now just works – OR SO I THOUGHT…….</p>
<p>I have been running my Media Center with a <a href="http://www.hauppauge.co.uk/site/products/data_novat500.html">Hauppauge Nova T 500 dual tuner</a> for a few years without any issue; it allows two channels to be recorded at the same time. Recently, due to the UK switching off the analogue TV signal, some HD channels have become available in my area. To receive these I needed a T2 tuner, so I got a <a href="http://www.pctvsystems.com/Products/ProductsEuropeAsia/Digitalproducts/PCTVnanoStickT2/tabid/248/language/en-GB/Default.aspx">PCTV nanoStick T2</a>. The first point of interest is that many UK suppliers sell this under the Hauppauge brand, which is one of the reasons I ordered it, as I have been very happy with their kit in the past. Why they do this I don’t know, because as soon as you hit the support forums it is obvious they are different vendors (though part of the same group?).</p>
<p>So I plugged in the new USB tuner, giving me three tuners: the dual Nova T 500 and the nanoStick. The new device was detected by Windows 7 and a driver installed (without the need to put in the CD-ROM), and Media Center tried to do a signal setup. All seemed OK at first, but after the three tuners were seen the setup wizard hung with the ‘toilet bowl of death’ busy icon. I left this overnight but it did not move on.</p>
<p>I thought the problem might be that the Nova T 500 could only pick up Freeview whilst the nanoStick could also see Freeview HD. So I rebooted and, when it got to the stage to detect the tuners, I manually set them up as two groups: one for the pair on the Nova T 500 and the other for the nanoStick. This had no effect on the problem.</p>
<p>So I decided to try just the nanoStick; at the manual setup stage I selected only the nanoStick. This worked. I still got the ‘toilet bowl of death’ busy icon, but it cleared in under a minute and the wizard continued to detect Freeview and Freeview HD channels, though this detect phase did take about 30 minutes.</p>
<p>I then tried just configuring the previously working Nova T 500, it all locked up again. So something in installing the nanoStick drivers had screwed up the Nova T 500 install. I tried updating the Nova T 500 drivers to the latest I could find but still got the same problem.</p>
<p>It seems I am not alone in this general problem; mixing tuner cards can cause these issues. From the Hauppauge support forums it seems that the Nova T 500 is really a USB device bolted onto a PCI riser, so I suspect a USB issue. (Sorry, I can’t find the links to this again, but that seems not uncommon with Media Center issues – there is no easy central resource, we are in the land of forums – hence Google is your friend, not Bing. I find Google does a better job for forum searches.)</p>
<p>So what is my solution? I wimped out and ordered another nanoStick. This has a number of advantages:</p>
<ul>
<li>It should just work without hours of fiddling</li>
<li>I do not have to reinstall my Media Center to remove the NanoStick drivers</li>
<li>All my tuners will be able to receive all their available channels; with the mixed tuner install I would have to make sure I associated tuners correctly so that Media Center knew that BBC1 on Freeview on the Nova T 500 is the same as BBC1 on the nanoStick, but BBC1 HD is only on the nanoStick</li>
<li>The image quality seems better on the nanoStick. I don’t just mean that I now have HD, which I do, but I am also seeing less pixelation and break-up on the other standard Freeview channels. I assume the nanoStick codecs and silicon are just better, being a few years newer.</li>
<li>It also is a step toward replacing my current Media PC, which is in a standard desktop case with PCI slots, with a quieter, cooler-running device, something like an <a href="http://rcm-uk.amazon.co.uk/e/cm?t=buitwoonmypc-21&amp;o=2&amp;p=8&amp;l=as1&amp;asins=B0056PDKXI&amp;ref=tf_til&amp;fc1=000000&amp;IS2=1&amp;lt1=_top&amp;m=amazon&amp;lc1=0000FF&amp;bc1=FFFFFF&amp;bg1=FFFFFF&amp;f=ifr">Acer Revo</a>; I’m just not sure I can justify the cost when what I have works.</li>
<li>There is still a reasonable eBay market for a boxed Nova T 500, especially as I have never even opened the wrapper on the Hauppauge remote as I was using the Media Center one.</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>No error detail when using VMPrep</title>
      <link>https://blog.richardfennell.net/posts/no-error-detail-when-using-vmprep/</link>
      <pubDate>Wed, 26 Oct 2011 10:28:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/no-error-detail-when-using-vmprep/</guid>
      <description>&lt;p&gt;When using &lt;a href=&#34;http://archive.msdn.microsoft.com/vslabmgmt&#34;&gt;VMPrep to setup a VM for use in a Lab Management system&lt;/a&gt; I got the error cross at the bottom of the dialog&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_4FB65B25.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_2F9B4E68.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Not too much help, usually there is a link to the log file or a message.&lt;/p&gt;
&lt;p&gt;If you look in the log file in &lt;strong&gt;c:\users\[name]\AppData\Roaming\LMInstaller.txt&lt;/strong&gt; you see that the path to the Patches folder is invalid.&lt;/p&gt;
&lt;p&gt;This is fixed by editing the &lt;strong&gt;VMPrepTool\VMPrepToolLibrary\Applications.XML&lt;/strong&gt; file and correcting the path (which I had made a typo in)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When using <a href="http://archive.msdn.microsoft.com/vslabmgmt">VMPrep to setup a VM for use in a Lab Management system</a> I got the error cross at the bottom of the dialog</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4FB65B25.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_2F9B4E68.png" title="image"></a></p>
<p>Not too much help, usually there is a link to the log file or a message.</p>
<p>If you look in the log file in <strong>c:\users\[name]\AppData\Roaming\LMInstaller.txt</strong> you see that the path to the Patches folder is invalid.</p>
<p>This is fixed by editing the <strong>VMPrepTool\VMPrepToolLibrary\Applications.XML</strong> file and correcting the path (which I had made a typo in)</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at a Black Marble event on TFS on Azure</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-a-black-marble-event-on-tfs-on-azure/</link>
      <pubDate>Fri, 21 Oct 2011 11:12:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-a-black-marble-event-on-tfs-on-azure/</guid>
      <description>&lt;p&gt;I will be speaking at a &lt;a href=&#34;http://www.blackmarble.com/events.aspx?event=TFS%20for%20the%20small%20team%20–%20hosted%20in%20the%20cloud&#34;&gt;Black Marble event on the 30th November on the new TFS vNext announcements for TFS on Azure&lt;/a&gt;. Though this event is targeted at hosted TFS, I will be doing demos of the new web-based UI for agile project management in TFS that is common across all vNext versions.&lt;/p&gt;
&lt;p&gt;Hope to see you there&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking at a <a href="http://www.blackmarble.com/events.aspx?event=TFS%20for%20the%20small%20team%20–%20hosted%20in%20the%20cloud">Black Marble event on the 30th November on the new TFS vNext announcements for TFS on Azure</a>. Though this event is targeted at hosted TFS, I will be doing demos of the new web-based UI for agile project management in TFS that is common across all vNext versions.</p>
<p>Hope to see you there</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Nuget and TFS Build 2010</title>
      <link>https://blog.richardfennell.net/posts/using-nuget-and-tfs-build-2010/</link>
      <pubDate>Mon, 17 Oct 2011 10:38:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-nuget-and-tfs-build-2010/</guid>
      <description>&lt;p&gt;At one of our &lt;a href=&#34;http://www.blackmarble.co.uk/events&#34;&gt;recent events&lt;/a&gt; I was asked if I had any experience using &lt;a href=&#34;http://nuget.codeplex.com/&#34;&gt;Nuget&lt;/a&gt; within a TFS 2010 build. At the time I had not, but I thought it worth a look.&lt;/p&gt;
&lt;p&gt;For those of you who don’t know, Nuget is a package manager that provides a developer with a way to manage assembly references in a project for assemblies that are not within their solution. It is most commonly used to manage external commonly used assemblies such as NHibernate or jQuery, but you can also use it to manage &lt;a href=&#34;http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds&#34;&gt;your own internal shared libraries&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At one of our <a href="http://www.blackmarble.co.uk/events">recent events</a> I was asked if I had any experience using <a href="http://nuget.codeplex.com/">Nuget</a> within a TFS 2010 build. At the time I had not, but I thought it worth a look.</p>
<p>For those of you who don’t know, Nuget is a package manager that provides a developer with a way to manage assembly references in a project for assemblies that are not within their solution. It is most commonly used to manage external commonly used assemblies such as NHibernate or jQuery, but you can also use it to manage <a href="http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds">your own internal shared libraries</a>.</p>
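<p>As a quick illustration, packages such as these are typically added from the Visual Studio Package Manager Console (a sketch, assuming the NuGet extension is installed; the version number is simply the one from the example later in this post):</p>

```shell
# Run inside Visual Studio's Package Manager Console (a PowerShell host).
# Downloads NHibernate (and its Iesi.Collections dependency) into the
# solution-level 'packages' folder and adds the project references.
Install-Package NHibernate -Version 3.2.0.4000
```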
<p>The issue the questioner had was that they had added references via Nuget to a project</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_51C5230C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3F7C5C4A.png" title="image"></a></p>
<p>Their project then contained a <strong>packages.config</strong> file that listed the Nuget dependencies. This was in the project root with the &lt;project&gt;.csproj file.</p>
<blockquote>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34;?&gt; &lt;packages&gt;   &lt;package id=&#34;Iesi.Collections&#34; version=&#34;3.2.0.4000&#34; /&gt;   &lt;package id=&#34;NHibernate&#34; version=&#34;3.2.0.4000&#34; /&gt; &lt;/packages&gt;
</code></pre></blockquote>
<p>This <strong>packages.config</strong> is part of the Visual Studio project, and so when the project was put under source control, so was this file.</p>
<p>However, when they created a TFS build to build this solution all seems OK until the build ran, when they got a build error along the lines</p>
<blockquote>
<p><em>Form1.cs (16): The type or namespace name &lsquo;NHibernate&rsquo; could not be found (are you missing a using directive or an assembly reference?)<br>
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1490): Could not resolve this reference. Could not locate the assembly &ldquo;Iesi.Collections&rdquo;. Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.<br>
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets (1490): Could not resolve this reference. Could not locate the assembly &ldquo;NHibernate&rdquo;. Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.</em></p></blockquote>
<p>Basically the solution builds locally but not on the build box; the assemblies referenced by Nuget are missing. A quick look at the directory structure shows why. Nuget stores the assemblies it references in the solution folder, so you end up with</p>
<blockquote>
<p><em>Solution Directory<br>
      Packages – the root of the local cache of assemblies created by Nuget<br>
      Project Directory</em></p></blockquote>
<p>If you look in the &lt;project&gt;.csproj file you will see a hint path pointing back up to this folder structure so that the project builds locally</p>
<blockquote>
<p><em>&lt;Reference Include=&#34;NHibernate&#34;&gt;<br>
&nbsp;&nbsp;&lt;HintPath&gt;..\packages\NHibernate.3.2.0.4000\lib\Net35\NHibernate.dll&lt;/HintPath&gt;<br>
&lt;/Reference&gt;</em></p></blockquote>
<p>The problem is that this folder structure is not known to the solution (just to Nuget), so this means when you add the solution to source control this structure is not added, hence the files are not there for the build box to use.</p>
<p>To fix this issue there are two options</p>
<ol>
<li>Add the folder to source control manually</li>
<li>Make the build process aware of Nuget and allow it to get the files it needs as required.</li>
</ol>
<p>For now let’s just use the first option, which I like, as in general I do want to build my projects against a known version of standard assemblies, so putting the assemblies under source control is not an issue for me. It allows me to easily go back to the specific build if I have to.</p>
<p>(A quick search with your search engine of choice will help with the second option; basically, using the nuget.exe command line is the core of the solution.)</p>
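<p>To give a flavour of that second option, the core is a package-restore step that runs on the build agent before compilation (a sketch, assuming nuget.exe is available on the agent; the project path here is purely illustrative):</p>

```shell
# Restore the packages listed in packages.config into the
# solution-level 'packages' folder before the build compiles.
nuget.exe install MyProject\packages.config -OutputDirectory packages
```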
<p>To add the files to source control, I went into Visual Studio &gt; Team Explorer &gt; Source Control and navigated to the correct folder. I then pressed the add files button and added the whole <strong>Packages</strong> folder. This is where I think my questioner might have gone wrong: when you add the whole folder structure, the default is to exclude .DLLs (and .EXEs)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_1F614F8D.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6A54C05C.png" title="image"></a></p>
<p>If you don’t specifically add these files you will still get the missing references on the build, but could easily be thinking ‘but I just added them!’. It is an easy mistake to make – I know I did it.</p>
<p>Once <strong>ALL</strong> the correct files are under source control the build works as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>You can still sign up for DDDNorth tomorrow</title>
      <link>https://blog.richardfennell.net/posts/you-can-still-sign-up-for-dddnorth-tomorrow/</link>
      <pubDate>Fri, 07 Oct 2011 12:03:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-can-still-sign-up-for-dddnorth-tomorrow/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.developerdeveloperdeveloper.com/north/Default.aspx&#34;&gt;Registration is still open for tomorrow’s event&lt;/a&gt; so if your weekend has just become free why not come along?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.developerdeveloperdeveloper.com/north/Default.aspx">Registration is still open for tomorrow’s event</a> so if your weekend has just become free why not come along?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Documentation for the PowerShell activity in the TFS Community Build Extensions published</title>
      <link>https://blog.richardfennell.net/posts/documentation-for-the-powershell-activity-in-the-tfs-community-build-extensions-published/</link>
      <pubDate>Mon, 03 Oct 2011 20:43:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/documentation-for-the-powershell-activity-in-the-tfs-community-build-extensions-published/</guid>
      <description>&lt;p&gt;I have just published documentation for the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20InvokePowerShellCommand%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;PowerShell activity in the TFS Community Build extensions&lt;/a&gt;. This opens up a whole range of possibilities for your build process. Off to look at using it for SharePoint deployment now…..&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just published documentation for the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20InvokePowerShellCommand%20build%20activity&amp;referringTitle=Documentation">PowerShell activity in the TFS Community Build extensions</a>. This opens up a whole range of possibilities for your build process. Off to look at using it for SharePoint deployment now…..</p>
]]></content:encoded>
    </item>
    <item>
      <title>Seeing loads of ‘cannot load load assemblies’ errors when editing a TFS 2010 build process workflow</title>
      <link>https://blog.richardfennell.net/posts/seeing-loads-of-cannot-load-load-assemblies-errors-when-editing-a-tfs-2010-build-process-workflow/</link>
      <pubDate>Mon, 03 Oct 2011 11:47:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/seeing-loads-of-cannot-load-load-assemblies-errors-when-editing-a-tfs-2010-build-process-workflow/</guid>
      <description>&lt;p&gt;I have been following the process in the &lt;a href=&#34;http://rabcg.codeplex.com/&#34;&gt;ALM Rangers build guide&lt;/a&gt; and in the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;amp;referringTitle=Documentation&#34;&gt;Community Build Extensions&lt;/a&gt; to edit a build process workflow. Now I am sure this process was working until recently on my PC (but we all say that, don’t we!), but of late I have found that when the .XAML workflow is loaded into Visual Studio I see loads of warning icons. If I check the list of imported namespaces, many of them also have warning icons which, when clicked, say the assembly cannot be found.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been following the process in the <a href="http://rabcg.codeplex.com/">ALM Rangers build guide</a> and in the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20extensions%20into%20a%20build%20template&amp;referringTitle=Documentation">Community Build Extensions</a> to edit a build process workflow. Now I am sure this process was working until recently on my PC (but we all say that, don’t we!), but of late I have found that when the .XAML workflow is loaded into Visual Studio I see loads of warning icons. If I check the list of imported namespaces, many of them also have warning icons which, when clicked, say the assembly cannot be found.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_57E918C3.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3BD859D8.png" title="image"></a></p>
<p>Now all these errors did not stop the editing process working. What I found was that if I made an edit in the graphical designer for the workflow, or edited a property of an activity, then my Visual Studio instance locked for about 20 seconds (whilst there was loads of disk activity) and then it was fine. I also noticed I got no intellisense when setting properties. Not a great position to be in, but at least I could make some edits, if only slowly.</p>
<p>Using <a href="http://technet.microsoft.com/en-us/sysinternals/bb896645">Process Monitor</a> I could see that Visual Studio was scanning folders for the files when loading the XAML workflow, but not finding them.</p>
<p>The fix is actually simple. In the project that is being used as a container for the workflow being edited, make sure you reference the missing assemblies. These can be found in one of the following folders</p>
<ul>
<li>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE</li>
<li>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies</li>
<li>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0</li>
</ul>
<p>On my PC most of the assemblies were in the ReferenceAssemblies folder, not the first two, but on checking another PC at my office they were present in the PrivateAssemblies folder (which VS does scan).</p>
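<p>For illustration, the fix amounts to adding explicit assembly references with hint paths to the container project&rsquo;s .csproj (a sketch; the assembly chosen here, Microsoft.TeamFoundation.Build.Workflow, is just an example of the kind of assembly involved):</p>

```xml
<!-- Illustrative reference: point the container project at a workflow
     assembly under the VS2010 IDE folders so the designer can load it. -->
<Reference Include="Microsoft.TeamFoundation.Build.Workflow">
  <HintPath>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies\Microsoft.TeamFoundation.Build.Workflow.dll</HintPath>
</Reference>
```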
<p>Not sure why this has stopped working, or what removed the files from my PrivateAssemblies folder. The only thing I can think of that I did was to install the Dev11 preview, but I can’t see how that should have any effect.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Empty groups not being expanded in a combobox for a TFS work item</title>
      <link>https://blog.richardfennell.net/posts/empty-groups-not-being-expanded-in-a-combobox-for-a-tfs-work-item/</link>
      <pubDate>Thu, 29 Sep 2011 11:41:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/empty-groups-not-being-expanded-in-a-combobox-for-a-tfs-work-item/</guid>
      <description>&lt;p&gt;A common work item type (WIT) edit in TFS is to limit the list of names shown in a combo to the users assigned to the project i.e. the members of the Team Projects &lt;strong&gt;Contributors&lt;/strong&gt; and &lt;strong&gt;Project Administrators&lt;/strong&gt; groups.&lt;/p&gt;
&lt;p&gt;This is done by editing the WIT either via your favourite XML editor or the Process Template Editor (part of the &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f&#34;&gt;power tools&lt;/a&gt;). You edit the &lt;strong&gt;AllowedValues&lt;/strong&gt; for the field you wish to limit, such as the &lt;strong&gt;Assigned To&lt;/strong&gt;, as shown below.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A common work item type (WIT) edit in TFS is to limit the list of names shown in a combo to the users assigned to the project i.e. the members of the Team Projects <strong>Contributors</strong> and <strong>Project Administrators</strong> groups.</p>
<p>This is done by editing the WIT either via your favourite XML editor or the Process Template Editor (part of the <a href="http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f">power tools</a>). You edit the <strong>AllowedValues</strong> for the field you wish to limit, such as the <strong>Assigned To</strong>, as shown below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4760D173.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_57E8A694.png" title="image"></a></p>
<p>Which gives the following XML behind the scenes (for those using XML editors)</p>
<pre tabindex="0"><code>&lt;ListRule filteritems=&#34;excludegroups&#34;&gt;
  &lt;LISTITEM value=&#34;[Project]Contributors&#34; /&gt;
  &lt;LISTITEM value=&#34;[Project]Project Administrators&#34; /&gt;
  &lt;LISTITEM value=&#34;Unassigned&#34; /&gt;
&lt;/ListRule&gt;
</code></pre>
<p>Notice that <strong>Expand Items</strong> and <strong>Exclude Groups</strong> are checked. This means that the first two lines in the list will be expanded to contain the names in the groups, not the group names themselves.</p>
<p>A small gotcha here is that if either of the groups is empty you do see the group name in the combobox list, even with <strong>Exclude Groups</strong> checked. Team Explorer does not expand an empty group to a list with no entries; it shows the group name. So you would see in the combo something like</p>
<ul>
<li>[MyProject]Contributors</li>
<li>John</li>
<li>Fred</li>
<li>Unassigned</li>
</ul>
<p>where John and Fred are project administrators and the [MyProject]Contributors group is empty.</p>
<p>This should not be a serious issue, as in most cases why would you have a Team Project with no contributors or administrators? However, it is conceivable that with more complex security models you might see this issue. If so, make sure each group in the list has at least one member; then again, if a group does not have any members, do you really need it?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Tempted by the new Kindle?</title>
      <link>https://blog.richardfennell.net/posts/tempted-by-the-new-kindle/</link>
      <pubDate>Wed, 28 Sep 2011 15:32:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tempted-by-the-new-kindle/</guid>
      <description>&lt;p&gt;I am back on the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/19/should-i-buy-a-kindle.aspx&#34;&gt;should I buy a Kindle&lt;/a&gt; train of thought. Today’s announcements are certainly interesting; I am not talking so much about the new &lt;a href=&#34;http://www.engadget.com/2011/09/28/amazon-fire-tablet-unveiled-7-inch-display-199-price-tag/&#34;&gt;Kindle Fire&lt;/a&gt;, but the new &lt;a href=&#34;http://www.amazon.co.uk/gp/product/B0051QVF7A?country=GB&#34;&gt;entry level version&lt;/a&gt; and the &lt;a href=&#34;http://www.engadget.com/2011/09/28/amazon-launches-kindle-touch/&#34;&gt;Touch&lt;/a&gt;. For me the tempting feature is still the E-Ink and battery life.&lt;/p&gt;
&lt;p&gt;The point is I have got used to reading on my phone, a Kindle might be easier on the eye, but it is more kit to carry, and I just don’t think I want to carry any more things.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am back on the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/19/should-i-buy-a-kindle.aspx">should I buy a Kindle</a> train of thought. Today’s announcements are certainly interesting; I am not talking so much about the new <a href="http://www.engadget.com/2011/09/28/amazon-fire-tablet-unveiled-7-inch-display-199-price-tag/">Kindle Fire</a>, but the new <a href="http://www.amazon.co.uk/gp/product/B0051QVF7A?country=GB">entry level version</a> and the <a href="http://www.engadget.com/2011/09/28/amazon-launches-kindle-touch/">Touch</a>. For me the tempting feature is still the E-Ink and battery life.</p>
<p>The point is I have got used to reading on my phone, a Kindle might be easier on the eye, but it is more kit to carry, and I just don’t think I want to carry any more things.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Just posted VirtualPC activity documentation for TFS 2010 Community Build Extensions</title>
      <link>https://blog.richardfennell.net/posts/just-posted-virtualpc-activity-documentation-for-tfs-2010-community-build-extensions/</link>
      <pubDate>Tue, 27 Sep 2011 14:18:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/just-posted-virtualpc-activity-documentation-for-tfs-2010-community-build-extensions/</guid>
      <description>&lt;p&gt;I have just posted new &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20VirtualPC%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;VirtualPC activity documentation for TFS 2010 Community Build Extensions&lt;/a&gt;. This has been a really nasty set of documentation to write, as getting this activity running raises a lot of issues over COM security; thanks to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx&#34;&gt;Rik&lt;/a&gt; and &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/adawson/default.aspx&#34;&gt;Andy&lt;/a&gt; (our SharePoint specialists at Black Marble, who are therefore used to COM problems!) who helped get to the bottom of the issues.&lt;/p&gt;
&lt;p&gt;The best thing I can say about this VirtualPC activity (and I wrote much of it) is: don’t use it. It is much better to use the Hyper-V one, which is far more flexible, allowing control of remotely hosted VMs – or, even better, use TFS &lt;a href=&#34;http://blogs.msdn.com/b/lab_management/&#34;&gt;Lab Management&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just posted new <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20VirtualPC%20build%20activity&amp;referringTitle=Documentation">VirtualPC activity documentation for TFS 2010 Community Build Extensions</a>. This has been a really nasty set of documentation to write as getting this activity running raises a lot of issues over COM security; thanks to <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx">Rik</a> and <a href="http://blogs.blackmarble.co.uk/blogs/adawson/default.aspx">Andy</a> (our SharePoint specialists at Black Marble who are therefore used to COM problems!) who helped get to the bottom of the issues.</p>
<p>The best thing I can say about this VirtualPC activity (and I wrote much of it) is don’t use it. It is much better to use the Hyper-V one, which is far more flexible, allowing control of remotely hosted VMs; or, better still, use TFS <a href="http://blogs.msdn.com/b/lab_management/">Lab Management</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upcoming Black Marble event on Windows 8</title>
      <link>https://blog.richardfennell.net/posts/upcoming-black-marble-event-on-windows-8/</link>
      <pubDate>Tue, 27 Sep 2011 10:09:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upcoming-black-marble-event-on-windows-8/</guid>
      <description>&lt;p&gt;In case you did not make it to the &lt;a href=&#34;http://www.buildwindows.com/&#34;&gt;Microsoft Build Conference&lt;/a&gt;, Black Marble are running a pair of free events in Leeds on the 12th of October on Windows 8 and the other announcements made in Anaheim earlier this month.&lt;/p&gt;
&lt;p&gt;The morning session is focused on the IT pro side and the afternoon on development, so why not make a day of it?&lt;/p&gt;
&lt;p&gt;To get more information, and to register for these free events, have a look at &lt;a href=&#34;http://www.blackmarble.co.uk/Events&#34; title=&#34;http://www.blackmarble.co.uk/Events&#34;&gt;http://www.blackmarble.co.uk/Events&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In case you did not make it to the <a href="http://www.buildwindows.com/">Microsoft Build Conference</a>, Black Marble are running a pair of free events in Leeds on the 12th of October on Windows 8 and the other announcements made in Anaheim earlier this month.</p>
<p>The morning session is focused on the IT pro side and the afternoon on development, so why not make a day of it?</p>
<p>To get more information, and to register for these free events, have a look at <a href="http://www.blackmarble.co.uk/Events" title="http://www.blackmarble.co.uk/Events">http://www.blackmarble.co.uk/Events</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Syncing the build number and assembly version numbers in a TFS build when using the TFSVersion activity</title>
      <link>https://blog.richardfennell.net/posts/syncing-the-build-number-and-assembly-version-numbers-in-a-tfs-build-when-using-the-tfsversion-activity/</link>
      <pubDate>Mon, 26 Sep 2011 18:42:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/syncing-the-build-number-and-assembly-version-numbers-in-a-tfs-build-when-using-the-tfsversion-activity/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 27 July 2013 -&lt;/strong&gt; &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/07/27/Making-the-drops-location-for-a-TFS-build-match-the-assembly-version-number.aspx&#34;&gt;Here is a potential solution&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update 27 Sep 2011 –&lt;/strong&gt; this seemed such a good idea when I initially tried it out, but after more testing I see that changing the build number part way through a build causes problems. The key one is that when you queue the next build it is issued the same revision number [the $(Rev:.r) in the BuildNumberFormat] as the just-completed build, and this will fail with the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 27 July 2013 -</strong> <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/post/2013/07/27/Making-the-drops-location-for-a-TFS-build-match-the-assembly-version-number.aspx">Here is a potential solution</a></p>
<p><strong>Update 27 Sep 2011 –</strong> this seemed such a good idea when I initially tried it out, but after more testing I see that changing the build number part way through a build causes problems. The key one is that when you queue the next build it is issued the same revision number [the $(Rev:.r) in the BuildNumberFormat] as the just-completed build, and this will fail with the error</p>
<blockquote>
<p><em>TF42064: The build number &lsquo;BuildCustomisation_20110927.17 (4.5.269.17)&rsquo; already exists for build definition &lsquo;MSF AgileBuildCustomisation&rsquo;.</em></p></blockquote>
<p>This failed build causes the revision to increment, so the next build is fine, and the cycle continues; only every other build works.</p>
<p>After a bit more thought, the only option I can see to avoid this problem is the one <a href="http://www.ewaldhofman.nl/post/2010/06/01/Customize-Team-Build-2010-e28093-Part-10-Include-Version-Number-in-the-Build-Number.aspx">Ewald</a> originally outlined, i.e. a couple of new activities that run on the controller prior to everything else: one to get the last assembly version and a second to override the build number generator.</p>
<p>So I have struck out some bits of the post and made it clear where there are issues, but I wanted to leave it available as I think it does show how easy it is to get to a point that you think is working, only to find there are unexpected problems.</p>
<p>[Original content follows]</p>
<hr>
<p>I was asked recently if it was possible to make the TFS build number the same format as the assembly build number when you are using the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20TfsVersion%20build%20activity&amp;referringTitle=Documentation">TFSVersion community extension</a>. The answer is yes and no; the issue is that the build drop location in the standard build process template is created before any files are retrieved into the workspace from source control. So at this point you don’t have the new version number, which is a bit of a problem.</p>
<p>A detailed description of one approach to this general problem can be found in <a href="http://www.ewaldhofman.nl/post/2010/06/01/Customize-Team-Build-2010-e28093-Part-10-Include-Version-Number-in-the-Build-Number.aspx">Ewald Hofman’s blog on the subject</a>, but to use this technique you end up creating another custom activity to update the build number using your own macro expansion. Also there will be some major workflow changes to make sure directories are created with the correct names.</p>
<p><strong>&lt;Following Does not Work&gt;</strong></p>
<p><strong>The following instructions do not give the desired result, for the reasons outlined at the top of this updated post</strong></p>
<p>I was looking for a simpler (lazier) solution when using the TFSVersion activity, and the one I came up with was to just update the build number, doing so after the drop directory was created. So that I did not end up with two completely different names for the drop folder and the build, I just append the version number to the existing build number. This is done by re-running the <strong>Update Build Number</strong> activity. I added this new activity to the example workflow logic from the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20TfsVersion%20build%20activity&amp;referringTitle=Documentation">TFSVersion documentation</a>, ending up with something like</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0592D137.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4BA38E4A.png" title="image"></a></p>
<p>where the <strong>Update Build Number</strong> has the following properties</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2A43E8AE.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7B7E300B.png" title="image"></a></p>
<p>So we just append the newly generated version number to the existing build number.</p>
<blockquote>
<p><em>string.Format(&quot;{0} ({1})&quot;, BuildDetail.BuildNumber , VersionNumber)</em></p></blockquote>
<p><strong>Note</strong> you can’t use the <strong>BuildNumberFormat</strong> in this string format, as this string contains the <strong>$(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.r)</strong> text that via macro expansion is used to generate the build number. The problem is that if this expansion is used in this string format you get the error</p>
<blockquote>
<p><em>The revision number $(Rev:.r) is allowed to occur only at the end of the format string.</em></p></blockquote>
<p>So it is easier to use the previously generated build number.</p>
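<p>To make the expression concrete, here is a sketch of what it evaluates to, using the example values from the error message quoted at the top of this post (<em>BuildDetail</em> and <em>VersionNumber</em> are the workflow variables used above; the literal values are illustrative only):</p>

```csharp
// Sketch only: this is the expression set in the Update Build Number
// activity's properties, shown with illustrative values.
//   BuildDetail.BuildNumber -> "BuildCustomisation_20110927.17"
//   VersionNumber           -> "4.5.269.17"
string.Format("{0} ({1})", BuildDetail.BuildNumber, VersionNumber)
// evaluates to "BuildCustomisation_20110927.17 (4.5.269.17)"
```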
<p>This also has the effect that the drop folder retains the original name, but all the reports show the original build number with the generated version number in brackets, and the <strong>Open Drop Folder</strong> link still works</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0C162AFA.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_67186A80.png" title="image"></a></p>
<p>A reasonable compromise I think, with not too much change from the standard template.</p>
<p>This does not work, as only alternate builds succeed; see the top of this post</p>
<p><strong>&lt;/Following Does not Work&gt;</strong></p>
]]></content:encoded>
    </item>
    <item>
      <title>Video of my Webinar for Typemock is available</title>
      <link>https://blog.richardfennell.net/posts/video-of-my-webinar-for-typemock-is-available/</link>
      <pubDate>Mon, 26 Sep 2011 09:26:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/video-of-my-webinar-for-typemock-is-available/</guid>
      <description>&lt;p&gt;The video of my recent webinar for Typemock on using Isolator for designing SharePoint webparts is up at &lt;a href=&#34;http://www.typemock.com/sharepoint-web-parts&#34; title=&#34;http://www.typemock.com/sharepoint-web-parts&#34;&gt;http://www.typemock.com/sharepoint-web-parts&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You can also find a link to it from &lt;a href=&#34;http://bit.ly/RFennellVideos&#34;&gt;http://bit.ly/RFennellVideos&lt;/a&gt;, as well as links to all the other videos of my presentations I know of that are publicly available.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The video of my recent webinar for Typemock on using Isolator for designing SharePoint webparts is up at <a href="http://www.typemock.com/sharepoint-web-parts" title="http://www.typemock.com/sharepoint-web-parts">http://www.typemock.com/sharepoint-web-parts</a></p>
<p>You can also find a link to it from <a href="http://bit.ly/RFennellVideos">http://bit.ly/RFennellVideos</a>, as well as links to all the other videos of my presentations I know of that are publicly available.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More documentation for the TFS 2010 community build extensions</title>
      <link>https://blog.richardfennell.net/posts/more-documentation-for-the-tfs-2010-community-build-extensions/</link>
      <pubDate>Sat, 24 Sep 2011 19:03:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-documentation-for-the-tfs-2010-community-build-extensions/</guid>
      <description>&lt;p&gt;Today I have posted a few more pages of the getting started &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/documentation&#34;&gt;documentation for the TFS 2010 community build extensions&lt;/a&gt;. This is an on-going task; I hope to get a few more written when I get a chance. I am writing the documentation in no obvious order, so let me know if any specific activity is in more need of some introductory level documentation than others.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today I have posted a few more pages of the getting started <a href="http://tfsbuildextensions.codeplex.com/documentation">documentation for the TFS 2010 community build extensions</a>. This is an on-going task; I hope to get a few more written when I get a chance. I am writing the documentation in no obvious order, so let me know if any specific activity is in more need of some introductory level documentation than others.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New Release of the Community TFS 2010 Build Extensions</title>
      <link>https://blog.richardfennell.net/posts/new-release-of-the-community-tfs-2010-build-extensions/</link>
      <pubDate>Fri, 23 Sep 2011 07:42:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-release-of-the-community-tfs-2010-build-extensions/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://mikefourie.wordpress.com/2011/09/22/community-tfs-2010-build-extensions-september-2011/&#34;&gt;Mike Fourie has just announced&lt;/a&gt; that we’ve shipped the second stable &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/releases/view/67139&#34;&gt;release&lt;/a&gt; of the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/&#34;&gt;Community TFS 2010 Build Extensions&lt;/a&gt;. Well worth a look if you need to customise your TFS 2010 build with any of the following&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;AssemblyInfo&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;BuildReport&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;BuildWorkspace&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;CodeMetric&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;DateAndTime&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Email&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;File&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GetBuildController&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GetBuildDefinition&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GetBuildServer&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;GetWebAccessUrl&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Guid&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Hello&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HyperV&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;IIS7&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;ILMerge&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;InvokePowerShellCommand&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;nUnit&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;QueueBuild&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;RoboCop&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SqlExecute&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;StyleCop&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;TFSVersion&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;VB6&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;VirtualPC&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;VSDevEnv&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Wmi&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;WorkItemTracking&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Zip&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://mikefourie.wordpress.com/2011/09/22/community-tfs-2010-build-extensions-september-2011/">Mike Fourie has just announced</a> that we’ve shipped the second stable <a href="http://tfsbuildextensions.codeplex.com/releases/view/67139">release</a> of the <a href="http://tfsbuildextensions.codeplex.com/">Community TFS 2010 Build Extensions</a>. Well worth a look if you need to customise your TFS 2010 build with any of the following</p>
<ul>
<li>
<p>AssemblyInfo</p>
</li>
<li>
<p>BuildReport</p>
</li>
<li>
<p>BuildWorkspace</p>
</li>
<li>
<p>CodeMetric</p>
</li>
<li>
<p>DateAndTime</p>
</li>
<li>
<p>Email</p>
</li>
<li>
<p>File</p>
</li>
<li>
<p>GetBuildController</p>
</li>
<li>
<p>GetBuildDefinition</p>
</li>
<li>
<p>GetBuildServer</p>
</li>
<li>
<p>GetWebAccessUrl</p>
</li>
<li>
<p>Guid</p>
</li>
<li>
<p>Hello</p>
</li>
<li>
<p>HyperV</p>
</li>
<li>
<p>IIS7</p>
</li>
<li>
<p>ILMerge</p>
</li>
<li>
<p>InvokePowerShellCommand</p>
</li>
<li>
<p>nUnit</p>
</li>
<li>
<p>QueueBuild</p>
</li>
<li>
<p>RoboCop</p>
</li>
<li>
<p>SqlExecute</p>
</li>
<li>
<p>StyleCop</p>
</li>
<li>
<p>TFSVersion</p>
</li>
<li>
<p>VB6</p>
</li>
<li>
<p>VirtualPC</p>
</li>
<li>
<p>VSDevEnv</p>
</li>
<li>
<p>Wmi</p>
</li>
<li>
<p>WorkItemTracking</p>
</li>
<li>
<p>Zip</p>
</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Follow up to todays Typemock Webinar</title>
      <link>https://blog.richardfennell.net/posts/follow-up-to-todays-typemock-webinar/</link>
      <pubDate>Wed, 21 Sep 2011 17:06:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-up-to-todays-typemock-webinar/</guid>
      <description>&lt;p&gt;Thanks to anyone who attended my Typemock Isolator session today; I hope you found it useful.&lt;/p&gt;
&lt;p&gt;In due course &lt;a href=&#34;http://www.typemock.com/webinars&#34;&gt;the video will be up on the Typemock site&lt;/a&gt;. But if you want to read more on using Isolator with SharePoint have a look at&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://bit.ly/MockSharepointWithTypemock&#34;&gt;Mocking Sharepoint for Design with Typemock Isolator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://bit.ly/Handling64BitDlls&#34;&gt;Mocks SP2010 64bit Assemblies with Typemock Isolator&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to anyone who attended my Typemock Isolator session today; I hope you found it useful.</p>
<p>In due course <a href="http://www.typemock.com/webinars">the video will be up on the Typemock site</a>. But if you want to read more on using Isolator with SharePoint have a look at</p>
<ul>
<li><a href="http://bit.ly/MockSharepointWithTypemock">Mocking Sharepoint for Design with Typemock Isolator</a></li>
<li><a href="http://bit.ly/Handling64BitDlls">Mocks SP2010 64bit Assemblies with Typemock Isolator</a></li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Stupid gotchas on a SQL 2008 Reporting Services are why I cannot see the Report Builder Button</title>
      <link>https://blog.richardfennell.net/posts/stupid-gotchas-on-a-sql-2008-reporting-services-are-why-i-cannot-see-the-report-builder-button/</link>
      <pubDate>Wed, 21 Sep 2011 12:17:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/stupid-gotchas-on-a-sql-2008-reporting-services-are-why-i-cannot-see-the-report-builder-button/</guid>
      <description>&lt;p&gt;There is a good chance, if you are using TFS, that you will want to create some custom reports. You can write these in Reporting Services via BI Studio or Excel, but I wanted to use Report Builder and could not see the Report Builder button on the Reporting Services menu&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_31230710.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_37D61093.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The problem was multi-levelled&lt;/p&gt;
&lt;p&gt;First I had to give the user access to the Report Builder. This is done using folder property security. I chose to give this right to a user (along with browser rights) from the root of the reporting services site&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a good chance, if you are using TFS, that you will want to create some custom reports. You can write these in Reporting Services via BI Studio or Excel, but I wanted to use Report Builder and could not see the Report Builder button on the Reporting Services menu</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_31230710.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_37D61093.png" title="image"></a></p>
<p>The problem was multi-levelled</p>
<p>First I had to give the user access to the Report Builder. This is done using folder property security. I chose to give this right to a user (along with browser rights) from the root of the reporting services site</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_17BB03D6.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_779FF718.png" title="image"></a></p>
<p>But still no button. Forums and blog posts then talk about changing options on the ‘Site Settings’ menu; the above screenshot shows that this is also missing from the top right.</p>
<p>To get this menu option back, I had to run my browser as administrator, and then the option appeared. It turns out that the TFS setup user I was using had not been made a Reporting Services site administrator, just a content administrator.</p>
<p>But still this was not enough; I also had to add users as System Users to allow the Report Builder button to appear. So my final Site Settings &gt; Security options were</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_05723D14.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5065ADE3.png" title="image"></a></p>
<p>Once all this was done I got my Report Builder button and I could start to write reports.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Another ALM/TFS focused blogger at Black Marble</title>
      <link>https://blog.richardfennell.net/posts/another-almtfs-focused-blogger-at-black-marble/</link>
      <pubDate>Mon, 19 Sep 2011 08:36:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/another-almtfs-focused-blogger-at-black-marble/</guid>
      <description>&lt;p&gt;Robert Hancock, another one of Black Marble’s ALM consultants, has started to blog technical tips on this blog server. You can find &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhancock/default.aspx&#34;&gt;Robert’s new blog here&lt;/a&gt; or the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/default.aspx?GroupID=2&#34;&gt;aggregate feed of all Black Marble bloggers here&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Robert Hancock, another one of Black Marble’s ALM consultants, has started to blog technical tips on this blog server. You can find <a href="http://blogs.blackmarble.co.uk/blogs/rhancock/default.aspx">Robert’s new blog here</a> or the <a href="http://blogs.blackmarble.co.uk/blogs/default.aspx?GroupID=2">aggregate feed of all Black Marble bloggers here</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>What to do when your TFS build agent says it is ready, but the icon says it is not</title>
      <link>https://blog.richardfennell.net/posts/what-to-do-when-your-tfs-build-agent-says-it-is-ready-but-the-icon-says-it-is-not/</link>
      <pubDate>Sat, 17 Sep 2011 20:10:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-to-do-when-your-tfs-build-agent-says-it-is-ready-but-the-icon-says-it-is-not/</guid>
      <description>&lt;p&gt;When using TFS 2010 it is possible for a build agent to appear to be ready (or so the status label says) while the icon stays in the off state.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_18EA6702.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_32DE2724.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This is usually due to a couple of broad categories of error, you can find out which by checking the Windows event log.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The build agent cannot communicate with the controller&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the event log you see something like&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Service &amp;lsquo;Default Agent - win7&amp;rsquo; had an exception:&lt;br&gt;
Exception Message: There was no endpoint listening at&lt;/em&gt; &lt;em&gt;&lt;a href=&#34;http://controller.mydomain.co.uk:9191/Build/v3.0/Services/Controller/1&#34;&gt;http://controller.mydomain.co.uk:9191/Build/v3.0/Services/Controller/1&lt;/a&gt;&lt;/em&gt; &lt;em&gt;that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. (type EndpointNotFoundException)&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When using TFS 2010 it is possible for a build agent to appear to be ready (or so the status label says) while the icon stays in the off state.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_18EA6702.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_32DE2724.png" title="image"></a></p>
<p>This is usually due to a couple of broad categories of error, you can find out which by checking the Windows event log.</p>
<p><strong>The build agent cannot communicate with the controller</strong></p>
<p>In the event log you see something like</p>
<p><em>Service &lsquo;Default Agent - win7&rsquo; had an exception:<br>
Exception Message: There was no endpoint listening at</em> <em><a href="http://controller.mydomain.co.uk:9191/Build/v3.0/Services/Controller/1">http://controller.mydomain.co.uk:9191/Build/v3.0/Services/Controller/1</a></em> <em>that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. (type EndpointNotFoundException)</em></p>
<p>This should not happen much within a corporate LAN, though it is always worth checking that no PC firewalls are blocking the port used by the build service (9191).</p>
<p>However, if you are trying to use build agents that are not directly on your LAN/AD (<a href="http://blogs.like10.com/2011/09/03/team-foundation-server-2010-build-agent-not-part-of-the-domain/">see this lovely clear blog post on how to set up a non-domain joined build agent</a>) there is a good chance you will not have DNS working as expected. So make sure the controller can resolve the name of the agent and vice versa. For me this meant editing HOSTS files and checking the names resolved with good old PING.</p>
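<p>As an aside, these checks can be sketched from a command prompt as follows; the agent name, controller name and IP address here are made-up examples:</p>

```shell
REM On the controller, check the agent's name resolves (and vice versa on the agent)
ping buildagent01
ping controller.mydomain.co.uk

REM If resolution fails, add an entry to C:\Windows\System32\drivers\etc\hosts, e.g.
REM   192.168.1.50   buildagent01

REM Also make sure Windows Firewall allows the build service port (9191)
netsh advfirewall firewall add rule name="TFS Build Service" dir=in action=allow protocol=TCP localport=9191
```

<p>The firewall rule name is arbitrary; the port is the build service default mentioned above.</p>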
<p><strong>Custom assemblies have un-resolved dependencies</strong></p>
<p>Whilst writing documentation for the <a href="http://tfsbuildextensions.codeplex.com/">community extensions for TFS build</a> I had this problem. I had the custom assemblies path set for the build controller. This meant that when the build agent started it downloaded any custom assemblies from the specified folder. Some of the community extensions assume that the build agent has features/applications installed, such as IIS or Visual Studio. I had assumed any missing dependencies would only show up when you tried to use a community activity in a build. However, it seems this is not the case; some checks are made while the agent loads. In the event log I saw errors such as</p>
<p><em>Service &lsquo;Default Agent - win7&rsquo; had an exception:<br>
Exception Message: Problem with loading custom assemblies: Could not load file or assembly &lsquo;Microsoft.VisualStudio.Shell, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a&rsquo; or one of its dependencies. The system cannot find the file specified. (type Exception)</em></p>
<p><em><strong>or</strong></em></p>
<p><em>Service &lsquo;Default Agent - win7&rsquo; had an exception:<br>
Exception Message: Problem with loading custom assemblies: Could not load file or assembly &lsquo;Microsoft.Web.Administration, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35&rsquo; or one of its dependencies. The system cannot find the file specified. (type Exception)</em></p>
<p>To resolve these issues I installed the features and applications needed by the build extensions on the agent. The other option would be to remove the custom extension assemblies that had the dependencies, assuming you did not need them.</p>
<p>So hopefully this post gives you a pointer on fixing the strange ‘has it started or not’ states that the build agent can get into.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS on Windows Azure preview announced at Build</title>
      <link>https://blog.richardfennell.net/posts/tfs-on-windows-azure-preview-announced-at-build/</link>
      <pubDate>Wed, 14 Sep 2011 20:59:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-on-windows-azure-preview-announced-at-build/</guid>
      <description>&lt;p&gt;At the Build  conference today the preview of TFS on Windows Azure was announced. All the attendees of the conference have been given access codes for the preview. If you did not make it to Build have a look at  &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2011/09/14/team-foundation-server-on-windows-azure.aspx&#34;&gt;Brian Harry’s blog&lt;/a&gt; for details of how everyone else can get access to the preview.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the Build  conference today the preview of TFS on Windows Azure was announced. All the attendees of the conference have been given access codes for the preview. If you did not make it to Build have a look at  <a href="http://blogs.msdn.com/b/bharry/archive/2011/09/14/team-foundation-server-on-windows-azure.aspx">Brian Harry’s blog</a> for details of how everyone else can get access to the preview.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Bitlocker keeps asking for my recovery key after a change in my disk’s MBR</title>
      <link>https://blog.richardfennell.net/posts/bitlocker-keeps-asking-for-my-recovery-key-after-a-change-in-my-disks-mbr/</link>
      <pubDate>Wed, 14 Sep 2011 17:40:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bitlocker-keeps-asking-for-my-recovery-key-after-a-change-in-my-disks-mbr/</guid>
      <description>&lt;p&gt;My development laptop is &lt;a href=&#34;http://windows.microsoft.com/en-US/windows7/products/features/bitlocker&#34;&gt;bitlocker’ed&lt;/a&gt;, and yours should be too. It provides a great and non-invasive way (assuming you have a TPM chip) to protect your own and your clients’ data on a machine that is far too easy to steal or lose. However, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/09/14/first-try-with-windows8-and-it-won-t-boot.aspx&#34;&gt;whilst fiddling with Windows 8&lt;/a&gt; I did trip myself up.&lt;/p&gt;
&lt;p&gt;I have my PC set up to boot Windows 7 from a bitlocker’ed drive C, with a non-bitlocker’ed drive D used to boot Windows 2008 for demos (and hence holding no production data). To try out Windows 8 I added a new boot device, a boot-from-VHD partition. This edited the PC’s master boot record (MBR) and bitlocker did not like it. It thought the PC had a rootkit or something similar, so it prompted me to enter my bitlocker recovery key (which is 48 characters long) when I tried to boot to Windows 7. Once this was done my bitlocker’ed Windows 7 partition worked fine, but on each reboot I had to type the key in again, a bit of a pain. Removing the new VHD boot entry did not help; the MBR had still been edited, so bitlocker complained.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My development laptop is <a href="http://windows.microsoft.com/en-US/windows7/products/features/bitlocker">bitlocker’ed</a>, and yours should be too. It provides a great and non-invasive way (assuming you have a TPM chip) to protect your own and your clients’ data on a machine that is far too easy to steal or lose. However, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/09/14/first-try-with-windows8-and-it-won-t-boot.aspx">whilst fiddling with Windows 8</a> I did trip myself up.</p>
<p>I have my PC set up to boot Windows 7 from a bitlocker’ed drive C, with a non-bitlocker’ed drive D used to boot Windows 2008 for demos (and hence holding no production data). To try out Windows 8 I added a new boot device, a boot-from-VHD partition. This edited the PC’s master boot record (MBR) and bitlocker did not like it. It thought the PC had a rootkit or something similar, so it prompted me to enter my bitlocker recovery key (which is 48 characters long) when I tried to boot to Windows 7. Once this was done my bitlocker’ed Windows 7 partition worked fine, but on each reboot I had to type the key in again, a bit of a pain. Removing the new VHD boot entry did not help; the MBR had still been edited, so bitlocker complained.</p>
<p>The solution was actually easy, but took me a while to find as it does not seem to be clear in any documentation or via a search.</p>
<p>When the Windows 7 partition is booted, open the Control Panel, select the BitLocker option, suspend BitLocker and then resume it</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_679A5FD8.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_2764465E.png" title="image"></a></p>
<p>This has the effect of telling BitLocker that you accept the current hardware/MBR settings as correct. After this the PC boots as expected</p>
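<p>If you prefer the command line, the same suspend/resume cycle can be done with the built-in <strong>manage-bde</strong> tool from an elevated prompt (a sketch only; the drive letter is an assumption, adjust it to your BitLocker’ed volume):</p>
<pre tabindex="0"><code>REM Suspend BitLocker protection on C: (the data stays encrypted, only the protectors are disabled)
manage-bde -protectors -disable C:

REM ...make the boot/MBR changes and reboot...

REM Resume protection once the machine boots cleanly again
manage-bde -protectors -enable C:
</code></pre>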
<p>If I were being more sensible I would have suspended BitLocker prior to any fiddling about with Windows 8 – but the bits from Build were just too tempting…</p>
]]></content:encoded>
    </item>
    <item>
      <title>First try with Windows8 and it won’t boot (updated with notes on a solution)</title>
      <link>https://blog.richardfennell.net/posts/first-try-with-windows8-and-it-wont-boot-updated-with-notes-on-a-solution/</link>
      <pubDate>Wed, 14 Sep 2011 17:16:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-try-with-windows8-and-it-wont-boot-updated-with-notes-on-a-solution/</guid>
      <description>&lt;p&gt;I tried to install Windows 8 Developer Preview on my Lenovo W520 laptop as a VHD boot. All seemed to go well following the same &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/17/windows-7-boot-from-vhd.aspx&#34;&gt;process as for windows 7&lt;/a&gt;. I got a nice new blue Window 8 boot partition choice screen (which went back to the Windows 7 DOS like one after I made Windows 7 my default partition).&lt;/p&gt;
&lt;p&gt;If I selected Windows 8 it tried to boot but ended up with the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I tried to install Windows 8 Developer Preview on my Lenovo W520 laptop as a VHD boot. All seemed to go well following the same <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/17/windows-7-boot-from-vhd.aspx">process as for Windows 7</a>. I got a nice new blue Windows 8 boot partition choice screen (which went back to the Windows 7 DOS-like one after I made Windows 7 my default partition).</p>
<p>If I selected Windows 8 it tried to boot but ended up with the error</p>
<blockquote>
<p><em>‘Windows could not complete the installation. To install windows on this computer restart the installation’</em></p></blockquote>
<p>No more info than that, I suspect it does not like VHD boot. But I don’t have time to dig now, more later I am sure……</p>
<p><strong>[Update 15 Sep 2011]</strong></p>
<p>My colleague <a href="http://blogs.blackmarble.co.uk/blogs/rhancock/">Robert Hancock</a> could not wait to try the Windows 8 bits on his Lenovo, identical to mine, which dual boots between Windows 7 and Windows 2008. He took a slightly different route, which worked.</p>
<ol>
<li>On his Server 2008 boot created a new virtual machine in Hyper-V and Installed Windows 8</li>
<li>Exported the virtual machine</li>
<li>Booted back into Windows 7</li>
<li>Attached exported VHD in Disk Management (VHD was on the d: drive)</li>
<li>Ran the following command: C:\Windows\System32\bcdboot I:\Windows, where I: represented the drive letter for the attached VHD</li>
<li>Ran bcdedit with no parameters to verify the boot entry had been added</li>
<li>Rebooted the machine and verified ‘Windows Developer Preview’ appeared in the boot menu. Selected it and verified Windows 8 ran correctly.</li>
</ol>
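<p>For reference, steps 4 to 6 above can be sketched from an elevated Windows 7 command prompt as follows (the VHD path and the I: drive letter are illustrative, not Robert’s actual values):</p>
<pre tabindex="0"><code>REM Attach the exported VHD (an alternative to the Disk Management UI)
diskpart
DISKPART&gt; select vdisk file=&#34;D:\VMs\Windows8Preview.vhd&#34;
DISKPART&gt; attach vdisk
DISKPART&gt; exit

REM Add a boot entry pointing at the Windows installation inside the mounted VHD
C:\Windows\System32\bcdboot I:\Windows

REM Run bcdedit with no parameters to verify the new boot entry
bcdedit
</code></pre>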
<p>The only other thing he commented on was that his C: drive was not BitLocker’ed. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/09/14/bitlocker-keeps-asking-for-my-recovery-key-after-a-change-in-my-disk-s-mbr.aspx">My post about problems with BitLocker</a> prompted me to suspend this before trying the Windows 8 install.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why can’t I see my custom work item types in Team Explorer?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-see-my-custom-work-item-types-in-team-explorer/</link>
      <pubDate>Wed, 14 Sep 2011 13:40:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-see-my-custom-work-item-types-in-team-explorer/</guid>
      <description>&lt;p&gt;If you are editing a TFS process template you have the choice editing XML files or using the Process Template Editor within the &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f&#34;&gt;TFS 2010 PowerTools.&lt;/a&gt; Unfortunately neither is fool proof. You can make errors than means the revised template does not fully work.&lt;/p&gt;
&lt;p&gt;The worst of these errors will be picked up when you upload the process template to a Team Project Collection, as during this process the XML is validated.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are editing a TFS process template you have the choice of editing the XML files or using the Process Template Editor within the <a href="http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f">TFS 2010 PowerTools.</a> Unfortunately neither is foolproof. You can make errors that mean the revised template does not fully work.</p>
<p>The worst of these errors will be picked up when you upload the process template to a Team Project Collection, as during this process the XML is validated.</p>
<p>However, this will not find everything. Today, after uploading a new process template, I found I could not see two of my revised work item types in the list when I tried to create a new work item.</p>
<p>The best way I found to work out the problem was to try to import the .WIT file (from your local copy of the process template) again using either the Visual Studio –&gt; Tools –&gt; Process Editor –&gt; Work item Types –&gt; Import from File, or the command line tool WITIMPORT</p>
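<p>For the command-line route, the import looks something like this (a sketch only; the collection URL, project and file names are made up, and on TFS 2010 the work item type import also lives in <em>witadmin importwitd</em>):</p>
<pre tabindex="0"><code>REM Re-import the work item type definition; a faulty definition gives
REM a far more descriptive error here than the template upload does
witadmin importwitd /collection:http://myserver:8080/tfs/MyCollection /p:MyProject /f:MyWorkItem.xml
</code></pre>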
<p>In my case I then got the far more useful error</p>
<pre tabindex="0"><code>---------------------------
Error
---------------------------
Error importing work item type definition:

TF237094: Field name &#39;Priority&#39; is used by the field &#39;Microsoft.VSTS.Common.Priority&#39;, so it cannot be used by the field &#39;MyNewProcess.Priority&#39;.
---------------------------
OK
---------------------------
</code></pre>
<p>This gave me much more information to fix my problem</p>
]]></content:encoded>
    </item>
    <item>
      <title>Bradford the centre of all scientific advance (at least this week)</title>
      <link>https://blog.richardfennell.net/posts/bradford-the-centre-of-all-scientific-advance-at-least-this-week/</link>
      <pubDate>Tue, 13 Sep 2011 10:23:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bradford-the-centre-of-all-scientific-advance-at-least-this-week/</guid>
      <description>&lt;p&gt;Over the weekend I took my son to the excellent ‘&lt;a href=&#34;http://www.bbc.co.uk/programmes/b00lwxj1&#34;&gt;Bang goes the Theory’ Live road show&lt;/a&gt; which is part of the &lt;a href=&#34;http://www.britishscienceassociation.org/web/BritishScienceFestival/&#34;&gt;Festival of Science in Bradford&lt;/a&gt;. If a similar event is near you I recommend you go along, fun for all the family even if science is not really your thing. The whole event makes the subject very accessible to all.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_750F71C2.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_731ED061.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;Even if I had not known this was on I might have guessed something was afoot in Bradford, as virtually every &lt;a href=&#34;http://www.bbc.co.uk/radio4/&#34;&gt;Radio 4&lt;/a&gt; news program has had a short article on some new scientific advance from Bradford University. Today a &lt;a href=&#34;http://www.bbc.co.uk/news/science-environment-14900800&#34;&gt;lie detector using facial analysis&lt;/a&gt; and a &lt;a href=&#34;http://www.bbc.co.uk/news/science-environment-14855666&#34;&gt;new cancer treatment&lt;/a&gt;, to name just the ones I remember. Now, I went to Bradford and I know it is a world leader in &lt;a href=&#34;http://www.brad.ac.uk/acad/lifesci/biomedical/&#34;&gt;biomedical science&lt;/a&gt;, but I do wonder if they have been saving press releases to get the biggest bang. Or is it that the BBC and other media outlets are in Bradford at the festival, so it is easy to get Bradford stories?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the weekend I took my son to the excellent ‘<a href="http://www.bbc.co.uk/programmes/b00lwxj1">Bang goes the Theory’ Live road show</a> which is part of the <a href="http://www.britishscienceassociation.org/web/BritishScienceFestival/">Festival of Science in Bradford</a>. If a similar event is near you I recommend you go along, fun for all the family even if science is not really your thing. The whole event makes the subject very accessible to all.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_750F71C2.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_731ED061.png" title="image"></a> </p>
<p>Even if I had not known this was on I might have guessed something was afoot in Bradford, as virtually every <a href="http://www.bbc.co.uk/radio4/">Radio 4</a> news program has had a short article on some new scientific advance from Bradford University. Today a <a href="http://www.bbc.co.uk/news/science-environment-14900800">lie detector using facial analysis</a> and a <a href="http://www.bbc.co.uk/news/science-environment-14855666">new cancer treatment</a>, to name just the ones I remember. Now, I went to Bradford and I know it is a world leader in <a href="http://www.brad.ac.uk/acad/lifesci/biomedical/">biomedical science</a>, but I do wonder if they have been saving press releases to get the biggest bang. Or is it that the BBC and other media outlets are in Bradford at the festival, so it is easy to get Bradford stories?</p>
<p>It is great to see more positive news articles on science and technology, but the danger is that it stays just a headline. We need to look behind the headlines to be better informed. Events like this road show are a great step towards getting people interested in finding out more, and in always asking ‘why is that…’</p>
]]></content:encoded>
    </item>
    <item>
      <title>‘Expected to find an element’ error when running VS2010 Database unit tests</title>
      <link>https://blog.richardfennell.net/posts/expected-to-find-an-element-error-when-running-vs2010-database-unit-tests/</link>
      <pubDate>Mon, 12 Sep 2011 22:31:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/expected-to-find-an-element-error-when-running-vs2010-database-unit-tests/</guid>
      <description>&lt;p&gt;If you use the database unit testing feature of VS2010 there is a good chance you will want to run these tests on more than one PC and probably the build server. The issue is that these different PC will need different deployment paths and SQL connection strings. Luckily &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/aa833210.aspx&#34;&gt;there is a feature to address this, as detailed on MSDN&lt;/a&gt;. Basically the test runner swaps in a different config file based on either the PC name or user running the tests.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you use the database unit testing feature of VS2010 there is a good chance you will want to run these tests on more than one PC, and probably on the build server. The issue is that these different PCs will need different deployment paths and SQL connection strings. Luckily <a href="http://msdn.microsoft.com/en-us/library/aa833210.aspx">there is a feature to address this, as detailed on MSDN</a>. Basically the test runner swaps in a different config file based on either the PC name or the user running the tests.</p>
<p>This all seems straightforward, but when I followed the process and ran my tests they failed</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_6081FA40.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_566D5915.png" title="image"></a> </p>
<p>The most useful error is found if you open the test run details (the button highlighted in green). You can see that it found the replacement config file but failed to parse it, giving the error</p>
<p><em>An error occurred while reading file C:\…\TestResults\fred_PCNAME 2011-09-12 22_33_11\Out\buildbox.dbunittest.config : Expected to find an element.</em></p>
<p>[It is worth noting here that it is easy to forget to make sure the <em>buildbox.dbunittest.config</em> is deployed to the test folder as detailed on MSDN. If you forget this you get a file not found error, not an ‘expected to find an element’ error]</p>
<p>So I checked my <em>buildbox.dbunittest.config</em> file to look for typos. The MSDN instructions say to copy the app.config and make your edits, but then go on to mention that after editing it should resemble</p>
<pre tabindex="0"><code>&lt;DatabaseUnitTesting&gt;
  &lt;DatabaseDeployment DatabaseProjectFileName=&#34;..\..\..\Sources\UnitTest\UnitTest\UnitTest.dbproj&#34; Configuration=&#34;Debug&#34; /&gt;
  &lt;DataGeneration ClearDatabase=&#34;true&#34; /&gt;
  &lt;ExecutionContext Provider=&#34;System.Data.SqlClient&#34; ConnectionString=&#34;Data Source=.\SQLEXPRESS;Initial Catalog=UnitTestB;Integrated Security=True;Pooling=False&#34; CommandTimeout=&#34;30&#34; /&gt;
  &lt;PrivilegedContext Provider=&#34;System.Data.SqlClient&#34; ConnectionString=&#34;Data Source=.\SQLEXPRESS;Initial Catalog=UnitTestB;Integrated Security=True;Pooling=False&#34; CommandTimeout=&#34;30&#34; /&gt;
&lt;/DatabaseUnitTesting&gt;
</code></pre>
<p>It should NOT include the &lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34; ?&gt;, &lt;configuration&gt; or &lt;configSections&gt; tags. This has been stressed in some forum posts. However, even when I made sure the first line of my <em>buildbox.dbunittest.config</em> was &lt;DatabaseUnitTesting&gt; I still got the same error.</p>
<p>It turns out the issue was that I had leading white space before the &lt;DatabaseUnitTesting&gt; element; once this was removed the test ran as expected.</p>
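<p>A quick way to catch this particular gotcha before a test run is to check that the very first character of the file is the opening bracket of the root element. A minimal sketch with standard shell tools (the file name matches the one used in this post):</p>
<pre tabindex="0"><code># Recreate the failure: leading white space before the root element
printf &#39;  &lt;DatabaseUnitTesting&gt;&lt;/DatabaseUnitTesting&gt;&#39; &gt; buildbox.dbunittest.config

# The first byte should be &#39;&lt;&#39;; here it is a space, which the test runner cannot parse
head -c 1 buildbox.dbunittest.config
</code></pre>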
]]></content:encoded>
    </item>
    <item>
      <title>Registration has opened for Black Marble’s Autumn Events 2011</title>
      <link>https://blog.richardfennell.net/posts/registration-has-opened-for-black-marbles-autumn-events-2011/</link>
      <pubDate>Wed, 07 Sep 2011 14:04:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registration-has-opened-for-black-marbles-autumn-events-2011/</guid>
      <description>&lt;p&gt;If you look on the &lt;a href=&#34;http://www.blackmarble.co.uk/events&#34;&gt;Black Marble web site&lt;/a&gt; you will find listing of the free events we are running up to the end of the year and into the new year. This year our events include&lt;/p&gt;
&lt;p&gt;12 Oct 2011&lt;/p&gt;
&lt;p&gt;[Re-Build – IT&lt;br&gt;
](&lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Re-Build&#34;&gt;http://www.blackmarble.co.uk/events.aspx?event=Re-Build&lt;/a&gt; - IT)[Re-Build – Dev](&lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Re-Build&#34;&gt;http://www.blackmarble.co.uk/events.aspx?event=Re-Build&lt;/a&gt; - Dev)&lt;/p&gt;
&lt;p&gt;Black Marble will bring you the key announcements and developments from Microsoft Build about Windows 8&lt;/p&gt;
&lt;p&gt;17 Nov 2011&lt;/p&gt;
&lt;p&gt;[Delivering High Impact WebSites with SharePoint](&lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Delivering&#34;&gt;http://www.blackmarble.co.uk/events.aspx?event=Delivering&lt;/a&gt; High Impact WebSites with SharePoint)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you look on the <a href="http://www.blackmarble.co.uk/events">Black Marble web site</a> you will find a listing of the free events we are running up to the end of the year and into the new year. This year our events include</p>
<p>12 Oct 2011</p>
<p><a href="http://www.blackmarble.co.uk/events.aspx?event=Re-Build%20-%20IT">Re-Build – IT</a> and <a href="http://www.blackmarble.co.uk/events.aspx?event=Re-Build%20-%20Dev">Re-Build – Dev</a></p>
<p>Black Marble will bring you the key announcements and developments from Microsoft Build about Windows 8</p>
<p>17 Nov 2011</p>
<p><a href="http://www.blackmarble.co.uk/events.aspx?event=Delivering%20High%20Impact%20WebSites%20with%20SharePoint">Delivering High Impact WebSites with SharePoint</a></p>
<p>SharePoint 2010 creates a dynamic and striking internet experience for your users.</p>
<p>17 Nov 2011</p>
<p><a href="http://www.blackmarble.co.uk/events.aspx?event=A%20Guide%20to%20Successfully%20Adopting%20the%20Cloud%20for%20IT%20Managers">A Guide to Successfully Adopting the Cloud for IT Managers</a></p>
<p>This session will be exploring the business value of cloud computing with Microsoft Windows Azure</p>
<p>8 Dec 2011</p>
<p><a href="http://www.blackmarble.co.uk/events.aspx?event=Architecture%20Forum%20in%20the%20North%204">Architecture Forum in the North 4</a></p>
<p>The top Microsoft Architecture Forum in the UK returns for its fourth year.</p>
<p>26 Jan 2012</p>
<p><a href="http://www.blackmarble.co.uk/events.aspx?event=Black%20Marbles%20Annual%20Technical%20Update%20for%20Microsoft%20Technologies%20-%202012">Black Marbles Annual Technical Update for Microsoft Technologies - 2012</a></p>
<p>Return of our roadmap for all things Microsoft!</p>
<p>26 Jan 2012</p>
<p><a href="http://www.blackmarble.co.uk/events.aspx?event=Implementing%20the%20Secure%20Development%20Lifecycle%20in%20your%20ALM%20Process">Implementing the Secure Development Lifecycle in your ALM Process</a></p>
<p>This session will look at how implementing Microsoft&rsquo;s Security Development Lifecycle (SDL) into your development process can improve quality, reliability and long-term maintainability</p>
<p>All these events are free, but they can fill up so book early.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Update on using Typemock Isolator to allow webpart development without a Sharepoint server</title>
      <link>https://blog.richardfennell.net/posts/update-on-using-typemock-isolator-to-allow-webpart-development-without-a-sharepoint-server/</link>
      <pubDate>Tue, 06 Sep 2011 21:56:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-on-using-typemock-isolator-to-allow-webpart-development-without-a-sharepoint-server/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx&#34;&gt;I have in the past posted about developing SharePoint web parts without having to use a SharePoint server by using Typemock Isolator&lt;/a&gt;. This technique relies on using Cassini or IIS Express as the web server to host the aspx page that in turn contains the webpart. This is all well and good for SharePoint 2007, but we get a problem with SharePoint 2010 which seems to be due to 32/64bit issues.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">I have in the past posted about developing SharePoint web parts without having to use a SharePoint server by using Typemock Isolator</a>. This technique relies on using Cassini or IIS Express as the web server to host the aspx page that in turn contains the webpart. This is all well and good for SharePoint 2007, but we get a problem with SharePoint 2010 which seems to be due to 32/64bit issues.</p>
<p><strong>Working with SharePoint 2007 assemblies when SharePoint 2010 assemblies are in the GAC</strong> </p>
<p>I started this adventure with a SharePoint 2007 webpart solution set up as discussed in my previous post. In this solution’s web test harness I was only referencing the SharePoint 2007 <strong>Microsoft.Sharepoint.dll.</strong> This had been working fine on a PC that had never had SharePoint installed; the required DLL was loaded from a local solution folder of SharePoint assemblies.</p>
<p>This was until I <a href="http://msdn.microsoft.com/en-us/library/ee554869%28office.14%29.aspx">installed SharePoint 2010 onto my Windows 7 development PC</a> (a great way to do SharePoint development). This put the SharePoint 2010 assemblies into the GAC. So now when I ran my Sharepoint 2007 test harness I got the error</p>
<blockquote>
<p><em><strong>Description:</strong> An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.<br>
<strong>Compiler Error Message:</strong> CS1705: Assembly &lsquo;Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c&rsquo; uses &lsquo;Microsoft.SharePoint.Library, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c&rsquo; which has a higher version than referenced assembly &lsquo;Microsoft.SharePoint.Library, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c&rsquo;</em></p></blockquote>
<p>The solution is fairly simple, assuming you want to work with the 2007 assemblies. All you need to do is make sure the test harness project also references the 2007 <strong>Microsoft.Sharepoint.library.dll</strong> so it does not pick up the version in the GAC.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0318E5A8.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6C769760.png" title="image"></a></p>
<p>Once this was done the 2007-based test harness worked again</p>
<p><strong>But what about using 2010 assemblies?</strong></p>
<p>If you want to work against SharePoint 2010 assemblies there are other problems. If you just reference the 2010 <strong>Microsoft.sharepoint.dll</strong> you get the error</p>
<blockquote>
<p><em>Could not load file or assembly &lsquo;Microsoft.Sharepoint.Sandbox&rsquo; or one of its dependencies. An attempt was made to load a program with an incorrect format</em></p></blockquote>
<p>As I said, on my PC I now have a local SharePoint installation, so I have the SharePoint 2010 assemblies in the GAC. It is from here that the test harness tries to load the <strong>Microsoft.Sharepoint.Sandbox.dll</strong> assembly. The problem is that this is not a standard MSIL assembly but a 64bit one, while the default Cassini development web server is 32bit. Hence the incorrect format error: the WOW64 technology behind the scenes cannot manage the loading. The only option is to use a 64bit web server to address the problem; this rules out Cassini and <a href="http://learn.iis.net/page.aspx/901/iis-developer-express-faq/">IIS Express</a> at this time as these are 32bit only.</p>
<p>A possible solution is to use the full IIS 7.5 installation available with Windows7, as this must be 64bit as it is able to run SharePoint 2010. The problem here is that when you load the test harness you get the error</p>
<blockquote>
<p><em><strong>Description:</strong> An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.<br>
<strong>Exception Details:</strong> TypeMock.TypeMockException:<br>
*** Typemock Isolator is not currently enabled.<br>
To enable do one of the following:<br>
* To run Typemock Isolator as part of an automated process you can:<br>
- run tests via TMockRunner.exe command line tool<br>
- use &lsquo;TypeMockStart&rsquo; tasks for MSBuild or NAnt<br>
* To work with Typemock Isolator inside Visual Studio.NET:<br>
set Tools-&gt;Enable Typemock Isolator from within Visual Studio<br>
For more information consult the documentation (see &lsquo;Running&rsquo; topic)</em></p></blockquote>
<p>This is because this IIS instance is not under the control of Visual Studio and so it cannot start Isolator for you. To get round this you have to start Isolator manually; maybe you could do it in your test harness pages. However, you also have to remember that if you want to debug against this IIS instance you must run Visual Studio as administrator – OK, this will work, but I don’t like any of this. I really do try not to run as administrator these days.</p>
<p>So what we need is a 64bit web server. The best option appears to be <a href="http://cassinidev.codeplex.com" title="http://cassinidev.codeplex.com">http://cassinidev.codeplex.com</a>. This can be used as a direct replacement for Cassini. <a href="http://cassinidev.codeplex.com/discussions/224135">This is still a 32bit build by default, but if you pull the source down you can change this</a>. You need to change all the projects from <strong>x86</strong> to <strong>Any CPU</strong>, rebuild and <a href="http://cassinidev.codeplex.com/wikipage?title=Visual%20Studio%202008%2f2010%20Development%20server%20drop-in%20replacement&amp;referringTitle=Documentation">copy the resultant EXE and DLLs over the Cassini installation.</a> I recommend you copy the 32bit release build over first to get the right .config files in place. You probably don’t want to use the ones from the source code zip.</p>
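<p>If you do rebuild CassiniDev yourself, you can confirm the resulting executable is no longer 32-bit-only with the <strong>CorFlags</strong> tool from the Windows SDK (a sketch; the executable name is illustrative and the exact output fields vary between SDK versions):</p>
<pre tabindex="0"><code>REM An Any CPU build should report its 32-bit flag (32BIT or 32BITREQ) as 0
corflags CassiniDev.exe
</code></pre>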
<p>Once this is all done you have a web server that can load 32bit and 64bits without issue. So for my test project I referenced the SharePoint 2010 assemblies (I maybe could have referenced less, but this works)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_782BDE92.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1DB55BF4.png" title="image"></a></p>
<p>So we have a workaround; once set up it is used automatically. It is just a shame that the default web servers are all built as x86 as opposed to Any CPU.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Doing a webinar on using Typemock Isolator with SharePoint Webparts</title>
      <link>https://blog.richardfennell.net/posts/doing-a-webinar-on-using-typemock-isolator-with-sharepoint-webparts/</link>
      <pubDate>Mon, 05 Sep 2011 13:50:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/doing-a-webinar-on-using-typemock-isolator-with-sharepoint-webparts/</guid>
      <description>&lt;p&gt;I am doing a free webinar on Wednesday, September 21, 2011 at 3:00 PM BST with the title &amp;ldquo;Using Typemock Isolator to speed the development of SharePoint Web Parts&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;For more details have a look at &lt;a href=&#34;http://www.typemock.com/webinars&#34; title=&#34;http://www.typemock.com/webinars&#34;&gt;http://www.typemock.com/webinars&lt;/a&gt; the details should be up soon&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am doing a free webinar on Wednesday, September 21, 2011 at 3:00 PM BST with the title &ldquo;Using Typemock Isolator to speed the development of SharePoint Web Parts&rdquo;.</p>
<p>For more details have a look at <a href="http://www.typemock.com/webinars" title="http://www.typemock.com/webinars">http://www.typemock.com/webinars</a>; the details should be up soon.</p>
]]></content:encoded>
    </item>
    <item>
      <title>#DDDNorth registration has opened</title>
      <link>https://blog.richardfennell.net/posts/dddnorth-registration-has-opened/</link>
      <pubDate>Mon, 05 Sep 2011 12:14:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dddnorth-registration-has-opened/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/north/Default.aspx&#34;&gt;agenda is up for DDDNorth&lt;/a&gt;, seems .JS related subjects are the fashionable topics for this conference. As I suspected my session on build customisation did not make the cut, just not cool enough!&lt;/p&gt;
&lt;p&gt;As with all DDD events, this one is free, but the places tend to go quickly. I think this DDD has 300 spaces, so I suggest you register quickly to avoid disappointment&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.developerdeveloperdeveloper.com/north/Default.aspx">agenda is up for DDDNorth</a>; it seems .JS-related subjects are the fashionable topics for this conference. As I suspected, my session on build customisation did not make the cut, just not cool enough!</p>
<p>As with all DDD events, this one is free, but the places tend to go quickly. I think this DDD has 300 spaces, so I suggest you register quickly to avoid disappointment</p>
]]></content:encoded>
    </item>
    <item>
      <title>Verifying non–public methods have been called using Typemock Isolator in VB.Net</title>
      <link>https://blog.richardfennell.net/posts/verifying-non-public-methods-have-been-called-using-typemock-isolator-in-vb-net/</link>
      <pubDate>Sat, 03 Sep 2011 21:30:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/verifying-non-public-methods-have-been-called-using-typemock-isolator-in-vb-net/</guid>
      <description>&lt;p&gt;I am currently writing some training material for a client on mocking showing how life is so much easier if software is designed with testing in mind. As part of this material I showing that though we aim for nice design patterns which allow testing we have to deal with legacy systems that were not designed this way. However, there are tools that allow us to write unit tests for poorly designed (with testing in mind) legacy systems, specifically &lt;a href=&#34;http://www.typemock.com/&#34;&gt;Typemock Isolator&lt;/a&gt;. Using Isolator means we can write tests that we can use to make sure the code is not broken whilst refactoring to new structures.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am currently writing some training material for a client on mocking, showing how life is so much easier if software is designed with testing in mind. As part of this material I am showing that though we aim for nice design patterns which allow testing, we have to deal with legacy systems that were not designed this way. However, there are tools that allow us to write unit tests for legacy systems that were poorly designed with testing in mind, specifically <a href="http://www.typemock.com/">Typemock Isolator</a>. Using Isolator means we can write tests that we can use to make sure the code is not broken whilst refactoring to new structures.</p>
<p>Just to add a bit of interest the material needs to be in VB.NET not my usual language, so I have spent some time porting the demos I use at conference sessions on Isolator from C# to VB.NET.</p>
<p>Whilst doing this I had to get used to the fact that the Typemock syntax used in VB is different from C#, using ‘Using’ blocks for most things. So the C# lambda expression syntax</p>
<pre tabindex="0"><code>Isolate.Verify.WasCalledWithExactArguments(() =&gt; fake.IsEmailAddessInDB.SendEmail(&#34;fred@home.com&#34;));
</code></pre><p>becomes</p>
<pre tabindex="0"><code>Using AssertCalls.HappenedWithExactArguments()
    fake.IsEmailAddessInDB(&#34;fred@home.com&#34;)
End Using
</code></pre>
<p>But this is not an issue I had. The problem was that I was trying to show a real worst case, where you need to mock private methods. One of the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/05/24/everytime-i-have-to-use-typemock-i-need-to-ask-does-my-code-stinks.aspx">dirtiest tricks</a> of Isolator. Setting up the non-public Isolator behaviour is easy in VB</p>
<pre tabindex="0"><code>Isolator.VisualBasic.NonPublicWillReturn(fakeProcessor, &#34;GetOrderForClient&#34;, fakeOrder)
Isolator.VisualBasic.NonPublicWillBeIgnored(fakeProcessor, &#34;SendEmail&#34;)
</code></pre>
<p>The problem is there seem to be no non-public verify methods in the Isolator VB syntax. I found an <a href="http://forums.typemock.com/viewtopic.php?p=5714">old forum post on the subject</a>, but nothing since. However, I did find the answer hinted at in <a href="http://forums.typemock.com/viewtopic.php?t=1140&amp;view=previous">another post</a>, though it might be a bit obscure if you are new to Isolator. The process is as follows</p>
<ol>
<li>In your test project, as well as the references to <strong>Typemock</strong> and <strong>Typemock.Isolator.VisualBasic</strong>, also add one to the C# Isolator API <strong>Typemock.ActAssertArrange</strong></li>
<li>You can now use the C# methods to do the job. So the verification becomes</li>
</ol>
<pre tabindex="0"><code>TypeMock.ArrangeActAssert.Isolate.Verify.NonPublic.WasCalled(fakeProcessor, &#34;GetOrderForClient&#34;)
TypeMock.ArrangeActAssert.Isolate.Verify.NonPublic.WasCalled(fakeProcessor, &#34;SendEmail&#34;).WithArguments(&#34;abc@home.com&#34;)
</code></pre>
<p>Now don&rsquo;t think for a second I am suggesting it is a good idea to be mocking private methods, or frankly to be using any of the Isolator dirty tricks. I still think that if you find yourself using any of the Isolator features that all the other mocking frameworks cannot do, then you seriously need to look at your code base&rsquo;s design. However, there are just cases when you can&rsquo;t change the design, and the use of Isolator is the only option you have if you wish to write any tests. In these cases it is great to have a tool to give at least some test coverage.</p>
]]></content:encoded>
    </item>
    <item>
      <title>You can vote now for sessions at #dddnorth</title>
      <link>https://blog.richardfennell.net/posts/you-can-vote-now-for-sessions-at-dddnorth/</link>
      <pubDate>Tue, 30 Aug 2011 10:43:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-can-vote-now-for-sessions-at-dddnorth/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://developerdeveloperdeveloper.com/north/ProposedSessions.aspx&#34;&gt;Session voting has opened at DDD North,&lt;/a&gt; vote for what you would like to see at the conference in October&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://developerdeveloperdeveloper.com/north/ProposedSessions.aspx">Session voting has opened at DDD North,</a> vote for what you would like to see at the conference in October</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problem setting an email alert for the TFS 2010 Power Tools backup</title>
      <link>https://blog.richardfennell.net/posts/problem-setting-an-email-alert-for-the-tfs-2010-power-tools-backup/</link>
      <pubDate>Thu, 25 Aug 2011 20:15:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problem-setting-an-email-alert-for-the-tfs-2010-power-tools-backup/</guid>
      <description>&lt;p&gt;One of the very useful features of the &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f&#34;&gt;TFS 2010 Power Tools&lt;/a&gt; is that it provides a backup wizard, great for teams using TFS who don’t have much SQL/IT Pro experience. Whilst setting this up on a server I hit an interesting gotcha with sending alerts.&lt;/p&gt;
&lt;p&gt;This TFS server in question was configured to enable SMTP alerting (via the TFS Administration Console)&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_3A7B938E.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_640F5EC1.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;and this was working&lt;/p&gt;
&lt;p&gt;During the backup configuration wizard you have the option to be sent an email if the backup job fails. The SMTP details are picked up from the TFS server’s SMTP alert settings. When I entered a TO: email address and pressed the test button I got an error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One of the very useful features of the <a href="http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f">TFS 2010 Power Tools</a> is that it provides a backup wizard, great for teams using TFS who don’t have much SQL/IT Pro experience. Whilst setting this up on a server I hit an interesting gotcha with sending alerts.</p>
<p>This TFS server in question was configured to enable SMTP alerting (via the TFS Administration Console)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_3A7B938E.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_640F5EC1.png" title="image"></a></p>
<p>and this was working</p>
<p>During the backup configuration wizard you have the option to be sent an email if the backup job fails. The SMTP details are picked up from the TFS server’s SMTP alert settings. When I entered a TO: email address and pressed the test button I got an error</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_709D0BDD.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_598E8AA1.png" title="image"></a></p>
<p>Turns out the error was at the SMTP server end. The SMTP server was set to only relay emails from known users (those with a local mailbox). To allow TFS to send alert emails the MYDOMAIN\TFSSERVICE account, used by TFS as a service account, had had a mailbox created.</p>
<p>I had assumed (wrongly) that the TFS backup system relayed its alerts via the TFS alert system. It does not, though it does pick up the alert system’s settings in the wizard. This means that the email is sent using the account the backup service is set to run as, not the one used by TFS.</p>
<p>So as soon as I set my backup process to run as MYDOMAIN\TFSSERVICE (earlier in the wizard) the email test worked, and so did the whole backup alerting process.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF80070 when loading a team query in MS Project</title>
      <link>https://blog.richardfennell.net/posts/tf80070-when-loading-a-team-query-in-ms-project/</link>
      <pubDate>Wed, 24 Aug 2011 15:50:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf80070-when-loading-a-team-query-in-ms-project/</guid>
      <description>&lt;p&gt;I created a number of task work items in TFS 2010 using a list in Excel. This all appeared to work OK, and I could run a work item query in Team Explorer and see all my new work items. However, when I tried to open the work item query in Project I got a TF80070 error. It loaded the work items up to a point in the list, then failed and showed an error dialog.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I created a number of task work items in TFS 2010 using a list in Excel. This all appeared to work OK, and I could run a work item query in Team Explorer and see all my new work items. However, when I tried to open the work item query in Project I got a TF80070 error. It loaded the work items up to a point in the list, then failed and showed an error dialog.</p>
<p>I altered the query to remove the work item that it had failed on and it loaded without issue. After looking at the offending work item in more detail I saw the Title field had a carriage return in it. This did not bother Excel or Team Explorer, but Project obviously hated it. Once I edited the title (in Team Explorer) to remove the carriage return everything was fine in Project.</p>
<p>So if you see a TF80070 error, it might be a Project/Team Explorer patching issue as the forums suggest, but also check for invalid characters in query fields.</p>
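<p>If you have a lot of work items to check, a quick script can spot the offenders before Project chokes on them. A minimal sketch (the work item IDs and titles are made-up examples, and it assumes you have the titles exported somewhere, e.g. from the Excel list):</p>

```python
def titles_with_control_chars(work_items):
    """Given (id, title) pairs, return those whose title contains a
    carriage return, line feed, or any other control character."""
    return [(wi_id, title) for wi_id, title in work_items
            if any(ord(ch) < 32 for ch in title)]

# made-up work items; the second has an embedded carriage return in its title
items = [(101, "Fix login page"), (102, "Update site\rmap"), (103, "Tidy CSS")]
print(titles_with_control_chars(items))  # → [(102, 'Update site\rmap')]
```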
]]></content:encoded>
    </item>
    <item>
      <title>One week left to submit a session for DDDNorth</title>
      <link>https://blog.richardfennell.net/posts/one-week-left-to-submit-a-session-for-dddnorth/</link>
      <pubDate>Mon, 22 Aug 2011 09:08:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/one-week-left-to-submit-a-session-for-dddnorth/</guid>
      <description>&lt;p&gt;Any &lt;a href=&#34;http://developerdeveloperdeveloper.com/north/Default.aspx&#34;&gt;session submissions for DDDNorth&lt;/a&gt; have to be in by Friday the 26th.&lt;/p&gt;
&lt;p&gt;Go on you must have something you want to talk about……..&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Any <a href="http://developerdeveloperdeveloper.com/north/Default.aspx">session submissions for DDDNorth</a> have to be in by Friday the 26th.</p>
<p>Go on you must have something you want to talk about……..</p>
]]></content:encoded>
    </item>
    <item>
      <title>New release of TFS 2010 Power Tools</title>
      <link>https://blog.richardfennell.net/posts/new-release-of-tfs-2010-power-tools/</link>
      <pubDate>Sat, 20 Aug 2011 16:41:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-release-of-tfs-2010-power-tools/</guid>
      <description>&lt;p&gt;The latest set of TFS 2010 power tools are out, well worth updating if you are using the March release, and if not using any version a definite install.&lt;/p&gt;
&lt;p&gt;There are major updates across the board; summary details can be found on &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2011/08/19/august-11-tfs-power-tools-are-available.aspx&#34;&gt;Brian Harry’s blog&lt;/a&gt;, with the main release notes and &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f&#34;&gt;download on Visual Studio Gallery&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The latest set of TFS 2010 power tools are out, well worth updating if you are using the March release, and if not using any version a definite install.</p>
<p>There are major updates across the board; summary details can be found on <a href="http://blogs.msdn.com/b/bharry/archive/2011/08/19/august-11-tfs-power-tools-are-available.aspx">Brian Harry’s blog</a>, with the main release notes and <a href="http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f">download on Visual Studio Gallery</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows search not starting, making itself disabled on a reboot</title>
      <link>https://blog.richardfennell.net/posts/windows-search-not-starting-making-itself-disabled-on-a-reboot/</link>
      <pubDate>Fri, 19 Aug 2011 10:15:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-search-not-starting-making-itself-disabled-on-a-reboot/</guid>
      <description>&lt;p&gt;Since I got my new laptop I have had a problem where the Windows Search service in Windows 7 keeps setting itself to Disabled whenever I restart the PC. I usually notice this when I try to do a search in Outlook and I have to press enter in the search box to start a search; it no longer matches emails as soon as I start to type.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since I got my new laptop I have had a problem where the Windows Search service in Windows 7 keeps setting itself to Disabled whenever I restart the PC. I usually notice this when I try to do a search in Outlook and I have to press enter in the search box to start a search; it no longer matches emails as soon as I start to type.</p>
<p>If I load the services control panel I can enable the Windows Search service and start it manually, restart Outlook and everything is as I expect, but all a bit of a pain to do. I checked the Windows event logs, and it said nothing about Windows Search apart from my manual starting of the service.</p>
<p>It seems the issue was that I had set the service’s start-up action to <strong>Automatic – Delayed</strong>. When I set it to <strong>Automatic</strong> it starts OK. All I can assume is that something else in the start-up process is not completing soon enough, so the delayed-start search service never even tries to start. I am surprised there is nothing in the log, but it is working now!</p>
]]></content:encoded>
    </item>
    <item>
      <title>New TFS 2010 Community TFS Build Extension documentation – nUnit</title>
      <link>https://blog.richardfennell.net/posts/new-tfs-2010-community-tfs-build-extension-documentation-nunit/</link>
      <pubDate>Fri, 19 Aug 2011 08:17:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-tfs-2010-community-tfs-build-extension-documentation-nunit/</guid>
      <description>&lt;p&gt;I have just posted new ‘&lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20nUnit%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;how to use it 101’ documentation for the nUnit activity&lt;/a&gt; on the community extensions site.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just posted new ‘<a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20nUnit%20build%20activity&amp;referringTitle=Documentation">how to use it 101’ documentation for the nUnit activity</a> on the community extensions site.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New TFS 2010 Community TFS Build Extension documentation – TFSVersion</title>
      <link>https://blog.richardfennell.net/posts/new-tfs-2010-community-tfs-build-extension-documentation-tfsversion/</link>
      <pubDate>Thu, 18 Aug 2011 20:40:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-tfs-2010-community-tfs-build-extension-documentation-tfsversion/</guid>
      <description>&lt;p&gt;I have just posted new ‘&lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20TfsVersion%20build%20activity&amp;amp;referringTitle=Documentation&#34;&gt;how to use it 101’ documentation for the TFSVersion activity&lt;/a&gt; on the community extensions site.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just posted new ‘<a href="http://tfsbuildextensions.codeplex.com/wikipage?title=How%20to%20integrate%20the%20TfsVersion%20build%20activity&amp;referringTitle=Documentation">how to use it 101’ documentation for the TFSVersion activity</a> on the community extensions site.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where has my mouse cursor gone? Unable to record a video in Microsoft Test Manager</title>
      <link>https://blog.richardfennell.net/posts/where-has-my-mouse-cursor-done-unable-to-record-a-video-in-microsoft-test-manager/</link>
      <pubDate>Thu, 18 Aug 2011 14:13:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-has-my-mouse-cursor-done-unable-to-record-a-video-in-microsoft-test-manager/</guid>
      <description>&lt;p&gt;MTM has the feature that you can use Expression Media Encoder 4 to record the test run as a video. To enable this feature, after you install MTM, you have to install the basic version of Expression Encoder, and a few patches &lt;a href=&#34;http://msmvps.com/blogs/vstsblog/archive/2010/05/03/configure-microsoft-test-manager-for-video-recording.aspx&#34;&gt;see notes here for a list of files and the process&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I recently did this on a PC and tried to record a video. As soon as the recording process started the PC virtually stopped. It ran to 50%+ load on a dual core 2.3GHz CPU and the mouse disappeared. As soon as I stopped MTM (via task manager and the keyboard alone) it became responsive again. If I ran MTM without a video recording it was fine. It should be noted that if I ran the Expression Encoder (not via MTM) I got the same problem.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>MTM has the feature that you can use Expression Media Encoder 4 to record the test run as a video. To enable this feature, after you install MTM, you have to install the basic version of Expression Encoder, and a few patches <a href="http://msmvps.com/blogs/vstsblog/archive/2010/05/03/configure-microsoft-test-manager-for-video-recording.aspx">see notes here for a list of files and the process</a>.</p>
<p>I recently did this on a PC and tried to record a video. As soon as the recording process started the PC virtually stopped. It ran to 50%+ load on a dual core 2.3GHz CPU and the mouse disappeared. As soon as I stopped MTM (via task manager and the keyboard alone) it became responsive again. If I ran MTM without a video recording it was fine. It should be noted that if I ran the Expression Encoder (not via MTM) I got the same problem.</p>
<p>Turns out the problem was not the PC performance but the nVidia video drivers. Once I updated these to the current ones from the nVidia site it all worked as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More on using the StyleCop TFS 2010 Build Activity– handling settings files</title>
      <link>https://blog.richardfennell.net/posts/more-on-using-the-stylecop-tfs-2010-build-activity-handling-settings-files/</link>
      <pubDate>Tue, 16 Aug 2011 16:00:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-using-the-stylecop-tfs-2010-build-activity-handling-settings-files/</guid>
      <description>&lt;p&gt;In a recent build I wanted a bit more control over the rules used by StyleCop; in the past I have just tended to have the correct ruleset in the &lt;em&gt;Program Files\StyleCop&lt;/em&gt; directory and be done with that. This time I wanted to make sure different rules were associated with different solutions.&lt;/p&gt;
&lt;p&gt;The StyleCop build activity does allow for this; there is a property to set the path to the settings file. In my build process template I set this property as below, via an assignment activity&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In a recent build I wanted a bit more control over the rules used by StyleCop; in the past I have just tended to have the correct ruleset in the <em>Program Files\StyleCop</em> directory and be done with that. This time I wanted to make sure different rules were associated with different solutions.</p>
<p>The StyleCop build activity does allow for this; there is a property to set the path to the settings file. In my build process template I set this property as below, via an assignment activity</p>
<blockquote>
<p><em>StyleCopSettingsFile</em> = <em>String.Format(&quot;{0}\Settings.StyleCop&quot;, localProject.Substring(0, localProject.LastIndexOf(&quot;\&quot;)))</em></p></blockquote>
<p>so picking up the <em>settings.stylecop</em> file in the solution folder, but you could use any logic you need here, or just pass the fixed path to the settings file as a build process argument.</p>
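<p>For clarity, the same path logic as the workflow expression above, sketched in Python (the solution path shown is just an example):</p>

```python
def settings_file_for(local_project):
    """Mirror the workflow expression: take everything up to the last
    backslash of the solution/project path, then append Settings.StyleCop."""
    folder = local_project[:local_project.rfind("\\")]
    return folder + "\\Settings.StyleCop"

# example: a solution checked out to a build agent working folder
print(settings_file_for("C:\\Builds\\MySolution\\MySolution.sln"))
# → C:\Builds\MySolution\Settings.StyleCop
```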
<p>So I placed an edited <em>settings.stylecop</em> in the same folder as my .SLN file under source control and ran a build. However when it ran more rules were evaluated than I had expected, in fact all the rules from the default ruleset had been used.</p>
<p>What I had forgotten to do was set the merge rules for StyleCop. So I opened the <em>settings.stylecop</em> file in the StyleCop editor (installed when you install StyleCop on the PC)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_3E17414B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7DE127D0.png" title="image"></a></p>
<p>I then set the setting to not merge this ruleset with any other ones, and to always re-run all the results.</p>
<p>Once these changes were saved and the file checked back into TFS, StyleCop ran the rules I expected.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_48D498A0.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4F87A223.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Do you find editing TFS Build Process Templates slow?</title>
      <link>https://blog.richardfennell.net/posts/do-you-find-editing-tfs-build-process-templates-slow/</link>
      <pubDate>Tue, 09 Aug 2011 13:41:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/do-you-find-editing-tfs-build-process-templates-slow/</guid>
      <description>&lt;p&gt;… of course you do.&lt;/p&gt;
&lt;p&gt;Microsoft have today released a patch to improve performance and reliability of workflow designer which should help with the Build Process Template design surface.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/buckh/archive/2011/08/09/patch-to-improve-perf-and-reliability-of-the-workflow-designer.aspx&#34;&gt;See Buck Hodges blog for details&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>… of course you do.</p>
<p>Microsoft have today released a patch to improve performance and reliability of workflow designer which should help with the Build Process Template design surface.</p>
<p><a href="http://blogs.msdn.com/b/buckh/archive/2011/08/09/patch-to-improve-perf-and-reliability-of-the-workflow-designer.aspx">See Buck Hodges blog for details</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Still time to propose sessions for DDDNorth</title>
      <link>https://blog.richardfennell.net/posts/still-time-to-propose-sessions-for-dddnorth/</link>
      <pubDate>Mon, 08 Aug 2011 13:12:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/still-time-to-propose-sessions-for-dddnorth/</guid>
      <description>&lt;p&gt;It is great to see a good wide selection of &lt;a href=&#34;http://developerdeveloperdeveloper.com/north/&#34;&gt;proposed sessions for DDDNorth&lt;/a&gt;, many from new speakers to the DDD events, which is always nice to see.&lt;/p&gt;
&lt;p&gt;There is still time for you to get involved and propose a session. Go on you know you want to….&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is great to see a good wide selection of <a href="http://developerdeveloperdeveloper.com/north/">proposed sessions for DDDNorth</a>, many from new speakers to the DDD events, which is always nice to see.</p>
<p>There is still time for you to get involved and propose a session. Go on you know you want to….</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where do I find the product key for Team Explorer Everywhere?</title>
      <link>https://blog.richardfennell.net/posts/where-do-i-find-the-product-key-for-team-explorer-everywhere/</link>
      <pubDate>Wed, 03 Aug 2011 14:23:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-do-i-find-the-product-key-for-team-explorer-everywhere/</guid>
      <description>&lt;p&gt;When TEE is installed you have to provide a product key if you do not wish to run it in 90-day trial mode. Those of you used to using MSDN Subscriber downloads would guess you press the &lt;strong&gt;Key&lt;/strong&gt; button next to the &lt;strong&gt;Download&lt;/strong&gt; button and a product key will be provided. However this is not the case; all you get is the message that the product does not require a key.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When TEE is installed you have to provide a product key if you do not wish to run it in 90-day trial mode. Those of you used to using MSDN Subscriber downloads would guess you press the <strong>Key</strong> button next to the <strong>Download</strong> button and a product key will be provided. However this is not the case; all you get is the message that the product does not require a key.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5FDA969D.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3FBF89E0.png" title="image"></a></p>
<p>The answer is actually simple, you are just in the wrong place. You need to go back to the subscriptions menu and look in ‘My Product Keys’</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_06A8ACDE.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_668DA020.png" title="image"></a></p>
<p>Scroll down and you will find your TEE key in the list</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2D76C31E.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_265786A6.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Linking a TFS work item to a specific version of a document in SharePoint</title>
      <link>https://blog.richardfennell.net/posts/linking-a-tfs-work-item-to-a-specific-version-of-a-document-in-sharepoint/</link>
      <pubDate>Tue, 02 Aug 2011 13:39:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/linking-a-tfs-work-item-to-a-specific-version-of-a-document-in-sharepoint/</guid>
      <description>&lt;p&gt;SharePoint, in my opinion, is a better home for a Word or Visio requirements document than TFS. You can use all the SharePoint document workspace features to allow collaboration in the production of the document. When you have done enough definition to create your project’s user stories or requirements, you can create them in TFS using whatever client you wish e.g. Visual Studio, Excel, Project etc.&lt;/p&gt;
&lt;p&gt;You can add a Hyperlink from each of these work items back to the SharePoint hosted document they relate to, so you still retain the single version of the source document. The thing to note here is that you don’t have to link to the last version of the document. If SharePoint’s revision control is enabled for the document library you can refer to any stored version. Thus allowing the specification document to continue evolving for future releases whilst the development team are still able to reference the specific version their requirements are based on.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>SharePoint, in my opinion, is a better home for a Word or Visio requirements document than TFS. You can use all the SharePoint document workspace features to allow collaboration in the production of the document. When you have done enough definition to create your project’s user stories or requirements, you can create them in TFS using whatever client you wish e.g. Visual Studio, Excel, Project etc.</p>
<p>You can add a Hyperlink from each of these work items back to the SharePoint hosted document they relate to, so you still retain the single version of the source document. The thing to note here is that you don’t have to link to the last version of the document. If SharePoint’s revision control is enabled for the document library you can refer to any stored version. Thus allowing the specification document to continue evolving for future releases whilst the development team are still able to reference the specific version their requirements are based on.</p>
<p>The process to do this is as follows..</p>
<p>Open your version history enabled document library, select the dropdown for a document and select version history</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_53C6CDA4.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_391A318B.png" title="image"></a></p>
<p>If you copy the hyperlink for the 4.0 version of the document you get an ordinary Url link “…<em>/BlackMarble/SharePoint Engagement Document.docx</em>”</p>
<p>If you copy the hyperlink for the 2.0 version of the document you get a Url like this, with a version in it: “…<em>/_vti_history/1024/Black Marble/SharePoint Engagement Document.docx</em>”</p>
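<p>The number in the <em>_vti_history</em> path appears to be SharePoint’s internal version ID, which seems to be major version × 512 + minor version (so version 2.0 gives 1024). If that holds, you can construct a link to any stored version; a quick sketch, with a made-up site and file name:</p>

```python
def history_url(site_url, relative_path, major, minor=0):
    """Build a link to a specific stored version of a document, assuming
    SharePoint encodes the version as major*512 + minor (so 2.0 -> 1024)."""
    version_id = major * 512 + minor
    return f"{site_url}/_vti_history/{version_id}/{relative_path}"

# made-up example values
print(history_url("https://sharepoint.example.com/sites/docs",
                  "Black Marble/SharePoint Engagement Document.docx", 2))
```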
<p>You can paste these into the ‘Add link to requirement’ dialog as often as required</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2DF0A741.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_31FAF513.png" title="image"></a></p>
<p>So there is a link to each revision of the document</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0DD59A84.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_78E41810.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD in t’North</title>
      <link>https://blog.richardfennell.net/posts/ddd-in-tnorth/</link>
      <pubDate>Wed, 27 Jul 2011 08:54:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-in-tnorth/</guid>
      <description>&lt;p&gt;It was announced at DDDSW that there will be a &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/north/&#34;&gt;DDD event on the 8th October 2011 in Sunderland, DDD North&lt;/a&gt;. As of today, session submission is now open. Why not propose one yourself?&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.andrewwestgarth.co.uk/blog/image.axd?picture=dddSunderlandWPathenon.png&#34;&gt;&lt;img alt=&#34;dddSunderlandWPathenon&#34; loading=&#34;lazy&#34; src=&#34;http://www.andrewwestgarth.co.uk/blog/image.axd?picture=dddSunderlandWPathenon_thumb.png&#34; title=&#34;dddSunderlandWPathenon&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It was announced at DDDSW that there will be a <a href="http://www.developerdeveloperdeveloper.com/north/">DDD event on the 8th October 2011 in Sunderland, DDD North</a>. As of today, session submission is now open. Why not propose one yourself?</p>
<p><a href="http://www.andrewwestgarth.co.uk/blog/image.axd?picture=dddSunderlandWPathenon.png"><img alt="dddSunderlandWPathenon" loading="lazy" src="http://www.andrewwestgarth.co.uk/blog/image.axd?picture=dddSunderlandWPathenon_thumb.png" title="dddSunderlandWPathenon"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>More tips and tricks using my Typemock custom build activity with TFS 2010 build</title>
      <link>https://blog.richardfennell.net/posts/more-tips-and-tricks-using-my-typemock-custom-build-activity-with-tfs-2010-build/</link>
      <pubDate>Tue, 26 Jul 2011 12:54:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-tips-and-tricks-using-my-typemock-custom-build-activity-with-tfs-2010-build/</guid>
      <description>&lt;p&gt;Every time I add the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/wikipage?title=TeamBuild%202010%20Activity%20to%20run%20Typemock%20Isolator%20based%20tests&amp;amp;referringTitle=Home&#34;&gt;Typemock Isolator custom activity to a TFS 2010 build&lt;/a&gt; I learn something new that eases the process. In the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx&#34;&gt;past I have posted&lt;/a&gt; on the basic process to get the activity into your build, and I would also draw your attention to the &lt;a href=&#34;http://rabcg.codeplex.com/&#34;&gt;ALM rangers guide to build customisation&lt;/a&gt;, which provides loads of useful information on this front.&lt;/p&gt;
&lt;p&gt;Today, when I added the activity to a build, I made the following improvements to the settings to make life a bit easier. This build is one that used wildcard scanning for assemblies containing tests, with a test configuration file (this is the red &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx&#34;&gt;usage in the documentation&lt;/a&gt;; it will make sense if you read the documentation)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Every time I add the <a href="http://tfsbuildextensions.codeplex.com/wikipage?title=TeamBuild%202010%20Activity%20to%20run%20Typemock%20Isolator%20based%20tests&amp;referringTitle=Home">Typemock Isolator custom activity to a TFS 2010 build</a> I learn something new that eases the process. In the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx">past I have posted</a> on the basic process to get the activity into your build, and I would also draw your attention to the <a href="http://rabcg.codeplex.com/">ALM rangers guide to build customisation</a>, which provides loads of useful information on this front.</p>
<p>Today, when I added the activity to a build I made the following improvements to the settings to make life a bit easier. This build is one that uses wildcard scanning for assemblies containing tests, with a test configuration file (this is the red <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx">usage in the documentation</a>; it will make sense if you read the documentation)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_76441BB7.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_360E023D.png" title="image"></a></p>
<p>So, the changes over and above the usual configuration:</p>
<ol>
<li>In the project used to edit the build process workflow I added a reference to the <strong>Microsoft.TeamFoundation.Client</strong> assembly (found in <strong>C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0\Microsoft.TeamFoundation.Client.dll</strong>)</li>
<li>This meant I could replace the hard-coded TPC URL property <strong>ProjectCollection</strong> on the ExternalTestRunner activity with <strong>BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString()</strong></li>
<li>I noticed you have to explicitly set the build definition’s configuration and platform. If you only set the target solution and let these default, the test results are not published. This, I suppose, is a bug in the activity, but not one I am rushing to look at, as I would normally set these values anyway</li>
</ol>
<p>So for my usage of the custom activity the properties are</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2E104A01.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6007EA8B.png" title="image"></a></p>
<p>As you can see, these are nicely generic now, with no project-based hardcoded values.</p>
]]></content:encoded>
    </item>
    <item>
      <title>More on running multiple TFS build controllers on a single VM</title>
      <link>https://blog.richardfennell.net/posts/more-on-running-multiple-tfs-build-controllers-on-a-single-vm/</link>
      <pubDate>Mon, 25 Jul 2011 15:05:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-running-multiple-tfs-build-controllers-on-a-single-vm/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/20/now-i-have-three-tfs-build-instances-on-my-vm.aspx&#34;&gt;have had an on-going project to run multiple build controllers on a single VM&lt;/a&gt;. Today I needed to reconfigure one of the controllers to point at a different TPC. You have to do this the correct way to avoid problems.&lt;/p&gt;
&lt;p&gt;My error log was full of&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Http communication failure:&lt;br&gt;
Exception Message: Cannot listen on pipe name &amp;lsquo;net.pipe://build/ServiceHost/1&amp;rsquo; because another pipe endpoint is already listening on that name. (type AddressAlreadyInUseException)&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/20/now-i-have-three-tfs-build-instances-on-my-vm.aspx">have had an on-going project to run multiple build controllers on a single VM</a>. Today I needed to reconfigure one of the controllers to point at a different TPC. You have to do this the correct way to avoid problems.</p>
<p>My error log was full of</p>
<blockquote>
<p><em>Http communication failure:<br>
Exception Message: Cannot listen on pipe name &lsquo;net.pipe://build/ServiceHost/1&rsquo; because another pipe endpoint is already listening on that name. (type AddressAlreadyInUseException)</em></p></blockquote>
<p>It all boiled down to the fact I ended up with two controllers trying to use the same URI, in my case</p>
<blockquote>
<p><em>vstfs:///Build/ServiceHost/1</em></p></blockquote>
<p>The number at the end of this URI is assigned when the controller is registered with a TPC. If, as I did, you just stop a controller and edit its properties to point at another TPC and restart it, it is possible to end up with two controllers on the same box trying to use the same ID.</p>
<p>The simple fix is to unregister the build controller and then register it with the new TPC as needed. This causes the machine to be scanned and a new, unused ID to be chosen for the URI, as detailed in <a href="http://blogs.msdn.com/b/jimlamb/archive/2010/04/13/configuring-multiple-tfs-build-services-on-one-machine.aspx">Jim Lamb’s original post</a>.</p>
<p>As a side effect I also saw errors in the log saying custom activity assemblies could not be loaded due to permission errors. This turned out to be because the custom activities are stored in</p>
<blockquote>
<p><em>C:\Users\[tfs build account]\AppData\Local\Temp\BuildAgent\[agent ID]</em></p></blockquote>
<p>So if two agents have the same ID, even if one’s parent controller is failing to load fully, it will tend to lock the files for the other controller. Again, this was fixed by registering the controller and agents in the correct manner.</p>
]]></content:encoded>
    </item>
    <item>
      <title>WP7 update fails when using laptop docking station, works when direct to laptop</title>
      <link>https://blog.richardfennell.net/posts/wp7-update-fails-when-using-laptop-docking-station-works-when-direct-to-laptop/</link>
      <pubDate>Fri, 22 Jul 2011 15:55:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/wp7-update-fails-when-using-laptop-docking-station-works-when-direct-to-laptop/</guid>
      <description>&lt;p&gt;When I plugged in my WP7 LG-E900 today it told me there was an update. When I tried to install this it failed, twice. So I changed USB cable and plugged the phone directly into a port on my laptop, rather than using my usual phone-syncing cable plugged into my laptop's docking station, and it worked.&lt;/p&gt;
&lt;p&gt;So I am not sure if it is a USB cable quality issue or a USB port issue; just something to remember.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When I plugged in my WP7 LG-E900 today it told me there was an update. When I tried to install this it failed, twice. So I changed USB cable and plugged the phone directly into a port on my laptop, rather than using my usual phone-syncing cable plugged into my laptop's docking station, and it worked.</p>
<p>So I am not sure if it is a USB cable quality issue or a USB port issue; just something to remember.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Follow up to last nights session on mocking</title>
      <link>https://blog.richardfennell.net/posts/follow-up-to-last-nights-session-on-mocking/</link>
      <pubDate>Fri, 22 Jul 2011 11:17:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-up-to-last-nights-session-on-mocking/</guid>
      <description>&lt;p&gt;Thanks to everyone who came to my session at NxtGen in Southampton last night, and congratulations to the people who won Typemock Isolator licenses kindly provided by &lt;a href=&#34;http://www.typemock.com&#34;&gt;Typemock&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The session was meant to be based on TDD and Mocking, quite a big subject for 90 minutes. I of course had to gloss over some areas that are interesting around the edges of this subject. As I mentioned, there are &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/pages/videos-of-my-presentations.aspx&#34;&gt;videos of related sessions on this blog&lt;/a&gt;; also here are links on other subjects I touched on…&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who came to my session at NxtGen in Southampton last night, and congratulations to the people who won Typemock Isolator licenses kindly provided by <a href="http://www.typemock.com">Typemock</a>.</p>
<p>The session was meant to be based on TDD and Mocking, quite a big subject for 90 minutes. I of course had to gloss over some areas that are interesting around the edges of this subject. As I mentioned, there are <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/pages/videos-of-my-presentations.aspx">videos of related sessions on this blog</a>; also here are links on other subjects I touched on…</p>
<ul>
<li>using <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/06/22/logging-everything-that-is-going-on-when-an-assembly-loads-using-cthru.aspx">Typemock for AOP style logging, there is a post on doing this with Cthru</a> </li>
<li>how you can mock out IIS using <a href="http://www.sm-art.biz/Ivonna.aspx">Ivonna with Isolator</a></li>
<li>a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx">TFS 2010 custom build activity to allow Typemock tests</a> to be run in the build process.</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>The community - a route to free training</title>
      <link>https://blog.richardfennell.net/posts/the-community-a-route-to-free-training/</link>
      <pubDate>Thu, 21 Jul 2011 09:21:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-community-a-route-to-free-training/</guid>
      <description>&lt;p&gt;I presented last night at the West Yorkshire BCS and received a number of comments that it was great to see free training being available. This for me is one of the most important areas of our community, and an area we are very lucky with. There are plenty of opportunities to access knowledge…..&lt;/p&gt;
&lt;p&gt;Of late we have seen the big vendor conferences (such as Microsoft’s &lt;a href=&#34;http://northamerica.msteched.com/?fbid=nVlvC7mV6Nf&#34;&gt;TechEd&lt;/a&gt;, &lt;a href=&#34;http://channel9.msdn.com/events/PDC/PDC10&#34;&gt;PDC&lt;/a&gt; and &lt;a href=&#34;http://live.visitmix.com/&#34;&gt;MIX&lt;/a&gt;) and the platform-independent developer conferences (such as &lt;a href=&#34;http://www.ndc2011.no/&#34;&gt;NDC&lt;/a&gt;) streaming their sessions live for free and also making their sessions available for free download within a few days. &lt;a href=&#34;http://channel9.msdn.com/&#34;&gt;Microsoft’s Channel 9&lt;/a&gt; is also an excellent resource of conference sessions and specially made material on a vast range of subjects, from &lt;a href=&#34;http://channel9.msdn.com/shows/Going&amp;#43;Deep/Erik-Meijer-and-Matthew-Podwysocki-Perspectives-on-Functional-Programming/&#34;&gt;language design&lt;/a&gt; to &lt;a href=&#34;http://channel9.msdn.com/Series/CampusTours&#34;&gt;scheduling of the buses on the Microsoft campus&lt;/a&gt;. Or if you prefer to listen to something whilst you drive, why not try the &lt;a href=&#34;http://www.dotnetrocks.com/&#34;&gt;.Net Rocks podcast&lt;/a&gt;. There are plenty more specialist resources out there on the web.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I presented last night at the West Yorkshire BCS and received a number of comments that it was great to see free training being available. This for me is one of the most important areas of our community, and an area we are very lucky with. There are plenty of opportunities to access knowledge…</p>
<p>Of late we have seen the big vendor conferences (such as Microsoft’s <a href="http://northamerica.msteched.com/?fbid=nVlvC7mV6Nf">TechEd</a>, <a href="http://channel9.msdn.com/events/PDC/PDC10">PDC</a> and <a href="http://live.visitmix.com/">MIX</a>) and the platform-independent developer conferences (such as <a href="http://www.ndc2011.no/">NDC</a>) streaming their sessions live for free and also making their sessions available for free download within a few days. <a href="http://channel9.msdn.com/">Microsoft’s Channel 9</a> is also an excellent resource of conference sessions and specially made material on a vast range of subjects, from <a href="http://channel9.msdn.com/shows/Going&#43;Deep/Erik-Meijer-and-Matthew-Podwysocki-Perspectives-on-Functional-Programming/">language design</a> to <a href="http://channel9.msdn.com/Series/CampusTours">scheduling of the buses on the Microsoft campus</a>. Or if you prefer to listen to something whilst you drive, why not try the <a href="http://www.dotnetrocks.com/">.Net Rocks podcast</a>. There are plenty more specialist resources out there on the web.</p>
<p>At the national level in the UK we also have vendor events such as <a href="http://uktechdays.cloudapp.net/techdays-live.aspx">Microsoft UK Techdays</a>, and company-run events like my own company’s <a href="http://www.blackmarble.com/SectionDisplay.aspx?name=Events">free Black Marble events</a>.</p>
<p>Outside company structures we have community events like <a href="http://developerdeveloperdeveloper.com/home/">DDD</a> (general .NET interest) and <a href="http://sqlbits.com/">SQLBits</a> (SQL specific). These are free community-run conferences where members of the community submit session ideas; these are voted on by the community and a free Saturday conference is run, usually using the 20 most popular sessions. Again, increasingly some of these sessions are videoed and made <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/pages/videos-of-my-presentations.aspx">available on the web</a>.</p>
<p>At the even more local level there are always usergroups. In Leeds we have the <a href="http://www.bcs.org/category/14985">BCS</a>, <a href="http://www.agileyorkshire.org/">Agile Yorkshire</a>, <a href="http://www.sqlserverfaq.com/">Leeds SQL Usergroup</a>, <a href="http://geekup.org/">GeekUp</a> and more I am sure I have not heard of. This pattern is repeated across the UK; it is just a matter of searching to see what is near you.</p>
<p>So there is no excuse for not trying to keep up to date, whether you want to learn via the web or in person. Why not have a look?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Solution to ‘Missing requirement: Shared profile 1.0.0.1308118925849’ error when installation TEE SP1 on Eclipse Indigo</title>
      <link>https://blog.richardfennell.net/posts/solution-to-missing-requirement-shared-profile-1-0-0-1308118925849-error-when-installation-tee-sp1-on-eclipse-indigo/</link>
      <pubDate>Mon, 18 Jul 2011 19:16:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/solution-to-missing-requirement-shared-profile-1-0-0-1308118925849-error-when-installation-tee-sp1-on-eclipse-indigo/</guid>
      <description>&lt;p&gt;&lt;strong&gt;[Updated 1 Mar 2012 - This should only affect you if Eclipse is unzipped into your &amp;lsquo;c:\program files&amp;rsquo; folder structure]&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;During my new laptop build I have had to reinstall Eclipse, so I took the chance to upgrade to the &lt;a href=&#34;http://www.eclipse.org/downloads/&#34;&gt;Indigo&lt;/a&gt; release. When I tried to install the TFS 2010 &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-us/products/2010-editions/team-explorer-everywhere&#34;&gt;Team Explorer Everywhere SP1&lt;/a&gt; plug-in I got the error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Cannot complete the install because one or more required items could not be found.&lt;br&gt;
  Software currently installed: Shared profile 1.0.0.1308118925849 (SharedProfile_epp.package.java 1.0.0.1308118925849)&lt;br&gt;
  Missing requirement: Shared profile 1.0.0.1308118925849 (SharedProfile_epp.package.java 1.0.0.1308118925849) requires &amp;lsquo;org.maven.ide.eclipse [1.0.0.20110607-2117]&amp;rsquo; but it could not be found&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>[Updated 1 Mar 2012 - This should only affect you if Eclipse is unzipped into your &lsquo;c:\program files&rsquo; folder structure]</strong></p>
<p>During my new laptop build I have had to reinstall Eclipse, so I took the chance to upgrade to the <a href="http://www.eclipse.org/downloads/">Indigo</a> release. When I tried to install the TFS 2010 <a href="http://www.microsoft.com/visualstudio/en-us/products/2010-editions/team-explorer-everywhere">Team Explorer Everywhere SP1</a> plug-in I got the error</p>
<p><em>Cannot complete the install because one or more required items could not be found.<br>
  Software currently installed: Shared profile 1.0.0.1308118925849 (SharedProfile_epp.package.java 1.0.0.1308118925849)<br>
  Missing requirement: Shared profile 1.0.0.1308118925849 (SharedProfile_epp.package.java 1.0.0.1308118925849) requires &lsquo;org.maven.ide.eclipse [1.0.0.20110607-2117]&rsquo; but it could not be found</em></p>
<p>This stumped me for a while, but after a bit of searching on Eclipse forums I found this was a problem common to installing other plug-ins, not just TEE, so not a TEE dependency as the error suggests.</p>
<p>The issue was that <a href="http://eastmond.org/blog/?p=29#comments">you have to be running Eclipse as an administrator to install the plug-in</a>. Judging from the forums this has tripped a few people up with plug-ins on the Indigo release; it seems this was not the case with previous releases such as Helios.</p>
<p>Once you have installed the plug-in as administrator you can restart Eclipse in standard mode and connect to your TFS server as you would expect.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Lenovo W520 – one week on</title>
      <link>https://blog.richardfennell.net/posts/lenovo-w520-one-week-on/</link>
      <pubDate>Mon, 18 Jul 2011 10:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lenovo-w520-one-week-on/</guid>
      <description>&lt;p&gt;I have had my Lenovo W520 a week now and must say I am very happy with it. I am still hitting the wrong keys a good deal of the time, the only problem being that the Fn and Ctrl keys are reversed in position from my old Acer; it is a good job Fn-V and Fn-C do nothing dangerous!&lt;/p&gt;
&lt;p&gt;The keyboard is very nice to type on, a far more solid feel than any previous laptop I have had.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have had my Lenovo W520 a week now and must say I am very happy with it. I am still hitting the wrong keys a good deal of the time, the only problem being that the Fn and Ctrl keys are reversed in position from my old Acer; it is a good job Fn-V and Fn-C do nothing dangerous!</p>
<p>The keyboard is very nice to type on, a far more solid feel than any previous laptop I have had.</p>
<p>The performance is great, but you would expect it to be good with a Core i7 and 16GB.</p>
<p>But the most impressive thing has been the battery life: a good five hours, even running a TFS instance on Windows 7 and having VS2010 and Office open as I do most days. I had not expected that; the battery-extender bits seem to do the job very well.</p>
<p>So all in all very happy; it seems an excellent desktop replacement. I just need to find the time to get a Lab Management demo rig onto it to check it out under a really big load.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving a VHD boot disk to Hyper-V</title>
      <link>https://blog.richardfennell.net/posts/moving-a-vhd-boot-disk-to-hyper-v/</link>
      <pubDate>Wed, 13 Jul 2011 10:30:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-a-vhd-boot-disk-to-hyper-v/</guid>
      <description>&lt;p&gt;I have just replaced my old Acer laptop with a rather nice Lenovo W520. This has plenty of memory and is able to run Hyper-V. In the past for TFS demos I had used &lt;a href=&#34;http://technet.microsoft.com/en-us/library/dd799282%28WS.10%29.aspx&#34;&gt;boot from VHD&lt;/a&gt; to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/17/windows-7-boot-from-vhd.aspx&#34;&gt;boot the Acer&lt;/a&gt; into Windows 2K8, as the Acer could not run Hyper-V due to lack of hardware virtualisation support in the BIOS. So I had a fully configured boot VHD that I wanted to move to Hyper-V.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just replaced my old Acer laptop with a rather nice Lenovo W520. This has plenty of memory and is able to run Hyper-V. In the past for TFS demos I had used <a href="http://technet.microsoft.com/en-us/library/dd799282%28WS.10%29.aspx">boot from VHD</a> to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/17/windows-7-boot-from-vhd.aspx">boot the Acer</a> into Windows 2K8, as the Acer could not run Hyper-V due to lack of hardware virtualisation support in the BIOS. So I had a fully configured boot VHD that I wanted to move to Hyper-V.</p>
<p>My first thought was that I could just use the <a href="http://technet.microsoft.com/en-us/library/cc764232.aspx">P2V system built into SCVMM</a>. I ran the wizard, connected to my VHD-booted Acer laptop, provided the details asked for, and all looked good until the last screen, when I got the error</p>
<blockquote>
<p><em>Error (13256):<br>
The disk with index 1 on source machine 192.168.100.30 is an attached virtual hard disk. This configuration is not supported for physical-to-virtual conversions.</em></p></blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_317AB598.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_2A5B7920.png" title="image"></a></p>
<p>So that was a non-starter.</p>
<p>So I just copied the VHD to my new Hyper-V server and created a new Hyper-V VM using it. However, when I tried to boot this it said I had no boot sector. When you think about it this is not unreasonable, as this VHD had always been booted off my laptop’s primary disk boot partition. The process to fix this was as follows (thanks to <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/">Rik</a> for some help here)</p>
<ol>
<li>Mount the VHD directly onto my Windows 2K8 Hyper-V host via the Disk Manager in the admin tools. Don’t bother to assign a drive letter.</li>
<li>Select the Windows 2K8 partition on the VHD and shrink it by 110Mb; this is to create space for the boot partition (I suppose you could use a VHD resizer tool, but from my experience that would be slower, as it rewrites the whole VHD; the shrink is very quick)</li>
<li>In the new gap, create a simple partition and format it as NTFS with the same ‘System Reserved’ label</li>
<li>Dismount the VHD</li>
<li>Start the Hyper-V VM using the edited VHD; you will still get the no boot device error</li>
<li>Attach a Windows 7 DVD ISO and restart the VM, it should boot into the Windows setup. On the first screen press Shift F10 to get the command prompt.</li>
<li>You should be on drive X: and see a drive C: (your old VHD partition) and D: (the newly created one)</li>
<li>Run the command <strong>bcdboot c:\windows /s d:</strong> to create the boot partition</li>
<li>Load <strong>diskpart</strong> and run (probably) the following commands</li>
</ol>
<ul>
<li>select the VHD disk – <strong>sel disk 0</strong></li>
<li>list the partitions – <strong>list part</strong></li>
<li>select the new partition – <strong>sel part 2</strong></li>
<li>make it active – <strong>active</strong></li>
<li>exit diskpart – <strong>exit</strong></li>
</ul>
<ol start="10">
<li>As a check you can run the command <strong>bcdedit</strong> to see that it added something (this command would have returned nothing prior to bcdboot being run)</li>
</ol>
<p>You should now be able to restart the VM and it should boot using the installed Windows 2K8 partition. As it has changed hardware it will probably want to reboot a few times as drivers are updated.</p>
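<p>For reference, here is the whole boot-repair sequence from the steps above, collected in one place as typed at the Windows setup command prompt. This is only a sketch: the drive letters and the partition number are as they appeared in my VM and may well differ in yours, so check with <strong>list part</strong> before marking anything active.</p>

```
REM C: = the existing Windows 2K8 partition, D: = the new 'System Reserved' partition
bcdboot c:\windows /s d:

REM Mark the new partition active from within diskpart.
diskpart
REM At the DISKPART> prompt:
REM   sel disk 0    - select the VHD disk
REM   list part     - list its partitions
REM   sel part 2    - select the new boot partition
REM   active        - mark it active
REM   exit

REM Confirm a boot entry now exists
bcdedit
```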
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at usergroups in Leeds and Southampton</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-usergroups-in-leeds-and-southampton/</link>
      <pubDate>Mon, 11 Jul 2011 10:39:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-usergroups-in-leeds-and-southampton/</guid>
      <description>&lt;p&gt;Next week I am speaking at:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;a href=&#34;http://www.bcs.org/content/conWebDoc/40784&#34;&gt;West Yorkshire BCS&lt;/a&gt; on “Application Lifecycle Management - supporting the software development process from inception to retirement” , Wednesday 20 July 2011, 6.30pm (refreshments available from 5.45pm) at NTI Leeds, Old Broadcasting House, 148 Woodhouse Lane, Leeds, LS2 9EN.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;and&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&#34;http://www.nxtgenug.net/ViewEvent.aspx?EventID=415&#34;&gt;NxtGen Southampton&lt;/a&gt; on “TDD &amp;amp; Mocking, a love affair”, Thursday, July 21, 2011 , 7pm,  St Andrew&amp;rsquo;s Hall, Avenue St Andrew&amp;rsquo;s URC, SOUTHAMPTON, SO17 1XQ&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Next week I am speaking at:</p>
<blockquote>
<p>The <a href="http://www.bcs.org/content/conWebDoc/40784">West Yorkshire BCS</a> on “Application Lifecycle Management - supporting the software development process from inception to retirement”, Wednesday 20 July 2011, 6.30pm (refreshments available from 5.45pm) at NTI Leeds, Old Broadcasting House, 148 Woodhouse Lane, Leeds, LS2 9EN.</p></blockquote>
<p>and</p>
<blockquote>
<p><a href="http://www.nxtgenug.net/ViewEvent.aspx?EventID=415">NxtGen Southampton</a> on “TDD &amp; Mocking, a love affair”, Thursday, July 21, 2011, 7pm, St Andrew&rsquo;s Hall, Avenue St Andrew&rsquo;s URC, SOUTHAMPTON, SO17 1XQ</p></blockquote>
<p>Maybe see you at one of these events, but given the distance between the venues I doubt it will be at both!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Workaround to connect to a TFS Lab Environment from outside a TMG firewall</title>
      <link>https://blog.richardfennell.net/posts/workaround-to-connect-to-a-tfs-lab-environment-from-outside-a-tmg-firewall/</link>
      <pubDate>Fri, 08 Jul 2011 11:56:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/workaround-to-connect-to-a-tfs-lab-environment-from-outside-a-tmg-firewall/</guid>
      <description>&lt;p&gt;Whilst on the road I have needed to access our &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/ee712698.aspx&#34;&gt;Lab Management&lt;/a&gt; system via our &lt;a href=&#34;http://www.microsoft.com/forefront/threat-management-gateway/en/us/default.aspx&#34;&gt;TMG firewall&lt;/a&gt;, through which we expose our TFS 2010 for remote users (via SSL). When I load Microsoft Test Manager (MTM) I can connect to the TFS server, as expected, and go into ‘Lab Center’ mode. I can see my project’s environments and can start, stop and deploy them without issue (all communication is routed via our TFS server). However, the MTM environment viewer fails to make a connection to the test VMs in the environments.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst on the road I have needed to access our <a href="http://msdn.microsoft.com/en-us/vstudio/ee712698.aspx">Lab Management</a> system via our <a href="http://www.microsoft.com/forefront/threat-management-gateway/en/us/default.aspx">TMG firewall</a>, through which we expose our TFS 2010 for remote users (via SSL). When I load Microsoft Test Manager (MTM) I can connect to the TFS server, as expected, and go into ‘Lab Center’ mode. I can see my project’s environments and can start, stop and deploy them without issue (all communication is routed via our TFS server). However, the MTM environment viewer fails to make a connection to the test VMs in the environments.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0FBB4339.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_2960DB90.png" title="image"></a></p>
<p>MTM environment viewer can connect to an environment in two ways:</p>
<ul>
<li>Host connection – via the Hyper-V management protocols</li>
<li>Guest connection – via a RDP session to the VM’s operating system</li>
</ul>
<p>From outside our firewall a host connection is not an option, as the required ports are not open. So my only option was a guest connection. However, our TMG firewall is set to provide an RD gateway, effectively a proxy for RDP sessions. You have to configure RDP to use this, and authenticate with the gateway prior to authenticating with the actual target remote machine.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_25FF43E8.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7A8AFD20.png" title="image"></a></p>
<p>The problem is MTM does not support the use of TMG RD Gateways.</p>
<p>However there is a solution. If I right click on the VM in MTM Environment Viewer you can launch a standard remote desktop session.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_09FDE923.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0F0026D2.png" title="image"></a></p>
<p>If you do this you will be prompted to authenticate twice: firstly with your domain account, to authenticate with the TMG RD gateway, then with other credentials for the test VM.</p>
<p>So a reasonable workaround, if a VPN or <a href="http://www.microsoft.com/windowsserver2008/en/us/directaccess.aspx">TMG Direct Access</a> is not an option for you.</p>
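<p>As an aside, the same gateway settings can be stored in a saved .rdp file so the standard Remote Desktop client goes via the gateway every time. A minimal fragment might look like the following; the host names here are placeholders for illustration, not our real servers.</p>

```
full address:s:testvm.lab.local
gatewayhostname:s:rdgateway.example.com
gatewayusagemethod:i:1
gatewayprofileusagemethod:i:1
gatewaycredentialssource:i:0
```

<p>Here <em>gatewayusagemethod:i:1</em> forces the connection through the gateway, and <em>gatewaycredentialssource:i:0</em> prompts for a password, giving the two-stage authentication just described.</p>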
]]></content:encoded>
    </item>
    <item>
      <title>TF30162: Task &amp;quot;BuildTask&amp;quot; from Group &amp;quot;Build&amp;quot; failed – when creating a team project</title>
      <link>https://blog.richardfennell.net/posts/tf30162-task-buildtask-from-group-build-failed-when-creating-a-team-project/</link>
      <pubDate>Tue, 05 Jul 2011 11:00:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf30162-task-buildtask-from-group-build-failed-when-creating-a-team-project/</guid>
      <description>&lt;p&gt;When trying to create a new Team Project on my test TFS 2010 basic installation I got the error&lt;/p&gt;
&lt;p&gt;Time: 2011-07-05T11:43:36&lt;br&gt;
Module: Engine&lt;br&gt;
Event Description: TF30162: Task &amp;ldquo;BuildTask&amp;rdquo; from Group &amp;ldquo;Build&amp;rdquo; failed&lt;br&gt;
Exception Type: Microsoft.TeamFoundation.Client.PcwException&lt;br&gt;
Exception Message: Multiple identities found matching workspace name &amp;lsquo;TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC)&amp;rsquo; and owner name &amp;lsquo;BLACKMARBLEfez&amp;rsquo;. Please specify one of the following workspace specs:&lt;br&gt;
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLEfez&lt;br&gt;
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLEfez&lt;br&gt;
Stack Trace:&lt;br&gt;
   at Microsoft.VisualStudio.TeamFoundation.Build.ProjectComponentCreator.ExecuteInternal(ProjectCreationContext context, XmlNode taskXml, Boolean validationOnly)&lt;br&gt;
   at Microsoft.VisualStudio.TeamFoundation.Build.ProjectComponentCreator.Execute(ProjectCreationContext context, XmlNode taskXml)&lt;br&gt;
   at Microsoft.VisualStudio.TeamFoundation.ProjectCreationEngine.TaskExecutor.PerformTask(IProjectComponentCreator componentCreator, ProjectCreationContext context, XmlNode taskXml)&lt;br&gt;
   at Microsoft.VisualStudio.TeamFoundation.ProjectCreationEngine.RunTask(Object taskObj)&lt;br&gt;
--   Inner Exception   --&lt;br&gt;
Exception Message: Multiple identities found matching workspace name &amp;lsquo;TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC)&amp;rsquo; and owner name &amp;lsquo;BLACKMARBLE\fez&amp;rsquo;. Please specify one of the following workspace specs:&lt;br&gt;
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLE\fez&lt;br&gt;
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLE\fez (type MultipleWorkspacesFoundException)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When trying to create a new Team Project on my test TFS 2010 basic installation I got the error</p>
<p>Time: 2011-07-05T11:43:36<br>
Module: Engine<br>
Event Description: TF30162: Task &ldquo;BuildTask&rdquo; from Group &ldquo;Build&rdquo; failed<br>
Exception Type: Microsoft.TeamFoundation.Client.PcwException<br>
Exception Message: Multiple identities found matching workspace name &lsquo;TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC)&rsquo; and owner name &lsquo;BLACKMARBLE\fez&rsquo;. Please specify one of the following workspace specs:<br>
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLE\fez<br>
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLE\fez<br>
Stack Trace:<br>
   at Microsoft.VisualStudio.TeamFoundation.Build.ProjectComponentCreator.ExecuteInternal(ProjectCreationContext context, XmlNode taskXml, Boolean validationOnly)<br>
   at Microsoft.VisualStudio.TeamFoundation.Build.ProjectComponentCreator.Execute(ProjectCreationContext context, XmlNode taskXml)<br>
   at Microsoft.VisualStudio.TeamFoundation.ProjectCreationEngine.TaskExecutor.PerformTask(IProjectComponentCreator componentCreator, ProjectCreationContext context, XmlNode taskXml)<br>
   at Microsoft.VisualStudio.TeamFoundation.ProjectCreationEngine.RunTask(Object taskObj)<br>
--   Inner Exception   --<br>
Exception Message: Multiple identities found matching workspace name &lsquo;TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC)&rsquo; and owner name &lsquo;BLACKMARBLE\fez&rsquo;. Please specify one of the following workspace specs:<br>
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLE\fez<br>
TYPHOONfbb296e15246421e9fc8d25e9d128512typhoon (VC);BLACKMARBLE\fez (type MultipleWorkspacesFoundException)</p>
<p>Exception Stack Trace:    at Microsoft.TeamFoundation.VersionControl.Client.InternalCache.GetWorkspace(Guid repositoryGuid, String name, String owner)<br>
   at Microsoft.TeamFoundation.VersionControl.Client.InternalCache.Merge(XmlNode xmlOnDisk, InternalWorkspaceConflictInfo[]&amp; removedConflictingWorkspaces)<br>
   at Microsoft.TeamFoundation.VersionControl.Client.InternalCache.Save(XmlNode inputXml, XmlNode outputXml, InternalWorkspaceConflictInfo[]&amp; removedConflictingWorkspaces)<br>
   at Microsoft.TeamFoundation.VersionControl.Client.InternalCacheLoader.SaveConfigIfDirty(InternalCache internalCache, InternalWorkspaceConflictInfo[]&amp; conflictingWorkspaces)<br>
   at Microsoft.TeamFoundation.VersionControl.Client.Workstation.UpdateWorkspaceInfoCache(String key, VersionControlServer sourceControl, String ownerName)<br>
   at Microsoft.TeamFoundation.VersionControl.Client.Workstation.EnsureUpdateWorkspaceInfoCache(VersionControlServer sourceControl, String ownerName, TimeSpan maxAge)<br>
   at Microsoft.TeamFoundation.Build.Controls.VersionControlHelper.CheckinFiles(VersionControlServer versionControl, Dictionary`2 localPathsToServerPaths, String checkinComment)<br>
   at Microsoft.VisualStudio.TeamFoundation.Build.ProjectComponentCreator.CheckinFiles(ProjectCreationContext context, VersionControlServer versionControl, List`1 templates)<br>
   at Microsoft.VisualStudio.TeamFoundation.Build.ProjectComponentCreator.ExecuteInternal(ProjectCreationContext context, XmlNode taskXml, Boolean validationOnly)</p>
<p>This turns out to be <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/15/tf30162-task-quot-uploadstructure-quot-from-group-quot-classification-quot-failed.aspx">another version of the local cache refresh issue I blogged about recently</a>. The issue was that I had deleted a couple of team projects on my TFS server via the TFS Administration Console, but my local VS2010 copy did not know this, as its cache had not been refreshed. After closing VS2010 and reloading it, so that the local cache was refreshed, creating the new team project worked fine.</p>
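<p>If restarting Visual Studio does not clear stale entries like this, another option is to close Visual Studio, delete the client-side cache folder, and let it rebuild on the next connection. As a sketch only (the path below is the usual location for the TFS 2010 client cache; confirm it on your own machine before deleting anything):</p>
<pre>rem Close Visual Studio first, then remove the cached workspace data
rd /s /q "%LocalAppData%\Microsoft\Team Foundation\3.0\Cache"</pre>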
]]></content:encoded>
    </item>
    <item>
      <title>First Stable Release of Community TFS 2010 Build Extensions</title>
      <link>https://blog.richardfennell.net/posts/first-stable-release-of-community-tfs-2010-build-extensions/</link>
      <pubDate>Tue, 05 Jul 2011 09:44:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-stable-release-of-community-tfs-2010-build-extensions/</guid>
      <description>&lt;p&gt;The first stable release of the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/releases/view/67138&#34;&gt;Community TFS Build Extensions&lt;/a&gt; has been shipped. It contains around 100 activities. It is hoped that this project will be shipping on a 2 to 3 month cycle in the future.&lt;/p&gt;
&lt;p&gt;So have a look, this project provides many ways to extend your build process. And if you have ideas for more activities why not contribute?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The first stable release of the <a href="http://tfsbuildextensions.codeplex.com/releases/view/67138">Community TFS Build Extensions</a> has been shipped. It contains around 100 activities. It is hoped that this project will be shipping on a 2 to 3 month cycle in the future.</p>
<p>So have a look, this project provides many ways to extend your build process. And if you have ideas for more activities why not contribute?</p>
]]></content:encoded>
    </item>
    <item>
      <title>I’ve been re-awarded as an ALM MVP for 2011</title>
      <link>https://blog.richardfennell.net/posts/ive-been-re-awarded-as-an-alm-mvp-for-2011/</link>
      <pubDate>Mon, 04 Jul 2011 10:58:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ive-been-re-awarded-as-an-alm-mvp-for-2011/</guid>
      <description>&lt;p&gt;I am really pleased to say I have been re-awarded as a &lt;a href=&#34;https://mvp.support.microsoft.com/profile=59FE5790-D13C-40D8-9E99-B82DE635E4EE&#34;&gt;Microsoft MVP for Visual Studio ALM&lt;/a&gt; for 2011; this is my fourth year. I am looking forward to another year of working with such a great crowd of people.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_20610AD0.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_64353F27.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really pleased to say I have been re-awarded as a <a href="https://mvp.support.microsoft.com/profile=59FE5790-D13C-40D8-9E99-B82DE635E4EE">Microsoft MVP for Visual Studio ALM</a> for 2011; this is my fourth year. I am looking forward to another year of working with such a great crowd of people.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_20610AD0.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_64353F27.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Todays ISV event at Microsoft UK</title>
      <link>https://blog.richardfennell.net/posts/todays-isv-event-at-microsoft-uk/</link>
      <pubDate>Mon, 27 Jun 2011 20:42:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/todays-isv-event-at-microsoft-uk/</guid>
      <description>&lt;p&gt;Thanks to everyone who came to today’s ISV session at TVP.&lt;/p&gt;
&lt;p&gt;You can find all the slides from the event at &lt;a href=&#34;http://blogs.msdn.com/b/ukisvdev/archive/2011/06/27/slides-and-links-from-alm-and-visual-studio-2010-event-on-the-27th-june-2010.aspx&#34; title=&#34;http://blogs.msdn.com/b/ukisvdev/archive/2011/06/27/slides-and-links-from-alm-and-visual-studio-2010-event-on-the-27th-june-2010.aspx&#34;&gt;http://blogs.msdn.com/b/ukisvdev/archive/2011/06/27/slides-and-links-from-alm-and-visual-studio-2010-event-on-the-27th-june-2010.aspx&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who came to today’s ISV session at TVP.</p>
<p>You can find all the slides from the event at <a href="http://blogs.msdn.com/b/ukisvdev/archive/2011/06/27/slides-and-links-from-alm-and-visual-studio-2010-event-on-the-27th-june-2010.aspx" title="http://blogs.msdn.com/b/ukisvdev/archive/2011/06/27/slides-and-links-from-alm-and-visual-studio-2010-event-on-the-27th-june-2010.aspx">http://blogs.msdn.com/b/ukisvdev/archive/2011/06/27/slides-and-links-from-alm-and-visual-studio-2010-event-on-the-27th-june-2010.aspx</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft UK Techdays videos available</title>
      <link>https://blog.richardfennell.net/posts/microsoft-uk-techdays-videos-available/</link>
      <pubDate>Thu, 23 Jun 2011 11:42:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-uk-techdays-videos-available/</guid>
      <description>&lt;p&gt;The videos from the various Microsoft UK Techdays events can now be viewed at &lt;a href=&#34;http://uktechdays.cloudapp.net/techdays-live.aspx&#34;&gt;the UK TechDays site&lt;/a&gt;. This includes my session on &lt;a href=&#34;http://uktechdays.cloudapp.net/techdays-live/virtualisation-of-the-test-environment.aspx&#34;&gt;‘Virtualisation of the Test Environment’&lt;/a&gt; and the joint one I did with &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/jmann/default.aspx&#34;&gt;James Mann&lt;/a&gt; on the &lt;a href=&#34;http://uktechdays.cloudapp.net/techdays-live/the-new-experience-for-developing-sharepoint-solutions-in-vs2010.aspx&#34;&gt;new experience for developing SharePoint solutions in VS2010&lt;/a&gt; from the Leeds event.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The videos from the various Microsoft UK Techdays events can now be viewed at <a href="http://uktechdays.cloudapp.net/techdays-live.aspx">the UK TechDays site</a>. This includes my session on <a href="http://uktechdays.cloudapp.net/techdays-live/virtualisation-of-the-test-environment.aspx">‘Virtualisation of the Test Environment’</a> and the joint one I did with <a href="http://blogs.blackmarble.co.uk/blogs/jmann/default.aspx">James Mann</a> on the <a href="http://uktechdays.cloudapp.net/techdays-live/the-new-experience-for-developing-sharepoint-solutions-in-vs2010.aspx">new experience for developing SharePoint solutions in VS2010</a> from the Leeds event.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Microsoft UK next week</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-microsoft-uk-next-week/</link>
      <pubDate>Wed, 22 Jun 2011 10:19:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-microsoft-uk-next-week/</guid>
      <description>&lt;p&gt;I will be speaking at Microsoft UK’s ‘Application Lifecycle Management for Independent Software Vendors’ event next Monday. I am one of four speakers; between us we’ll address a variety of subjects within ALM:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Modern Software Delivery: The continuous delivery of high quality software - Colin Bird&lt;/li&gt;
&lt;li&gt;Driving Quality Throughout the Application Lifecycle - Richard Erwin&lt;/li&gt;
&lt;li&gt;Extending Testing into the Lab - Richard Fennell&lt;/li&gt;
&lt;li&gt;The Secrets of Repeatable Success - Adam Gilmore&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&#34;https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032485778&amp;amp;Culture=en-GB&#34;&gt;I believe there are still spaces available&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking at Microsoft UK’s ‘Application Lifecycle Management for Independent Software Vendors’ event next Monday. I am one of four speakers; between us we’ll address a variety of subjects within ALM:</p>
<ul>
<li>Modern Software Delivery: The continuous delivery of high quality software - Colin Bird</li>
<li>Driving Quality Throughout the Application Lifecycle - Richard Erwin</li>
<li>Extending Testing into the Lab - Richard Fennell</li>
<li>The Secrets of Repeatable Success - Adam Gilmore</li>
</ul>
<p><a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032485778&amp;Culture=en-GB">I believe there are still spaces available</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>More from the ALM Rangers - Lab Management Guide</title>
      <link>https://blog.richardfennell.net/posts/more-from-the-alm-rangers-lab-management-guide/</link>
      <pubDate>Mon, 20 Jun 2011 20:45:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-from-the-alm-rangers-lab-management-guide/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/06/19/alm-ranger-s-build-customization-guidance-has-shipped.aspx&#34;&gt;Build Customisation project&lt;/a&gt; was not the only ALM Rangers release over the weekend. The &lt;a href=&#34;http://ralabman.codeplex.com/&#34;&gt;Lab Management Guide&lt;/a&gt; was also shipped. This provides scenario based and hands-on guidance for the planning, setup, configuration and usage of Visual Studio Lab Management, backed by custom VM Template automation for reference environments.&lt;/p&gt;
&lt;p&gt;If you work with Lab Management, or would like to, this is well worth a read.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://public.blu.livefilestore.com/y1pxJM6viSI4gtfNXGmyZP0uAW9tY3ULI9hd-M6y6biBQ_uzD8thUmAG54G5g4xChjLl_pTfWvdBrCDy4oUTV1bKg/VSALMRangersLogo2011-transparent-300dpi.png?psid=1&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/06/19/alm-ranger-s-build-customization-guidance-has-shipped.aspx">Build Customisation project</a> was not the only ALM Rangers release over the weekend. The <a href="http://ralabman.codeplex.com/">Lab Management Guide</a> was also shipped. This provides scenario based and hands-on guidance for the planning, setup, configuration and usage of Visual Studio Lab Management, backed by custom VM Template automation for reference environments.</p>
<p>If you work with Lab Management, or would like to, this is well worth a read.</p>
<p><img loading="lazy" src="http://public.blu.livefilestore.com/y1pxJM6viSI4gtfNXGmyZP0uAW9tY3ULI9hd-M6y6biBQ_uzD8thUmAG54G5g4xChjLl_pTfWvdBrCDy4oUTV1bKg/VSALMRangersLogo2011-transparent-300dpi.png?psid=1"></p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2010 SP1 Cumulative Update 1</title>
      <link>https://blog.richardfennell.net/posts/tfs-2010-sp1-cumulative-update-1/</link>
      <pubDate>Mon, 20 Jun 2011 20:40:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2010-sp1-cumulative-update-1/</guid>
      <description>&lt;p&gt;For those of you who have not spotted it, the &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2011/06/13/tfs-2010-sp1-cumulative-update-1-available.aspx&#34;&gt;TFS 2010 SP1 Cumulative Update 1 is now available&lt;/a&gt;. This is a roll-up of all the fixes that have been released since TFS 2010 SP1. This form of cumulative release is going to be the way forward for TFS, so it will be easier to know what to install on a server without reading loads of QFE documents.&lt;/p&gt;
&lt;p&gt;For more information, &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2011/06/13/tfs-2010-sp1-cumulative-update-1-available.aspx&#34;&gt;read the full details on Brian Harry’s blog&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For those of you who have not spotted it, the <a href="http://blogs.msdn.com/b/bharry/archive/2011/06/13/tfs-2010-sp1-cumulative-update-1-available.aspx">TFS 2010 SP1 Cumulative Update 1 is now available</a>. This is a roll-up of all the fixes that have been released since TFS 2010 SP1. This form of cumulative release is going to be the way forward for TFS, so it will be easier to know what to install on a server without reading loads of QFE documents.</p>
<p>For more information, <a href="http://blogs.msdn.com/b/bharry/archive/2011/06/13/tfs-2010-sp1-cumulative-update-1-available.aspx">read the full details on Brian Harry’s blog</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>ALM Ranger’s Build Customization Guidance has shipped</title>
      <link>https://blog.richardfennell.net/posts/alm-rangers-build-customization-guidance-has-shipped/</link>
      <pubDate>Sun, 19 Jun 2011 20:13:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-rangers-build-customization-guidance-has-shipped/</guid>
      <description>&lt;p&gt;I am really pleased to say that the first ALM Rangers project I have been involved with, the Build Customization Guidance, has shipped.&lt;/p&gt;
&lt;p&gt;The project had the primary goal of delivering scenario-based and hands-on lab guidance for the customization and deployment of Team Foundation Build 2010 activities such as versioning, code signing, and branching. You can find details at the &lt;a href=&#34;http://blogs.msdn.com/b/willy-peter_schaub/&#34;&gt;Rangers blog&lt;/a&gt;, the project &lt;a href=&#34;http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/06/17/toc-build-customization-guide-blog-posts-and-reference-sites.aspx&#34;&gt;table of contents&lt;/a&gt; or the &lt;a href=&#34;http://rabcg.codeplex.com/&#34;&gt;codeplex site&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I have certainly learnt a good deal working on this project; thanks to everyone who made it such an interesting experience. I hope anyone reading the materials finds them useful.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really pleased to say that the first ALM Rangers project I have been involved with, the Build Customization Guidance, has shipped.</p>
<p>The project had the primary goal of delivering scenario-based and hands-on lab guidance for the customization and deployment of Team Foundation Build 2010 activities such as versioning, code signing, and branching. You can find details at the <a href="http://blogs.msdn.com/b/willy-peter_schaub/">Rangers blog</a>, the project <a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/06/17/toc-build-customization-guide-blog-posts-and-reference-sites.aspx">table of contents</a> or the <a href="http://rabcg.codeplex.com/">codeplex site</a></p>
<p>I have certainly learnt a good deal working on this project; thanks to everyone who made it such an interesting experience. I hope anyone reading the materials finds them useful.</p>
<p><img loading="lazy" src="http://public.blu.livefilestore.com/y1pxJM6viSI4gtfNXGmyZP0uAW9tY3ULI9hd-M6y6biBQ_uzD8thUmAG54G5g4xChjLl_pTfWvdBrCDy4oUTV1bKg/VSALMRangersLogo2011-transparent-300dpi.png?psid=1"></p>
]]></content:encoded>
    </item>
    <item>
      <title>My first serious problem with my WP7 device</title>
      <link>https://blog.richardfennell.net/posts/my-first-serious-problem-with-my-wp7-device/</link>
      <pubDate>Thu, 16 Jun 2011 07:35:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-first-serious-problem-with-my-wp7-device/</guid>
      <description>&lt;p&gt;My WP7 LG-E900 decided to do a factory reset last night. It locked while playing the XBox Live GEODefense game on a particularly busy level I had just lost (so lots of graphics processing). The phone paused for a while then restarted and ran the first use wizard, in German. This is because it is one of the PDC units that seem, given the packaging, to have been sourced from Vodafone Germany.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My WP7 LG-E900 decided to do a factory reset last night. It locked while playing the XBox Live GEODefense game on a particularly busy level I had just lost (so lots of graphics processing). The phone paused for a while then restarted and ran the first use wizard, in German. This is because it is one of the PDC units that seem, given the packaging, to have been sourced from Vodafone Germany.</p>
<p>This is the first failure I have had on the phone. It was not a real pain, as once the phone was back running in English it was quick to set up and sync, but it was 15 minutes I would rather not have spent.</p>
<p><strong>[Update 17 Jun 2011]</strong> What I had forgotten to set up</p>
<ol>
<li>I had to set the APN for mobile data (just the name &lsquo;internet&rsquo; with no UID or PWD for Vodafone UK, but I had forgotten that too and of course could not check it on the internet). When I first got this phone I am sure mobile data just worked. Anyway, I only found out it was not working when stuck in a huge traffic jam and I could not use the maps on my phone to get an alternative route. At home data had worked as I had set up WiFi</li>
<li>Pair my bluetooth headset; again I only found this out when I tried to make a call from the huge traffic jam to say I would be late. It turns out it is hard to pair a headset in a traffic jam full of bluetooth devices.</li>
<li>And I keep finding I am missing applications, and I can&rsquo;t remember what they are called&hellip;</li>
</ol>
]]></content:encoded>
    </item>
    <item>
      <title>Using VSTO to access TFS</title>
      <link>https://blog.richardfennell.net/posts/using-vsto-to-access-tfs/</link>
      <pubDate>Fri, 03 Jun 2011 22:40:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-vsto-to-access-tfs/</guid>
      <description>&lt;p&gt;Have you ever thought ‘&lt;em&gt;I know that you get the Team ribbon in Excel to manage TFS work items, and I can use Excel as a SQL/OLAP client to run reports from the TFS warehouse, but I want to do more….&lt;/em&gt;’?&lt;/p&gt;
&lt;p&gt;Have you ever considered using VSTO to create an ActionPane that uses the TFS .NET API?&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_7D2BE2A2.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_4953C684.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;In this example I have created an ActionPane that allows you to select a Team Project Collection, a Team Project and then a build definition. Press the button and it lists the builds of this type that have run, using the TFS .NET API.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Have you ever thought ‘<em>I know that you get the Team ribbon in Excel to manage TFS work items, and I can use Excel as a SQL/OLAP client to run reports from the TFS warehouse, but I want to do more….</em>’?</p>
<p>Have you ever considered using VSTO to create an ActionPane that uses the TFS .NET API?</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7D2BE2A2.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4953C684.png" title="image"></a></p>
<p>In this example I have created an ActionPane that allows you to select a Team Project Collection, a Team Project and then a build definition. Press the button and it lists the builds of this type that have run, using the TFS .NET API.</p>
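<p>For anyone wanting to try something similar, the core of such an ActionPane is only a few lines against the TFS 2010 .NET API. This is just a sketch: the collection URL, team project and build definition names below are made-up placeholders, and in a real add-in the results would be pushed into the worksheet rather than the console.</p>
<pre>using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Build.Client;

var tpc = new TfsTeamProjectCollection(new Uri("http://myserver:8080/tfs/DefaultCollection"));
var buildServer = tpc.GetService&lt;IBuildServer&gt;();

// List the builds that have run for the selected definition
foreach (IBuildDetail build in buildServer.QueryBuilds("MyTeamProject", "MyBuildDefinition"))
{
    Console.WriteLine("{0} {1} {2}", build.BuildNumber, build.Status, build.FinishTime);
}</pre>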
]]></content:encoded>
    </item>
    <item>
      <title>Cannot connect to a Lab Management Environment using the MTM Environment Viewer</title>
      <link>https://blog.richardfennell.net/posts/cannot-connect-to-a-lab-management-environment-using-the-mtm-environment-viewer/</link>
      <pubDate>Wed, 01 Jun 2011 16:27:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-connect-to-a-lab-management-environment-using-the-mtm-environment-viewer/</guid>
      <description>&lt;p&gt;Today we had a problem that we could not connect to VMs within a Lab Management environment from the Environment Viewer in MTM. We had composed the environment from VMs independently created and tested on the Hyper-V host. The plan in the end is to have this as a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx&#34;&gt;network isolated environment&lt;/a&gt;, but for now it is a private domain that exists on our main LAN.&lt;/p&gt;
&lt;p&gt;The first issue we had was that, as this was a private domain, the various hosts were not registered in our DNS, so we got a DNS lookup error for the VM host names. This is best fixed with network isolation, but for a quick fix we put some entries in a local &lt;strong&gt;hosts&lt;/strong&gt; file on the PC we were using to resolve the names to IP addresses.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today we had a problem that we could not connect to VMs within a Lab Management environment from the Environment Viewer in MTM. We had composed the environment from VMs independently created and tested on the Hyper-V host. The plan in the end is to have this as a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx">network isolated environment</a>, but for now it is a private domain that exists on our main LAN.</p>
<p>The first issue we had was that, as this was a private domain, the various hosts were not registered in our DNS, so we got a DNS lookup error for the VM host names. This is best fixed with network isolation, but for a quick fix we put some entries in a local <strong>hosts</strong> file on the PC we were using to resolve the names to IP addresses.</p>
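<p>As a sketch of that quick fix (the machine names and addresses below are made-up examples, not our real lab VMs), the entries added to the local <strong>hosts</strong> file (<strong>C:\Windows\System32\drivers\etc\hosts</strong>) looked something like this:</p>
<pre># Map the private-domain lab VMs to their IP addresses
192.168.10.10   labdc01
192.168.10.11   labclient01
192.168.10.12   labclient02</pre>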
<p>The next problem was one of concepts. The environment had been composed by one user (who could access everything via a Hyper-V host connection, with no local hosts file fixes), but it was to be used by another user, a tester who was not the owner of the environment (yes, again, I know they should be provisioning their own network isolated version). This meant that a Hyper-V based host connection was not possible, <a href="http://msdn.microsoft.com/es-ar/library/ee712698">as you have to be the owner to get a host connection</a>.</p>
<p>This meant that the new user had to use a guest connection, a Remote Desktop Connection (RDC) created behind the scenes by the MTM Environment Viewer. This worked for the domain controller (a server OS) but failed for the other three VMs in the environment, which were all running Windows 7, with a ‘lost connection to virtual machine’ error.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_30314B3C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4BD5D732.png" title="image"></a></p>
<p>It turns out the issue was the level of security set for RDC connections in Windows 7. We remoted onto the problem VMs using the standard Windows RDC client (not MTM) and selected the ‘Allow connections from computers running any version of Remote Desktop’ option.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_16C94802.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_249B8DFD.png" title="image"></a></p>
<p>Once this was done the Environment Viewer could make guest connections and all was good in the world.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to fix extra Quickstart help menu items when installing Typemock Isolator</title>
      <link>https://blog.richardfennell.net/posts/how-to-fix-extra-quickstart-help-menu-items-when-installing-typemock-isolator/</link>
      <pubDate>Mon, 30 May 2011 20:58:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-fix-extra-quickstart-help-menu-items-when-installing-typemock-isolator/</guid>
      <description>&lt;p&gt;I recently noticed that I had a few too many ‘Typemock Isolator Quickstart’ help menu items in Visual Studio 2010&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_619FEC6B.jpg&#34;&gt;&lt;img alt=&#34;clip_image002&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_thumb_7F31C76A.jpg&#34; title=&#34;clip_image002&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;After a quick check with Typemock it seems this was a known issue with previous versions of Isolator, you should not see it on a new installation (6.0.10). If you do have the problem do as I was advised, &lt;a href=&#34;http://forums.typemock.com/viewtopic.php?p=8332&amp;amp;sid=59949ea74e390f79b18e0c570e058173#8332&#34;&gt;run the VS macro, created by Travis Illig, to remove the extra items&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently noticed that I had a few too many ‘Typemock Isolator Quickstart’ help menu items in Visual Studio 2010</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_619FEC6B.jpg"><img alt="clip_image002" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_thumb_7F31C76A.jpg" title="clip_image002"></a></p>
<p>After a quick check with Typemock it seems this was a known issue with previous versions of Isolator, you should not see it on a new installation (6.0.10). If you do have the problem do as I was advised, <a href="http://forums.typemock.com/viewtopic.php?p=8332&amp;sid=59949ea74e390f79b18e0c570e058173#8332">run the VS macro, created by Travis Illig, to remove the extra items</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Links to Videos of my Presentations</title>
      <link>https://blog.richardfennell.net/posts/links-to-videos-of-my-presentations/</link>
      <pubDate>Sat, 28 May 2011 15:21:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/links-to-videos-of-my-presentations/</guid>
      <description>&lt;p&gt;I have added a page to this blog that has links to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/pages/videos-of-my-presentations.aspx&#34; title=&#34;Videos of my Presentations&#34;&gt;videos of my presentations.&lt;/a&gt; It should make them easier to find when I need to refer to them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have added a page to this blog that has links to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/pages/videos-of-my-presentations.aspx" title="Videos of my Presentations">videos of my presentations.</a> It should make them easier to find when I need to refer to them.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Recording Silverlight actions on Microsoft Test Manager</title>
      <link>https://blog.richardfennell.net/posts/recording-silverlight-actions-on-microsoft-test-manager/</link>
      <pubDate>Fri, 27 May 2011 16:13:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/recording-silverlight-actions-on-microsoft-test-manager/</guid>
      <description>&lt;p&gt;Whilst trying to record some manual tests in MTM for a new Silverlight application I found I was not getting any actions recorded, just loads of “Fail to record the object corresponding to this action” warnings in the actions window.&lt;/p&gt;
&lt;p&gt;It turns out that to fix this you have to do three things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Install the &lt;a href=&#34;http://go.microsoft.com/fwlink/?LinkId=194188&#34;&gt;&lt;strong&gt;Visual Studio 2010 Feature Pack 2 (MSDN Subscribers Only)&lt;/strong&gt;&lt;/a&gt; – this adds Silverlight support to MTM (this I had already done)&lt;/li&gt;
&lt;li&gt;In your Silverlight application you need to reference &lt;strong&gt;Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll&lt;/strong&gt; from the folder &lt;strong&gt;C:\Program Files (x86)\Common Files\microsoft shared\VSTT\10.0\UITestExtensionPackages\SilverlightUIAutomationHelper&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Finally, if using IE9, you need to run IE in IE8 mode; to do this in IE9 press F12 and select the browser mode&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_60B1B2B0.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_59927638.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst trying to record some manual tests in MTM for a new Silverlight application, I found I was not getting any actions recorded, just loads of “Fail to record the object corresponding to this action” warnings in the actions window.</p>
<p>It turns out that to fix this you have to do three things:</p>
<ol>
<li>Install the <a href="http://go.microsoft.com/fwlink/?LinkId=194188"><strong>Visual Studio 2010 Feature Pack 2 (MSDN Subscribers Only)</strong></a> – this adds Silverlight support to MTM (this I had already done)</li>
<li>In your Silverlight application you need to reference <strong>Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll</strong> from the folder <strong>C:\Program Files (x86)\Common Files\microsoft shared\VSTT\10.0\UITestExtensionPackages\SilverlightUIAutomationHelper</strong></li>
<li>Finally, if using IE9, you need to run IE in IE8 mode; to do this in IE9 press F12 and select the browser mode</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_60B1B2B0.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_59927638.png" title="image"></a></p>
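<p>For step 2, if you prefer to hand-edit the Silverlight project file rather than use the Add Reference dialog, the reference ends up as something like the following fragment. This is only an illustrative sketch: the exact <em>HintPath</em> must match the folder given above on your machine.</p>
<pre><code>&lt;!-- Hypothetical .csproj fragment; adjust the HintPath to your install location --&gt;
&lt;ItemGroup&gt;
  &lt;Reference Include="Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper"&gt;
    &lt;HintPath&gt;C:\Program Files (x86)\Common Files\microsoft shared\VSTT\10.0\UITestExtensionPackages\SilverlightUIAutomationHelper\Microsoft.VisualStudio.TestTools.UITest.Extension.SilverlightUIAutomationHelper.dll&lt;/HintPath&gt;
  &lt;/Reference&gt;
&lt;/ItemGroup&gt;
</code></pre>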
<p>Once this was done I got the actions I would expect</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_207B9936.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_527339C0.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS vNext video from Teched up on Channel9</title>
      <link>https://blog.richardfennell.net/posts/tfs-vnext-video-from-teched-up-on-channel9/</link>
      <pubDate>Wed, 18 May 2011 11:40:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-vnext-video-from-teched-up-on-channel9/</guid>
      <description>&lt;p&gt;Cameron Skinner’s TechEd session that goes into more depth on the announcements for &lt;a href=&#34;http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/FDN03&#34;&gt;TFS vNext is up on Channel 9&lt;/a&gt;. I am just downloading it to watch on the train to London this afternoon. I have tried streaming live on the East Coast mainline before; their on-train WiFi is just not up to it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Cameron Skinner’s TechEd session that goes into more depth on the announcements for <a href="http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/FDN03">TFS vNext is up on Channel 9</a>. I am just downloading it to watch on the train to London this afternoon. I have tried streaming live on the East Coast mainline before; their on-train WiFi is just not up to it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Announcements at TechEd USA 2011 about TFS vNext and a Java API available now</title>
      <link>https://blog.richardfennell.net/posts/announcements-at-teched-usa-2011-about-tfs-vnext-and-a-java-api-available-now/</link>
      <pubDate>Mon, 16 May 2011 22:40:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcements-at-teched-usa-2011-about-tfs-vnext-and-a-java-api-available-now/</guid>
      <description>&lt;p&gt;Watching the &lt;a href=&#34;http://northamerica.msteched.com/?fbid=AVplQ40jAXr#tab1&#34;&gt;keynote of TechEd 2011&lt;/a&gt; there were some interesting announcements on the vNext for TFS (the TFS section starts about 1:20 into the keynote).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Storyboarding Assistant – A new product using PowerPoint as the core tool to gather requirements&lt;/li&gt;
&lt;li&gt;New web based dashboard to help manage requirement work items, with loads of drag and drop and pulling much of the functionality of the current Excel planning workbooks into the browser.&lt;/li&gt;
&lt;li&gt;A new web based taskboard (that was demo’d using a huge touch screen monitor)&lt;/li&gt;
&lt;li&gt;Giving the developer a task oriented means to say what work they are working on, and behind the scenes setting their workspace appropriately&lt;/li&gt;
&lt;li&gt;New tooling to allow users to provide feedback on the release in an MTM like style&lt;/li&gt;
&lt;li&gt;Also mentioned were code review tooling, IntelliTrace in production and much more…&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Have a look at the keynote (the video is available now) and breakout session video streams when they appear. &lt;a href=&#34;http://blogs.msdn.com/b/jasonz/archive/2011/05/16/announcing-alm-roadmap-in-visual-studio-vnext-at-teched.aspx&#34;&gt;Also there is more detail on Jason Zanders blog&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Watching the <a href="http://northamerica.msteched.com/?fbid=AVplQ40jAXr#tab1">keynote of TechEd 2011</a> there were some interesting announcements on the vNext for TFS (the TFS section starts about 1:20 into the keynote).</p>
<ul>
<li>Storyboarding Assistant – A new product using PowerPoint as the core tool to gather requirements</li>
<li>New web based dashboard to help manage requirement work items, with loads of drag and drop and pulling much of the functionality of the current Excel planning workbooks into the browser.</li>
<li>A new web based taskboard (that was demo’d using a huge touch screen monitor)</li>
<li>Giving the developer a task oriented means to say what work they are working on, and behind the scenes setting their workspace appropriately</li>
<li>New tooling to allow users to provide feedback on the release in an MTM like style</li>
<li>Also mentioned were code review tooling, IntelliTrace in production and much more…</li>
</ul>
<p>Have a look at the keynote (the video is available now) and breakout session video streams when they appear. <a href="http://blogs.msdn.com/b/jasonz/archive/2011/05/16/announcing-alm-roadmap-in-visual-studio-vnext-at-teched.aspx">Also there is more detail on Jason Zanders blog</a></p>
<p><a href="http://blogs.msdn.com/b/bharry/archive/2011/05/16/announcing-a-java-sdk-for-tfs.aspx">Also announced today was that the Java SDK for TFS has been released</a>, so you can now port all the home-grown tools you have developed for TFS in .NET to Java, so they can be enjoyed irrespective of your development platform.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Microsoft’s “Application Lifecycle Management for Independent Software Vendors”</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-microsofts-application-lifecycle-management-for-independent-software-vendors/</link>
      <pubDate>Fri, 13 May 2011 10:20:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-microsofts-application-lifecycle-management-for-independent-software-vendors/</guid>
      <description>&lt;p&gt;On the 27th of June I will be speaking on Lab Management at Microsoft’s “Application Lifecycle Management for Independent Software Vendors” event at TVP.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032485778&amp;amp;Culture=en-GB&#34;&gt;For more details see the Microsoft event site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On the 27th of June I will be speaking on Lab Management at Microsoft’s “Application Lifecycle Management for Independent Software Vendors” event at TVP.</p>
<p><a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032485778&amp;Culture=en-GB">For more details see the Microsoft event site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Cross domain usage of TFS Integration Platform</title>
      <link>https://blog.richardfennell.net/posts/cross-domain-usage-of-tfs-integration-platform/</link>
      <pubDate>Fri, 06 May 2011 11:53:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cross-domain-usage-of-tfs-integration-platform/</guid>
      <description>&lt;p&gt;Whilst trying to do a cross domain migration of some source between two TFS2010 servers, I got the less than helpful runtime exception&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;System.ArgumentNullException: Value cannot be null.&lt;br&gt;
Parameter name: activeWorkspace&lt;br&gt;
at Microsoft.TeamFoundation.Migration.Tfs2010VCAdapter.TfsUtil.CleanWorkspace(Workspace activeWorkspace)&lt;br&gt;
at Microsoft.TeamFoundation.Migration.Tfs2010VCAdapter.TfsVCMigrationProvider.ProcessChangeGroup(ChangeGroup group)&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;On checking the &lt;em&gt;_tfsintegrationservice_&amp;lt;timestamp&amp;gt;.log&lt;/em&gt; I found the more useful&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;TFS::Authenticate : Caught exception : Microsoft.TeamFoundation.TeamFoundationServerUnauthorizedException: TF30063: You are not authorized to access tfsserver.otherdomain.com. &amp;mdash;&amp;gt; System.Net.WebException: The remote server returned an error: (401) Unauthorized.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst trying to do a cross domain migration of some source between two TFS2010 servers, I got the less than helpful runtime exception</p>
<blockquote>
<p><em>System.ArgumentNullException: Value cannot be null.<br>
Parameter name: activeWorkspace<br>
at Microsoft.TeamFoundation.Migration.Tfs2010VCAdapter.TfsUtil.CleanWorkspace(Workspace activeWorkspace)<br>
at Microsoft.TeamFoundation.Migration.Tfs2010VCAdapter.TfsVCMigrationProvider.ProcessChangeGroup(ChangeGroup group)</em></p></blockquote>
<p>On checking the <em>_tfsintegrationservice_&lt;timestamp&gt;.log</em> I found the more useful</p>
<blockquote>
<p><em>TFS::Authenticate : Caught exception : Microsoft.TeamFoundation.TeamFoundationServerUnauthorizedException: TF30063: You are not authorized to access tfsserver.otherdomain.com. &mdash;&gt; System.Net.WebException: The remote server returned an error: (401) Unauthorized.</em></p></blockquote>
<p>The issue is that when you set up the migration you are prompted for credentials for the remote server, but the actual migration does not occur in the same thread as the one in which you set up the definitions, so it only has its local credentials and none for the remote system. Hence the 401 error.</p>
<p>There is no way to enter two sets of credentials within the Integration tool itself, but there is a <a href="http://blogs.msdn.com/b/willy-peter_schaub/archive/2009/12/07/tfs-integration-platform-cross-domain-migration-now-what-question-answer-11.aspx">workaround on Willy’s Cave</a>. This is to place the remote credentials in the PC’s credential manager; this worked fine for me.</p>
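<p>As a sketch of that workaround, the stored credentials can also be added from a command prompt with the built-in <strong>cmdkey</strong> tool rather than through the Credential Manager UI. The server name, account and password below are obviously placeholders for your own remote domain details:</p>
<pre><code>REM Store credentials for the remote TFS server in Windows Credential Manager
cmdkey /add:tfsserver.otherdomain.com /user:otherdomain\someuser /pass:P@ssw0rd

REM List the stored credentials to check the entry was created
cmdkey /list
</code></pre>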
]]></content:encoded>
    </item>
    <item>
      <title>Our WP7 TFS Phone Explorer has won first prize in the Red Gate WP7 competition</title>
      <link>https://blog.richardfennell.net/posts/our-wp7-tfs-phone-explorer-has-won-first-prize-in-the-red-gate-wp7-competition/</link>
      <pubDate>Thu, 05 May 2011 16:31:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/our-wp7-tfs-phone-explorer-has-won-first-prize-in-the-red-gate-wp7-competition/</guid>
      <description>&lt;p&gt;Great news: &lt;a href=&#34;http://social.zune.net/redirect?type=phoneApp&amp;amp;id=f82cfdd2-0763-e011-81d2-78e7d1fa76f8&#34;&gt;Black Marble’s WP7 TFS Phone client&lt;/a&gt; has &lt;a href=&#34;http://wp7comp.posterous.com/red-gate-softwares-windows-phone-7-competitio&#34;&gt;won one of the three top prizes in Red Gate Software’s Windows Phone 7 Competition&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/smallpanoramaWPhone_1136CEB1.png&#34;&gt;      &lt;a href=&#34;http://wp7comp.posterous.com/red-gate-softwares-windows-phone-7-competitio&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_18352E96.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You can read the judges’ comments on the &lt;a href=&#34;http://wp7comp.posterous.com/red-gate-softwares-windows-phone-7-competitio&#34;&gt;competition blog&lt;/a&gt;. I would stress that although it only mentions my name, this was a Black Marble team effort.&lt;/p&gt;
&lt;p&gt;If you want to try it out get the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/19/release-of-black-marble-s-wp7-tfs-phone-explorer.aspx&#34;&gt;trial version from the marketplace&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Great news: <a href="http://social.zune.net/redirect?type=phoneApp&amp;id=f82cfdd2-0763-e011-81d2-78e7d1fa76f8">Black Marble’s WP7 TFS Phone client</a> has <a href="http://wp7comp.posterous.com/red-gate-softwares-windows-phone-7-competitio">won one of the three top prizes in Red Gate Software’s Windows Phone 7 Competition</a></p>
<p><img loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/smallpanoramaWPhone_1136CEB1.png">      <a href="http://wp7comp.posterous.com/red-gate-softwares-windows-phone-7-competitio"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_18352E96.png" title="image"></a></p>
<p>You can read the judges’ comments on the <a href="http://wp7comp.posterous.com/red-gate-softwares-windows-phone-7-competitio">competition blog</a>. I would stress that although it only mentions my name, this was a Black Marble team effort.</p>
<p>If you want to try it out get the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/19/release-of-black-marble-s-wp7-tfs-phone-explorer.aspx">trial version from the marketplace</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Free webcast of Architecture and Installation of TFS 2010</title>
      <link>https://blog.richardfennell.net/posts/free-webcast-of-architecture-and-installation-of-tfs-2010/</link>
      <pubDate>Thu, 05 May 2011 13:47:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/free-webcast-of-architecture-and-installation-of-tfs-2010/</guid>
      <description>&lt;p&gt;Are you unsure how TFS 2010 hangs together or what is entailed in installing it?&lt;/p&gt;
&lt;p&gt;Well, never fear: Microsoft are hosting a webcast on the 12th of May, given by Black Marble’s own Robert Hancock, one of our ALM consultants, which will explain all.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032485514&amp;amp;EventCategory=4&amp;amp;culture=en-GB&amp;amp;CountryCode=GB&#34;&gt;To register check the Microsoft events site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Are you unsure how TFS 2010 hangs together or what is entailed in installing it?</p>
<p>Well, never fear: Microsoft are hosting a webcast on the 12th of May, given by Black Marble’s own Robert Hancock, one of our ALM consultants, which will explain all.</p>
<p><a href="https://msevents.microsoft.com/CUI/WebCastEventDetails.aspx?EventID=1032485514&amp;EventCategory=4&amp;culture=en-GB&amp;CountryCode=GB">To register check the Microsoft events site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Registration open for the “Connected Development” event on the 26th of May</title>
      <link>https://blog.richardfennell.net/posts/registration-open-for-the-connected-development-event-on-the-26th-of-may/</link>
      <pubDate>Thu, 05 May 2011 13:38:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registration-open-for-the-connected-development-event-on-the-26th-of-may/</guid>
      <description>&lt;p&gt;There are still spaces on the “Connected Development” event, a free event hosted by Black Marble and supported by Microsoft TechDays, on the 26th of May.&lt;/p&gt;
&lt;p&gt;I am one of a number of speakers from Microsoft and Gold ALM Partners who will be talking about the latest news and announcements for Application Lifecycle Management and Developing for the Microsoft Platform.&lt;/p&gt;
&lt;p&gt;Full details about the event can be found &lt;a href=&#34;http://bit.ly/gRgGdt&#34;&gt;at Microsoft Events&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There are still spaces on the “Connected Development” event, a free event hosted by Black Marble and supported by Microsoft TechDays, on the 26th of May.</p>
<p>I am one of a number of speakers from Microsoft and Gold ALM Partners who will be talking about the latest news and announcements for Application Lifecycle Management and Developing for the Microsoft Platform.</p>
<p>Full details about the event can be found <a href="http://bit.ly/gRgGdt">at Microsoft Events</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Creating a TFS Team Project Collection using Powershell</title>
      <link>https://blog.richardfennell.net/posts/creating-a-tfs-team-project-collection-using-powershell/</link>
      <pubDate>Thu, 05 May 2011 12:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/creating-a-tfs-team-project-collection-using-powershell/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/b/granth/archive/2010/02/27/tfs2010-create-a-new-team-project-collection-from-powershell.aspx&#34;&gt;Grant Holiday posted on how to create a TPC using Powershell&lt;/a&gt;. However, his post did not address how to set the SharePoint or Reporting Services parameters. His example was of the form&lt;/p&gt;
&lt;p&gt;Dictionary&amp;lt;string, string&amp;gt; servicingTokens = new Dictionary&amp;lt;string, string&amp;gt;();&lt;br&gt;
servicingTokens.Add(&amp;ldquo;SharePointAction&amp;rdquo;, &amp;ldquo;None&amp;rdquo;); // don&amp;rsquo;t configure sharepoint&lt;br&gt;
servicingTokens.Add(&amp;ldquo;ReportingAction&amp;rdquo;, &amp;ldquo;None&amp;rdquo;); // don&amp;rsquo;t configure reporting services&lt;/p&gt;
&lt;p&gt;So it is not much use if, like us, you have your TFS integrated with a company-wide SharePoint farm.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.msdn.com/b/granth/archive/2010/02/27/tfs2010-create-a-new-team-project-collection-from-powershell.aspx">Grant Holiday posted on how to create a TPC using Powershell</a>. However, his post did not address how to set the SharePoint or Reporting Services parameters. His example was of the form:</p>
<pre><code>Dictionary&lt;string, string&gt; servicingTokens = new Dictionary&lt;string, string&gt;();
servicingTokens.Add("SharePointAction", "None"); // don't configure SharePoint
servicingTokens.Add("ReportingAction", "None"); // don't configure Reporting Services
</code></pre>
<p>So it is not much use if, like us, you have your TFS integrated with a company-wide SharePoint farm.</p>
<h3 id="finding-the-parameters-the-hard-way">Finding the Parameters (the hard way)</h3>
<p>If you want to configure SharePoint or Reporting Services, it turns out there is no source of documentation for the process. So thanks then to <a href="http://blogs.msdn.com/b/chrisid/">Chris Sidi</a> for telling me how to find out the required options. This is his process to find out what parameters a TPC needs to be created with:</p>
<ol>
<li>Create a TPC via the TFS Administration Console, specifying a custom configuration for Sharepoint and Reporting for your system.</li>
<li>In the Administration Console at the bottom of the screen, switch to the Status tab. Double-click “Create Collection” to load the servicing log.</li>
<li>In the first ~15 lines of the servicing log, find the JobId (e.g. “Setting token. Key: JobId. Value: 9638bd57-f494-4ac3-a073-dd1548ab24dc.”)</li>
<li>Using Powershell, query the servicing tokens used:</li>
</ol>
<pre><code>[Reflection.Assembly]::Load("Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")
$credProvider = new-object Microsoft.TeamFoundation.Client.UICredentialsProvider
# Update the TFS url as necessary
$tfsConnection = new-object Microsoft.TeamFoundation.Client.TfsConfigurationServer "http://localhost:8080/tfs", $credProvider
$tfsConnection.EnsureAuthenticated()
$jobService = $tfsConnection.GetService([Microsoft.TeamFoundation.Framework.Client.ITeamFoundationJobService])

# Replace the JobId. Use the one in the servicing log.
$jobId = '9638bd57-f494-4ac3-a073-dd1548ab24dc'
$servicingJob = $jobService.QueryJobs([Guid[]] @($jobId))[0]
$servicingJob.Data.ServicingTokens.KeyValueOfStringString
</code></pre>
<p>This script gets the list of parameters shown below, which we can pass into the PowerShell script used to create a new TPC.</p>
<ul>
<li><strong>SharePointAction</strong> =&gt; UseExistingSite</li>
<li><strong>SharePointServer</strong> =&gt; 3eYRYkJOok6GHrKam0AcSA==wytV0xS6vE2uow3gjrzAEg==</li>
<li><strong>SharePointSitePath</strong> =&gt; sites/test</li>
<li><strong>ReportingAction</strong> =&gt; CreateFolder</li>
<li><strong>ReportServer</strong> =&gt; 3eYRYkJOok6GHrKam0AcSA==KRCi2RTWBk6Cl1wAphaxWA==</li>
<li><strong>ReportFolder</strong> =&gt; /TfsReports/test</li>
</ul>
<p>It is a shame the SharePoint and Reporting Services servers are returned as hash codes rather than their URLs, but as these will probably be fixed for any given TFS implementation this is not a major issue: they can just be hardcoded.</p>
<h3 id="finding-the-parameters-the-easy-way">Finding the Parameters (the easy way)</h3>
<p>Once I ran this script I noticed that I already had access to this information without running PowerShell or creating any trial TPCs. Isn’t it so often the case that the information is under your nose but you don’t recognise it?</p>
<p>All the parameters are shown at the start of the creation log for any TPC created on a given TFS server. Just look for the log files via the TFS admin console.</p>
<pre><code>[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational] Creating dictionary with 14 initial tokens:
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     FinalHostState =&gt; Started
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     VMM_SHARES (Value is an empty string.)
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     VMM_HOSTS (Value is an empty string.)
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     INT_USRACC (Value is an empty string.)
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     SharePointAction =&gt; UseExistingSite
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     SharePointServer =&gt; 3eYRYkJOok6GHrKam0AcSA==wytV0xS6vE2uow3gjrzAEg==
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     SharePointSitePath =&gt; sites/test
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     ReportingAction =&gt; CreateFolder
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     ReportServer =&gt; 3eYRYkJOok6GHrKam0AcSA==KRCi2RTWBk6Cl1wAphaxWA==
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     ReportFolder =&gt; /TfsReports/test
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     DataTierConnectionString =&gt; Data Source=sqlserver;Initial Catalog=Tfs_Configuration;Integrated Security=True
[Info   @11:12:34.876] [2011-05-04 20:32:36Z][Informational]     CollectionName =&gt; test
[Info   @11:12:34.876] [2011-05-04 20:32:37Z][Informational]     InstanceId =&gt; 0cf30e1d-d8a6-4265-a854-9f6c5d288f7e
[Info   @11:12:34.876] [2011-05-04 20:32:37Z][Informational]     DefaultDatabase =&gt; Data Source=sqlserver;Initial Catalog=Tfs_test;Integrated Security=True
</code></pre>
<h3 id="the-revised-tpc-creation-script">The revised TPC creation script</h3>
<p>So given this new set of parameters we can edit Grant’s script to pass in all the extra parameters as shown below</p>
<pre><code>param(
    [string]$tpcName = $( throw "Missing: parameter tpcname"),
    [string]$serverUrl = "http://localhost:8080/tfs/",
    [string]$sqlServer = "sqlserver",
    [string]$spBase = "sites",
    [string]$spServer = "3eYRYkJOok6GHrKam0AcSA==wytV0xS6vE2uow3gjrzAEg==",
    [string]$rsBase = "TfsReports",
    [string]$rsServer = "3eYRYkJOok6GHrKam0AcSA==KRCi2RTWBk6Cl1wAphaxWA==" )

Write-Host "Creating the TPC with the following settings"
Write-Host "tpcName:   $tpcName"
Write-Host "serverUrl: $serverUrl"
Write-Host "sqlServer: $sqlServer"
Write-Host "spBase:    $spBase"
Write-Host "spServer:  $spServer"
Write-Host "rsBase:    $rsBase"
Write-Host "rsServer:  $rsServer"

# Load client OM assembly.
[Reflection.Assembly]::Load("Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a");

# Get the server
$tfsServer = new-object Microsoft.TeamFoundation.Client.TfsConfigurationServer $serverUrl;

# Create the token set
$servicingTokens = New-Object "System.Collections.Generic.Dictionary``2[System.String,System.String]"

$spPath = $spBase + "/" + $tpcName;
Write-Host "SharePoint path is $spPath"
$servicingTokens.Add("SharePointAction", "UseExistingSite");
$servicingTokens.Add("SharePointServer", $spServer);
$servicingTokens.Add("SharePointSitePath", $spPath);

$rsPath = "/" + $rsBase + "/" + $tpcName
Write-Host "Reporting Services path is $rsPath"
$servicingTokens.Add("ReportingAction", "CreateFolder");
$servicingTokens.Add("ReportServer", $rsServer);
$servicingTokens.Add("ReportFolder", $rsPath);

# Create and run the job
$tpcSvc = $tfsServer.GetService([Microsoft.TeamFoundation.Framework.Client.ITeamProjectCollectionService]);

$sqlString = "Server=$sqlServer;Integrated Security=SSPI;"
Write-Host "SQL connection string is $sqlString"

$job = $tpcSvc.QueueCreateCollection(
    $tpcName,             # collection name.
    "",                   # description.
    $false,               # don't make this the default collection.
    "~/" + $tpcName + "/",   # virtual directory.
    "Started",            # state after creation.
    $servicingTokens,     # tokens for other services.
    $sqlString,           # the SQL instance to create the collection on. Specify SERVER\INSTANCE if not using the default instance.
    $null,                # null because the collection database doesn't already exist.
    $null)                # null because the collection database doesn't already exist.

Write-Host "Creating TPC (this could take a few minutes)"
$collection = $tpcSvc.WaitForCollectionServicingToComplete($job)
Write-Host "Completed"
</code></pre>
<p>If you get any errors from the script, the best place to go is the TFS Admin console to look at the logs (the same ones you used to find the parameters). You should see the detailed create process log that gives the normal TF errors explaining why the process has failed; in my experience it is usually rights issues on SharePoint or Reporting Services.</p>
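<p>Assuming the script above is saved as <strong>CreateTpc.ps1</strong> (a hypothetical filename) and the hashed server value defaults are left in place, a typical invocation for your own server and collection names would look something like:</p>
<pre><code># Create a new TPC called 'Project1', with its SharePoint site at sites/Project1
# and its reports under /TfsReports/Project1
.\CreateTpc.ps1 -tpcName "Project1" -serverUrl "http://localhost:8080/tfs/" -sqlServer "sqlserver"
</code></pre>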
]]></content:encoded>
    </item>
    <item>
      <title>Problems and workaround for the EMC Scrum for Team System Process Template</title>
      <link>https://blog.richardfennell.net/posts/problems-and-workaround-for-the-emc-scrum-for-team-system-process-template-2/</link>
      <pubDate>Wed, 04 May 2011 16:19:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-and-workaround-for-the-emc-scrum-for-team-system-process-template-2/</guid>
      <description>&lt;p&gt;I have recently been looking at the &lt;a href=&#34;http://www.scrumforteamsystem.com/&#34;&gt;EMC Scrum for Team Systems (SFTS) 3 Template for TFS 2010&lt;/a&gt; with SharePoint 2010. The core of it works, and works well for Scrum based teams. If you have not used it before have a read of the &lt;a href=&#34;http://consultingblogs.emc.com/crispinparker/archive/2010/05/18/getting-started-with-scrum-for-team-system-version-3-tfs-2010.aspx&#34;&gt;getting started post&lt;/a&gt; to get a feel of what it can do.&lt;/p&gt;
&lt;p&gt;However, there are some issues when you try to make use of SharePoint 2010 as opposed to SharePoint 2007. Hopefully you will find some of the adventures I have had enlightening.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been looking at the <a href="http://www.scrumforteamsystem.com/">EMC Scrum for Team Systems (SFTS) 3 Template for TFS 2010</a> with SharePoint 2010. The core of it works, and works well for Scrum based teams. If you have not used it before have a read of the <a href="http://consultingblogs.emc.com/crispinparker/archive/2010/05/18/getting-started-with-scrum-for-team-system-version-3-tfs-2010.aspx">getting started post</a> to get a feel of what it can do.</p>
<p>However, there are some issues when you try to make use of SharePoint 2010 as opposed to SharePoint 2007. Hopefully you will find some of the adventures I have had enlightening.</p>
<h3 id="installation">Installation</h3>
<p>The first one is that the installation and configuration tools just ignore SP2010. This means when you try to create a team project using the process template you get the error:</p>
<blockquote>
<p><em>“TF249033: The site template is not available for the locale identifier (LCID).” ”The site template name is: SCRUM.”</em></p></blockquote>
<p>You have to manually download and install the SharePoint template WSP, <a href="http://consultingblogs.emc.com/crispinparker/archive/2011/01/14/scrum-for-team-system-v3-sharepoint-2010-portal.aspx">as detailed in this blog post</a>. Once the WSP is deployed on your SharePoint 2010 farm you can create new team projects using the process template.</p>
<h3 id="how-it-looks">How it looks</h3>
<p>There are problems with the way the site renders. The two major issues are that the site actions drop down appears behind the main display area (red box in the left screenshot), and that on some pages large blocks of CSS get rendered to the screen as opposed to being dealt with programmatically (right screenshot).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7D9FC38A.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6DB07EC6.png" title="image"></a> <a href="/wp-content/uploads/sites/2/historic/image_6286F47C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_147E9507.png" title="image"></a></p>
<p>Both of these <a href="https://www.scrumforteamsystem.com/QA/Show?id=455">issues have been reported on the support forum</a></p>
<p>My reading is that the current SFTS SharePoint template WSP has only had minimal changes to port it to SP2010 from SP2007, just enough to get it to load. I think the key issue here may be that the SFTS SharePoint template has its own master page. This is something all TFS 2005/2008 templates tended to do, but for 2010 there has been a general move to inherit from core SharePoint master pages. Basically the master page structure needs to be rebuilt from the ground up by EMC to fix all the issues, but we can address some of them…</p>
<h4 id="change-the-master-page">Change the master page</h4>
<p>Fixing all the issues in the master page is somewhat daunting as 2007 and 2010 master pages are very different in style. However, after a chat with one of our SharePoint developers it was suggested a better solution is to just tell the site just to use a different master page (one of the SP2010 standard ones). This is a technique we have used on bespoke site upgrades and usually will address most of the issues, then it is a matter of fixing the, hopefully smaller, list of outstanding problems.</p>
<p>So below is the process to make the change</p>
<ol>
<li>Install <a href="http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&amp;FamilyID=d88a1505-849b-4587-b854-a7054ee28d66">SharePoint Designer 2010</a> on a PC (this is a free download from Microsoft)</li>
<li>Logging in as a user with admin rights on the site, open SP2010 Designer and open the url of the Scrum for Team System SharePoint Site e.g. <a href="http://tfsdemo/sites/DefaultCollection/team1">http://tfsdemo/sites/DefaultCollection/team1</a></li>
<li>In the site objects tree on the left select the master pages node</li>
<li>You should see four master pages</li>
<li>Right click on the <strong>v4.master</strong> and select ‘set as custom master page’</li>
<li>Load the site in a browser and it should look like a more normal SP2010 site</li>
<li>You can swap it back by making the SFTS.master the custom master page</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_5B67B804.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6D444BD1.png" title="image"></a></p>
<p>Actually on first look the swapping the master page seems to have done the job on the homepage in default view mode without any other edits. However, there are still problem of stray CSS being shown when accessing the other pages, and a trace of green in some of the borders.</p>
<p>So a partial success, but with more work maybe a complete one? But that is not the route I took.</p>
<h3 id="report-title-click-through">Report Title Click-through</h3>
<p>Both the SFTS and standard Microsoft reports are displayed in dashboards the same way, they use a page view webpart and the TFSredirect.aspx page. This shows the report chart and a link to take you to its Reporting Services home when the title is clicked. The rendering of the report works for SFTS, but <a href="https://www.scrumforteamsystem.com/QA/Show?id=370">another reported problem</a> is that when you click the report title links (highlighted in green in above graphic) you get the error</p>
<blockquote>
<p><em>TF250008: This SharePoint site is not currently associated with a team project in Team Foundation Server. To ensure that this site functions correctly, you must configure a team project to refer data for that project to this site. For more information, see this topic on the Microsoft Web site: How to: Add a Team Project Portal. You can use the following querystring argument to specify a specific project: tf:ProjectId.</em></p></blockquote>
<p>The error says a workaround is to pass the TF:ProjectID parameter in the title URL. This is the only solution I have found. To do this</p>
<ol>
<li>Load SQL Management Studio</li>
<li>Open the tfs_defaultcollection DB (or the one for your tema project collection) and the tbl_projects table.</li>
<li>Look for and copy the projectID for the project you want to report on</li>
<li>Open the SharePoint page with the failing chart. Click the small down triangle in the top right of the webpart to get the webpart editor.</li>
<li>In the advanced section add &amp;TF%3aPROJECTID={guid} with your {GUID} to the end of the Title URL (the %3a is the : character)</li>
<li>I also had to remove the &amp;IsDashboard=false else I got a “An attempt was made to set a report parameter &lsquo;IsDashboard&rsquo; that is not defined in this report. (rsUnknownReportParameter)” error.</li>
<li>Press OK to save, the chart should render and the link work</li>
</ol>
<p>Basically the title URL as automatically build is wrong, it has an extra parameter the report does not support and for some reason the automatically passed project ID is being lost. In fact even when you fix the Url is wrong as the report it points to is a dashboard summary when you probably want to take the user to a fuller version of the report. Which of course you could do by altering the Url provided.</p>
<p>I think the root problem here is that the webpart assumes that the report has a dashboard and full version mode, as many of the MSF agile ones do, so this sort of makes sense.If you reports are single mode you need to pass two Urls.</p>
<p>Again this editing is all a bit of pain, but you don’t do it too often, and you could also write a command line tool to easily get the GUID.</p>
<h3 id="but-maybe-a-better-overall-option">But maybe a better overall option?</h3>
<p>However, whilst trying all this I realised that the SFTS created SharePoint site does not really do what much ‘special’. Beyond being a basic SharePoint site it has</p>
<ul>
<li>a link to the process guidance, but is just an HTML file that redirects to <a href="http://www.scrumforteamsystem.com/processguidance/v3/">http://www.scrumforteamsystem.com/processguidance/v3/</a> so can be added as link</li>
<li>a link to the Team Web Access, there is a standard webpart for this or you could just use a link</li>
<li>the front page dashboard, this has the two SFTS chart webparts and a TFS query webpart, but we can recreate this ourselves with the standard TFS webparts</li>
</ul>
<p>Therefore I would suggest the best option to avoid all these SharePoint 2007/2010 issues is to manually create a new SharePoint site and add similar controls to those used by SFTS to make the SharePoint site you want. As long as you are not creating new team projects all the time this should not be too much of a problem.</p>
<p>The steps to do this are as follows:</p>
<ol>
<li>In Team Explorer create a new Team Project using STFS template but set it not to create a Sharepoint site (you can always use an existing SFTS Team project if you want in place of this step if it already exist)</li>
<li>On the default collections SharePoint e.g <a href="http://tfsdemo/sites/DefaultCollection">http://tfsdemo/sites/DefaultCollection</a> create a new site (site actions), give it a name e.g. ‘Team1’ so it’s URL is <a href="http://tfsdemo/sites/DefaultCollection/team1">http://tfsdemo/sites/DefaultCollection/team1</a>. You can select any site template, but the collaboration/team one would seem a good start, nice and generic</li>
<li>In Team Explorer select the STFS created in step1 and right click, select team project settings | portal settings</li>
<li>Check the enable tram project portal (if not already set) press the configure URL and enter the details of the site created in step 2, press OK to exit</li>
<li>Check the ‘reports and dashboards refer to data for this team project’ checkbox and press OK.</li>
<li>Return to the web site created in step 2, it is now wired to the correct team project</li>
</ol>
<p>You can now add pages, links and webpart to the web site to build your portal. The most important are</p>
<ol>
<li>Adding the set of ‘Visual Studio Team Foundation Server Web Parts’ which provide items such a build list, work item list etc.They should all pickup the correct team project.</li>
<li>The page viewer that allows redirections via the TFSRedirect.aspx page as detailed above</li>
</ol>
<p>We can also link directly to the reporting services reports using the SQL Server reporting web part. As SFTS does not ship its reports as Excel workbooks we don’t have to consider Excel Services..</p>
<h3 id="and-finally">And finally</h3>
<p>I hope this post has given you some ideas as to how to address the issues with SFTS 3.0 on SP2010, enough too keep you happy until there is a release of the template what fully supports SP2010.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problems and workaround for the EMC Scrum for Team System Process Template</title>
      <link>https://blog.richardfennell.net/posts/problems-and-workaround-for-the-emc-scrum-for-team-system-process-template/</link>
      <pubDate>Wed, 04 May 2011 16:19:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-and-workaround-for-the-emc-scrum-for-team-system-process-template/</guid>
      <description>&lt;p&gt;I have recently been looking at the &lt;a href=&#34;http://www.scrumforteamsystem.com/&#34;&gt;EMC Scrum for Team System (SFTS) 3 Template for TFS 2010&lt;/a&gt; with SharePoint 2010. The core of it works, and works well for Scrum-based teams. If you have not used it before, have a read of the &lt;a href=&#34;http://consultingblogs.emc.com/crispinparker/archive/2010/05/18/getting-started-with-scrum-for-team-system-version-3-tfs-2010.aspx&#34;&gt;getting started post&lt;/a&gt; to get a feel of what it can do.&lt;/p&gt;
&lt;p&gt;However, there are some issues when you try to make use of SharePoint 2010 as opposed to SharePoint 2007. Hopefully you will find some of the adventures I have had enlightening.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been looking at the <a href="http://www.scrumforteamsystem.com/">EMC Scrum for Team System (SFTS) 3 Template for TFS 2010</a> with SharePoint 2010. The core of it works, and works well for Scrum-based teams. If you have not used it before, have a read of the <a href="http://consultingblogs.emc.com/crispinparker/archive/2010/05/18/getting-started-with-scrum-for-team-system-version-3-tfs-2010.aspx">getting started post</a> to get a feel of what it can do.</p>
<p>However, there are some issues when you try to make use of SharePoint 2010 as opposed to SharePoint 2007. Hopefully you will find some of the adventures I have had enlightening.</p>
<h3 id="installation">Installation</h3>
<p>The first one is that the installation and configuration tools just ignore SP2010. This means when you try to create a team project using the process template you get the error:</p>
<blockquote>
<p><em>“TF249033: The site template is not available for the locale identifier (LCID).” ”The site template name is: SCRUM.”</em></p></blockquote>
<p>You have to manually download and install the SharePoint template WSP, <a href="http://consultingblogs.emc.com/crispinparker/archive/2011/01/14/scrum-for-team-system-v3-sharepoint-2010-portal.aspx">as detailed in this blog post</a>. Once the WSP is deployed on your SharePoint 2010 farm you can create new team projects using the process template.</p>
<h3 id="how-it-looks">How it looks</h3>
<p>There are problems with the way the site renders. The two major issues are that the site actions drop-down appears behind the main display area (red box in the left screenshot), and on some pages large blocks of CSS get rendered to the screen as text rather than being applied as styling (right screenshot).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7D9FC38A.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6DB07EC6.png" title="image"></a> <a href="/wp-content/uploads/sites/2/historic/image_6286F47C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_147E9507.png" title="image"></a></p>
<p>Both of these <a href="https://www.scrumforteamsystem.com/QA/Show?id=455">issues have been reported on the support forum</a>.</p>
<p>My reading is that the current SFTS SharePoint template WSP has only had minimal changes to port it from SP2007 to SP2010, just enough to get it to load. I think the key issue here may be that the SFTS SharePoint template has its own master page. This is something all TFS 2005/2008 templates tended to do, but for 2010 there has been a general move to inherit from the core SharePoint master pages. Basically, the master page structure needs to be rebuilt from the ground up by EMC to fix all the issues, but we can address some of them…</p>
<h4 id="change-the-master-page">Change the master page</h4>
<p>Fixing all the issues in the master page is somewhat daunting, as 2007 and 2010 master pages are very different in style. However, after a chat with one of our SharePoint developers, it was suggested that a better solution is simply to tell the site to use a different master page (one of the standard SP2010 ones). This is a technique we have used on bespoke site upgrades; it usually addresses most of the issues, and then it is a matter of fixing the, hopefully smaller, list of outstanding problems.</p>
<p>So, below is the process to make the change:</p>
<ol>
<li>Install <a href="http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&amp;FamilyID=d88a1505-849b-4587-b854-a7054ee28d66">SharePoint Designer 2010</a> on a PC (this is a free download from Microsoft)</li>
<li>Logged in as a user with admin rights on the site, open SharePoint Designer 2010 and open the URL of the Scrum for Team System SharePoint site, e.g. <a href="http://tfsdemo/sites/DefaultCollection/team1">http://tfsdemo/sites/DefaultCollection/team1</a></li>
<li>In the site objects tree on the left select the master pages node</li>
<li>You should see four master pages</li>
<li>Right click on the <strong>v4.master</strong> and select ‘set as custom master page’</li>
<li>Load the site in a browser and it should look like a more normal SP2010 site</li>
<li>You can swap it back by making the SFTS.master the custom master page</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_5B67B804.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6D444BD1.png" title="image"></a></p>
<p>Actually, on first look, swapping the master page seems to have done the job on the homepage in default view mode without any other edits. However, there are still problems with stray CSS being shown when accessing the other pages, and a trace of green in some of the borders.</p>
<p>So a partial success, but with more work maybe a complete one? But that is not the route I took.</p>
<h3 id="report-title-click-through">Report Title Click-through</h3>
<p>Both the SFTS and the standard Microsoft reports are displayed in dashboards the same way: they use a page viewer webpart and the TFSRedirect.aspx page. This shows the report chart, with the title acting as a link to take you to the report’s Reporting Services home when clicked. The rendering of the report works for SFTS, but <a href="https://www.scrumforteamsystem.com/QA/Show?id=370">another reported problem</a> is that when you click the report title links (highlighted in green in the above graphic) you get the error:</p>
<blockquote>
<p><em>TF250008: This SharePoint site is not currently associated with a team project in Team Foundation Server. To ensure that this site functions correctly, you must configure a team project to refer data for that project to this site. For more information, see this topic on the Microsoft Web site: How to: Add a Team Project Portal. You can use the following querystring argument to specify a specific project: tf:ProjectId.</em></p></blockquote>
<p>The error says a workaround is to pass the TF:ProjectID parameter in the title URL. This is the only solution I have found. To do this:</p>
<ol>
<li>Load SQL Management Studio</li>
<li>Open the tfs_defaultcollection DB (or the one for your team project collection) and the tbl_projects table.</li>
<li>Look for and copy the projectID for the project you want to report on</li>
<li>Open the SharePoint page with the failing chart. Click the small down triangle in the top right of the webpart to get the webpart editor.</li>
<li>In the advanced section, add &amp;TF%3aPROJECTID={guid} (with your project’s GUID) to the end of the Title URL (the %3a is the URL-encoded : character)</li>
<li>I also had to remove the &amp;IsDashboard=false, else I got an “An attempt was made to set a report parameter &lsquo;IsDashboard&rsquo; that is not defined in this report. (rsUnknownReportParameter)” error.</li>
<li>Press OK to save; the chart should render and the link should work</li>
</ol>
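<p>Rather than hand-editing the Title URL each time, the edit in steps 5 and 6 can be scripted. Below is a minimal Python sketch (the <code>fix_title_url</code> helper and the sample URL are hypothetical, for illustration only) that strips the unsupported IsDashboard parameter and appends TF:PROJECTID:</p>

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def fix_title_url(url: str, project_guid: str) -> str:
    """Drop the unsupported IsDashboard parameter and append TF:PROJECTID."""
    parts = urlsplit(url)
    # Keep every query parameter except IsDashboard (case-insensitive)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k.lower() != "isdashboard"]
    query.append(("TF:PROJECTID", project_guid))  # urlencode emits TF%3APROJECTID
    return urlunsplit(parts._replace(query=urlencode(query)))

# Hypothetical URL in the general shape the webpart generates
fixed = fix_title_url(
    "http://tfsdemo/sites/DefaultCollection/team1/TFSRedirect.aspx"
    "?tf:Type=Report&IsDashboard=false",
    "{01234567-89ab-cdef-0123-456789abcdef}")
print(fixed)
```

<p>Paste the resulting URL back into the webpart’s Title URL field.</p>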
<p>Basically, the title URL as automatically built is wrong: it has an extra parameter the report does not support, and for some reason the automatically passed project ID is being lost. In fact, even when fixed, the URL is still not ideal, as the report it points to is a dashboard summary when you probably want to take the user to a fuller version of the report. Which, of course, you could do by altering the URL provided.</p>
<p>I think the root problem here is that the webpart assumes the report has both a dashboard and a full-version mode, as many of the MSF Agile ones do, so this sort of makes sense. If your reports only have a single mode, you need to pass two URLs.</p>
<p>Again, this editing is all a bit of a pain, but you don’t do it too often, and you could also write a command-line tool to easily get the GUID.</p>
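<p>As a sketch of that command-line tool idea, something along these lines would do. It assumes the third-party pyodbc package, and the tbl_projects column names here are guesses (the post only tells us the table holds a projectID per project), so verify them in Management Studio first:</p>

```python
# Assumed schema: column names are guesses -- check tbl_projects in
# SQL Management Studio and adjust before using.
QUERY = "SELECT project_id FROM tbl_projects WHERE project_name = ?"

def get_project_guid(server: str, database: str, project_name: str):
    """Look up a team project's GUID in the collection database."""
    import pyodbc  # third-party: pip install pyodbc
    conn = pyodbc.connect(
        f"DRIVER={{SQL Server}};SERVER={server};"
        f"DATABASE={database};Trusted_Connection=yes")
    row = conn.cursor().execute(QUERY, project_name).fetchone()
    return row[0] if row else None

# Usage (hypothetical):
#   get_project_guid("tfsdemo", "tfs_defaultcollection", "Team1")
```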
<h3 id="but-maybe-a-better-overall-option">But maybe a better overall option?</h3>
<p>However, whilst trying all this I realised that the SFTS-created SharePoint site does not really do that much that is ‘special’. Beyond being a basic SharePoint site it has:</p>
<ul>
<li>a link to the process guidance, but this is just an HTML file that redirects to <a href="http://www.scrumforteamsystem.com/processguidance/v3/">http://www.scrumforteamsystem.com/processguidance/v3/</a>, so it can be added as a plain link</li>
<li>a link to Team Web Access; there is a standard webpart for this, or you could just use a link</li>
<li>the front page dashboard; this has the two SFTS chart webparts and a TFS query webpart, but we can recreate these ourselves with the standard TFS webparts</li>
</ul>
<p>Therefore I would suggest the best option to avoid all these SharePoint 2007/2010 issues is to manually create a new SharePoint site and add similar controls to those used by SFTS, building the SharePoint site you want. As long as you are not creating new team projects all the time, this should not be too much of a problem.</p>
<p>The steps to do this are as follows:</p>
<ol>
<li>In Team Explorer create a new Team Project using the SFTS template, but set it not to create a SharePoint site (you can use an existing SFTS team project in place of this step if one already exists)</li>
<li>On the default collection’s SharePoint site, e.g. <a href="http://tfsdemo/sites/DefaultCollection">http://tfsdemo/sites/DefaultCollection</a>, create a new site (Site Actions) and give it a name, e.g. ‘Team1’, so its URL is <a href="http://tfsdemo/sites/DefaultCollection/team1">http://tfsdemo/sites/DefaultCollection/team1</a>. You can select any site template, but the collaboration/team one would seem a good start: nice and generic</li>
<li>In Team Explorer select the SFTS team project created in step 1, right-click, and select team project settings | portal settings</li>
<li>Check the ‘enable team project portal’ checkbox (if not already set), press the configure URL button and enter the details of the site created in step 2, then press OK to exit</li>
<li>Check the ‘reports and dashboards refer to data for this team project’ checkbox and press OK.</li>
<li>Return to the web site created in step 2; it is now wired to the correct team project</li>
</ol>
<p>You can now add pages, links and webparts to the web site to build your portal. The most important are:</p>
<ol>
<li>Adding the set of ‘Visual Studio Team Foundation Server Web Parts’, which provide items such as a build list, work item list etc. They should all pick up the correct team project.</li>
<li>The page viewer webpart that allows redirection via the TFSRedirect.aspx page, as detailed above</li>
</ol>
<p>We can also link directly to the Reporting Services reports using the SQL Server Reporting Services web part. As SFTS does not ship its reports as Excel workbooks, we don’t have to consider Excel Services.</p>
<h3 id="and-finally">And finally</h3>
<p>I hope this post has given you some ideas as to how to address the issues with SFTS 3.0 on SP2010, enough to keep you happy until there is a release of the template that fully supports SP2010.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My upcoming speaking engagements for Spring/Summer 2011</title>
      <link>https://blog.richardfennell.net/posts/my-upcoming-speaking-engagements-for-springsummer-2011/</link>
      <pubDate>Tue, 03 May 2011 11:50:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-upcoming-speaking-engagements-for-springsummer-2011/</guid>
      <description>&lt;p&gt;I have a couple of community speaking engagements coming up&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://edinburgh.bcs.org.uk/events/2010-11/110511.htm&#34;&gt;May 11 BCS Edinburgh on Agile Methods&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.nxtgenug.net/ViewEvent.aspx?EventID=415&#34;&gt;July 21 NxtGen Southampton on Test Driven development and mocking&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Both are free, so if you are in either area (I would be surprised if you were at both, given the locations) I hope you can come along.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have a couple of community speaking engagements coming up</p>
<ul>
<li><a href="http://edinburgh.bcs.org.uk/events/2010-11/110511.htm">May 11 BCS Edinburgh on Agile Methods</a></li>
<li><a href="http://www.nxtgenug.net/ViewEvent.aspx?EventID=415">July 21 NxtGen Southampton on Test Driven development and mocking</a></li>
</ul>
<p>Both are free, so if you are in either area (I would be surprised if you were at both, given the locations) I hope you can come along.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The slow slide to a paperless life</title>
      <link>https://blog.richardfennell.net/posts/the-slow-slide-to-a-paperless-life/</link>
      <pubDate>Tue, 03 May 2011 11:17:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-slow-slide-to-a-paperless-life/</guid>
      <description>&lt;p&gt;I posted in the past about &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/19/should-i-buy-a-kindle.aspx&#34;&gt;my thought processes on getting a Kindle&lt;/a&gt;; they boiled down to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Why do the books cost virtually as much as the paper edition when the author gets no more royalties and the production/distribution costs are far lower?&lt;/li&gt;
&lt;li&gt;I don’t want an extra device to carry about&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Well, I have been using the &lt;a href=&#34;http://social.zune.net/External/LaunchZuneProtocol.aspx?pathuri=navigate%3FphoneAppID%3D48195fb4-ee0e-e011-9264-00237de2db9e&#34;&gt;WP7 Kindle Client&lt;/a&gt; to read free classics, and have actually been buying current novels. When I finished my first purchased novel, it was virtually automatic to go and buy another. No going to the bookshop or waiting for Amazon to deliver.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted in the past about <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/19/should-i-buy-a-kindle.aspx">my thought processes on getting a Kindle</a>; they boiled down to:</p>
<ol>
<li>Why do the books cost virtually as much as the paper edition when the author gets no more royalties and the production/distribution costs are far lower?</li>
<li>I don’t want an extra device to carry about</li>
</ol>
<p>Well, I have been using the <a href="http://social.zune.net/External/LaunchZuneProtocol.aspx?pathuri=navigate%3FphoneAppID%3D48195fb4-ee0e-e011-9264-00237de2db9e">WP7 Kindle Client</a> to read free classics, and have actually been buying current novels. When I finished my first purchased novel, it was virtually automatic to go and buy another. No going to the bookshop or waiting for Amazon to deliver.</p>
<p>The reading experience, even on my LG phone was fine. I actually found I was reading more, as my phone is always with me (the novel would have been a bit bulky at 800+ pages).</p>
<p>So I think I am a convert to the format, but I did not really doubt that. The question now is whether to get a Kindle itself to make the experience even better; maybe as the read-at-home device, keeping my phone for the quick read at the railway station.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to use the TFS 2010 Community StyleCop Build Activity (Addendum)</title>
      <link>https://blog.richardfennell.net/posts/how-to-use-the-tfs-2010-community-stylecop-build-actvity-addendum/</link>
      <pubDate>Wed, 27 Apr 2011 16:04:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-use-the-tfs-2010-community-stylecop-build-actvity-addendum/</guid>
      <description>&lt;p&gt;Recently I posted on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/22/how-to-use-the-tfs-2010-community-stylecop-build-activity.aspx&#34;&gt;How to use the TFS 2010 Community StyleCop Build Activity&lt;/a&gt; and I am sure it all sounded very awkward and complex, well it did to me.&lt;/p&gt;
&lt;p&gt;The point I should have made is that you don’t have to follow this process every time you want to make use of the custom activity. As long as the build process template is the same between the two builds you can just copy it; you only need to follow the method in the post the first time.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently I posted on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/22/how-to-use-the-tfs-2010-community-stylecop-build-activity.aspx">How to use the TFS 2010 Community StyleCop Build Activity</a> and I am sure it all sounded very awkward and complex, well it did to me.</p>
<p>The point I should have made is that you don’t have to follow this process every time you want to make use of the custom activity. As long as the build process template is the same between the two builds you can just copy it; you only need to follow the method in the post the first time.</p>
<p>So, assuming you have followed the process in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/22/how-to-use-the-tfs-2010-community-stylecop-build-activity.aspx">my last post</a> and want to add the same build process to another project, let’s say in another Team Project Collection, you do the following:</p>
<h5 id="get-the-files-onto-the-build-box">Get the files onto the build box</h5>
<ol>
<li>In VS2010 open <strong>Source Control Explorer</strong> select your Team Project and map the <strong>BuildProcessTemplates</strong> folder to a location on your local disk.</li>
<li>Create a new folder under the <strong>BuildProcessTemplates</strong> called <strong>Custom Assemblies</strong></li>
<li>In this new folder copy all the assemblies from the equivalent folder you created in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/22/how-to-use-the-tfs-2010-community-stylecop-build-activity.aspx">my last post</a></li>
<li>Into the <strong>BuildProcessTemplates</strong> copy the edited build process template that uses the custom activities, again from the equivalent folder in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/22/how-to-use-the-tfs-2010-community-stylecop-build-activity.aspx">my last post</a></li>
<li>From within <strong>Source Control Explorer</strong> add these new files and check the files into TFS.</li>
<li>Open <strong>Team Explorer</strong>, right-click on <strong>Builds</strong> and select <strong>Manage Build Controllers</strong></li>
<li>Select the controller to configure, and then select Properties</li>
<li>Set the <strong>Version control path to custom assemblies</strong> to the location just created under version control containing your added assemblies</li>
<li>You might want to restart the build service; it should automatically pick up the changes, but I usually do a restart to make sure</li>
</ol>
<h5 id="making-use-of-the-build">Making use of the build</h5>
<ol>
<li>Create a new build and set it up as normal.</li>
<li>On the process tab press the new button and browse to find the newly added process template</li>
</ol>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_7C4C6D28.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6A03A666.png" title="image"></a></p></blockquote>
<p>Once this is done you can save the build and queue it, and all should work.</p>
<p>So the 2nd, 3rd etc. uses of a custom activity are not as bad as the first, as long as you can keep your process templates generic.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Professional Foundation Server 2010</title>
      <link>https://blog.richardfennell.net/posts/professional-foundation-server-2010/</link>
      <pubDate>Wed, 27 Apr 2011 14:15:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/professional-foundation-server-2010/</guid>
      <description>&lt;p&gt;Over the holiday I have been reading &lt;a href=&#34;http://www.amazon.co.uk/dp/B004S82RRE/ref=as_li_tf_til?tag=buitwoonmypc-21&amp;amp;camp=1406&amp;amp;creative=6394&amp;amp;linkCode=as1&amp;amp;creativeASIN=B004S82RRE&amp;amp;adid=1YGFJBQJVE523F0B4BPE&amp;amp;&#34;&gt;Professional Team Foundation Server 2010 by Ed Blankenship, Martin Woodward, Grant Holliday and Brian Keller&lt;/a&gt;; yes, I know how to have time off and have fun!&lt;/p&gt;
&lt;p&gt;So who is this book for? It is a comprehensive guide to TFS 2010, the components and their usage, but this does not mean the book is only for teams new to TFS or people planning to take certification exams. Spread throughout there are useful little titbits of information where you find yourself going ‘&lt;em&gt;I never knew that&lt;/em&gt;’ or ‘&lt;em&gt;arr.. that explains so much&lt;/em&gt;’&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the holiday I have been reading <a href="http://www.amazon.co.uk/dp/B004S82RRE/ref=as_li_tf_til?tag=buitwoonmypc-21&amp;camp=1406&amp;creative=6394&amp;linkCode=as1&amp;creativeASIN=B004S82RRE&amp;adid=1YGFJBQJVE523F0B4BPE&amp;">Professional Team Foundation Server 2010 by Ed Blankenship, Martin Woodward, Grant Holliday and Brian Keller</a>; yes, I know how to have time off and have fun!</p>
<p>So who is this book for? It is a comprehensive guide to TFS 2010, the components and their usage, but this does not mean the book is only for teams new to TFS or people planning to take certification exams. Spread throughout there are useful little titbits of information where you find yourself going ‘<em>I never knew that</em>’ or ‘<em>arr.. that explains so much</em>’</p>
<p><a href="http://www.amazon.co.uk/dp/B004S82RRE/ref=as_li_tf_til?tag=buitwoonmypc-21&amp;camp=1406&amp;creative=6394&amp;linkCode=as1&amp;creativeASIN=B004S82RRE&amp;adid=1YGFJBQJVE523F0B4BPE&amp;"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_22BF793D.png" title="image"></a></p>
<p>So I would suggest it is well worth a look for anyone who is working, or planning to work, with TFS.</p>
<p>It is even available as a Kindle edition; how times change, it used to be only novels for the Kindle!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Added a reading list to my blog</title>
      <link>https://blog.richardfennell.net/posts/added-a-reading-list-to-my-blog/</link>
      <pubDate>Wed, 27 Apr 2011 10:01:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/added-a-reading-list-to-my-blog/</guid>
      <description>&lt;p&gt;When at events, I am always being asked for details of books I have recommended. So I have added a list of books I have found useful to this blog; it can be found at &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/pages/reading-list.aspx&#34; title=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/pages/reading-list.aspx&#34;&gt;http://blogs.blackmarble.co.uk/blogs/rfennell/pages/reading-list.aspx&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Thus far I have added the ones I have been looking at recently, but will add more as I go along.&lt;/p&gt;
&lt;p&gt;Hope you find them as useful as I have&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am always being asked when at events for details of books I have recommended. So I have added a list of books I have found useful to this blog, it can be found at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/pages/reading-list.aspx" title="http://blogs.blackmarble.co.uk/blogs/rfennell/pages/reading-list.aspx">http://blogs.blackmarble.co.uk/blogs/rfennell/pages/reading-list.aspx</a>.</p>
<p>Thus far I have added the ones I have been looking at recently, but will add more as I go along.</p>
<p>Hope you find them as useful as I have</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to use the TFS 2010 Community StyleCop Build Activity</title>
      <link>https://blog.richardfennell.net/posts/how-to-use-the-tfs-2010-community-stylecop-build-activity/</link>
      <pubDate>Fri, 22 Apr 2011 14:04:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-use-the-tfs-2010-community-stylecop-build-activity/</guid>
      <description>&lt;p&gt;[Updated 27th April 2011 Also see &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/27/how-to-use-the-tfs-2010-community-stylecop-build-actvity-addendum.aspx&#34;&gt;How to use the TFS 2010 Community StyleCop Build Actvity (Addendum)&lt;/a&gt;]&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/&#34;&gt;Codeplex Community TFS Build Extensions&lt;/a&gt; contains a &lt;a href=&#34;http://stylecop.codeplex.com/&#34;&gt;StyleCop&lt;/a&gt; activity, but the way to use it is less than obvious. This is not helped by the complexity in using any custom activity within TFS 2010 builds. In this post I will try to show how to get going with the StyleCop activity, which might shed some light on using other custom assemblies.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>[Updated 27th April 2011: also see <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/04/27/how-to-use-the-tfs-2010-community-stylecop-build-actvity-addendum.aspx">How to use the TFS 2010 Community StyleCop Build Activity (Addendum)</a>]</p>
<p>The <a href="http://tfsbuildextensions.codeplex.com/">Codeplex Community TFS Build Extensions</a> contains a <a href="http://stylecop.codeplex.com/">StyleCop</a> activity, but the way to use it is less than obvious. This is not helped by the complexity in using any custom activity within TFS 2010 builds. In this post I will try to show how to get going with the StyleCop activity, which might shed some light on using other custom assemblies.</p>
<h3 id="get-the-files">Get the files</h3>
<ol>
<li>Download and install <a href="http://stylecop.codeplex.com/">StyleCop</a> on your development PC. The alpha 1.0.0.3 custom activity is built against StyleCop 4.4.0.14.</li>
<li>Download and unzip the <a href="http://tfsbuildextensions.codeplex.com/">Community TFS Build Extensions</a> to a directory on your development PC.</li>
</ol>
<h3 id="get-the-files-onto-the-build-box">Get the files onto the build box</h3>
<p>The assemblies that contain the custom activity and StyleCop need to be placed under source control so they are available to the build controller and agent(s).</p>
<ol>
<li>In VS2010 open <strong>Source Control Explorer</strong> select your Team Project and map the <strong>BuildProcessTemplates</strong> folder to a location on your local disk.</li>
<li>Create a new folder under the <strong>BuildProcessTemplates</strong> called <strong>Custom Assemblies</strong></li>
<li>Into this new folder copy all the assemblies from the unzipped TFS Build Extensions and from the StyleCop folder (found in Program Files)</li>
<li>From within <strong>Source Control Explorer</strong> add these new files to the new <strong>Custom Assemblies</strong> folder and check the files into TFS.</li>
<li>Open <strong>Team Explorer</strong>, right-click on <strong>Builds</strong> and select <strong>Manage Build Controllers</strong></li>
<li>Select the controller to configure, and then select Properties</li>
<li>Set the <strong>Version control path to custom assemblies</strong> to the location just created under version control containing your added assemblies</li>
</ol>
<h3 id="get-the-custom-activities-into-visual-studio">Get the custom activities into Visual Studio</h3>
<p>Next we need to get the activity into Visual Studio so we can add it to the build process</p>
<ol>
<li>Open Visual Studio 2010</li>
<li>Create a new Class Library project; this new project is only going to be used as a container for editing the build process template.</li>
<li>Delete the <strong>Class1.cs</strong> file</li>
<li>In <strong>Solution Explorer</strong> right-click the project and select <strong>Properties</strong>; make sure the new project does not build in any configuration.</li>
</ol>
<p>Now we have a project to work with, we need to get a process template into it; this can be done by branching the process template into the project or by adding it as a link. In this example I used a link to one of the standard templates in the <strong>BuildProcessTemplates</strong> folder, for simplicity, but I would normally recommend at least copying the process template within the <strong>BuildProcessTemplates</strong> folder prior to editing it.</p>
<ol>
<li>Make sure <strong>BuildProcessTemplates</strong> is mapped in your current workspace and get a local copy onto your development PC</li>
<li>In the new project select <strong>Add Existing Item</strong> and browse to the local folder mapped to <strong>BuildProcessTemplates</strong>, then select the template you wish to edit. Don’t just press the <strong>Add</strong> button; use its drop-down to select <strong>Add as link</strong>.</li>
<li>On the added file set the <strong>Build Action</strong> property to <strong>None</strong></li>
<li>Select <strong>Add Reference</strong> and add references to all the assemblies from the unzipped <strong>TFS Build Extensions</strong></li>
<li>Open the newly added process template in VS2010; this can be slow, so wait…</li>
<li>Open the toolbox and you should see all the standard build activities</li>
<li>Right-click in the toolbox and select <strong>Choose Items</strong>, select browse and select the file <strong>TfsBuildExtensions.Activities.StyleCop.dll</strong>.</li>
<li>The StyleCop activity should now be in the toolbox</li>
</ol>
<h3 id="editing-the-process-template">Editing the Process template</h3>
<p>Well at last we can start to edit the process template, that took a while didn’t it!</p>
<p>Using the standard <strong>DefaultBuild</strong> template find the ‘Compile the Project’ sequence and add the activities as shown in the graphic below. I have chosen to add the sequence inside ‘Compile the Project’ so we can pick up the solution file’s location, but you could choose a different location if that meets your needs. The StyleCop activity does not need to run after the compile stage, as it works against the .CS files, not the compiled assemblies.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2085DA3B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_460F579C.png" title="image"></a></p>
<p>Here is the breakdown:</p>
<ol>
<li>Add a new sequence, I named it “Run StyleCop”</li>
<li>Add the following variables with a scope of the “Run StyleCop” sequence</li>
</ol>
<ul>
<li>StyleCopFiles – IEnumerable&lt;String&gt;</li>
<li>StyleCopSettingsFile – String</li>
<li>StyleCopResults – Boolean</li>
<li>StyleCopViolations – Int32</li>
</ul>
<ol start="4">
<li>Add a <strong>FindMatchingFiles</strong> activity, set the <strong>Result</strong> to <strong>StyleCopFiles</strong> and the <strong>MatchPattern</strong> to <strong>String.Format(&quot;{0}\**\*.cs&quot;, BuildDirectory)</strong>. This will recursively find all the .CS files in the project and add them to the collection.</li>
<li>Add an <strong>Assign</strong> activity, set the <strong>To</strong> property to <strong>StyleCopSettingsFile</strong> and the value to <strong>String.Format(&quot;{0}\Settings.StyleCop&quot;, localProject.Substring(0, localProject.LastIndexOf(&quot;\&quot;)))</strong>. We use the path of the .SLN file to find the root folder containing the StyleCop settings file.</li>
<li>Add a <strong>WriteBuildMessage</strong> activity, set the importance to <strong>High</strong> (so we always see the message) and the Message to <strong>String.Format(&ldquo;About to run Stylecop with {0}&rdquo;, StyleCopSettingsFile)</strong></li>
<li>Add a <strong>StyleCop</strong> activity with the following properties (these are a minimum to get it working, to see what the other options do I would suggest you look at the unit tests in the Codeplex activities source.)</li>
</ol>
<ul>
<li>SettingsFile = StyleCopSettingsFile</li>
<li>SourceFiles = StyleCopFiles.ToArray()</li>
<li>Succeeded = StyleCopResults</li>
<li>TreatViolationsErrorsAsWarnings = True – setting this is down to how you want violations to appear in the log; for this example I wanted warnings</li>
<li>ViolationCount = StyleCopViolations</li>
</ul>
<ol start="9">
<li>Add another <strong>WriteBuildMessage</strong> activity, again set the importance to <strong>High</strong> and the Message to <strong>String.Format(&ldquo;StyleCop Succeeded: {0} with {1} violations&rdquo;, StyleCopResults, StyleCopViolations)</strong></li>
<li>Save the edited process template and check it into TFS</li>
</ol>
<h3 id="running-the-build">Running the Build</h3>
<p>If you have not done so already, create a build definition using this process template and queue a new build. If it is all OK you should see a log similar to the graphic below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_55CEECA0.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7A800417.png" title="image"></a></p>
<p>Notice that the FxCop (code analysis) results appear within the main build summary (green), but the StyleCop violations (red) appear in the <strong>Other Errors and Warnings</strong> section. Unfortunately this cannot be altered; you cannot add extra sections to the main build summary. However, you could choose to fail the build if there are StyleCop violations.</p>
<p>So I hope this has made the use of the StyleCop activity more obvious, so you can bolt it into your build process and trigger all the arguments in your team as to which rules should be used.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Now I have three TFS build instances on my VM</title>
      <link>https://blog.richardfennell.net/posts/now-i-have-three-tfs-build-instances-on-my-vm/</link>
      <pubDate>Wed, 20 Apr 2011 11:10:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/now-i-have-three-tfs-build-instances-on-my-vm/</guid>
      <description>&lt;p&gt;A while ago I posted on my e&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/09/13/experiences-running-multiple-instances-of-2010-build-service-on-a-single-vm.aspx&#34;&gt;xperiences running multiple instances of 2010 build service on a single VM&lt;/a&gt;. Well a couple more experiences as now one of my VM is running 3 instances.&lt;/p&gt;
&lt;p&gt;Firstly it seems to work OK, you have to have the right build profile i.e. fairly low load but a need to support many Team Project Collections. This is not a solution for a highly loaded build environment. At Black Marble we run a TPC per client model and tend to have fair few projects on the go at any one time, so need plenty of build controllers. However, in general the builds are small, taking minutes not hours If we have a seriously long running build I would still create a dedicated build service VM.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A while ago I posted on my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/09/13/experiences-running-multiple-instances-of-2010-build-service-on-a-single-vm.aspx">experiences running multiple instances of the 2010 build service on a single VM</a>. Well, a couple more experiences, as now one of my VMs is running three instances.</p>
<p>Firstly it seems to work OK; you have to have the right build profile, i.e. fairly low load but a need to support many Team Project Collections. This is not a solution for a highly loaded build environment. At Black Marble we run a TPC per client model and tend to have a fair few projects on the go at any one time, so need plenty of build controllers. However, in general the builds are small, taking minutes not hours. If we have a seriously long-running build I would still create a dedicated build service VM.</p>
<p>So what have I learnt since the last post?</p>
<ul>
<li>Set the agent’s working directory differently for each agent on the VM. You will probably be OK if you don’t, as the $(BuildAgentId) should differ, but life is easier if you know what is where. So I use $(SystemDrive)\Build1, $(SystemDrive)\Build2 etc. as opposed to the default $(SystemDrive)\Build.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_030DAC4A.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_14EA4017.png" title="image"></a></p></blockquote>
<ul>
<li>When you create the new service instance using the Sc.exe command line, remember to make sure it starts automatically when the operating system is rebooted. Stupid mistake, but I keep making it!</li>
</ul>
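For reference, creating an extra build service instance with Sc.exe looks something like the following; the instance name and paths are illustrative, and the key part is start= auto (note the space required after each equals sign) so the instance survives a reboot:

```cmd
rem Hypothetical second build service instance - adjust names and paths to suit
sc.exe create TfsBuildServiceHost2 binPath= "C:\Program Files\Microsoft Team Foundation Server 2010\Tools\TfsBuildServiceHost.exe /NamedInstance:TfsBuildServiceHost2" DisplayName= "TFS Build Service Host 2" start= auto
```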
]]></content:encoded>
    </item>
    <item>
      <title>Release of Black Marble’s WP7 TFS Phone Explorer</title>
      <link>https://blog.richardfennell.net/posts/release-of-black-marbles-wp7-tfs-phone-explorer/</link>
      <pubDate>Tue, 19 Apr 2011 10:59:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/release-of-black-marbles-wp7-tfs-phone-explorer/</guid>
      <description>&lt;p&gt;Over the weekend we have released &lt;a href=&#34;http://social.zune.net/redirect?type=phoneApp&amp;amp;id=f82cfdd2-0763-e011-81d2-78e7d1fa76f8&#34;&gt;Black Marble’s WP7 TFS Phone client to the WP7 Marketplace&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/smallpanoramaWPhone_1136CEB1.png&#34;&gt;&lt;img alt=&#34;smallpanoramaWPhone&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/smallpanoramaWPhone_thumb_54BE5A06.png&#34; title=&#34;smallpanoramaWPhone&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This app allows the user of a WP7 phone to access their TFS server to perform common operations. They can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;See recent builds and drill into the results&lt;/li&gt;
&lt;li&gt;Queue new builds&lt;/li&gt;
&lt;li&gt;View and add work items&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This client talks to a TFS 2008 or 2010 server via a custom web service, which is available from &lt;a href=&#34;mailto:support@blackmarble.co.uk&#34;&gt;Black Marble&lt;/a&gt;. But if you just want a quick look to see what it can do, we have set up a demo web service so you can trial the application; just use the default account details stored in the application. These are &lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the weekend we have released <a href="http://social.zune.net/redirect?type=phoneApp&amp;id=f82cfdd2-0763-e011-81d2-78e7d1fa76f8">Black Marble’s WP7 TFS Phone client to the WP7 Marketplace</a>.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/smallpanoramaWPhone_1136CEB1.png"><img alt="smallpanoramaWPhone" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/smallpanoramaWPhone_thumb_54BE5A06.png" title="smallpanoramaWPhone"></a></p>
<p>This app allows the user of a WP7 phone to access their TFS server to perform common operations. They can:</p>
<ul>
<li>See recent builds and drill into the results</li>
<li>Queue new builds</li>
<li>View and add work items</li>
</ul>
<p>This client talks to a TFS 2008 or 2010 server via a custom web service, which is available from <a href="mailto:support@blackmarble.co.uk">Black Marble</a>. But if you just want a quick look to see what it can do, we have set up a demo web service so you can trial the application; just use the default account details stored in the application. These are </p>
<ul>
<li>UID demo</li>
<li>PWD demo</li>
<li>DOMAIN demo</li>
<li>URL <a href="https://tfssample.blackmarble.co.uk/tfs/tfs/tfsservice.svc">https://tfssample.blackmarble.co.uk/tfs/tfs/tfsservice.svc</a></li>
</ul>
<p>So if you have WP7 and TFS why not download the trial version and have a look at it? Just search the marketplace for Black Marble and TFS Phone Explorer or try the <a href="http://social.zune.net/redirect?type=phoneApp&amp;id=f82cfdd2-0763-e011-81d2-78e7d1fa76f8">marketplace deep link</a>. We are interested to hear your thoughts.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF30162: Task &amp;quot;UploadStructure&amp;quot; from Group &amp;quot;Classification&amp;quot; failed</title>
      <link>https://blog.richardfennell.net/posts/tf30162-task-uploadstructure-from-group-classification-failed/</link>
      <pubDate>Fri, 15 Apr 2011 16:06:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf30162-task-uploadstructure-from-group-classification-failed/</guid>
      <description>&lt;p&gt;When trying to create a new Team Project on TFS2010 from a client PC I got the following error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Event Description: TF30162: Task &amp;ldquo;UploadStructure&amp;rdquo; from Group &amp;ldquo;Classification&amp;rdquo; failed&lt;br&gt;
Exception Type: Microsoft.TeamFoundation.Client.PcwException&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Inner Exception Details:&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Exception Message: TF205029: No catalog resource type exists with the following identifier: 41c8b6db-39ec-49db-9db8-0760e836bfbe. (type CatalogResourceTypeDoesNotExistException)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;However if I did the create on the TFS console all was fine.&lt;/p&gt;
&lt;p&gt;It turned out the problem was due to the caching done by Team Explorer in Visual Studio. Exiting the copy of VS2010 on the client and reloading it fixed the problem, as the updated team process settings were then cached locally.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When trying to create a new Team Project on TFS2010 from a client PC I got the following error</p>
<p><em>Event Description: TF30162: Task &ldquo;UploadStructure&rdquo; from Group &ldquo;Classification&rdquo; failed<br>
Exception Type: Microsoft.TeamFoundation.Client.PcwException</em></p>
<p><em>Inner Exception Details:</em></p>
<p><em>Exception Message: TF205029: No catalog resource type exists with the following identifier: 41c8b6db-39ec-49db-9db8-0760e836bfbe. (type CatalogResourceTypeDoesNotExistException)</em></p>
<p>However if I did the create on the TFS console all was fine.</p>
<p>It turned out the problem was due to the caching done by Team Explorer in Visual Studio. Exiting the copy of VS2010 on the client and reloading it fixed the problem, as the updated team process settings were then cached locally.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF255115 Access Denied when starting TFSAgent Service</title>
      <link>https://blog.richardfennell.net/posts/tf255115-access-denied-when-starting-tfsagent-service/</link>
      <pubDate>Thu, 14 Apr 2011 20:31:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf255115-access-denied-when-starting-tfsagent-service/</guid>
      <description>&lt;p&gt;Whilst configuring a new TFS2010 server I got the error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;TF255115: The following service did not start: TfsJobAgent. Cannot start service TfsJobAgent on computer &amp;lsquo;&amp;lt;COMPUTERNAME&amp;gt;&amp;rsquo;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This was identical to the &lt;a href=&#34;http://social.msdn.microsoft.com/Forums/en-US/tfssetup/thread/75c370fe-7951-44e9-85be-c7d2378f7beb&#34;&gt;thread on MSDN&lt;/a&gt; and it turned out my problem was similar.&lt;/p&gt;
&lt;p&gt;I was installing TFS onto drive D:\Apps, not the usual C:\Program Files. The problem was that the &amp;lt;COMPUTERNAME&amp;gt;\Users group did not have the default Read &amp;amp; Execute, List Folder Contents and Read rights on the D:\Apps directory and its sub-directories, the rights all users have on C:\Program Files.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst configuring a new TFS2010 server I got the error</p>
<p><em>TF255115: The following service did not start: TfsJobAgent. Cannot start service TfsJobAgent on computer &lsquo;&lt;COMPUTERNAME&gt;&rsquo;.</em></p>
<p>This was identical to the <a href="http://social.msdn.microsoft.com/Forums/en-US/tfssetup/thread/75c370fe-7951-44e9-85be-c7d2378f7beb">thread on MSDN</a> and it turned out my problem was similar.</p>
<p>I was installing TFS onto drive D:\Apps, not the usual C:\Program Files. The problem was that the &lt;COMPUTERNAME&gt;\Users group did not have the default Read &amp; Execute, List Folder Contents and Read rights on the D:\Apps directory and its sub-directories, the rights all users have on C:\Program Files.</p>
<p>Once these rights were added the TFS configuration completed without issue.</p>
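If you prefer the command line to the Explorer security dialog, the equivalent grant can be made with icacls; the drive letter and group here are illustrative, and (OI)(CI) makes the grant inherit to sub-folders and files:

```cmd
rem Give Users the same read and execute defaults they get on C:\Program Files
icacls "D:\Apps" /grant "BUILTIN\Users:(OI)(CI)RX"
```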
<p>By the way, something else I noticed: you can only change the path TFS will install to if you set the check boxes to install all components (server, proxy, build etc.). However, once the path is altered you can deselect any components you don’t want, at which point the path textbox becomes read-only again, but you now have the path you wanted.</p>
]]></content:encoded>
    </item>
    <item>
      <title>PC Rebuild time – remembering how to mount a bitlockered VHD</title>
      <link>https://blog.richardfennell.net/posts/pc-rebuild-time-remembering-how-to-mount-a-bitlockered-vhd/</link>
      <pubDate>Thu, 14 Apr 2011 19:27:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pc-rebuild-time-remembering-how-to-mount-a-bitlockered-vhd/</guid>
      <description>&lt;p&gt;When your PC reaches the point that MSI cannot connect to the Install Service you know it is time to repave the PC. This is the time when you have to try to remember what you installed on the PC, your license codes and how you actually got things to work.&lt;/p&gt;
&lt;p&gt;So going through this process this week all went OK until I tried to remember how I handled bitlockered VHDs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When your PC reaches the point that MSI cannot connect to the Install Service you know it is time to repave the PC. This is the time when you have to try to remember what you installed on the PC, your license codes and how you actually got things to work.</p>
<p>So going through this process this week all went OK until I tried to remember how I handled bitlockered VHDs.</p>
<p>My PC does not have a TPM chip, but I wanted to bitlocker as much of my data as possible. The process to do this was as follows:</p>
<ul>
<li>I created an empty folder <strong>C:\Projects</strong></li>
<li>I opened Computer Manager, then Disk Management as an administrator.</li>
<li>Via the Actions menu I created a new VHD <strong>C:\VHDs\projects.vhd</strong></li>
<li>This will be mounted by default onto a drive letter; I changed this to mount it on a path (the folder created in the first step)</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_4AD1BE5B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_77B6FB69.png" title="image"></a></p></blockquote>
<ul>
<li>Then via the Disk Management tool I created a partition and then formatted the new disk.</li>
<li>I could then go onto the new disk by changing to the <strong>C:\Projects</strong> directory</li>
<li>I now needed to bitlocker the new VHD drive. This is done via Control Panel | BitLocker Drive Encryption</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_1DACABC0.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5BC5C671.png" title="image"></a></p></blockquote>
<ul>
<li>As I had no TPM chip I had to encrypt the drive with a password.</li>
<li>So when this finishes I have my <strong>C:\Projects</strong> directory, which is a mount point for the bitlockered VHD</li>
</ul>
<p>But I did not want to go into Disk Management each time I booted to attach the drive; nor did I want it automatically mounted, as that defeats the purpose: I wanted a drive that could only be accessed via a password after a reboot.</p>
<p>To get around this I added a shortcut to the <strong>projects.vhd</strong> file to my desktop. To be able to click this to attach the VHD I installed the VHD Attach utility (<a href="http://www.jmedved.com/vhdattach/" title="http://www.jmedved.com/vhdattach/">http://www.jmedved.com/vhdattach/</a>). This allowed me to right-click the VHD shortcut and attach it, at which point I am prompted to enter the BitLocker password for the VHD.</p>
<p><img loading="lazy" src="http://www.jmedved.com/content/media/vhdattach.png" title="VHD Attach screen"></p>
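If you would rather avoid a third-party tool, a saved DiskPart script can do much the same attach (file names here are illustrative); you are still prompted for the BitLocker password when you first open the volume:

```cmd
rem Contents of a hypothetical C:\VHDs\attach-projects.txt
rem Run from an administrator prompt with: diskpart /s C:\VHDs\attach-projects.txt
select vdisk file=C:\VHDs\projects.vhd
attach vdisk
```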
<p>So I now have a means to attach my drive (fairly) easily even though I have no TPM chip. I just wish I had written down how I did it before, so I did not have to work it out again.</p>
<p>It is not a perfect solution but at least my important data is bitlockered.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the BCS in Edinburgh on Agile Methods</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-bcs-in-edinburgh-on-agile-methods/</link>
      <pubDate>Thu, 14 Apr 2011 19:12:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-bcs-in-edinburgh-on-agile-methods/</guid>
      <description>&lt;p&gt;I am speaking at the BCS group in Edinburgh on the 11th of May on Agile and Lean Methods.&lt;/p&gt;
&lt;p&gt;For more details see &lt;a href=&#34;http://edinburgh.bcs.org.uk/events/2010-11/110511.htm&#34; title=&#34;http://edinburgh.bcs.org.uk/events/2010-11/110511.htm&#34;&gt;http://edinburgh.bcs.org.uk/events/2010-11/110511.htm&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at the BCS group in Edinburgh on the 11th of May on Agile and Lean Methods.</p>
<p>For more details see <a href="http://edinburgh.bcs.org.uk/events/2010-11/110511.htm" title="http://edinburgh.bcs.org.uk/events/2010-11/110511.htm">http://edinburgh.bcs.org.uk/events/2010-11/110511.htm</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>New home for my Techdays video on Lab Management from last year</title>
      <link>https://blog.richardfennell.net/posts/new-home-for-my-techdays-video-on-lab-management-from-last-year/</link>
      <pubDate>Sun, 10 Apr 2011 19:15:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-home-for-my-techdays-video-on-lab-management-from-last-year/</guid>
      <description>&lt;p&gt;Last year I presented at &lt;a href=&#34;http://www.microsoft.com/uk/techdays&#34;&gt;Microsoft UK’s Techdays&lt;/a&gt; on Visual Studio 2010 Lab Management and the session was videoed. I recently tried to refer someone to this recording and found that the Techdays site had been rebuilt for this year’s event and I could not find the video. Well, after a bit of searching, I found it on MSN at &lt;a href=&#34;http://video.msn.com/video.aspx?mkt=en-gb&amp;amp;vid=3db86a73-16e1-409e-90a0-9fc56c46ce21&#34; title=&#34;Putting some Testing into your TFS Build Process&#34;&gt;Putting some Testing into your TFS Build Process&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Last year I presented at <a href="http://www.microsoft.com/uk/techdays">Microsoft UK’s Techdays</a> on Visual Studio 2010 Lab Management and the session was videoed. I recently tried to refer someone to this recording and found that the Techdays site had been rebuilt for this year’s event and I could not find the video. Well, after a bit of searching, I found it on MSN at <a href="http://video.msn.com/video.aspx?mkt=en-gb&amp;vid=3db86a73-16e1-409e-90a0-9fc56c46ce21" title="Putting some Testing into your TFS Build Process">Putting some Testing into your TFS Build Process</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Doing a webcast for Microsoft next week ‘Lab Management and Automated Build’</title>
      <link>https://blog.richardfennell.net/posts/doing-a-webcast-for-microsoft-next-week-lab-management-and-automated-build/</link>
      <pubDate>Wed, 06 Apr 2011 15:47:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/doing-a-webcast-for-microsoft-next-week-lab-management-and-automated-build/</guid>
      <description>&lt;p&gt;Next Tuesday lunchtime (12th Apr 2011 1pm) I will be doing a webcast on on ‘Lab Management and Automated Build’. I will be looking at the build options in TFS and also how they can be extended with Lab Management.&lt;/p&gt;
&lt;p&gt;The event is free to attend you just need to register at &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&#34; title=&#34;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&#34;&gt;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Next Tuesday lunchtime (12th Apr 2011 1pm) I will be doing a webcast on ‘Lab Management and Automated Build’. I will be looking at the build options in TFS and also how they can be extended with Lab Management.</p>
<p>The event is free to attend you just need to register at <a href="http://www.microsoft.com/visualstudio/en-gb/visual-studio-events" title="http://www.microsoft.com/visualstudio/en-gb/visual-studio-events">http://www.microsoft.com/visualstudio/en-gb/visual-studio-events</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Why can’t I see my TFS reports?</title>
      <link>https://blog.richardfennell.net/posts/why-cant-i-see-my-tfs-reports/</link>
      <pubDate>Mon, 04 Apr 2011 12:32:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-cant-i-see-my-tfs-reports/</guid>
      <description>&lt;p&gt;Whilst recently installing a TFS 2010 system onto a single box server, which was also a domain controller, I had a problem where, though everything seemed in order, I could not view my Reporting Services based reports in either SharePoint or directly from the &lt;a href=&#34;http://myserver/reports&#34;&gt;http://myserver/reports&lt;/a&gt; interface.&lt;/p&gt;
&lt;p&gt;During the installation I had verified I had the correct password for my &lt;strong&gt;[domain]\tfsreports&lt;/strong&gt; account used to run the reports. If I went to the &lt;a href=&#34;http://myserver/reports&#34;&gt;http://myserver/reports&lt;/a&gt; page, edited the &lt;strong&gt;TFS2010ReportsDs&lt;/strong&gt; or &lt;strong&gt;TFS2010OlapReportDS&lt;/strong&gt; and tried to test the &lt;strong&gt;[domain]\tfsreports&lt;/strong&gt; login, it failed. However, if I swapped to &lt;strong&gt;[domain]\administrator&lt;/strong&gt; all was fine and my reports worked.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst recently installing a TFS 2010 system onto a single box server, which was also a domain controller, I had a problem where, though everything seemed in order, I could not view my Reporting Services based reports in either SharePoint or directly from the <a href="http://myserver/reports">http://myserver/reports</a> interface.</p>
<p>During the installation I had verified I had the correct password for my <strong>[domain]\tfsreports</strong> account used to run the reports. If I went to the <a href="http://myserver/reports">http://myserver/reports</a> page, edited the <strong>TFS2010ReportsDs</strong> or <strong>TFS2010OlapReportDS</strong> and tried to test the <strong>[domain]\tfsreports</strong> login, it failed. However, if I swapped to <strong>[domain]\administrator</strong> all was fine and my reports worked.</p>
<p>So what was the issue?</p>
<p>The key point is that the server, as it is a PDC, would only allow limited accounts to log in to the server console. The actual Reporting Services web services were running as a named domain account (you cannot use Network Service and the like on a PDC), but it seems that the connection by the <strong>[domain]\tfsreports</strong> account is considered the same as a login via the login screen as far as the security systems are concerned.</p>
<p>The immediate fix was to make sure the <strong>[domain]\tfsreports</strong> user was in a group listed in the “Allow log on locally” policy. To check this</p>
<ol>
<li>Run gpedit.msc</li>
<li>Expand Computer Configuration\Windows Settings\Security Settings\Local Policies</li>
<li>Click on User Rights Assignment</li>
<li>Ensure that &ldquo;Allow log on locally&rdquo; includes the required user, or that the user is in one of the listed groups</li>
</ol>
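<p>If you prefer the command line to gpedit, the policy can be exported with <strong>secedit /export /cfg policy.inf</strong> and the relevant entry inspected in the resulting file. As a rough illustration of what to look for (this sketch is my addition, and the sample policy text and account name in it are made up), a short script can pull out the accounts granted the right:</p>

```python
def logon_locally_accounts(policy_text):
    """Return the accounts/SIDs granted 'Allow log on locally'
    (SeInteractiveLogonRight) from a secedit policy export."""
    for line in policy_text.splitlines():
        if line.strip().startswith("SeInteractiveLogonRight"):
            _, _, value = line.partition("=")
            return [entry.strip() for entry in value.split(",")]
    return []  # right not present in the export

# Illustrative fragment of an exported policy file
sample = """[Privilege Rights]
SeInteractiveLogonRight = *S-1-5-32-544,*S-1-5-32-550,domain\\tfsreports
"""
print(logon_locally_accounts(sample))
```

<p>If your service account (or one of its groups) does not appear in that list, that is the setting to change.</p>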
<p>Now I am not sure this is the end of the story; I am sure I could waste loads of time finding out exactly the minimum security settings needed, but this is an adequate solution for now for me.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to waste time with Lab Management – Missing the obvious that MTM points to a Team Project</title>
      <link>https://blog.richardfennell.net/posts/how-to-waste-time-with-lab-management-missing-the-obvious-that-mtm-points-to-a-team-project/</link>
      <pubDate>Wed, 30 Mar 2011 09:56:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-waste-time-with-lab-management-missing-the-obvious-that-mtm-points-to-a-team-project/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx&#34;&gt;posted a while ago about common confusion I had seen with Lab Management&lt;/a&gt;. Well, I have recently managed to get myself completely confused whilst working with Lab Management. It turns out the issue was so obvious I managed to miss it for hours, but as usual I learnt a good deal whilst trying to troubleshoot my stupidity.&lt;/p&gt;
&lt;p&gt;I have a Lab Management system linked up to a Team Project Collection (TPC). In this TPC there is a Team Project used for SharePoint development, and on my Lab Management system I have an environment to allow testing of the SharePoint products. I set up a new Team Project in this TPC. In the new project I created a new MVC solution and created an automated build, which all worked fine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx">posted a while ago about common confusion I had seen with Lab Management</a>. Well, I have recently managed to get myself completely confused whilst working with Lab Management. It turns out the issue was so obvious I managed to miss it for hours, but as usual I learnt a good deal whilst trying to troubleshoot my stupidity.</p>
<p>I have a Lab Management system linked up to a Team Project Collection (TPC). In this TPC there is a Team Project used for SharePoint development, and on my Lab Management system I have an environment to allow testing of the SharePoint products. I set up a new Team Project in this TPC. In the new project I created a new MVC solution and created an automated build, which all worked fine.</p>
<p>I wanted to deploy this MVC application to a web server in an environment in my Lab Management system. So I created a basic web server environment and deployed it onto a host in my Lab. I then tried to create a Lab Management workflow build to deploy to the newly created environment. However, the combo box to select the environment was empty when I ran the wizard.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_75624815.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_63198153.png" title="image"></a></p>
<p>I was confused: I knew I had an environment on the Lab Management system; I had just created it in Test Manager (MTM) and could attach to it in MTM or via Remote Desktop.</p>
<p>So I checked again</p>
<ul>
<li>that the TPC was correctly registered with Lab Management</li>
<li>the environment was running in MTM</li>
<li>that all the configuration for the Team Project looked OK via TFSLABCONFIG.EXE</li>
</ul>
<p>All to no avail. Then after far too long I realised I was not looking at the same things in MTM and Visual Studio.</p>
<p>When you are in the Lab Center (green border) pages of MTM there is no obvious indication of the Team Project you are using. I had forgotten this and got it into my head I was looking at all the environments and libraries for my whole TPC. <strong>THIS IS NOT THE CASE</strong>. The environments and libraries are Team Project specific and not TPC specific. I had created my new environment in my SharePoint team project not in my new MVC one.</p>
<p>To swap Team Project I needed to change to the Testing Centre (blue border) view and change the Test Plan (top right)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_151121DE.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_74F61520.png" title="image"></a></p>
<p>to get the dialog to change the Team Project.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_025C2827.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3453C8B1.png" title="image"></a></p>
<p>Once this is done you can go to the Lab Center again and you see the environments for the selected Team Project. This was where I needed to create my environment.</p>
<p>At this point you will notice that all the templates and VMs you imported into the other Team Project are not in this one. You have to reimport them from SCVMM and then create the environments in MTM for that Team Project.</p>
<p>So the technical tip here is to remember that Lab Center in MTM is Team Project specific – <strong>NOT</strong> Team Project Collection specific, but the UI does its best not to remind you of this fact, so it is easy to forget.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to expose IIS Express to external network connections and use a non-self signed certificate</title>
      <link>https://blog.richardfennell.net/posts/how-to-expose-iis-express-to-external-network-connections-and-use-a-non-self-signed-certificate/</link>
      <pubDate>Tue, 22 Mar 2011 21:02:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-expose-iis-express-to-external-network-connections-and-use-a-non-self-signed-certificate/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://weblogs.asp.net/scottgu/archive/2010/06/28/introducing-iis-express.aspx&#34;&gt;IIS Express&lt;/a&gt; is a great addition to the tools for .NET web developers; it allows a slightly cut down copy of IIS 7.5 to be run without administrative privileges on a developer’s PC. This means we can hopefully get away from the problems associated with either&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Using &lt;a href=&#34;http://en.wikipedia.org/wiki/Cassini_Web_Server&#34;&gt;Cassini&lt;/a&gt; – which is not IIS and does not do anything clever&lt;/li&gt;
&lt;li&gt;Using full IIS which means Visual Studio has to run as administrator to debug it and also causes source control issues when a project is shared between multiple developers (their IIS setup must match up)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If you install &lt;a href=&#34;http://support.microsoft.com/kb/983509&#34;&gt;Visual Studio 2010 SP1&lt;/a&gt; and IIS Express you now get a new option: to use IIS Express as your web server. This, via a few clicks, can be configured for SSL and should address 90%+ of the needs of most developers. Once a project is set to use IIS Express the key properties are set via the VS Properties window&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://weblogs.asp.net/scottgu/archive/2010/06/28/introducing-iis-express.aspx">IIS Express</a> is a great addition to the tools for .NET web developers; it allows a slightly cut down copy of IIS 7.5 to be run without administrative privileges on a developer’s PC. This means we can hopefully get away from the problems associated with either</p>
<ol>
<li>Using <a href="http://en.wikipedia.org/wiki/Cassini_Web_Server">Cassini</a> – which is not IIS and does not do anything clever</li>
<li>Using full IIS which means Visual Studio has to run as administrator to debug it and also causes source control issues when a project is shared between multiple developers (their IIS setup must match up)</li>
</ol>
<p>If you install <a href="http://support.microsoft.com/kb/983509">Visual Studio 2010 SP1</a> and IIS Express you now get a new option: to use IIS Express as your web server. This, via a few clicks, can be configured for SSL and should address 90%+ of the needs of most developers. Once a project is set to use IIS Express the key properties are set via the VS Properties window</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_682F6303.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0705D6E2.png" title="image"></a></p>
<p>However, you are not limited to editing only these options; to do more you need to use some command line tools.</p>
<p><strong>What I wanted to do</strong></p>
<p>My problem was that I wanted to test a Windows Phone 7 application that used a WCF web service. If I switched the WCF project to use IIS Express the WP7 application could not access the web server as, for security reasons, IIS Express is by default limited to only responding to requests from the localhost. The WP7 application is on another device (or at least a VM for development), so its requests are not handled.</p>
<p>So we need to enable remote access to the server. <a href="http://weblogs.asp.net/scottgu/archive/2010/06/28/introducing-iis-express.aspx">ScottGu said this can be done in his post about IIS Express</a>, but not how to do it.</p>
<p>Also I wanted to test my WP7 application using HTTPS. This raised a second issue. By default IIS Express uses a self-signed certificate. When this is used the WP7 WCF client throws an error as it cannot validate the certificate. I needed to swap the certificate for a ‘real one’. Again ScottGu’s post says it can be done but not how.</p>
<p><strong>How I got it working</strong></p>
<p><strong>NOTE: I think this process covers all the steps, but it took me a while to get it going, so there is a chance I might have missed a step. Please treat this as an outline guide and not the definitive way to get it working. If I find errors I will update the post and highlight them.</strong></p>
<p><strong>Step 1 – Get the right Certificate onto the Development PC</strong></p>
<p>I had already installed the wildcard SSL certificate we have onto my development PC from its .PFX file.</p>
<p>To confirm this was OK I loaded MMC (running as a local administrator), loaded the certificates snap-in, browsed to Personal | Certificates and checked it was there. I then clicked on the certificate and made a note of its thumbprint, as you will need it later</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_6FF755A5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_67274D59.png" title="image"></a></p>
<p><strong>Step 2 – List the certificates you have installed for IIS</strong></p>
<p>I opened a command prompt as administrator and ran the command</p>
<blockquote>
<p>netsh http show sslcert</p></blockquote>
<p>The output will stream past, so you probably want to pipe it into a file to look at. You should find an entry for the self-signed certificate that Visual Studio created when you set up IIS Express (on port 44300 in my case), something like</p>
<blockquote>
<p>IP:port                 : 0.0.0.0:44300<br>
    Certificate Hash        : c3a234250edfb2adcd2b501cf4c44d0281e29476<br>
    Application ID          : {214124cd-d05b-4309-9af9-9caa44b2b74a}<br>
    Certificate Store Name  : MY<br>
    Verify Client Certificate Revocation    : Enabled<br>
    Verify Revocation Using Cached Client Certificate Only    : Disabled<br>
    Usage Check    : Enabled<br>
    Revocation Freshness Time : 0<br>
    URL Retrieval Timeout   : 0<br>
    Ctl Identifier          : (null)<br>
    Ctl Store Name          : (null)<br>
    DS Mapper Usage    : Disabled<br>
    Negotiate Client Certificate    : Disabled</p></blockquote>
<p>We need to remove this self-signed certificate so we can re-assign a real one to this port. To do this use the command</p>
<blockquote>
<p>netsh http delete sslcert ipport=0.0.0.0:44300</p></blockquote>
<p>then add the new certificate association using the command</p>
<blockquote>
<p>netsh http add sslcert ipport=0.0.0.0:44300 certstorename=MY certhash=&lt;certificate hash&gt; appid=&lt;appid&gt;</p></blockquote>
<p>&lt;certificate hash&gt; is the thumbprint of the SSL certificate found in step 1, with the spaces removed<br>
&lt;appid&gt; can be any unique GUID</p>
<p>If you have the command line right it should report that the certificate was added OK. You can then run the list command again to check it is as you want.</p>
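<p>To avoid typos when removing the spaces from the thumbprint, it can help to build the command with a small script. This is just an illustrative sketch of my own (not part of the original process); the thumbprint shown is the self-signed one from the listing above:</p>

```python
import uuid

def build_add_sslcert(thumbprint, port=44300):
    """Build the 'netsh http add sslcert' command line: strip the
    spaces MMC displays in the thumbprint and generate a fresh
    GUID for the appid parameter."""
    certhash = thumbprint.replace(" ", "").lower()
    appid = "{%s}" % uuid.uuid4()
    return ("netsh http add sslcert ipport=0.0.0.0:%d "
            "certstorename=MY certhash=%s appid=%s" % (port, certhash, appid))

# Thumbprint exactly as copied from MMC, spaces included
cmd = build_add_sslcert("c3 a2 34 25 0e df b2 ad cd 2b 50 1c f4 c4 4d 02 81 e2 94 76")
print(cmd)
```

<p>Paste the resulting line into the administrative command prompt.</p>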
<p><strong>So where are we up to…..</strong></p>
<p>So at this point we have associated a real SSL certificate with any call to port 44300 on this PC; note, any call to this port, not just those for IIS Express. If we do nothing else to configure IIS Express, let Visual Studio automatically start it, and try to load the site, it will work for HTTP, but when you try HTTPS it will error</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_79FBD103.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_21066A79.png" title="image"></a></p>
<p>If you inspect the certificate you will see it is using the one you set, but the certificate is linked to a Url, in my case *.blackmarble.co.uk so is deemed invalid when you try to use it with localhost.</p>
<p>We need to set IIS Express to respond on other addresses than localhost.</p>
<p><strong>Step 3 – Making IIS Express respond to requests from the network</strong></p>
<p>If you wish to make IIS Express respond to calls other than for localhost you have to run it as administrator; this is by design, for security. It is fair to say that from here onwards you lose some of the ease of use of the product, as it no longer ‘just works from Visual Studio’, but needs must.</p>
<p>We now need to edit the bindings of IIS Express. This could be done with the command</p>
<blockquote>
<p>c:\program files\iis express&gt;appcmd set site "SiteName" /+bindings.[protocol='https',bindingInformation='*:44300:']</p></blockquote>
<p>But I found it easier just to edit the file C:\Users\[user name]\Documents\IISExpress\config\applicationhost.config in Notepad. I altered the bindings section as follows</p>
<blockquote>
<p>&lt;bindings&gt;<br>
      &lt;binding protocol="http" bindingInformation="*:60213:" /&gt;<br>
      &lt;binding protocol="https" bindingInformation="*:44300:" /&gt;<br>
&lt;/bindings&gt;</p></blockquote>
<p>Basically I removed the localhost at the end of each binding, allowing IIS Express to bind to any Url, not just localhost.</p>
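<p>The same edit can be scripted. The sketch below is my own illustration, not from the original process (and do back up the real applicationhost.config before touching it); it rewrites each bindingInformation to drop the trailing host name, working on a made-up fragment shaped like the real file:</p>

```python
import xml.etree.ElementTree as ET

# Illustrative fragment in the shape of the applicationhost.config bindings
SAMPLE = """<site name="MyServer">
  <bindings>
    <binding protocol="http" bindingInformation="*:60213:localhost" />
    <binding protocol="https" bindingInformation="*:44300:localhost" />
  </bindings>
</site>"""

def open_bindings(xml_text):
    """Drop the trailing host name from each bindingInformation so
    IIS Express will answer on any host name, not just localhost."""
    root = ET.fromstring(xml_text)
    for binding in root.iter("binding"):
        ip, port, _host = binding.get("bindingInformation").split(":")
        binding.set("bindingInformation", "%s:%s:" % (ip, port))
    return ET.tostring(root, encoding="unicode")

print(open_bindings(SAMPLE))
```

<p>Against the real file you would parse and rewrite the whole document rather than a fragment, but the attribute edit is the same.</p>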
<p><strong>Step 4 – Running IIS Express</strong></p>
<p>You now need to start your copy of IIS Express yourself; this again has to be done from a command prompt running with administrative privileges. However, the command line parameters are identical to those used by Visual Studio (you can check via Task Manager if you wish, by showing the command line column on the Processes tab)</p>
<blockquote>
<p>"c:\Program Files (x86)\IIS Express\iisexpress.exe" /config:"c:\Documents and Settings\[username]\Documents\IISExpress\config\applicationhost.config" /site:"MyServer" /apppool:"Clr4IntegratedAppPool"</p></blockquote>
<p>When you run this you should see the IIS Express process start-up.</p>
<p><strong>So what have we ended up with?</strong></p>
<p>So we now have IIS Express running with a wildcard certificate and listening to requests from any source. As long as we use a Url valid for the SSL certificate we should be able to load an HTTPS Url and get no errors.</p>
<p>However, be warned: due to the way we have had to launch IIS Express we have lost the ability to launch and debug from Visual Studio when not running as administrator. So I am not sure I have addressed the problem I started out trying to address; I might as well just use the full version of IIS.</p>
<p>But look on the bright side I learnt something.</p>
<p>Thanks to <a href="http://www.andrewwestgarth.co.uk/Blog/default.aspx">Andy Westgarth</a> for his assistance in getting to the bottom of assigning the right certificate; I was going in circles.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A slow experience installing VS2010 SP1, but worth it in the end</title>
      <link>https://blog.richardfennell.net/posts/a-slow-experience-installing-vs2010-sp1-but-worth-it-in-the-end/</link>
      <pubDate>Sat, 19 Mar 2011 14:10:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-slow-experience-installing-vs2010-sp1-but-worth-it-in-the-end/</guid>
      <description>&lt;p&gt;I got round to installing &lt;a href=&#34;http://weblogs.asp.net/scottgu/archive/2011/03/15/visual-studio-2010-sp1.aspx&#34;&gt;Visual Studio 2010 SP1&lt;/a&gt; on my laptop last night; well, I started last night via the Web Platform Installer method. It downloaded the bits fast but sat for ages on the first step, installing the actual service pack. There was no obvious activity on the CPU or disk. In the end I gave up waiting and went to bed. I was pleased to see it was finished this morning.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I got round to installing <a href="http://weblogs.asp.net/scottgu/archive/2011/03/15/visual-studio-2010-sp1.aspx">Visual Studio 2010 SP1</a> on my laptop last night; well, I started last night via the Web Platform Installer method. It downloaded the bits fast but sat for ages on the first step, installing the actual service pack. There was no obvious activity on the CPU or disk. In the end I gave up waiting and went to bed. I was pleased to see it was finished this morning.</p>
<p>So the tip here is be patient, applying this service pack is a job you start on your desktop before you go home, not when you come into the office.</p>
<p>So what is the biggest benefit thus far?</p>
<p>I can easily use <a href="http://weblogs.asp.net/scottgu/archive/2010/06/28/introducing-iis-express.aspx">IIS Express</a> from within Visual Studio, so no more having to run Visual Studio as administrator to use my local IIS server for development.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting the WP7 SDK onto a Windows Server 2008 TFS Build Agent</title>
      <link>https://blog.richardfennell.net/posts/getting-the-wp7-sdk-onto-a-windows-server-2008-tfs-build-agent/</link>
      <pubDate>Thu, 17 Mar 2011 09:25:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-the-wp7-sdk-onto-a-windows-server-2008-tfs-build-agent/</guid>
      <description>&lt;p&gt;If you try to create an automated TFS build of a Windows Phone 7 Silverlight application on a ‘default installed’ build agent you will see errors along the lines of&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The imported project &amp;ldquo;C:\Program Files\MSBuild\Microsoft\Silverlight for Phone\v4.0\Microsoft.Silverlight.WindowsPhone.Overrides.targets&amp;rdquo; was not found. Confirm that the path in the &lt;Import&gt; declaration is correct, and that the file exists on disk.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This is because the PC your build agent is running on does not have the &lt;a href=&#34;http://create.msdn.com/en-us/home/getting_started&#34;&gt;WP7 SDK installed&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you try to create an automated TFS build of a Windows Phone 7 Silverlight application on a ‘default installed’ build agent you will see errors along the lines of</p>
<p><em>The imported project &ldquo;C:\Program Files\MSBuild\Microsoft\Silverlight for Phone\v4.0\Microsoft.Silverlight.WindowsPhone.Overrides.targets&rdquo; was not found. Confirm that the path in the &lt;Import&gt; declaration is correct, and that the file exists on disk.</em></p>
<p>This is because the PC your build agent is running on does not have the <a href="http://create.msdn.com/en-us/home/getting_started">WP7 SDK installed</a>.</p>
<p>Simple, you would think: let&rsquo;s just install the SDK. Well, if your build box is based on any Windows Server operating system you quickly hit a problem: you get the error “Windows 7 or Windows Vista is required”</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5C1ED2E5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_42B6CFAB.png" title="image"></a></p>
<p>There is no supported route around this, but there is a hack. The fix is to follow the <strong>UNSUPPORTED</strong> process on <a href="http://blogs.msdn.com/b/astebner/archive/2010/05/02/10005980.aspx">Aaron Stebner&rsquo;s blog</a>. This edits the <strong>baseline.dat</strong> from the installation media to make, in my case, Windows Server 2008 a supported operating system. Once this was done the SDK could be installed and automated builds run.</p>
<p>As usual, as this is unsupported, it is buyer beware; try it at your own risk.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD Scotland registration is now open</title>
      <link>https://blog.richardfennell.net/posts/ddd-scotland-registration-is-now-open/</link>
      <pubDate>Mon, 14 Mar 2011 19:14:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-scotland-registration-is-now-open/</guid>
      <description>&lt;p&gt;For those who have not noticed, DDD Scotland’s registration is now open at &lt;a href=&#34;http://developerdeveloperdeveloper.com/scotland2011/Default.aspx&#34; title=&#34;http://developerdeveloperdeveloper.com/scotland2011/Default.aspx&#34;&gt;http://developerdeveloperdeveloper.com/scotland2011/Default.aspx&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Sad to say my proposed session did not get accepted, but that just means I have the chance to see more sessions myself!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For those who have not noticed, DDD Scotland’s registration is now open at <a href="http://developerdeveloperdeveloper.com/scotland2011/Default.aspx" title="http://developerdeveloperdeveloper.com/scotland2011/Default.aspx">http://developerdeveloperdeveloper.com/scotland2011/Default.aspx</a>.</p>
<p>Sad to say my proposed session did not get accepted, but that just means I have the chance to see more sessions myself!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking on VS 2010 ALM and Testing at events in Belfast and Dublin</title>
      <link>https://blog.richardfennell.net/posts/speaking-on-vs-2010-alm-and-testing-at-events-in-belfast-and-dublin/</link>
      <pubDate>Thu, 10 Mar 2011 16:20:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-on-vs-2010-alm-and-testing-at-events-in-belfast-and-dublin/</guid>
      <description>&lt;p&gt;At the end of the month I will be speaking at a series of free Microsoft events in Belfast and Dublin. There are going to be two sessions at each location&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Managing application lifecycle “From requirements to retirement” with Team Foundation Server 2010&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dublin - 31/03/2011 (AM): &lt;a href=&#34;https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480694&amp;amp;Culture=en-IE&#34;&gt;https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480694&amp;amp;Culture=en-IE&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Belfast – 01/04/2011 (AM): &lt;a href=&#34;https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480760&amp;amp;Culture=en-IE&#34;&gt;https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480760&amp;amp;Culture=en-IE&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Better testing with lower costs - Saving your teams time and resources using Visual Studio 2010 and Microsoft Test Manager&lt;/strong&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the end of the month I will be speaking at a series of free Microsoft events in Belfast and Dublin. There are going to be two sessions at each location</p>
<p><strong>Managing application lifecycle “From requirements to retirement” with Team Foundation Server 2010</strong></p>
<ul>
<li>Dublin - 31/03/2011 (AM): <a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480694&amp;Culture=en-IE">https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480694&amp;Culture=en-IE</a></li>
<li>Belfast – 01/04/2011 (AM): <a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480760&amp;Culture=en-IE">https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480760&amp;Culture=en-IE</a></li>
</ul>
<p><strong>Better testing with lower costs - Saving your teams time and resources using Visual Studio 2010 and Microsoft Test Manager</strong></p>
<ul>
<li>Dublin – 31/03/2011 (PM): <a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480704&amp;Culture=en-IE">https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480704&amp;Culture=en-IE</a></li>
<li>Belfast – 01/04/2011 (PM): <a href="https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480761&amp;Culture=en-IE">https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032480761&amp;Culture=en-IE</a></li>
</ul>
<p>So if you are in the area why not pop by?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Install order guidance for VS2010 SP1</title>
      <link>https://blog.richardfennell.net/posts/install-order-guidance-for-vs2010-sp1/</link>
      <pubDate>Wed, 09 Mar 2011 21:54:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/install-order-guidance-for-vs2010-sp1/</guid>
      <description>&lt;p&gt;Further to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/03/08/a-day-of-new-releases-and-announcements-in-visual-studio-2010-land.aspx&#34;&gt;yesterday's post&lt;/a&gt; on new bits for VS and TFS 2010, there is now some &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2011/03/09/installing-all-the-new-stuff.aspx&#34;&gt;guidance on what order to apply SP1 to servers and clients&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Further to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/03/08/a-day-of-new-releases-and-announcements-in-visual-studio-2010-land.aspx">yesterday&rsquo;s post</a> on new bits for VS and TFS 2010, there is now some <a href="http://blogs.msdn.com/b/bharry/archive/2011/03/09/installing-all-the-new-stuff.aspx">guidance on what order to apply SP1 to servers and clients</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fixed numerous issues with VS2008 using a /resetskippkgs</title>
      <link>https://blog.richardfennell.net/posts/fixed-numerous-issues-with-vs2008-using-a-resetskippkgs/</link>
      <pubDate>Wed, 09 Mar 2011 21:50:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixed-numerous-issues-with-vs2008-using-a-resetskippkgs/</guid>
      <description>&lt;p&gt;I am doing some work on VS2008 at present, and when I started VS2008, which I had not used for a while, I was plagued by errors along the lines of &amp;ldquo;The operation could not be completed&amp;rdquo;. These occurred when running major features such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Loading LINQ to SQL Designer&lt;/li&gt;
&lt;li&gt;Running the SQL 2008 Project Wizard&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The fix turned out to be resetting the package skip loading flag. It seems a number of add-ins were not being loaded on startup. This command is run from the &amp;ldquo;Visual Studio 2008 Command Prompt&amp;rdquo; by typing&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am doing some work on VS2008 at present, and when I started VS2008, which I had not used for a while, I was plagued by errors along the lines of &ldquo;The operation could not be completed&rdquo;. These occurred when running major features such as:</p>
<ul>
<li>Loading LINQ to SQL Designer</li>
<li>Running the SQL 2008 Project Wizard</li>
</ul>
<p>The fix turned out to be resetting the package skip loading flag. It seems a number of add-ins were not being loaded on startup. This command is run from the &ldquo;Visual Studio 2008 Command Prompt&rdquo; by typing</p>
<blockquote>
<p>devenv /resetskippkgs</p></blockquote>
<p>I also had problems trying to run tests using the standard Microsoft test tools within Visual Studio. I got the error “Exception has been thrown by the target of an invocation”, but running tests using TestDriven.NET worked fine. This problem was not fixed with the /resetskippkgs. However, it turns out to be a known issue with VS2008 SP1 and TFS, <a href="http://support.microsoft.com/kb/980216">KB980216</a>. The quick fix was to make sure I was connected to a TFS server; once this was done the tests could be run. There is a hotfix, but I think I can live without it for now.</p>
<p>So the moral is, even if you don’t use an IDE every day you can still break it with all the patching you do around it for other IDEs and underlying frameworks. A reset to defaults can often be just the kick it needs to get it working.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A day of new releases and announcements in Visual Studio 2010 land</title>
      <link>https://blog.richardfennell.net/posts/a-day-of-new-releases-and-announcements-in-visual-studio-2010-land/</link>
      <pubDate>Tue, 08 Mar 2011 17:20:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-day-of-new-releases-and-announcements-in-visual-studio-2010-land/</guid>
      <description>&lt;p&gt;Today we have seen the release of the Visual Studio 2010 SP1 and the TFS-Project Server Integration Feature Pack. Both are available on the &lt;a href=&#34;http://msdn.microsoft.com/subscriptions/downloads/&#34;&gt;MSDN download site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As well as the new downloads, they have announced a change to the licensing of the Load Test Agent. This gives Visual Studio Ultimate with MSDN users the ability to do unlimited load testing. No longer do you need to purchase Load Test Packs, thus making load testing an option for more teams.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today we have seen the release of the Visual Studio 2010 SP1 and the TFS-Project Server Integration Feature Pack. Both are available on the <a href="http://msdn.microsoft.com/subscriptions/downloads/">MSDN download site</a>.</p>
<p>As well as the new downloads, they have announced a change to the licensing of the Load Test Agent. This gives Visual Studio Ultimate with MSDN users the ability to do unlimited load testing. No longer do you need to purchase Load Test Packs, thus making load testing an option for more teams.</p>
<p>For more details see Brian Harry’s blog post on <a href="http://blogs.msdn.com/b/bharry/archive/2011/03/08/vs-tfs-2010-sp1-and-tfs-project-server-integration-feature-pack-have-released.aspx">the new downloads</a> and <a href="http://blogs.msdn.com/b/bharry/archive/2011/03/08/unlimited-load-testing.aspx">load testing</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>What to do with project dependencies?</title>
      <link>https://blog.richardfennell.net/posts/what-to-do-with-project-dependencies/</link>
      <pubDate>Fri, 04 Mar 2011 01:36:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-to-do-with-project-dependencies/</guid>
      <description>&lt;p&gt;Many development teams hit the problem that they have dependencies on libraries that they do not want to have as part of their solutions. If these dependencies are open source projects then there are options using technologies like &lt;a href=&#34;http://nuget.codeplex.com/&#34;&gt;NuGet&lt;/a&gt; or &lt;a href=&#34;http://openwrap.org/&#34;&gt;OpenWrap&lt;/a&gt;. However, in many cases the dependency is on an internal project, such as the company standard logging library, which is never going to be put up into a centralised repository. So normally you end up with either:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Many development teams hit the problem that they have dependencies on libraries that they do not want to have as part of their solutions. If these dependencies are open source projects then there are options using technologies like <a href="http://nuget.codeplex.com/">NuGet</a> or <a href="http://openwrap.org/">OpenWrap</a>. However, in many cases the dependency is on an internal project, such as the company standard logging library, which is never going to be published to a centralised repository. So normally you end up with either:</p>
<ol>
<li>Adding the project for the shared assembly to the solution and rebuilding with the solution, probably via some branching model in source control to allow fixes to be merged between projects.</li>
<li>Adding the assembly from a known location (maybe under source control) that the team responsible for the shared library publishes the current version to.</li>
</ol>
<p>Both solutions can work, but they both have their pros and cons.</p>
<p>A different and interesting approach has been proposed in Sven Hubert’s post <a href="http://www.tfsblog.de/2010/11/11/extended-dependency-management-with-team-foundation-server/">“Extended Dependency Management with Team Foundation Server”</a> on <a href="http://www.TFSBlog.de">http://www.TFSBlog.de</a>. He suggests using a custom MSBuild task and .Targets files that allow you to cause an external project to be rebuilt from source control (addressing option 1 without the need to add the actual VS project to the solution) or to pick the built assemblies from a given Team Build (addressing option 2).</p>
<p>If this problem is one you have come across, this post makes for interesting reading.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The March 2011 version of the TFS Power Tools have been released</title>
      <link>https://blog.richardfennell.net/posts/the-march-2011-version-of-the-tfs-power-tools-have-been-released/</link>
      <pubDate>Fri, 04 Mar 2011 00:37:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-march-2011-version-of-the-tfs-power-tools-have-been-released/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f&#34;&gt;March 2011 version of the TFS Power Tools&lt;/a&gt; has been released. There are plenty of fixes and enhancements, especially to the TFS backup tool.&lt;/p&gt;
&lt;p&gt;For more details have a &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2011/03/03/mar-11-team-foundation-server-power-tools-are-available.aspx&#34;&gt;look at Brian Harry’s blog&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://visualstudiogallery.msdn.microsoft.com/c255a1e4-04ba-4f68-8f4e-cd473d6b971f">March 2011 version of the TFS Power Tools</a> has been released. There are plenty of fixes and enhancements, especially to the TFS backup tool.</p>
<p>For more details have a <a href="http://blogs.msdn.com/b/bharry/archive/2011/03/03/mar-11-team-foundation-server-power-tools-are-available.aspx">look at Brian Harry’s blog</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Mocking out calls in unit tests to a TFS Server using Typemock</title>
      <link>https://blog.richardfennell.net/posts/mocking-out-calls-in-unit-tests-to-a-tfs-server-using-typemock/</link>
      <pubDate>Fri, 04 Mar 2011 00:06:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mocking-out-calls-in-unit-tests-to-a-tfs-server-using-typemock/</guid>
      <description>&lt;p&gt;If you are developing a custom application using the TFS API then there is a good chance you will want to mock out the calls to your TFS server to enable better testing of the business logic in your application. The architecture of the TFS API does not lend itself to mocking using the standard means provided in most auto-mocking frameworks, i.e. there is not an interface for all the objects you care about. However, with &lt;a href=&#34;http://www.typemock.com&#34;&gt;Typemock Isolator&lt;/a&gt; you can fake the classes required, as Isolator can fake an instance of virtually any class.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are developing a custom application using the TFS API then there is a good chance you will want to mock out the calls to your TFS server to enable better testing of the business logic in your application. The architecture of the TFS API does not lend itself to mocking using the standard means provided in most auto-mocking frameworks, i.e. there is not an interface for all the objects you care about. However, with <a href="http://www.typemock.com">Typemock Isolator</a> you can fake the classes required, as Isolator can fake an instance of virtually any class.</p>
<p>So say we wanted to write a simple build monitor application for the TFS Team Build system. We need to connect to a TFS server, get a list of historic builds, then select the last successful (or partially successful) one. So our business logic method is as follows:</p>
<pre tabindex="0"><code>/// &lt;summary&gt;
</code></pre><p>///  Gets the last successful build</p>
<pre tabindex="0"><code>/// &lt;/summary&gt;
</code></pre><p>public static IBuildDetail GetLastNonFailingBuildDetails(string url, string projectName, string buildName)</p>
<pre tabindex="0"><code>{
</code></pre><pre><code>using (TeamFoundationServer tfs = new TeamFoundationServer(url))
</code></pre>
<pre tabindex="0"><code>    {
</code></pre><pre><code>    IBuildServer buildServer = (IBuildServer)tfs.GetService(typeof(IBuildServer));
</code></pre>
<pre tabindex="0"><code>        return buildServer.QueryBuilds(projectName, buildName).Last(b =&gt; b.Status == BuildStatus.Succeeded || b.Status == BuildStatus.PartiallySucceeded);
</code></pre><pre><code>}
</code></pre>
<pre tabindex="0"><code>}
```

To test this, you would usually need a TFS server, with a set of historic build data already on it, but with Typemock you can avoid this requirement. OK we have to write bit of supporting code, but most of it would be common to a suite of tests, so the effort will not be too high overall and by doing it you get a test that can be run as part of the build process.

To be able to unit test our business logic (the last line of code in reality in this sample) we need to mock the call to the TeamFoundationServer (which is usually the blocking point for most mocking frameworks) and then mock the call to get the IBuildServer and return a set of data (which is usually possible with mocking frameworks).

Using Typemock we can get around these problems, the comments for each step are inline with the code.

```
\[TestClass\]
</code></pre><p>public class TFSTests</p>
<pre tabindex="0"><code>{
</code></pre><pre><code>\[TestMethod\]
</code></pre>
<pre tabindex="0"><code>    public void The\_last\_completed\_and\_non\_failed\_build\_can\_be\_found()
</code></pre><pre><code>{
</code></pre>
<pre tabindex="0"><code>        // Arrange
</code></pre><pre><code>    // Create the fake TFS server
</code></pre>
<pre tabindex="0"><code>        var fakeTfsServer = Isolate.Fake.Instance&lt;TeamFoundationServer&gt;();
</code></pre><pre><code>    // Swap it in the next time the constructor is run
</code></pre>
<pre tabindex="0"><code>        Isolate.Swap.NextInstance&lt;TeamFoundationServer&gt;().With(fakeTfsServer);
```

```
        // Create a fake build server instance
</code></pre><pre><code>    var fakeBuildServer = Isolate.Fake.Instance&lt;IBuildServer&gt;();
</code></pre>
<pre tabindex="0"><code>        // Set the behaviour on the TFS server to return the build server
</code></pre><pre><code>    Isolate.WhenCalled(() =&gt; fakeTfsServer.GetService(typeof(IBuildServer))).WillReturn(fakeBuildServer);
</code></pre>
<pre tabindex="0"><code></code></pre><pre><code>    // Create some test data for the build server to return
</code></pre>
<pre tabindex="0"><code>        var fakeBuildDetails = CreateResultSet(new List&lt;BuildTestData&gt;() {
</code></pre><pre><code>          new BuildTestData() {BuildName =&quot;Build1&quot;, BuildStatus = BuildStatus.Failed},
</code></pre>
<pre tabindex="0"><code>              new BuildTestData() {BuildName =&#34;Build2&#34;, BuildStatus = BuildStatus.PartiallySucceeded},
</code></pre><pre><code>          new BuildTestData() {BuildName =&quot;Build3&quot;, BuildStatus = BuildStatus.Failed},
</code></pre>
<pre tabindex="0"><code>              new BuildTestData() {BuildName =&#34;Build4&#34;, BuildStatus = BuildStatus.Succeeded},
</code></pre><pre><code>          new BuildTestData() {BuildName =&quot;Build5&quot;, BuildStatus = BuildStatus.PartiallySucceeded},
</code></pre>
<pre tabindex="0"><code>              new BuildTestData() {BuildName =&#34;Build6&#34;, BuildStatus = BuildStatus.Failed}
</code></pre><pre><code>    });
</code></pre>
<pre tabindex="0"><code>        // Set the behaviour on the build server to return the test data, the nulls mean we don’t care about parameters passed
</code></pre><pre><code>    Isolate.WhenCalled(() =&gt; fakeBuildServer.QueryBuilds(null, null)).WillReturn(fakeBuildDetails);
</code></pre>
<pre tabindex="0"><code></code></pre><pre><code>    // Act
</code></pre>
<pre tabindex="0"><code>        // Call the method we want to test, as we are using a fake server the parameters are actually ignored
</code></pre><pre><code>    var actual = TFSMocking.BuildDetails.GetLastNonFailingBuildDetails(&quot;http://FakeURL:8080/tfs&quot;, &quot;FakeTeamProject&quot;, &quot;FakeBuildName&quot;);
</code></pre>
<pre tabindex="0"><code></code></pre><pre><code>    // Assert
</code></pre>
<pre tabindex="0"><code>        Assert.AreEqual(&#34;Build5&#34;, actual.BuildNumber);
</code></pre><pre><code>}
</code></pre>
<pre tabindex="0"><code></code></pre><pre><code>/// &lt;summary&gt;
</code></pre>
<pre tabindex="0"><code>    /// A helper method to hide the Typemock code used to create each build results set
</code></pre><pre><code>/// &lt;/summary&gt;
</code></pre>
<pre tabindex="0"><code>    /// &lt;param name=&#34;builds&#34;&gt;The parameters to populate into the build results&lt;/param&gt;
</code></pre><pre><code>/// &lt;returns&gt;A set of build results&lt;/returns&gt;
</code></pre>
<pre tabindex="0"><code>    private IBuildDetail\[\] CreateResultSet(List&lt;BuildTestData&gt; builds)
</code></pre><pre><code>{
</code></pre>
<pre tabindex="0"><code>        var fakeBuilds = new List&lt;IBuildDetail&gt;();
</code></pre><pre><code>    foreach (var build in builds)
</code></pre>
<pre tabindex="0"><code>        {
</code></pre><pre><code>        // Create a fake build result instance
</code></pre>
<pre tabindex="0"><code>            var fakeBuildDetails = Isolate.Fake.Instance&lt;IBuildDetail&gt;();
</code></pre><pre><code>        // Set the properties, in this sample we only set a couple of properties, but this can be extended
</code></pre>
<pre tabindex="0"><code>            fakeBuildDetails.BuildNumber = build.BuildName;
</code></pre><pre><code>        fakeBuildDetails.Status = build.BuildStatus;
</code></pre>
<pre tabindex="0"><code>            fakeBuilds.Add(fakeBuildDetails);
</code></pre><pre><code>    }
</code></pre>
<pre tabindex="0"><code>        return fakeBuilds.ToArray();
</code></pre><pre><code>}
</code></pre>
<pre tabindex="0"><code>}
```

```
/// &lt;summary&gt;
</code></pre><p>/// A holding class for the build data we are interested in faking</p>
<pre tabindex="0"><code>/// &lt;/summary&gt;
</code></pre><p>public class BuildTestData</p>
<pre tabindex="0"><code>{
</code></pre><pre><code>public string BuildName {get;set;}
</code></pre>
<pre tabindex="0"><code>    public BuildStatus BuildStatus {get;set;}
</code></pre><p>}</p>
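The rule the test pins down is simply “take the last build whose status is Succeeded or PartiallySucceeded”. As a language-neutral sanity check against the fake data above, here is the same selection rule sketched in Python (the function name and dictionary shape are illustrative, not part of the TFS API):

```python
# Model of the selection rule in GetLastNonFailingBuildDetails.
# LINQ's Last(predicate) returns the final matching element and throws
# if nothing matches; this sketch raises ValueError in that case.
def last_non_failing(builds):
    matches = [b for b in builds
               if b["status"] in ("Succeeded", "PartiallySucceeded")]
    if not matches:
        raise ValueError("no completed non-failing build found")
    return matches[-1]

# The same six-build history the Typemock test feeds to QueryBuilds.
history = [
    {"name": "Build1", "status": "Failed"},
    {"name": "Build2", "status": "PartiallySucceeded"},
    {"name": "Build3", "status": "Failed"},
    {"name": "Build4", "status": "Succeeded"},
    {"name": "Build5", "status": "PartiallySucceeded"},
    {"name": "Build6", "status": "Failed"},
]

print(last_non_failing(history)["name"])  # Build5
```

Note that, like LINQ’s Last with a predicate, this fails loudly when every build in the history has failed; a production build monitor would probably want LastOrDefault-style handling instead.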
<p>This sample is obviously fairly simple, but not that unrealistic; I have certainly written simple logic like this for build status applications. You could of course use a different architecture to make the business logic a bit more testable, but for such a basic requirement it is very tempting to keep it simple.</p>
<p>What I hope this post shows is that there is a way to test this type of logic without the need for a TFS server that has a suitable set of pre-created data, and that the basic technique can be extended as much as is required to provide a mocked framework to allow unit testing of more complicated business logic.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Alpha release of TFS 2010 Build Extensions</title>
      <link>https://blog.richardfennell.net/posts/alpha-release-of-tfs-2010-build-extensions/</link>
      <pubDate>Mon, 28 Feb 2011 05:12:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alpha-release-of-tfs-2010-build-extensions/</guid>
      <description>&lt;p&gt;Back in September I asked the question &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/09/03/does-anyone-need-a-vs2010-custom-build-activity-for-stylecop.aspx&#34;&gt;Does anyone need a VS2010 Custom Build Activity for StyleCop?&lt;/a&gt; and a good few people said yes and asked me when the activity would be released.&lt;/p&gt;
&lt;p&gt;Well, I had forgotten to say that the &lt;a href=&#34;http://tfsbuildextensions.codeplex.com/&#34;&gt;Codeplex TFS Build Extensions project&lt;/a&gt;, which the activity code got included into, has made its first public alpha release, i.e. a release that means you don’t have to download the source and build it yourself. There is now a downloadable ZIP with just the built assemblies.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Back in September I asked the question <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/09/03/does-anyone-need-a-vs2010-custom-build-activity-for-stylecop.aspx">Does anyone need a VS2010 Custom Build Activity for StyleCop?</a> and a good few people said yes and asked me when the activity would be released.</p>
<p>Well, I had forgotten to say that the <a href="http://tfsbuildextensions.codeplex.com/">Codeplex TFS Build Extensions project</a>, which the activity code got included into, has made its first public alpha release, i.e. a release that means you don’t have to download the source and build it yourself. There is now a downloadable ZIP with just the built assemblies.</p>
<p>But the StyleCop activity is not the only one in the release; the full list is:</p>
<ul>
<li>StyleCop</li>
<li>Email</li>
<li>Twitter</li>
<li>Zip</li>
<li>File Attributes</li>
<li>DateTime</li>
<li>Guid</li>
<li>Run PowerShell Script</li>
<li>Run SQL Command</li>
<li>Numerous TFS Build management activities</li>
<li>Hyper-V Management</li>
<li>Virtual PC Management</li>
<li>IIS Website Management</li>
</ul>
<p>So if you think any of these would be useful in your TFS 2010 Team Build, please download it, have a look, and feed back any issues you find.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF30040 error when attaching a team project collection</title>
      <link>https://blog.richardfennell.net/posts/tf30040-error-when-attaching-a-team-project-collection/</link>
      <pubDate>Sat, 19 Feb 2011 14:17:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf30040-error-when-attaching-a-team-project-collection/</guid>
      <description>&lt;p&gt;Whilst doing some tidying on a multi-server TFS 2010 system I needed to move a team project collection from one Application Tier (AT) to another. Both ATs (which were not a load balanced pair) shared the same SQL server data tier (DT). It should have been easy using the TFS Administration Console.&lt;/p&gt;
&lt;p&gt;I backed up the TPC database using SQL Management tools; the TPC was then detached without any issue from the first AT. I then backed up the SQL DB again in the detached state.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst doing some tidying on a multi-server TFS 2010 system I needed to move a team project collection from one Application Tier (AT) to another. Both ATs (which were not a load balanced pair) shared the same SQL server data tier (DT). It should have been easy using the TFS Administration Console.</p>
<p>I backed up the TPC database using SQL Management tools; the TPC was then detached without any issue from the first AT. I then backed up the SQL DB again in the detached state.</p>
<p>I then tried to attach the TPC on the second AT. I entered the SQL DT instance name to get a list of TPCs available for attachment, and I got the error</p>
<blockquote>
<p><strong>TF30040: The database is not correctly configured.</strong><br>
Contact your Team Foundation Server administrator.</p></blockquote>
<p>So I went back to the original AT, tried a re-attach, and got the same error message. Strange: I had not changed accounts, yet the TFSSetup account in use had had enough rights to detach the collection but could not now list one to be attached!</p>
<p>A quick chat with the DBA found the problem: the TFSSetup account in use on both ATs had had its rights trimmed on the SQL server since the system was installed. As soon as it was granted admin rights to the SQL server, all was fine with listing the TPCs available for attachment and actually attaching the TPC on the new server.</p>
<p>Though I did not try it, I suspect that as soon as I had the list of available TPCs in the TFS Administration Console I could have removed the extra SQL rights. The TFSService account would be doing the actual attachment, as it had done the detach; the TFSSetup account only needs to be used to list the available TPCs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft.Jet.OLEDB.4.0&#39; provider is not registered on 64bit IIS7 server</title>
      <link>https://blog.richardfennell.net/posts/microsoft-jet-oledb-4-0-provider-is-not-registered-on-64bit-iis7-server/</link>
      <pubDate>Fri, 18 Feb 2011 23:22:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-jet-oledb-4-0-provider-is-not-registered-on-64bit-iis7-server/</guid>
      <description>&lt;p&gt;When loading an ASP.NET 3.5 web application that has been compiled for &lt;strong&gt;Any CPU&lt;/strong&gt; onto an IIS7 server I got the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The &amp;lsquo;Microsoft.Jet.OLEDB.4.0&amp;rsquo; provider is not registered on the local machine.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This was because the server was running a 64bit OS and only the 32bit Access Driver was installed. The quick fix to get this running was to enable 32-bit applications in the AppPool’s advanced settings.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_11386EA6.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_0F1B6FDD.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When loading an ASP.NET 3.5 web application that has been compiled for <strong>Any CPU</strong> onto an IIS7 server I got the error</p>
<blockquote>
<p><em>The &lsquo;Microsoft.Jet.OLEDB.4.0&rsquo; provider is not registered on the local machine.</em></p></blockquote>
<p>This was because the server was running a 64bit OS and only the 32bit Access Driver was installed. The quick fix to get this running was to enable 32-bit applications in the AppPool’s advanced settings.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_11386EA6.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0F1B6FDD.png" title="image"></a></p>
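The underlying issue is a bitness mismatch: an Any CPU build runs as a 64-bit process on a 64-bit OS, and a 64-bit process cannot load a 32-bit-only in-process OLE DB provider like Jet. As an illustrative aside (not specific to IIS), you can see which bitness a process is running as from its pointer size; a quick Python sketch:

```python
import struct

# A pointer ("P") is 8 bytes in a 64-bit process and 4 bytes in a
# 32-bit one, so pointer size * 8 gives the process bitness.
bitness = struct.calcsize("P") * 8
print(bitness)  # 64 in a 64-bit process, 32 in a 32-bit one
```

Enabling 32-bit applications on the AppPool makes the worker process run as 32-bit, which is why the 32-bit Jet provider then loads.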
]]></content:encoded>
    </item>
    <item>
      <title>Another DDD opportunity</title>
      <link>https://blog.richardfennell.net/posts/another-ddd-opportunity/</link>
      <pubDate>Tue, 15 Feb 2011 11:59:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/another-ddd-opportunity/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.dddsouthwest.com&#34;&gt;DDD South West 3.0&lt;/a&gt; has opened for session proposals, got an idea for one? Why not have a go and submit a session.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.dddsouthwest.com&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://www.dddsouthwest.com/images/DDDSouthWest3BadgeSmall.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.dddsouthwest.com">DDD South West 3.0</a> has opened for session proposals, got an idea for one? Why not have a go and submit a session.</p>
<p><a href="http://www.dddsouthwest.com"><img loading="lazy" src="http://www.dddsouthwest.com/images/DDDSouthWest3BadgeSmall.png"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>My experiences getting started with writing custom adaptors for TFS Integration Platform</title>
      <link>https://blog.richardfennell.net/posts/my-experiences-getting-started-with-writing-custom-adaptors-for-tfs-integration-platform/</link>
      <pubDate>Wed, 09 Feb 2011 16:37:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-experiences-getting-started-with-writing-custom-adaptors-for-tfs-integration-platform/</guid>
      <description>&lt;p&gt;The TFS Integration Platform is an &lt;a href=&#34;http://msdn.microsoft.com/en-us/teamsystem/ee358786.aspx&#34;&gt;ALM Rangers&lt;/a&gt; project that provides an excellent set of tools to migrate or synchronise source code and/or work items between different TFS servers, or between a TFS server and third party platforms. For many people the supported release on &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/en-us/5a8d1703-7987-4081-ba2f-9d0b68b0ed3e&#34;&gt;Code Gallery&lt;/a&gt; will do all they need. However, if you need to connect to a system for which there is no adaptor, you need the &lt;a href=&#34;http://tfsintegration.codeplex.com/&#34;&gt;Codeplex version&lt;/a&gt; so you can write it yourself. To get the environment up and running, unsurprisingly, the best place to start is the &lt;a href=&#34;http://tfsintegration.codeplex.com/releases/view/35476&#34;&gt;Getting Started&lt;/a&gt; documents.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The TFS Integration Platform is an <a href="http://msdn.microsoft.com/en-us/teamsystem/ee358786.aspx">ALM Rangers</a> project that provides an excellent set of tools to migrate or synchronise source code and/or work items between different TFS servers, or between a TFS server and third party platforms. For many people the supported release on <a href="http://visualstudiogallery.msdn.microsoft.com/en-us/5a8d1703-7987-4081-ba2f-9d0b68b0ed3e">Code Gallery</a> will do all they need. However, if you need to connect to a system for which there is no adaptor, you need the <a href="http://tfsintegration.codeplex.com/">Codeplex version</a> so you can write it yourself. To get the environment up and running, unsurprisingly, the best place to start is the <a href="http://tfsintegration.codeplex.com/releases/view/35476">Getting Started</a> documents.</p>
<p>I did this: I got all the pre-requisites (or so I thought), downloaded the code and unpacked the zip. I ran <strong>extract_tfs_assemblies.bat</strong> to get local copies of the TFS API assemblies and loaded <strong>MigrationTools.sln</strong> in VS2010.</p>
<p>The first thing I noted was that I was asked to convert the solution and project files to the 2010 format, though they appeared to be in the right format to start with. I did this, but the report showed no changes. Strange!</p>
<p>On loading the solution, other than the ‘TFS offline’ dialog the instructions mention, it also reported that it could not load the <strong>InstallationCA.csproj</strong> file because</p>
<p><em>C:\Projects\ITIS\IntegrationPlatform\Setup\InstallationCA\InstallationCA.csproj(133,3): The imported project &ldquo;C:\Program Files (x86)\MSBuild\Microsoft\WiX\v3.5\Wix.CA.targets&rdquo; was not found. Confirm that the path in the &lt;Import&gt; declaration is correct, and that the file exists on disk.</em></p>
<p>The WiX directory for the current distribution is <em>C:\Program Files (x86)\MSBuild\Microsoft\WiX\v3.x</em>; a quick edit of <strong>InstallationCA.csproj</strong> in Notepad to correct the path fixed this load problem. (I wonder if that was why VS2010 thought it needed to do a solution upgrade? I could not be bothered to roll back to find out.)</p>
<p>I then tried to build the solution, and got around 150 errors, starting with the error</p>
<p><em>C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.CppBuild.targets(292,5): error MSB8009: .NET Framework 2.0/3.0/3.5 target the v90 platform toolset. Please make sure that Visual Studio 2008 is installed on the machine.</em></p>
<p>I checked and this targets file was there, and I had both VS2010 and VS2008 installed, so what could be the problem?</p>
<p>Well, it turned out that though I had VS2008 installed I had (for some strange reason lost in the mists of time) not selected the C++ components. Easy to fix, you would think: just go into Control Panel, into installed products, and add the missing features. So I tried to add C++ and got the message</p>
<p><em>A selected drive is no longer valid. Please review your installation path settings before continuing with setup</em></p>
<p>I tried inserting DVDs, mounting ISOs etc., all to no avail. I then found this <a href="http://social.msdn.microsoft.com/Forums/en-US/vssetup/thread/2f3d0378-3175-49ae-acb7-012594a1bf3c/" title="http://social.msdn.microsoft.com/Forums/en-US/vssetup/thread/2f3d0378-3175-49ae-acb7-012594a1bf3c/">forum post</a>; it turns out you have to remove VS2008 SP1, add the feature and then re-patch. Once this was done I could build the C++ projects.</p>
<p><strong>As a side note here,</strong> I went through this process a couple of times; the first time I also managed to get the TFS adaptor projects’ references a bit confused. The TFS2008 adaptor projects had ended up referencing the TFS2010 assemblies, which caused a build error due to obsolete calls. This was easily fixed by repointing the references to the copies of the assemblies that <strong>extract_tfs_assemblies.bat</strong> creates. However, if you remember to run <strong>extract_tfs_assemblies.bat</strong> before opening the solution for the first time this should not be an issue, as all the right assemblies will be on the correct hint paths.</p>
<p>So I now had 36 build errors. The next one I tackled was</p>
<p><em>Unable to copy file &ldquo;C:\Projects\ITIS\TestEnv\MigrationTestEnvironment.xml&rdquo; to &ldquo;..\..\Binaries\Debug\Test\MigrationTestEnvironment.xml&rdquo;. Could not find a part of the path &lsquo;C:\Projects\ITIS\TestEnv\MigrationTestEnvironment.xml&rsquo;. MigrationTestLibrary</em></p>
<p>This was because the file was actually missing; it is not under the Codeplex source control. This is the XML file that defines the adaptors under test; you need to create it based on the tests you wish to run, or that is my current understanding. To get around the error (as a start) I just set its ‘copy to output directory’ property to ‘do not copy’, knowing I would switch this back later.</p>
<p>I was now down to 16 errors, all related to the Subversion projects. These require, fairly obviously, some Subversion libraries I did not have on my PC. As I did not need to work with Subversion I could have chosen to just remove these projects from the solution, but I thought why not fix it all. The key again was to follow the <strong>readme.txt</strong> in the <strong>Interop.Subversion</strong> project folder: you need to download and unzip the two sets of source for Subversion and Apr.</p>
<p>I was then left with a missing reference to <strong>SharpSvn</strong>. This I downloaded from <a href="http://www.open.collab.net/files/documents/180/2861/SSvn-1.6006.1373.zip" title="http://www.open.collab.net/files/documents/180/2861/SSvn-1.6006.1373.zip">http://www.open.collab.net/files/documents/180/2861/SSvn-1.6006.1373.zip</a> and unpacked to <strong>{solution root}/binaries/external/sharpsvn</strong>, making sure the <strong>SubversionTCAdapter</strong> project reference pointed to this location.</p>
<p>Once all this was done all the projects in the solution built. So nothing too complex really, especially when you follow the instructions in the right order!</p>
<p>So to develop some custom adaptors now then……</p>
]]></content:encoded>
    </item>
    <item>
      <title>Black Marble O7 game up on Windows Phone 7 Marketplace</title>
      <link>https://blog.richardfennell.net/posts/black-marble-o7-game-up-on-windows-phone-7-marketplace/</link>
      <pubDate>Wed, 09 Feb 2011 11:30:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/black-marble-o7-game-up-on-windows-phone-7-marketplace/</guid>
      <description>&lt;p&gt;Black Marble’s first WP7 game has been published onto the Marketplace: a game of strategy. Choose to grow the number of counters you have, or to jump squares to gain territory.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/screenShot01_5BAD7887.png&#34;&gt;&lt;img alt=&#34;screenShot01&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/screenShot01_thumb_5017BB48.png&#34; title=&#34;screenShot01&#34;&gt;&lt;/a&gt; &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/splashScreenImage_020F5BD3.png&#34;&gt;&lt;img alt=&#34;splashScreenImage&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/splashScreenImage_thumb_5AD5129D.png&#34; title=&#34;splashScreenImage&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It’s free so why not give it a try, assuming you have a WP7 phone!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Black Marble’s first WP7 game has been published onto the Marketplace: a game of strategy. Choose to grow the number of counters you have, or to jump squares to gain territory.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/screenShot01_5BAD7887.png"><img alt="screenShot01" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/screenShot01_thumb_5017BB48.png" title="screenShot01"></a> <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/splashScreenImage_020F5BD3.png"><img alt="splashScreenImage" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/splashScreenImage_thumb_5AD5129D.png" title="splashScreenImage"></a></p>
<p>It’s free so why not give it a try, assuming you have a WP7 phone!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Renaming branches in TFS2010</title>
      <link>https://blog.richardfennell.net/posts/renaming-branches-in-tfs2010/</link>
      <pubDate>Fri, 04 Feb 2011 10:38:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/renaming-branches-in-tfs2010/</guid>
      <description>&lt;p&gt;I recently was asked why a client had experienced some unexpected results when merging a development branch back into the main trunk on a TFS 2010 installation.&lt;/p&gt;
&lt;p&gt;Turns out the issue was that during some tests both the Main and Dev branches had been renamed, and new branches of the same names created. So they had a structure like this:&lt;/p&gt;</description>
&lt;p&gt;$/ProjectX&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;Dev&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Newly created after the rename&lt;/em&gt;&lt;/p&gt;
&lt;p&gt; &lt;/p&gt;
&lt;p&gt;Main&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently was asked why a client had experienced some unexpected results when merging a development branch back into the main trunk on a TFS 2010 installation.</p>
<p>Turns out the issue was that during some tests both the Main and Dev branches had been renamed, and new branches of the same names created. So they had a structure like this:</p>
<p>$/ProjectX</p>
<ul>
<li>Dev (<em>newly created after the rename</em>)</li>
<li>Main (<em>newly created after the rename</em>)</li>
<li>Old_Dev (<em>rename of Dev</em>)</li>
<li>Old_Main (<em>rename of Main</em>)</li>
</ul>
<p>In TFS 2010, behind the scenes, a rename is actually a branch-and-delete process. This meant we ended up with the new branch, but also a deleted branch of the old name. This is not obvious unless you have ‘show deleted items in source control explorer’ enabled in the Visual Studio options.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_60E1C443.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_198C6E51.png" title="image"></a></p>
<p>Once you create a new branch with the same name as one of the old branches, this deleted branch is replaced by the newly created one; it has taken up the old ‘slot’ (see links at end). This means that when you try to do a merge, the merge tool sees this recreated branch as a potential target and so shows it in the merge dialog, with all the potential confusion that might cause.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_79716193.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_00246B17.png" title="image"></a></p>
<p>So the simple answer is try to avoid renames, and especially try to avoid creating new branches in the same ‘slot’ as deleted/renamed ones.</p>
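<p>As an aside, you can check whether a path is sitting in an old ‘slot’ from the command line as well as via the Visual Studio option above. This is just a sketch using the tf.exe client (run from a Visual Studio 2010 command prompt in a folder mapped to a workspace); the $/ProjectX paths simply mirror the example structure above, so substitute your own:</p>
<pre><code>REM List the project root including deleted items, so renamed/deleted
REM branches that still occupy a 'slot' become visible
tf dir $/ProjectX /deleted

REM Review recent history on a branch to spot the branch + delete pair
REM that a rename leaves behind
tf history $/ProjectX/Dev /recursive /stopafter:10
</code></pre>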
<p>For a more detailed explanation of what is going on here <a href="http://2010/06/09/renaming-branches-in-tfs-2010.aspx">have a look at this post on renaming branches in TFS</a> and <a href="http://blogs.msdn.com/b/mitrik/archive/2009/05/28/changing-to-slot-mode-in-tfs-2010-version-control.aspx">this one on ‘slot mode’ in TFS 2010 version control</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Somewhat nasty upgrade experience with Seesmic Desktop 2</title>
      <link>https://blog.richardfennell.net/posts/somewhat-nasty-upgrade-experience-with-seesmic-desktop-2/</link>
      <pubDate>Wed, 02 Feb 2011 15:53:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/somewhat-nasty-upgrade-experience-with-seesmic-desktop-2/</guid>
      <description>&lt;p&gt;A few days ago I got the regular ‘there is a new version of seesmic’ message.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_5A1C60C8.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_60CF6A4B.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;So I pressed update and got the less than helpful message&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_12C70AD6.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_079D808C.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;After a quick search on the web I found the log files are in &lt;strong&gt;C:\Users\[username]\Documents\Seesmic\Seesmic Desktop 2\Logs&lt;/strong&gt;; why that cannot be listed on the error dialog I don’t know!&lt;/p&gt;
&lt;p&gt;Unfortunately there was nothing obvious in the log, just a list of plug-ins it loaded. So I went back and tried the upgrade again, reading the release notes (I know, a strange idea) and noticed that it mentioned a specific version of Silverlight. A quick check of this showed I was not up to date, so I went to &lt;a href=&#34;http://www.microsoft.com/silverlight&#34;&gt;http://www.microsoft.com/silverlight&lt;/a&gt; and ran the installer, which updated my install (I would have expected Windows Update to have done this).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A few days ago I got the regular ‘there is a new version of seesmic’ message.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5A1C60C8.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_60CF6A4B.png" title="image"></a></p>
<p>So I pressed update and got the less than helpful message</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_12C70AD6.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_079D808C.png" title="image"></a></p>
<p>After a quick search on the web I found the log files are in <strong>C:\Users\[username]\Documents\Seesmic\Seesmic Desktop 2\Logs</strong>; why that cannot be listed on the error dialog I don’t know!</p>
<p>Unfortunately there was nothing obvious in the log, just a list of plug-ins it loaded. So I went back and tried the upgrade again, reading the release notes (I know, a strange idea) and noticed that it mentioned a specific version of Silverlight. A quick check of this showed I was not up to date, so I went to <a href="http://www.microsoft.com/silverlight">http://www.microsoft.com/silverlight</a> and ran the installer, which updated my install (I would have expected Windows Update to have done this).</p>
<p>I tried the update again and it failed. I then remembered that IE was running and this could have blocked some of the Silverlight assembly updates, so I closed all browsers down. I also deleted the Seesmic log files, so I could get a clean look at any errors. I reloaded Seesmic and was presented with an updating dialog.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0BA7CE5E.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_007E4414.png" title="image"></a></p>
<p>And all seems to be working, not sure which step actually fixed it!</p>
<p>Let’s see what the new UI is like to use…</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problems connecting a Netgear WG111 USB Wifi Dongle to a Netgear DG834GT router</title>
      <link>https://blog.richardfennell.net/posts/problems-connecting-a-netgear-wg111-usb-wifi-dongle-to-a-netgear-dg834gt-router/</link>
      <pubDate>Mon, 31 Jan 2011 18:44:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-connecting-a-netgear-wg111-usb-wifi-dongle-to-a-netgear-dg834gt-router/</guid>
<description>&lt;p&gt;Just spent an interesting hour trying to connect a Netgear WG111v2 USB WiFi Dongle to a Netgear (Sky Branded) DG834GT router. They are both from the same manufacturer so you would think they would work together!&lt;/p&gt;
&lt;p&gt;This router was setup with its default Sky settings so WiFi was setup as WPA.&lt;/p&gt;
&lt;p&gt;I installed the WG111 onto an XP laptop, installed the newly downloaded V 5.1.1308 (26 Dec 2007) drivers, and tried to connect. The router was spotted without problems and I was prompted to enter my WPA password, which was printed on the bottom of the router (I had logged in to the router via the web admin console to check this was correct). After what seemed like a long delay I was left not connected to the router, but with no obvious error.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Just spent an interesting hour trying to connect a Netgear WG111v2 USB WiFi Dongle to a Netgear (Sky Branded) DG834GT router. They are both from the same manufacturer so you would think they would work together!</p>
<p>This router was setup with its default Sky settings so WiFi was setup as WPA.</p>
<p>I installed the WG111 onto an XP laptop, installed the newly downloaded V 5.1.1308 (26 Dec 2007) drivers, and tried to connect. The router was spotted without problems and I was prompted to enter my WPA password, which was printed on the bottom of the router (I had logged in to the router via the web admin console to check this was correct). After what seemed like a long delay I was left not connected to the router, but with no obvious error.</p>
<p>I fired up my work laptop which has built-in WiFi; this saw the router and connected as soon as the password was entered. Strange, I thought, is this an XP or a WG111 problem?</p>
<p>I did a bit of searching and saw this was not an uncommon problem, the WG111 seems a troublesome child. In the end I got it working, this was the process I followed:</p>
<ul>
<li>Via the network connection window in XP I looked at the properties of the WG111</li>
<li>On the wireless tab I switched off ‘Use windows to configure my wireless network setting’.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_58E8203E.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_32761726.png" title="image"></a></p></blockquote>
<ul>
<li>This allowed me to open the Netgear Wireless Assistance tools to get more diagnostics. I saw that the router was running on the same channel as another local router.</li>
<li>Via the web based admin console of the router I changed the channel to a free one, in my case 10 – <strong>However, I don’t think this actually fixed the problem</strong>.</li>
<li>Via the web based admin console of the router I changed the WiFi mode from ‘b and g’ to ‘g only’ – <strong>This is the important one I think</strong></li>
<li>I saved the changes and rebooted the router and it all worked</li>
<li>Just to tidy up, via the network connection window in XP I went back into the properties of the WG111 and on the wireless tab I switched on ‘Use windows to configure my wireless network setting’</li>
<li>Finally rebooted the laptop just to check it all worked, it did</li>
</ul>
<p>I suspect the issue here is the WG111 getting confused over whether it is on an 802.11b or 802.11g network, so removing the confusion fixed the problem.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Gojko Adzic is presenting at the next Agile Yorkshire meeting on acceptance tests</title>
      <link>https://blog.richardfennell.net/posts/gojko-adzic-is-presenting-at-the-next-agile-yorkshire-meeting-on-acceptance-tests/</link>
      <pubDate>Sat, 29 Jan 2011 17:16:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/gojko-adzic-is-presenting-at-the-next-agile-yorkshire-meeting-on-acceptance-tests/</guid>
<description>&lt;p&gt;&lt;a href=&#34;http://www.agileyorkshire.org/event-announcements/08Feb2011&#34;&gt;February’s Agile Yorkshire is on the 8th&lt;/a&gt;. The session will be given by &lt;a href=&#34;http://neuri.com/&#34;&gt;Gojko Adzic&lt;/a&gt; on ‘Long term value of acceptance tests’. I have seen Gojko speak at a number of events, including Agile Yorkshire; he is always an engaging speaker, so well worth the effort to attend. I am going to try to give it a go, but I have been really struggling to make Agile Yorkshire as Tuesday nights are a bit busy of late. My diary is just too full!&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><a href="http://www.agileyorkshire.org/event-announcements/08Feb2011">February’s Agile Yorkshire is on the 8th</a>. The session will be given by <a href="http://neuri.com/">Gojko Adzic</a> on ‘Long term value of acceptance tests’. I have seen Gojko speak at a number of events, including Agile Yorkshire; he is always an engaging speaker, so well worth the effort to attend. I am going to try to give it a go, but I have been really struggling to make Agile Yorkshire as Tuesday nights are a bit busy of late. My diary is just too full!</p>
<p><img alt="Logo" loading="lazy" src="http://www.agileyorkshire.org/_/rsrc/1278971934358/config/customLogo.gif?revision=1"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Not made it to DDD9</title>
      <link>https://blog.richardfennell.net/posts/not-made-it-to-ddd9/</link>
      <pubDate>Sat, 29 Jan 2011 13:44:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/not-made-it-to-ddd9/</guid>
<description>&lt;p&gt;I was busy for the few minutes that the &lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd9/&#34;&gt;DDD9&lt;/a&gt; registration was open, so did not manage to get on the attendee list, but I have been trying to keep an eye on what is going on via &lt;a href=&#34;http://twitter.com/search?q=%23ddd9&#34;&gt;Twitter&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The good news is that there are more DDD events on the way&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DDD Belfast - March 2011&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://developerdeveloperdeveloper.com/scotland2011&#34;&gt;DDD Scotland - May 7th 2011&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Session submissions are open for Scotland; I have submitted a session on writing custom build activities for TFS 2010. It is a great experience presenting at and attending these events, so I really recommend you give both a try.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I was busy for the few minutes that the <a href="http://developerdeveloperdeveloper.com/ddd9/">DDD9</a> registration was open, so did not manage to get on the attendee list, but I have been trying to keep an eye on what is going on via <a href="http://twitter.com/search?q=%23ddd9">Twitter</a>.</p>
<p>The good news is that there are more DDD events on the way</p>
<ul>
<li>DDD Belfast - March 2011</li>
<li><a href="http://developerdeveloperdeveloper.com/scotland2011">DDD Scotland - May 7th 2011</a></li>
</ul>
<p>Session submissions are open for Scotland; I have submitted a session on writing custom build activities for TFS 2010. It is a great experience presenting at and attending these events, so I really recommend you give both a try.</p>
<p>Also it was announced today that there will be a DDD North East in Sep/Oct in Newcastle; keep an eye on the <a href="http://developerdeveloperdeveloper.com/">DDD site</a>. This event is being initially co-ordinated by the <a href="http://www.nebytes.net/">NEBytes</a> user group so keep an eye on their <a href="http://twitter.com/nebytes">twitter feed</a>; also Black Marble will be helping out, so information will be on <a href="http://www.blackmarble.co.uk/events">our site</a> and <a href="http://twitter.com/blackmarble">twitter feed</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Follow up to yesterdays events on ‘enabling agile development with cool tools’</title>
      <link>https://blog.richardfennell.net/posts/follow-up-to-yesterdays-events-on-enabling-agile-development-with-cool-tools/</link>
      <pubDate>Fri, 28 Jan 2011 12:05:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-up-to-yesterdays-events-on-enabling-agile-development-with-cool-tools/</guid>
<description>&lt;p&gt;&lt;em&gt;Thanks to everyone who attended&lt;/em&gt; &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Enabling%20Agile%20Development%20with%20Cool%20Tools&#34;&gt;&lt;em&gt;yesterday’s Black Marble event ‘Enabling agile development with cool tools’&lt;/em&gt;&lt;/a&gt;&lt;em&gt;; both&lt;/em&gt; &lt;a href=&#34;http://community.devexpress.com/blogs/garyshort/default.aspx&#34;&gt;&lt;em&gt;Gary Short’s&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and my sessions seemed well received. I was asked if my slides would be available anywhere; well, the answer is no. The reason for this is that my session was mostly demo driven, so the slides just set the scene. After a bit of thought, a quick blog post seems a better option, so this post covers the same basic points as the session. If you are interested in any of the products I would urge you to download them and give them a go. Many are free and all have at least a free fully functional evaluation edition.&lt;/em&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><em>Thanks to everyone who attended</em> <a href="http://www.blackmarble.co.uk/events.aspx?event=Enabling%20Agile%20Development%20with%20Cool%20Tools"><em>yesterday’s Black Marble event ‘Enabling agile development with cool tools’</em></a><em>; both</em> <a href="http://community.devexpress.com/blogs/garyshort/default.aspx"><em>Gary Short’s</em></a> <em>and my sessions seemed well received. I was asked if my slides would be available anywhere; well, the answer is no. The reason for this is that my session was mostly demo driven, so the slides just set the scene. After a bit of thought, a quick blog post seems a better option, so this post covers the same basic points as the session. If you are interested in any of the products I would urge you to download them and give them a go. Many are free and all have at least a free fully functional evaluation edition.</em></p>
<p>So the essence of my session was on the project management/administrative side of agile projects. The key here is communication both inside and outside of the immediate project team. How do we capture and distribute information so that it assists the project rather than hampers it?</p>
<p>Traditionally the physical taskboard, with some form of postcards being moved around, has been the answer. This is a great solution as long as the team is co-located and there is no need for a detailed ongoing record of the historic state of the tasks (maybe a requirement for legal reasons, but then maybe a daily digital photo would do?). Anyway, many teams find they need to capture this information in some electronic form. In my session I looked at some of the options with TFS 2010.</p>
<p><strong>What is built into TFS2010?</strong></p>
<p>As TFS has a single work item store you can edit work items with a wide variety of clients. In the box you have tools to edit work items via Visual Studio, SharePoint, Team Web Access as well as the ability to manage work items in Excel and Project.</p>
<p><strong>What if I live in Outlook?</strong></p>
<p>If you want to do all your work item management in Outlook then have a look at <a href="http://www.ekobit.com/ProductsDetailView.aspx?id=1">Ekobit’s TeamCompanion</a>. This in effect allows you to treat work items in a similar manner to email, and cross between the two. So you can create a work item from an email and vice versa; it also allows managing work items in batches. This product strikes me as very well suited to an email-based support desk, or a project manager who is meeting or email orientated, maybe dealing with people who do not themselves have access to TFS, just email.</p>
<p><strong>How can I replicate my physical taskboard?</strong></p>
<p>For many teams the capture of the physical taskboard information is the key. I have always found a good way to make sure TFS work items are up to date is to have all the work items associated with the tasks on the taskboard returned via a TFS query and then in Excel, as the daily stand up is done, make sure each task is up to date.</p>
<p>However, some people like to work more visually than that, so in the session I looked at a couple of desktop applications that allow work item management both in a form editing manner and via taskboard like drag and drop operations. These were <a href="http://www.telerik.com/community/download-free-products.aspx">Telerik’s Work Item Manager</a> and <a href="http://www.scrumforteamsystem.com/version-3/tfs-workbench-v2-1-x64">EMC’s TFS Work Bench</a>.</p>
<p>However, for many companies adding another desktop application to a controlled IT PC can be a problem, so I also had a look at <a href="http://urbanturtle.com/">Urban Turtle</a>, an add-in to Team Web Access that allows a more visual taskboard approach within a browser by adding a couple of tabs to those in the standard Team Web Access product.</p>
<p><strong>But what about outside the team?</strong></p>
<p>All the products I showed in the first half of the session were in essence work item editors; a team could choose to use any or all of them. This does not however really help with getting information out to interested parties beyond the team; for this we need publicly accessible <a href="http://c2.com/cgi/wiki?InformationRadiator">Information Radiators</a>. The information on these needs to change over time and be easy to understand.</p>
<p>The output of the team focused tools may be just what you need here, maybe a chart printed out and stuck to a notice board will do, but there are some other options.</p>
<p>The first is that there is a rich set of reports in TFS, available both as Reporting Services reports and Excel charts. Reporting Services is particularly interesting as it can deliver reports to interested parties on a schedule, e.g. the CTO gets the project burn down emailed as a PDF every Monday morning. There is also the option to deliver reports to central information sites such as Intranet SharePoint servers for everyone to see.</p>
<p>But what do you do if you want something a bit more striking, something that does not require a person to look on a web site or open their email? Maybe a big screen showing what is going on in the project? I showed two products to do this: one was <a href="http://www.telerik.com/community/download-free-products.aspx">Telerik’s Project Dashboard</a> and the other a version of our <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/12/22/update-in-using-stylecop-in-tfs-team-build.aspx">Black Marble internal BuildWallboard</a>, written using the TFS API.</p>
<p>So in summary, in my opinion the key differentiator for TFS over ALM solutions built from a set of different vendors’ products is that there is a single store for all work items, so a wide range of editing and reporting tools can be brought to bear without having to worry over whether the information you are working with is going to be passed correctly between the various components of the system.</p>
<p>So again I would urge you: if you use TFS have a look at these products, and the many others that are out there; give them a go and see which ones may assist your process. Remember, agile is all about continuous improvement isn’t it, so give it a try.</p>
]]></content:encoded>
    </item>
    <item>
      <title>At last my creature it lives – adventures with Lab Management and VLAN tags</title>
      <link>https://blog.richardfennell.net/posts/at-last-my-creature-it-lives-adventures-with-lab-management-and-vlan-tags/</link>
      <pubDate>Fri, 21 Jan 2011 13:52:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/at-last-my-creature-it-lives-adventures-with-lab-management-and-vlan-tags/</guid>
<description>&lt;p&gt;After much delay I have at last got our internal Lab Management running on ‘real’ hardware as opposed to its initial home on a demo rig PC. We have just been too busy to find the time to reconfigure and redeploy our own kit! You know how it is: ‘a plumber’s house is full of drippy taps’. That said, I of course still want more hardware; as soon as you start to build up test environments you eat Hyper-V server resources very quickly, and memory seems to be my most pressing current limitation on how much I can run at the same time.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>After much delay I have at last got our internal Lab Management running on ‘real’ hardware as opposed to its initial home on a demo rig PC. We have just been too busy to find the time to reconfigure and redeploy our own kit! You know how it is: ‘a plumber’s house is full of drippy taps’. That said, I of course still want more hardware; as soon as you start to build up test environments you eat Hyper-V server resources very quickly, and memory seems to be my most pressing current limitation on how much I can run at the same time.</p>
<p>You also have to be patient with Lab Management; though it provides many features to ease the life of the test team, it cannot work magic. It still takes a while to copy tens of gigabytes around the network. When you roll back an environment to reset a test it can take a few minutes, but you realise how much better that is than the older disk imaging techniques you would have had to use. All the time you spend getting the base environment snapshots right is a great investment.</p>
<p>The one thing that caused me a few problems was that we use <a href="http://en.wikipedia.org/wiki/Vlan">VLAN tagging</a> on our switches. This means that Hyper-V hosted VMs need to have a suitable VLAN tag assigned, else they cannot see the resources on our LAN. This becomes a problem when <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/25/common-confusion-i-have-seen-with-visual-studio-2010-lab-management.aspx">using network isolation in Lab Management</a>, as when the new environment is created the extra adaptor that is automatically added has no VLAN tag, so does not work. However, luckily the fix is simple: you have to manually set the tag on the VM’s network settings via Hyper-V Manager or SCVMM (as far as I can see you cannot do it in MTM).</p>
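<p>For what it is worth, the manual edit can also be scripted against SCVMM rather than clicked through. The sketch below uses the SCVMM 2008 R2 PowerShell snap-in; the server name, VM name and VLAN ID are placeholders from my setup, and you should check the cmdlet parameters against your own SCVMM version before relying on it:</p>
<pre><code># Load the SCVMM snap-in and connect to the VMM server
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "scvmmserver" | Out-Null

# Set the LAN's VLAN tag on every network adaptor of the lab VM
$vm = Get-VM -Name "Lab-WebServer-01"
Get-VirtualNetworkAdapter -VM $vm |
    Set-VirtualNetworkAdapter -VLanEnabled $true -VLanID 10
</code></pre>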
<p>So now I am off to run some tests in my nice new environments, what fun.</p>
<p><strong>Updated 29th Jan 2011:</strong> I can now confirm that VLAN tags are not supported (see <a href="http://social.msdn.microsoft.com/Forums/en/vslab/thread/2fad399b-01fa-4001-b369-ecb7d1b071e6" title="http://social.msdn.microsoft.com/Forums/en/vslab/thread/2fad399b-01fa-4001-b369-ecb7d1b071e6">http://social.msdn.microsoft.com/Forums/en/vslab/thread/2fad399b-01fa-4001-b369-ecb7d1b071e6</a>). My workaround will work, but it does require that whoever deploys a lab has SCVMM or Hyper-V Manager access to the Hyper-V hosts to make the manual edits to the network adaptor settings, so it is not a solution that scales well.</p>
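As a sketch of the manual step: on later Hyper-V hosts (Server 2012 onwards, so not the 2008 R2-era setup this post describes) the same VLAN tag edit can be scripted with the Hyper-V PowerShell module. The VM name and VLAN ID below are placeholder values, not from this post.

```powershell
# Assumptions: run on the Hyper-V host with the Hyper-V PowerShell module
# available (Server 2012+). 'LabVM1' and VLAN 101 are placeholder values.
Set-VMNetworkAdapterVlan -VMName "LabVM1" -Access -VlanId 101

# Confirm the tag was applied to the VM's adaptor
Get-VMNetworkAdapterVlan -VMName "LabVM1"
```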
]]></content:encoded>
    </item>
    <item>
      <title>Preparing for my session next week on ‘enabling agile development with cool tools’</title>
      <link>https://blog.richardfennell.net/posts/preparing-for-my-session-next-week-on-enabling-agile-development-with-cool-tools/</link>
      <pubDate>Thu, 20 Jan 2011 22:56:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/preparing-for-my-session-next-week-on-enabling-agile-development-with-cool-tools/</guid>
      <description>&lt;p&gt;I have spent today preparing my presentation and demos for the Black Marble event next week, &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Enabling%20Agile%20Development%20with%20Cool%20Tools&#34;&gt;Enabling Agile Development with Cool Tools&lt;/a&gt;. I will be presenting with Gary Short of &lt;a href=&#34;http://www.devexpress.com/&#34;&gt;DevExpress&lt;/a&gt;. He is going to be talking about refactoring under the intriguing title ‘How to Eat an Elephant’.&lt;/p&gt;
&lt;p&gt;My session will be on the tools to aid the project management side of the ALM process, specifically the tools available for TFS 2010, both those ‘out of the box’ and from third-party vendors. I only have an hour slot, so I have had to be selective as there are many ‘cool tools’ to choose from. So after some thought I have chosen:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have spent today preparing my presentation and demos for the Black Marble event next week, <a href="http://www.blackmarble.co.uk/events.aspx?event=Enabling%20Agile%20Development%20with%20Cool%20Tools">Enabling Agile Development with Cool Tools</a>. I will be presenting with Gary Short of <a href="http://www.devexpress.com/">DevExpress</a>. He is going to be talking about refactoring under the intriguing title ‘How to Eat an Elephant’.</p>
<p>My session will be on the tools to aid the project management side of the ALM process, specifically the tools available for TFS 2010, both those ‘out of the box’ and from third-party vendors. I only have an hour slot, so I have had to be selective as there are many ‘cool tools’ to choose from. So after some thought I have chosen:</p>
<p><a href="http://urbanturtle.com/">Urban Turtle</a> <br>
<a href="http://www.telerik.com/community/download-free-products.aspx">Telerik Work Item Manager and Project Dashboard</a><br>
<a href="http://www.ekobit.com/ProductsDetailView.aspx?id=1">Ekobit TeamCompanion</a><br>
<a href="http://www.scrumforteamsystem.com/version-3/tfs-workbench-v2-1-x64">EMC TFS Work Bench</a></p>
<p>Should be a good session, there are certainly some great tools in this list.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF215097 error when using a custom build activity</title>
      <link>https://blog.richardfennell.net/posts/tf215097-error-when-using-a-custom-build-activity/</link>
      <pubDate>Thu, 06 Jan 2011 14:59:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf215097-error-when-using-a-custom-build-activity/</guid>
      <description>&lt;p&gt;Whilst trying to make use of a custom build activity I got the error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;TF215097: An error occurred while initializing a build for build definition Tfsdemo1Candy: Cannot create unknown type &amp;lsquo;{clr-namespace:TfsBuildExtensions.Activities.CodeQuality;assembly=TfsBuildExtensions.Activities.StyleCop}StyleCop&amp;rsquo;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This occurred when the TFS 2010 build controller tried to parse the build process .XAML at the start of the build process. A check of all the logs gave no information other than this error message; nothing else appeared to have occurred.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst trying to make use of a custom build activity I got the error:</p>
<blockquote>
<p>TF215097: An error occurred while initializing a build for build definition Tfsdemo1Candy: Cannot create unknown type &lsquo;{clr-namespace:TfsBuildExtensions.Activities.CodeQuality;assembly=TfsBuildExtensions.Activities.StyleCop}StyleCop&rsquo;</p></blockquote>
<p>This occurred when the TFS 2010 build controller tried to parse the build process .XAML at the start of the build process. A check of all the logs gave no information other than this error message; nothing else appeared to have occurred.</p>
<p>If I removed the custom activity from the build process all was OK and the build worked fine.</p>
<p>So <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx">my initial thought was that the required assembly was not loaded into source control and the ‘version control path to custom assemblies’ set</a>. However, on checking, the file was there and the path was set.</p>
<p>What I had forgotten was that this custom activity assembly had a reference to a TfsBuildExtensions.Activities assembly that contained a base class. It was not that the named assembly was missing but that it could not be loaded because a required assembly was missing. Unfortunately there was no clue to this in the error message or logs.</p>
<p>So if you see this problem, check for references you might have forgotten and make sure ALL the required assemblies are loaded into source control on the custom assemblies path used by the build controller.</p>
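A quick way to spot this class of problem is to list every assembly your custom activity DLL references and check that each one is present in the custom assemblies path too. This is only a sketch; the DLL path below is a placeholder for your own activity assembly, not a path from this post.

```csharp
// Sketch: enumerate the references of a custom activity assembly so you can
// verify each one is also checked in for the build controller to find.
// The path below is a placeholder value.
using System;
using System.Reflection;

class ListActivityReferences
{
    static void Main()
    {
        // ReflectionOnlyLoadFrom avoids executing any code in the assembly
        Assembly asm = Assembly.ReflectionOnlyLoadFrom(
            @"C:\Temp\TfsBuildExtensions.Activities.StyleCop.dll");

        foreach (AssemblyName reference in asm.GetReferencedAssemblies())
        {
            // Any non-framework assembly printed here must also be in the
            // 'version control path to custom assemblies'.
            Console.WriteLine(reference.FullName);
        }
    }
}
```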
]]></content:encoded>
    </item>
    <item>
      <title>Kindle on the Phone 7</title>
      <link>https://blog.richardfennell.net/posts/kindle-on-the-phone-7/</link>
      <pubDate>Thu, 06 Jan 2011 11:10:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/kindle-on-the-phone-7/</guid>
      <description>&lt;p&gt;I asked the question &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/19/should-i-buy-a-kindle.aspx&#34;&gt;a while ago if I should buy a Kindle&lt;/a&gt;. I still think that new books are too expensive, but as there are loads of out-of-copyright books available for the platform, I did not hesitate to download the Windows Phone 7 Kindle app today. You never know when you need something to read, and what could be better to dip into than a bit of Sherlock Holmes?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I asked the question <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/19/should-i-buy-a-kindle.aspx">a while ago if I should buy a Kindle</a>. I still think that new books are too expensive, but as there are loads of out-of-copyright books available for the platform, I did not hesitate to download the Windows Phone 7 Kindle app today. You never know when you need something to read, and what could be better to dip into than a bit of Sherlock Holmes?</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7B7F2194.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_05D045F5.png" title="image"></a></p>
<p>OK, the experience on a phone is never going to be as good as on the Kindle hardware, but first impressions are good. It is nice and clear to read, much like the experience on the older Microsoft Reader on my old Windows Mobile 6.x, but with far easier navigation.</p>
<p>I am sure it will help me pass the time when waiting at airports, train stations etc.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD9 Sessions announced</title>
      <link>https://blog.richardfennell.net/posts/ddd9-sessions-announced/</link>
      <pubDate>Tue, 04 Jan 2011 11:40:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd9-sessions-announced/</guid>
      <description>&lt;p&gt;Seems yet again my sessions for a DDD event did not excite the voting public; the &lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd9/Schedule.aspx&#34;&gt;selected sessions for DDD9 are out&lt;/a&gt; and I am not on the list. Again it does look like a nice varied selection of sessions. I have tried to see if I can spot a trend in what is being selected, but it seems a fair mix between language feature introductions, web technologies, tools and architecture/process/patterns – so something for everyone.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Seems yet again my sessions for a DDD event did not excite the voting public; the <a href="http://developerdeveloperdeveloper.com/ddd9/Schedule.aspx">selected sessions for DDD9 are out</a> and I am not on the list. Again it does look like a nice varied selection of sessions. I have tried to see if I can spot a trend in what is being selected, but it seems a fair mix between language feature introductions, web technologies, tools and architecture/process/patterns – so something for everyone.</p>
<p>So if you plan to go, be quick: the <a href="http://developerdeveloperdeveloper.com/ddd9/EventDates.aspx">timeline says registration opens later today</a> at 1:37pm.</p>
]]></content:encoded>
    </item>
    <item>
      <title>While you are stuck at home due to snow why not vote for your favourite DDD9 session?</title>
      <link>https://blog.richardfennell.net/posts/while-you-are-stuck-at-home-due-to-snow-why-not-vote-for-your-favourite-ddd9-session/</link>
      <pubDate>Wed, 01 Dec 2010 11:28:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/while-you-are-stuck-at-home-due-to-snow-why-not-vote-for-your-favourite-ddd9-session/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd9/Users/VoteForSessions.aspx&#34;&gt;voting has opened for DDD9&lt;/a&gt;. As with all the DDD-style events the agenda is set by you, the attendees, so get in there and say what you are interested in; it is not as if there is a shortage of choice this time.&lt;/p&gt;
&lt;p&gt;I have a session up on writing custom build activities for TFS 2010, which I hope some of you will find interesting, but I do fear it is a rather specialist area!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://developerdeveloperdeveloper.com/ddd9/Users/VoteForSessions.aspx">voting has opened for DDD9</a>. As with all the DDD-style events the agenda is set by you, the attendees, so get in there and say what you are interested in; it is not as if there is a shortage of choice this time.</p>
<p>I have a session up on writing custom build activities for TFS 2010, which I hope some of you will find interesting, but I do fear it is a rather specialist area!</p>
<p>So vote early, vote often….</p>
]]></content:encoded>
    </item>
    <item>
      <title>Reminder about my web session on mocking SharePoint with Typemock Isolator next week</title>
      <link>https://blog.richardfennell.net/posts/reminder-about-by-web-session-on-mocking-sharepoint-with-typemock-isolator-next-week/</link>
      <pubDate>Thu, 25 Nov 2010 16:44:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/reminder-about-by-web-session-on-mocking-sharepoint-with-typemock-isolator-next-week/</guid>
      <description>&lt;p&gt;Did you miss the Typemock session on mocking legacy systems this week? If you did, you can watch a recording at &lt;a href=&#34;http://www.typemock.com/webinar&#34;&gt;the Typemock site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you are interested in a more SharePoint-specific session, check out &lt;a href=&#34;http://lidnug-dec01.eventbrite.com/&#34;&gt;my session next week&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Did you miss the Typemock session on mocking legacy systems this week? If you did, you can watch a recording at <a href="http://www.typemock.com/webinar">the Typemock site</a>.</p>
<p>If you are interested in a more SharePoint-specific session, check out <a href="http://lidnug-dec01.eventbrite.com/">my session next week</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where can you learn more on Typemock after last night’s session at NEBytes?</title>
      <link>https://blog.richardfennell.net/posts/where-can-you-learn-more-on-typemock-after-last-night-session-at-nebytes/</link>
      <pubDate>Thu, 18 Nov 2010 14:16:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-can-you-learn-more-on-typemock-after-last-night-session-at-nebytes/</guid>
      <description>&lt;p&gt;Thanks to everyone who turned out for &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth&#34;&gt;Rik’s&lt;/a&gt; and my session last night at &lt;a href=&#34;http://www.nebytes.net/&#34;&gt;NEBytes&lt;/a&gt;. I have not bothered uploading my slides as it was really a demo-driven session, but there is a video of a similar session I did at &lt;a href=&#34;http://www.ndc2010.no/&#34;&gt;NDC2010&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;However, if you want to learn more about using Typemock Isolator with legacy systems, why not &lt;a href=&#34;http://events.linkedin.com/Be-Legacy-Code-Unit-Test-Ninja-Typemock/pub/488276&#34;&gt;attend Roy Osherove’s ‘Be a Legacy Code Unit Test Ninja with Typemock Isolator’ web session next week&lt;/a&gt;?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who turned out for <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth">Rik’s</a> and my session last night at <a href="http://www.nebytes.net/">NEBytes</a>. I have not bothered uploading my slides as it was really a demo-driven session, but there is a video of a similar session I did at <a href="http://www.ndc2010.no/">NDC2010</a>.</p>
<p>However, if you want to learn more about using Typemock Isolator with legacy systems, why not <a href="http://events.linkedin.com/Be-Legacy-Code-Unit-Test-Ninja-Typemock/pub/488276">attend Roy Osherove’s ‘Be a Legacy Code Unit Test Ninja with Typemock Isolator’ web session next week</a>?</p>
<p>Also I will be doing another web session, similar to last night’s, on <a href="http://lidnug-dec01.eventbrite.com/">Typemock and SharePoint on the 1st December</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>0x80004004 when trying to upgrade Live Writer and Messenger</title>
      <link>https://blog.richardfennell.net/posts/0x80004004-when-trying-to-upgrade-live-writer-and-messenger/</link>
      <pubDate>Wed, 17 Nov 2010 11:33:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/0x80004004-when-trying-to-upgrade-live-writer-and-messenger/</guid>
      <description>&lt;p&gt;For ages now I have been prompted, when I loaded &lt;a href=&#34;http://explore.live.com/windows-live-writer?os=other&#34;&gt;Live Writer&lt;/a&gt;, that there was an upgrade available, and every time I tried to get it, at the end of the install it failed and rolled back. As I did not have time to dig into it I just used the older version.&lt;/p&gt;
&lt;p&gt;Well, today, due to upgrades on our LAN, I needed to upgrade Live Messenger, and as this is part of the same &lt;a href=&#34;http://explore.live.com/windows-live-essentials?os=other&#34;&gt;Live Essentials 2011 package&lt;/a&gt; it is not surprising I hit the same problem. A bit of experimentation showed the issue was that the upgrade was not able to remove the old version. If I tried to remove it via Control Panel it failed with a 0x80004004 error. In the error log I saw:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For ages now I have been prompted, when I loaded <a href="http://explore.live.com/windows-live-writer?os=other">Live Writer</a>, that there was an upgrade available, and every time I tried to get it, at the end of the install it failed and rolled back. As I did not have time to dig into it I just used the older version.</p>
<p>Well, today, due to upgrades on our LAN, I needed to upgrade Live Messenger, and as this is part of the same <a href="http://explore.live.com/windows-live-essentials?os=other">Live Essentials 2011 package</a> it is not surprising I hit the same problem. A bit of experimentation showed the issue was that the upgrade was not able to remove the old version. If I tried to remove it via Control Panel it failed with a 0x80004004 error. In the error log I saw:</p>
<p>Product: Windows Live Messenger &ndash; Error 1402. Could not open key: UNKNOWN\Components\A49B6681220C2EA49826913B104EE03B\B55DF58AB1984134795AAE690CDB085B.  System error 5.  Verify that you have sufficient access to that key, or contact your support personnel.</p>
<p>A bit of web research showed this seems to be related to 32/64-bit issues and maybe debris from the beta version of Live Writer.</p>
<p>The answer was to use the <a href="http://download.microsoft.com/download/e/9/d/e9d80355-.../msicuu2.exe">Windows Installer CleanUp Utility</a> (remember this is a take-no-prisoners tool, so use it with care) and remove all the packages with the words ‘Microsoft’ and ‘Live’ in their names. Once this was done the Live Essentials 2011 installer was happy to do a new install, and it even remembered my blog settings!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Adding a Visual Basic 6 project to a TFS 2010 Build</title>
      <link>https://blog.richardfennell.net/posts/adding-a-visual-basic-6-project-to-a-tfs-2010-build/</link>
      <pubDate>Tue, 16 Nov 2010 22:30:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/adding-a-visual-basic-6-project-to-a-tfs-2010-build/</guid>
      <description>&lt;p&gt;Adding a Visual Basic 6 project to your TFS 2010 build process is not as hard as I had expected it to be. I had assumed I would have to write a custom build workflow template, but it turned out I was able to use the default template with just a few parameters changed from their defaults. This is the process I followed.&lt;/p&gt;
&lt;p&gt;I created a basic ‘Hello world’ VB6 application. I had previously made sure that my copy of VB6 (SP6) could connect to my TFS 2010 server using the &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/en-us/bce06506-be38-47a1-9f29-d3937d3d88d6&#34;&gt;Team Foundation Server MSSCCI Provider&lt;/a&gt;, so I was able to check this project into source control.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Adding a Visual Basic 6 project to your TFS 2010 build process is not as hard as I had expected it to be. I had assumed I would have to write a custom build workflow template, but it turned out I was able to use the default template with just a few parameters changed from their defaults. This is the process I followed.</p>
<p>I created a basic ‘Hello world’ VB6 application. I had previously made sure that my copy of VB6 (SP6) could connect to my TFS 2010 server using the <a href="http://visualstudiogallery.msdn.microsoft.com/en-us/bce06506-be38-47a1-9f29-d3937d3d88d6">Team Foundation Server MSSCCI Provider</a>, so I was able to check this project into source control.</p>
<p>Next I created an MSBuild script capable of building the VB project, as follows:</p>
<pre tabindex="0"><code>&lt;Project ToolsVersion=&#34;4.0&#34; DefaultTargets=&#34;Default&#34; xmlns=&#34;http://schemas.microsoft.com/developer/msbuild/2003&#34;&gt;
  &lt;PropertyGroup&gt;
    &lt;TPath&gt;C:\Program Files\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks&lt;/TPath&gt;
    &lt;TPath Condition=&#34;Exists(&#39;C:\Program Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks&#39;)&#34;&gt;C:\Program Files (x86)\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks&lt;/TPath&gt;
  &lt;/PropertyGroup&gt;
  &lt;Import Project=&#34;$(TPath)&#34;/&gt;
  &lt;PropertyGroup&gt;
    &lt;VBPath&gt;C:\Program Files\Microsoft Visual Studio\VB98\VB6.exe&lt;/VBPath&gt;
    &lt;VBPath Condition=&#34;Exists(&#39;C:\Program Files (x86)\Microsoft Visual Studio\VB98\VB6.exe&#39;)&#34;&gt;C:\Program Files (x86)\Microsoft Visual Studio\VB98\VB6.exe&lt;/VBPath&gt;
  &lt;/PropertyGroup&gt;
  &lt;ItemGroup&gt;
    &lt;ProjectsToBuild Include=&#34;Project1.vbp&#34;&gt;
      &lt;OutDir&gt;$(OutDir)&lt;/OutDir&gt;
      &lt;!-- Note the special use of ChgPropVBP metadata to change project properties at Build Time --&gt;
      &lt;ChgPropVBP&gt;RevisionVer=4;CompatibleMode=&#34;0&#34;&lt;/ChgPropVBP&gt;
    &lt;/ProjectsToBuild&gt;
  &lt;/ItemGroup&gt;
  &lt;Target Name=&#34;Default&#34;&gt;
    &lt;!-- Build a collection of VB6 projects --&gt;
    &lt;MSBuild.ExtensionPack.VisualStudio.VB6 TaskAction=&#34;Build&#34; Projects=&#34;@(ProjectsToBuild)&#34; VB6Path=&#34;$(VBPath)&#34;/&gt;
  &lt;/Target&gt;
  &lt;Target Name=&#34;Clean&#34;&gt;
    &lt;Message Text=&#34;Cleaning - this is where the deletes would go&#34;/&gt;
  &lt;/Target&gt;
&lt;/Project&gt;
</code></pre>

<p>This used the <a href="http://msbuildextensionpack.codeplex.com">MSBuildExtensions task</a> to call VB6 from MSBuild; this MSI needed to be installed on the PC being used for development. Points to note about this script are:</p>
<ul>
<li>I wanted this build to work on both 32-bit and 64-bit machines, so I had to check both the “Program Files” and “Program Files (x86)” directories; the Condition flag is useful for this (I could have used an environment variable as an alternative method).</li>
<li>The output directory is set to $(OutDir). This is a parameter that will be passed into the MSBuild process (and is in turn set to a Team Build variable by the workflow template so that the build system can find the built files and copy them to the TFS drop directory).</li>
</ul>
<p>This MSBuild script file can be tested locally on a development PC using MSBUILD.EXE from the .NET Framework directory. When I was happy with the build script, I stored it under source control in the same location as the VB project files (though any location in source control would have done).</p>
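As a sketch of such a local test run (the framework path, script name, target and output directory below are example values, not from the original post):

```shell
REM Sketch: run the VB6 build script locally before wiring it into Team Build.
REM Paths and the /p:OutDir value are example values; adjust to your machine.
"C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" build.xml /t:Default /p:OutDir=C:\Drops\
```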

<p>The next step was to create a new Team Build using the default build template with a workspace containing my VB6 project.</p>
<p>The first thing to edit was the ‘Items to Build’. I deleted whatever was in the list (sorry, can’t remember what was there by default). I then added the build.xml file I had just created and stored in source control.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4E09662B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7D77D4F7.png" title="image"></a></p>
<p>I then tried to run the build. This of course failed, as I needed to install VB6 (SP6) and the <a href="http://msbuildextensionpack.codeplex.com">MSBuildExtensions</a> on the build server. Once this was done I tried the build again and it worked. The only issue was that I got a warning that there were no assemblies that Code Analysis could be run against, so I went into the build’s parameters and switched off code analysis and testing as these were not required on this build.</p>
<p>So the process of building VB6 on TFS 2010 turned out to be much easier than I expected; it just goes to show how flexible the build system in TFS 2010 is. As long as you can express your build as an MSBuild file it should just work.</p>
]]></content:encoded>
    </item>
    <item>
      <title>You can’t edit a TFS 2010 build workflow template with just Team Explorer installed</title>
      <link>https://blog.richardfennell.net/posts/you-cant-edit-a-tfs-2010-build-workflow-template-with-just-team-explorer-installed/</link>
      <pubDate>Fri, 12 Nov 2010 09:26:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/you-cant-edit-a-tfs-2010-build-workflow-template-with-just-team-explorer-installed/</guid>
      <description>&lt;p&gt;I tried to open a TFS 2010 build template within the Visual Studio shell (the bit that gets installed when you put Team Explorer onto a PC) and saw the error “The document contains errors that must be fixed before the designer can be loaded”.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_2E1B09DD.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_1D535F2F.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;At the bottom of the screen it showed that all the underlying assemblies could not be found.&lt;/p&gt;
&lt;p&gt;The solution is simple: install a ‘real’ version of Visual Studio; I put on Premium. It seems that the shell does not provide all the assemblies that are needed. Once I did this I could edit the XAML with no problems.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I tried to open a TFS 2010 build template within the Visual Studio shell (the bit that gets installed when you put Team Explorer onto a PC) and saw the error “The document contains errors that must be fixed before the designer can be loaded”.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2E1B09DD.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1D535F2F.png" title="image"></a></p>
<p>At the bottom of the screen it showed that all the underlying assemblies could not be found.</p>
<p>The solution is simple: install a ‘real’ version of Visual Studio; I put on Premium. It seems that the shell does not provide all the assemblies that are needed. Once I did this I could edit the XAML with no problems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>[More] Fun with WCF, SharePoint and Kerberos</title>
      <link>https://blog.richardfennell.net/posts/more-fun-with-wcf-sharepoint-and-kerberos/</link>
      <pubDate>Wed, 10 Nov 2010 16:27:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-fun-with-wcf-sharepoint-and-kerberos/</guid>
      <description>&lt;p&gt;This is a follow up to the post &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/29/fun-with-wcf-sharepoint-and-kerberos-well-it-looks-like-fun-with-hindsight.aspx&#34;&gt;Fun with WCF, SharePoint and Kerberos – well it looks like fun with hindsight&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;When I wrote the last post I thought I had our WCF Kerberos issues sorted, I was wrong. I had not checked what happened when I tried to access the webpart from outside our &lt;a href=&#34;http://www.microsoft.com/forefront/threat-management-gateway/en/us/&#34;&gt;TMG firewall&lt;/a&gt;. When I did this I was back with the error that I had no security token. To sort this we had to make some more changes.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This is a follow up to the post <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/29/fun-with-wcf-sharepoint-and-kerberos-well-it-looks-like-fun-with-hindsight.aspx">Fun with WCF, SharePoint and Kerberos – well it looks like fun with hindsight</a></p>
<p>When I wrote the last post I thought I had our WCF Kerberos issues sorted, I was wrong. I had not checked what happened when I tried to access the webpart from outside our <a href="http://www.microsoft.com/forefront/threat-management-gateway/en/us/">TMG firewall</a>. When I did this I was back with the error that I had no security token. To sort this we had to make some more changes.</p>
<p>This is the architecture we ended up with.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_6449A9FF.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4AE1A6C5.png" title="image"></a></p>
<p>The problem was that the Sharepoint access rule used a listener in TMG that was set up to use HTML form authentication against our AD</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_71AFBD05.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_23A75D90.png" title="image"></a></p>
<p>and the rule then tried to authenticate to our Sharepoint server via Kerberos using the negotiated setting in the rule. This worked for accessing the Sharepoint site itself, but the second hop to the WCF service failed. This was due to us transitioning between authentication methods.</p>
<p>The solution was to change the access rule to Constrained Kerberos (still with the same Sharepoint server web application SPN)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2A5A6713.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_233B2A9B.png" title="image"></a></p>
<p>The TMG gateway computer (in the AD) then needed to be set to allow delegation. In my previous post we had just set up any machines requiring delegation with ‘Trust this computer for delegation to any service’. This did not work this time as we had forms authentication in the mix. We had to use ‘Trust this computer for delegation to specified services only’ <strong>AND</strong> ‘Use any authentication protocol’. We then added the server hosting the WCF web service and the Sharepoint front end into the list of services that could be delegated to.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_10F263D9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5BE5D4A8.png" title="image"></a></p>
<p>So now we had it so that the firewall could delegate to the Sharepoint server SPN, but this was the wrong SPN for the webpart to use when trying to talk to the WCF web service. To address this final problem I had to explicitly set the SPN in the programmatic creation of the WCF endpoint:</p>
<pre tabindex="0"><code>this.callServiceClient = new CallService.CallsServiceClient(
    callServiceBinding,
    new EndpointAddress(
        new Uri(&#34;http://mywcfbox:8080/CallsService.svc&#34;),
        EndpointIdentity.CreateSpnIdentity(&#34;http/mywcfbox:8080&#34;)));
</code></pre>

<p>By doing this a different SPN is used to connect to the WCF web service (from inside the webpart hosted in Sharepoint) from the one used by the firewall to connect to the Sharepoint server itself.</p>
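The same SPN can also be supplied declaratively rather than in code. This is only a sketch: the binding and contract names below are placeholders I have assumed, not details from this post; only the address and SPN values come from the snippet above.

```xml
<!-- Hypothetical client-side WCF config equivalent of the code above;
     the binding and contract names are placeholder values. -->
<client>
  <endpoint address="http://mywcfbox:8080/CallsService.svc"
            binding="wsHttpBinding"
            contract="CallService.ICallsService">
    <identity>
      <!-- Must match the SPN registered for the WCF service -->
      <servicePrincipalName value="http/mywcfbox:8080" />
    </identity>
  </endpoint>
</client>
```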

<p>Simple, isn’t it! The key is that you never authenticated with the firewall using Kerberos, so it could not delegate what it did not have.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A week with a Windows Phone 7 – It just works!</title>
      <link>https://blog.richardfennell.net/posts/a-week-with-a-windows-phone-7-it-just-works/</link>
      <pubDate>Tue, 09 Nov 2010 10:10:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-week-with-a-windows-phone-7-it-just-works/</guid>
      <description>&lt;p&gt;I have had my LG WP7 phone about a week now. The best thing about it: it just works. OK, I have had a few learning issues with the Metro UI, but usually the answer is so obvious I have just overlooked it. Stupid things, like looking for the button equivalent to the long press on the end key on my HTC Windows 6.5 phone to lock the phone; I had not considered the off button! A classic UTS (user too stupid) error, or maybe a user dragging old habits over to a very different way of working?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have had my LG WP7 phone about a week now. The best thing about it is that it just works. OK, I have had a few learning issues over the Metro UI, but usually the answer is so obvious I have just overlooked it. Stupid things like looking for the button that was equivalent to the long press on the end key on my HTC Windows 6.5 phone to lock the phone; I had not considered the off button! A classic UTS (user too stupid) error, or maybe a user dragging old habits over to a very different way of working?</p>
<p>The thing that has impressed me the most so far is the voice control. In the past, using a voice system usually involved recording names and phrases and tagging them to applications or people so calls could be made. On WP7 it just works: press the button, say CALL FRED BLOGGS MOBILE and it does the rest. I have not had it fail once yet, very impressive.</p>
<p>The main thing I dislike is the one I expected to dislike, and that is the lack of tethering; I used my old phone as a 3G modem when at client sites. It was great that I could just use my standard phone voice and data package for both phone internet access and PC internet access. I tended not to use this feature every week, but when I needed it I tended to use a lot of data; it did not matter, as it was all covered by a single package at no extra charge. However, now I need an extra 3G dongle, so I have to work out whether it is more cost effective to get a PAYG dongle or an extra contract.</p>
<p>Another minor irritant, and this is not the operating system but the electronics, is that there is no LED to show if the phone is charging or low on battery; it is all shown via the main screen. At one point I let the phone go completely flat, and when I plugged it into a PC via USB to charge I saw nothing for about half an hour until there was enough power to show the screen. I thought the phone was broken; a nice little LED would have fixed this for me.</p>
<p>So thus far it is a success, far better than my old HTC Diamond2 (but I am having to keep that in the glove box of my car to do SatNav for a while until there is a WP7 solution).</p>
]]></content:encoded>
    </item>
    <item>
      <title>November Agile Yorkshire meeting “William Hill Agile Case Study”</title>
      <link>https://blog.richardfennell.net/posts/november-agile-yorkshire-meeting-william-hill-agile-case-study/</link>
      <pubDate>Thu, 04 Nov 2010 16:50:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/november-agile-yorkshire-meeting-william-hill-agile-case-study/</guid>
      <description>&lt;p&gt;The title of the November Agile Yorkshire meeting (on Tuesday the 9th) is &amp;ldquo;A case study exploring how agile methodologies were used to help change the way sports are traded at William Hill forever.&amp;rdquo;. As usual this is a free event and is hosted at the Old Broadcasting house in Leeds starting at 6:30pm and I am sure people will go to the pub afterwards.&lt;/p&gt;
&lt;p&gt;I have to go to Edinburgh that day to present at a &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&#34;&gt;Microsoft ALM event (still space available if you are in the Edinburgh area)&lt;/a&gt; and I doubt I will be back in time. A shame as this subject looks like a very interesting one.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The title of the November Agile Yorkshire meeting (on Tuesday the 9th) is &ldquo;A case study exploring how agile methodologies were used to help change the way sports are traded at William Hill forever&rdquo;. As usual this is a free event, hosted at the Old Broadcasting House in Leeds starting at 6:30pm, and I am sure people will go to the pub afterwards.</p>
<p>I have to go to Edinburgh that day to present at a <a href="http://www.microsoft.com/visualstudio/en-gb/visual-studio-events">Microsoft ALM event (still space available if you are in the Edinburgh area)</a> and I doubt I will be back in time. A shame as this subject looks like a very interesting one.</p>
<p>For more details look at <a href="http://www.agileyorkshire.org">www.agileyorkshire.org</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Fun with WCF, SharePoint and Kerberos – well it looks like fun with hindsight</title>
      <link>https://blog.richardfennell.net/posts/fun-with-wcf-sharepoint-and-kerberos-well-it-looks-like-fun-with-hindsight/</link>
      <pubDate>Fri, 29 Oct 2010 12:08:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fun-with-wcf-sharepoint-and-kerberos-well-it-looks-like-fun-with-hindsight/</guid>
      <description>&lt;p&gt;[Updated 10 Nov 2010: Also see &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/11/10/more-fun-with-wcf-sharepoint-and-kerberos.aspx&#34;&gt;[More] Fun with WCF, SharePoint and Kerberos&lt;/a&gt;]&lt;/p&gt;
&lt;p&gt;I have been battling some WCF authentication problems for a while now; I have been migrating our internal support desk call tracking system so that it runs as a webpart hosted inside SharePoint 2010 and uses WCF to access the backend services, all using AD authentication. This means both our staff and customers can use a single sign on for all SharePoint and support desk operations. This replaced our older architecture using forms authentication and a complex mix of WCF and ASMX webservices that had grown up over time; this call tracking system started as an Access DB with a VB6 front end well over 10 years ago!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>[Updated 10 Nov 2010: Also see <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/11/10/more-fun-with-wcf-sharepoint-and-kerberos.aspx">[More] Fun with WCF, SharePoint and Kerberos</a>]</p>
<p>I have been battling some WCF authentication problems for a while now; I have been migrating our internal support desk call tracking system so that it runs as a webpart hosted inside SharePoint 2010 and uses WCF to access the backend services, all using AD authentication. This means both our staff and customers can use a single sign on for all SharePoint and support desk operations. This replaced our older architecture using forms authentication and a complex mix of WCF and ASMX webservices that had grown up over time; this call tracking system started as an Access DB with a VB6 front end well over 10 years ago!</p>
<p>As with most of our SharePoint development I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">try not to work inside a SharePoint environment when developing</a>; for this project this was easy, as the webpart is hosted in SharePoint but makes no calls to any SharePoint artefacts. This meant I could host the webpart within a test .ASPX web page during development without the need to mock out SharePoint. This I did, refactoring my old collection of web services to the new WCF AD secured architecture.</p>
<p>So at the end of this refactoring I thought I had a working webpart, but when I deployed it to our SharePoint 2010 farm it did not work. When I checked my logs I saw WCF authentication errors. The webpart, which programmatically created its WCF bindings, worked in my test harness but failed in production.</p>
<p>A bit of reading soon showed the problem lay in the Kerberos double hop issue, and this is where the fun began. In this post I have tried to detail the solution, not all the dead ends I went down to get there. The problem with this type of issue is that there is one valid solution and millions of incorrect ones, and the diagnostic options are few and far between.</p>
<p>So you may be asking, what is the Kerberos double hop issue? Well, a look at my test setup shows the problem.</p>
<p>[It is worth at this point getting an understanding of Kerberos; the TechEd session <a href="http://technet.microsoft.com/en-us/ff606447.aspx">‘Kerberos with Mark Minasi’</a> is a good primer]</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0B1DCF24.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3A8C3DF0.png" title="image"></a></p>
<p>The problem with this test setup is that the browser and the web server that hosts the test web page (and hence the webpart) are on the same box and running under the same account. Hence the webpart has full access to the credentials and can pass them on to the WCF host, so there is no double hop.</p>
<p>However when we look at the production SharePoint architecture</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7A562475.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_08286A71.png" title="image"></a></p>
<p>We see that we do have a double hop. The PC (browser) passes credentials to the SharePoint server. This needs to be able to pass them on to the WCF hosted services so they can be used to access data for the original client account (the one logged into the PC), but by default this is not allowed. This is a classic Kerberos double hop. The SharePoint server must be set up such that it is allowed to delegate the Kerberos tickets to the next host, and the WCF host must be set up to accept the Kerberos ticket.</p>
<p>Frankly we fiddled for ages trying to sort this in SharePoint, but were getting nowhere. The key step for me was to modify my test harness so I could get the same issues outside SharePoint. As with all technical problems, the answer is usually to create a simpler model that exhibits the same problem. The main features of this change were that I had to have three boxes, and needed to be running the web pages inside a web server where I could control the account it ran as, i.e. not Visual Studio’s default Cassini development web server.</p>
<p>So I built this system</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_15FAB06C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_60EE213B.png" title="image"></a></p>
<p>Using this model I could get the same errors inside and outside of SharePoint. I could then build up to a solution step by step. It is worth noting that I found the best debugging option was to run <a href="http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx">DebugView</a> on the middle Development PC hosting the IIS server. This showed all the logging information from my webpart; I saw no errors on the WCF host, as the failure was at the WCF authentication level, well before my code was reached.</p>
<p>Next I started from the <a href="http://marbie.wordpress.com/2008/05/30/kerberos-delegation-and-service-identity-in-wcf/">WCF kerberos sample on Marbie’s blog</a>. I modified the programmatic binding in the webpart to match this sample</p>
<pre tabindex="0"><code>var callServiceBinding = new WSHttpBinding();
callServiceBinding.Security.Mode = SecurityMode.Message;
callServiceBinding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
callServiceBinding.Security.Message.NegotiateServiceCredential = false;
callServiceBinding.Security.Message.EstablishSecurityContext = false;
callServiceBinding.MaxReceivedMessageSize = 2000000;

this.callServiceClient = new BlackMarble.Sabs.WcfWebParts.CallService.CallsServiceClient(
    callServiceBinding,
    new EndpointAddress(new Uri(&#34;http://mywcfbox:8080/CallsService&#34;)));

this.callServiceClient.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
</code></pre>
<p>I then created a new console application wrapper for my web service. This again used the programmatic binding from the sample.</p>
<pre tabindex="0"><code>static void Main(string[] args)
{
    // create the service host
    ServiceHost myServiceHost = new ServiceHost(typeof(CallsService));

    // create the binding
    var binding = new WSHttpBinding();
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;

    // disable credential negotiation and establishment of the security context
    binding.Security.Message.NegotiateServiceCredential = false;
    binding.Security.Message.EstablishSecurityContext = false;

    // create a URI for the endpoint address
    Uri httpUri = new Uri(&#34;http://mywcfbox:8080/CallsService&#34;);

    // create the endpoint address with the SPN for the identity
    EndpointAddress ea = new EndpointAddress(httpUri,
        EndpointIdentity.CreateSpnIdentity(&#34;HOST/mywcfbox.blackmarble.co.uk:8080&#34;));

    // get the contract from the interface
    ContractDescription contract = ContractDescription.GetContract(typeof(ICallsService));

    // create a new service endpoint
    ServiceEndpoint se = new ServiceEndpoint(contract, binding, ea);

    // add the service endpoint to the service
    myServiceHost.Description.Endpoints.Add(se);

    // open the service
    myServiceHost.Open();
    Console.WriteLine(&#34;Listening... &#34; + myServiceHost.Description.Endpoints[0].ListenUri.ToString());
    Console.ReadLine();

    // close the service
    myServiceHost.Close();
}
</code></pre>

<p>I then needed to run the console server application on the WCF host. I had made sure that the console server was using the same ports as I had been using in IIS. Next I needed to run the server as a service account. I copied this server application to the WCF server on which I had been running my services within IIS; obviously I stopped the IIS hosted site first to free up the IP port for my endpoint.</p>
<p>As <a href="http://marbie.wordpress.com/2008/05/30/kerberos-delegation-and-service-identity-in-wcf/">Marbie’s blog</a> stated, I needed to run my server console application as a service account (Network Service or Local System). To do this I used the at command to schedule it starting, because you cannot log in as either of these accounts and also cannot use runas, as they have no passwords. So my start command was as below, where the time was a minute or two in the future.</p>
<blockquote>
<p>at 15:50 cmd /c c:\tmp\WCFServer.exe</p>
</blockquote>
<p>To check the server was running I used Task Manager and netstat -a to make sure something was listening on the expected account and port, in my case local service and 8080. To stop the service I also used Task Manager.</p>
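<p>As a small aside, when there is a lot of traffic the netstat output can be overwhelming; piping it through the find filter narrows it down to the port of interest (the -o switch also shows the owning process ID so you can match it up in Task Manager):</p>
<blockquote>
<p>netstat -ano | find &#34;8080&#34;</p>
</blockquote>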

<p>I next needed to register the SPN of the WCF endpoint. This was done with the command</p>
<blockquote>
<p>setspn -a HOST/mywcfbox.blackmarble.co.uk:8080 mywcfbox</p>
</blockquote>
<p>Note that the final parameter was mywcfbox (the server name). In effect I was saying that my service would run as a system service account (Network Service or Local System), which for me was fine. So what had this command done? It put an entry in the Active Directory to say that this host and this account are running an approved service.</p>
<p>Note: Do make sure you only declare a given SPN once; if you duplicate an SPN neither works, which is a by design security feature. You can check the SPNs defined for an account using</p>
<blockquote>
<p>setspn -l mywcfbox</p>
</blockquote>
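<p>Should you find you have accidentally registered the same SPN against a second account, my understanding is that the stray registration can be removed with setspn’s delete option; something like the following, where the account name here is purely illustrative:</p>
<blockquote>
<p>setspn -d HOST/mywcfbox.blackmarble.co.uk:8080 someotheraccount</p>
</blockquote>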

<p>I then tried to load my test web page, but it still did not work. This was because the DevelopmentPC, hosting the web server, was not set to allow delegation. This is again set in the AD. To set it I:</p>
<ol>
<li>connected to the Domain Server</li>
<li>selected ‘Manage users and computers in Active Directory’</li>
<li>browsed to the computer name (DevelopmentPC) in the ‘Computers’ tree</li>
<li>right clicked to select ‘Properties’</li>
<li>selected the ‘Delegation’ tab</li>
<li>and set ‘Trust this computer for delegation to any service’.</li>
</ol>
<p>I also made sure that the IIS server settings on the DevelopmentPC were set as follows, to make sure the credentials were captured and passed on.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_59CEE4C3.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5F3D5567.png" title="image"></a></p>
<p>Once all this was done it all leapt into life. I could load and use my test web page from a browser on either the DevelopmentPC itself or the other PC.</p>

<p>The next step was to put the programmatically declared WCF bindings into the IIS web server’s web.config, as I still wanted to host my web service in IIS. This gave me a web.config serviceModel section of</p>
<pre tabindex="0"><code>&lt;system.serviceModel&gt;
  &lt;bindings&gt;
    &lt;wsHttpBinding&gt;
      &lt;binding name=&#34;SabsBinding&#34;&gt;
        &lt;security mode=&#34;Message&#34;&gt;
          &lt;message clientCredentialType=&#34;Windows&#34; negotiateServiceCredential=&#34;false&#34; establishSecurityContext=&#34;false&#34; /&gt;
        &lt;/security&gt;
      &lt;/binding&gt;
    &lt;/wsHttpBinding&gt;
  &lt;/bindings&gt;
  &lt;services&gt;
    &lt;service behaviorConfiguration=&#34;BlackMarble.Sabs.WcfService.CallsServiceBehavior&#34; name=&#34;BlackMarble.Sabs.WcfService.CallsService&#34;&gt;
      &lt;endpoint address=&#34;&#34; binding=&#34;wsHttpBinding&#34; contract=&#34;BlackMarble.Sabs.WcfService.ICallsService&#34; bindingConfiguration=&#34;SabsBinding&#34; /&gt;
      &lt;endpoint address=&#34;mex&#34; binding=&#34;mexHttpBinding&#34; contract=&#34;IMetadataExchange&#34; /&gt;
    &lt;/service&gt;
  &lt;/services&gt;
  &lt;behaviors&gt;
    &lt;serviceBehaviors&gt;
      &lt;behavior name=&#34;BlackMarble.Sabs.WcfService.CallsServiceBehavior&#34;&gt;
        &lt;serviceMetadata httpGetEnabled=&#34;true&#34; /&gt;
        &lt;serviceDebug includeExceptionDetailInFaults=&#34;true&#34; /&gt;
        &lt;serviceAuthorization impersonateCallerForAllOperations=&#34;true&#34; /&gt;
      &lt;/behavior&gt;
    &lt;/serviceBehaviors&gt;
  &lt;/behaviors&gt;
&lt;/system.serviceModel&gt;
</code></pre>
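<p>For completeness, the matching client side settings could equally have been expressed declaratively rather than programmatically, had the client been able to use a config file. A sketch of what that might look like (the binding and endpoint names here are purely illustrative, and note the servicePrincipalName identity element playing the role of CreateSpnIdentity):</p>
<pre tabindex="0"><code>&lt;system.serviceModel&gt;
  &lt;bindings&gt;
    &lt;wsHttpBinding&gt;
      &lt;binding name=&#34;SabsClientBinding&#34; maxReceivedMessageSize=&#34;2000000&#34;&gt;
        &lt;security mode=&#34;Message&#34;&gt;
          &lt;message clientCredentialType=&#34;Windows&#34; negotiateServiceCredential=&#34;false&#34; establishSecurityContext=&#34;false&#34; /&gt;
        &lt;/security&gt;
      &lt;/binding&gt;
    &lt;/wsHttpBinding&gt;
  &lt;/bindings&gt;
  &lt;client&gt;
    &lt;endpoint name=&#34;CallsServiceEndpoint&#34;
              address=&#34;http://mywcfbox:8080/CallsService&#34;
              binding=&#34;wsHttpBinding&#34; bindingConfiguration=&#34;SabsClientBinding&#34;
              contract=&#34;BlackMarble.Sabs.WcfService.ICallsService&#34;&gt;
      &lt;identity&gt;
        &lt;servicePrincipalName value=&#34;HOST/mywcfbox.blackmarble.co.uk:8080&#34; /&gt;
      &lt;/identity&gt;
    &lt;/endpoint&gt;
  &lt;/client&gt;
&lt;/system.serviceModel&gt;
</code></pre>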

<p>I then stopped the EXE based server, made sure I had the current service code on my IIS hosted version and restarted IIS, so my WCF web service was running as Network Service under IIS7 and .NET 4. It still worked, so I now had an end to end solution using Kerberos. I knew both my server and client had valid configurations, and in the format I wanted.</p>
<p>Next I upgraded my SharePoint solution so that it included the revised webpart code and tested again, and guess what, it did not work. So it was time to think about what was different between my test harness and SharePoint.</p>
<p>The basic SharePoint logical stack is as follows</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_3F2248AA.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_38030C32.png" title="image"></a></p>
<p>The key was the account which the webpart was running under. In my test box the IIS server was running as Network Service, hence it was correct to set in the AD that delegation was allowed for the computer DevelopmentPC. On our SharePoint farm we had allowed similar delegation for SharepointServer1 and SharepointServer2 (hence Network Service on these servers). However our webpart was not running under a Network Service account, but under a named domain account. It was this account, blackmarble\spapp, that needed to be granted delegation rights in the AD.</p>
<p>Still this was not the end of it; all these changes needed to be synchronised out to the various boxes, but after a repadmin on the domain controller and an IISreset on both the SharePoint front end servers it all started working.</p>
<p>So I have the solution I was after; I can start to shut off all the old systems I was using and, more importantly, I have a simpler, stable model for future development. But what have I learnt? Well, Kerberos is not as mind bending as it first appears, but you do need a good basic understanding of what is going on. There are great tools like Klist to help look at Kerberos tickets, but for problems like this the issue is more a complete lack of a ticket. The only solution is to build up your system step by step. Trust me, there is no quick fix, and you will learn far more from failure than from success.</p>]]></content:encoded>
    </item>
    <item>
      <title>PDC 2010 thoughts - the next morning</title>
      <link>https://blog.richardfennell.net/posts/pdc-2010-thoughts-the-next-morning/</link>
      <pubDate>Fri, 29 Oct 2010 09:02:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pdc-2010-thoughts-the-next-morning/</guid>
      <description>&lt;p&gt;I sat in the office yesterday with a beer in my hand watching the &lt;a href=&#34;http://player.microsoftpdc.com/session&#34;&gt;PDC2010 keynote&lt;/a&gt;. I have to say I preferred this to the option of a flight, jet lag and a less than comfortable seat in a usually overly cooled conference hall. With the Silverlight streaming the experience was excellent, especially as we connected an &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/archive/2010/10/22/living-with-the-acer-aspire-1420p.aspx&#34;&gt;Acer 1420P&lt;/a&gt; to our projector/audio via a single HDMI cable and it just worked.&lt;/p&gt;
&lt;p&gt;So what do you lose by not flying out? Well the obvious is the ‘free’ Windows Phone 7 the attendees got; too many people IMHO get hooked up on the swag at conferences, you go for knowledge, not toys. They also forget they (or their company) paid for the item anyway in their conference fee. More seriously you miss out on the chats between the sessions and, as the conference is on campus, the easier access to the Microsoft staff. Also, the act of travelling to a conference isolates you from the day to day interruptions of the office; the online experience does not, and you will have to stay up late to view sessions live due to timezones. The whole travelling experience still cannot be replaced by the online experience, no matter how good the streaming.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I sat in the office yesterday with a beer in my hand watching the <a href="http://player.microsoftpdc.com/session">PDC2010 keynote</a>. I have to say I preferred this to the option of a flight, jet lag and a less than comfortable seat in a usually overly cooled conference hall. With the Silverlight streaming the experience was excellent, especially as we connected an <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/archive/2010/10/22/living-with-the-acer-aspire-1420p.aspx">Acer 1420P</a> to our projector/audio via a single HDMI cable and it just worked.</p>
<p>So what do you lose by not flying out? Well the obvious is the ‘free’ Windows Phone 7 the attendees got; too many people IMHO get hooked up on the swag at conferences, you go for knowledge, not toys. They also forget they (or their company) paid for the item anyway in their conference fee. More seriously you miss out on the chats between the sessions and, as the conference is on campus, the easier access to the Microsoft staff. Also, the act of travelling to a conference isolates you from the day to day interruptions of the office; the online experience does not, and you will have to stay up late to view sessions live due to timezones. The whole travelling experience still cannot be replaced by the online experience, no matter how good the streaming.</p>
<p>However, even though I don’t get the ‘conference corridor experience’, it does not mean I cannot <a href="http://player.microsoftpdc.com/session">check out sessions</a>; it is great to see they are all available free and live, or as immediately available recordings if I don’t want to stay up.</p>
<p>The keynote was pretty much as I had expected: new announcements but nothing ground breaking, just good vNext steps. I thought the best place to start for me was the session “Lessons learned from moving team foundation server to the cloud”. This was on TFS, an obvious area of interest for me, but more importantly it was real world experience of moving a complex application to Azure, something that is going to affect all of us if Microsoft’s bet on the cloud is correct. It seems that, though there are many gotchas, the process was not as bad as you would expect. For me the most interesting point was that the port to Azure caused changes to the codebase that actually improved the original implementation in either manageability or performance. Also, many of the major stumbling blocks were business/charging models, not technology. This is going to affect us all as we move to service platforms like Azure or even internally hosted equivalents like AppFabric.</p>
<p>So one session watched, what to watch next?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Common confusion I have seen with Visual Studio 2010 Lab Management</title>
      <link>https://blog.richardfennell.net/posts/common-confusion-i-have-seen-with-visual-studio-2010-lab-management/</link>
      <pubDate>Mon, 25 Oct 2010 21:34:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/common-confusion-i-have-seen-with-visual-studio-2010-lab-management/</guid>
      <description>&lt;p&gt;With any new product there can be some confusion over the exact range and scope of features, this is just as true for &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/ee712698.aspx&#34;&gt;VS2010 Lab Management&lt;/a&gt; as any other. In fact given the number of moving parts (infrastructure you need in place to get it running) it can be more confusing than average. In this post I will cover the questions I have seen most often.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What does ‘Network Isolation’ really mean?&lt;/strong&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With any new product there can be some confusion over the exact range and scope of features; this is just as true for <a href="http://msdn.microsoft.com/en-us/vstudio/ee712698.aspx">VS2010 Lab Management</a> as for any other. In fact, given the number of moving parts (the infrastructure you need in place to get it running), it can be more confusing than average. In this post I will cover the questions I have seen most often.</p>
<p><strong>What does ‘Network Isolation’ really mean?</strong></p>
<p>The biggest confusion I have seen concerns the fact that Lab Management allows you to run a number of copies of a given test environment; each instance of an environment is ‘network isolated’ from the others. This means that each instance of the environment can have server VMs named the same without errors being generated. <strong>WHAT IT DOES NOT MEAN</strong> is that each of these environments is fully isolated from your corporate or test LAN. Think about it, how could that work? I am sad to say there is still no shipment date for Microsoft Magic Pixie Net (MMPN); until this is available we will still need a logical connection to any virtual machine under test, else we cannot control/monitor it.</p>
<p>So what does ‘Network Isolation’ actually mean? Well it basically means Lab Manager will add a second network card to each VM in your environment (with the exception of domain controllers, I will come back to that). These secondary connections are the way you usually manage the VMs in the environment, so you end up with something like the following</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_728F5573.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_34D247EA.png" title="image"></a></p>
<p>Lab Manager creates the 192.168.23.x virtual LAN which all the VMs in the environment connect to. If you want to change the IP address range this is set in the <a href="http://msdn.microsoft.com/en-us/library/dd380687.aspx">TFS administration console</a>, but I suspect needing to change this will be rare.</p>
<p>If the PCs in your environment are in a workgroup there is no more to do, but if you have a domain within your environment (i.e. you included a test domain controller VM in your environment, as shown above) you also need to tell the Lab Management environment which server is the domain controller. <strong>THIS IS VERY IMPORTANT.</strong> This is done in the Visual Studio 2010 Test Manager application where you set up the environment.</p>
<p>When all this is done and the environment is deployed, a second LAN card is added to all VMs in the environment (with the exception of the domain controller you told it about, if present). These LAN cards are connected to the corporate LAN; an IP address is provided by your corporate LAN DHCP server and a name is assigned in the form LAB[Guid].corpdomain.com (you can alter this domain name to something like LAB[Guid].test.corpdomain.com if you want in the <a href="http://msdn.microsoft.com/en-us/library/dd380687.aspx">TFS administration console</a>). This second LAN is a special connection in that the VMs’ NETBIOS names are not broadcast over it onto the corporate LAN, thus allowing multiple copies of the ‘network isolated’ environment to be run. Each VM will have a unique name on the corporate LAN, but the original name within the test (192.168.x.x) environment.</p>
<p>Other than blocking NETBIOS, the ‘special connection’ is in no other way restricted. So any of the test VMs can use their own connection to the corporate LAN to access any corporate (or internet) resources, such as patch update servers. The only requirement will be to log in to the corporate domain if authentication is required; remember, in the test environment you will be logged into the test domain or local workgroup.</p>
<p>I mentioned that the test domain controller is not connected to the corporate LAN. This is to make sure corporate users don’t try to authenticate against it by mistake and to stop different copies of the test domain controller trying to sync.</p>
<p>All clear? So ‘network isolated’ does not mean fully isolated, but rather the ability to have multiple copies of the same environment running at the same time, with the magic done behind the scenes by Lab Management. Maybe not the best piece of feature naming in the world!</p>
<p><strong>So how does a tester actually connect to the test VMs from their PC?</strong></p>
<p>Well obviously they don’t use a magic MMPN connection; there has to be a valid logical connection. There are actually two possible answers here. I suspect the most common will be via remote desktop straight to the guest test VMs, using the LAB[Guid].corpdomain.com name. You might be thinking, how do I know these IDs? Well, you can get them from the Test Manager application by looking at a VM’s system info in any running environment. Because you can look them up in this way, a tester can either use the Windows RDP application itself or, more probably, just connect to the VMs from within Test Manager, where it will use RDP behind the scenes.</p>
<p>The other option is to use what is called a host connection. This is where Test Manager connects to the test VMs via the Hyper-V host. For this to work the tester needs suitable Hyper-V rights and the correct tools on their local PC, not just Test Manager. The same could also be achieved using the Hyper-V manager or the SCVMM console. Host mode is how you would connect to a test domain controller, which has no direct connection to the corporate LAN.</p>
<p>The choice of connection and tool will depend on what the tester is trying to do. I would expect Test Manager to be the tool of choice in most cases.</p>
<p><strong>Do I need Network Isolation – is there another option?</strong></p>
<p>This all depends on what you want to do; there is a <a href="http://msdn.microsoft.com/en-us/library/ff756575.aspx">good description of the possible architectures in the Lab Management documentation</a>. If you don’t think ‘network isolation’ as described above is right for you, the only other option that provides similar environment separation is to not run the environments ‘network isolated’, but instead to provide each environment with a single explicit connection to the corporate LAN via a firewall such as <a href="http://www.microsoft.com/forefront/threat-management-gateway/en/us/default.aspx">TMG</a>.</p>
<p>It goes without saying that this is more complex than using the standard ‘network isolated’ model built into Lab Management, so make sure it is really worth the effort before starting down this route.</p>
<p><strong>What agents do I need to install?</strong></p>
<p>There are a number of agents involved in Lab Management; these enable network isolation management, deployment and testing. <a href="http://msdn.microsoft.com/en-us/library/dd648127.aspx">The ones you need depend on what you are trying to do</a>. If you want all the features then, unsurprisingly, you need all the agents; in that case use the <a href="http://code.msdn.microsoft.com/vslabmgmt">VMPrep tool</a>, as it makes life easier. If you don’t want them all, you can pick and choose (though it might be easier to just install all of them as standard).</p>
<p>If you want to gather test data you need the test agent, and if you want to deploy code you need the lab workflow agent. The less obvious one is that for ‘network isolation’ you need the Lab Agent installed; it is through this agent that the network isolation LAN is configured.</p>
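<p>The agent-to-capability mapping just described can be summarised as a small lookup table; a quick sketch (agent names as used in this post):</p>

```python
# Which Lab Management agent enables which capability,
# as described in this post.
agent_for = {
    "gather test data":  "Test Agent",
    "deploy code":       "Lab Workflow Agent",
    "network isolation": "Lab Agent",
}

def required_agents(capabilities):
    """Return the set of agents needed for the requested capabilities."""
    return {agent_for[c] for c in capabilities}

# A fully featured environment simply needs all three.
print(required_agents(agent_for.keys()))
```
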
<p><strong>Any other limitations I might have missed?</strong></p>
<p>The most obvious is that many companies will use failover clustering and a SAN to build a resilient Hyper-V cluster. Unfortunately, this technology is not currently supported by Lab Management. This is easy to miss, as to my knowledge it is only referred to once in the documentation, in an <a href="http://msdn.microsoft.com/en-us/library/ff756575.aspx">FAQ section</a>.</p>
<p>The effect of this is that you cannot share SAN storage between Hyper-V hosts or, more importantly, between the VMM Library and the Hyper-V hosts. This means that all deployment of environments has to be done over the LAN; the faster SAN-to-SAN operations cannot be used, as these need clustering.</p>
<p>A further consequence of having no clustering is that you cannot live migrate environments between Hyper-V hosts, but I don’t see this as much of an issue; these are meant to be lab test environments, not highly resilient production VMs.</p>
<p>This is a good reason to make sure that you separate your production Hyper-V hosts from your test ones. Make the production servers a failover cluster and the test ones just a host group, and let Lab Management work out which server in the host group (assuming there is more than one) to place each environment on.</p>
<p>So I hope that helps a bit. I am sure more common questions will emerge; I will post about them as they do.</p>
]]></content:encoded>
    </item>
    <item>
      <title>First look at Postsharp AOP framework for .NET</title>
      <link>https://blog.richardfennell.net/posts/first-look-at-postsharp-aop-framework-for-net-2/</link>
      <pubDate>Mon, 25 Oct 2010 15:13:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-look-at-postsharp-aop-framework-for-net-2/</guid>
      <description>&lt;p&gt;At the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/07/thoughts-on-the-software-craftsmanship-2010.aspx&#34;&gt;Software Craftsmanship 2010 conference&lt;/a&gt; I met Gael Fraiteur of &lt;a href=&#34;http://www.sharpcrafters.com/&#34;&gt;Sharpcrafters&lt;/a&gt;, who had given a talk on &lt;a href=&#34;http://en.wikipedia.org/wiki/Aspect-oriented_programming&#34;&gt;Aspect Oriented Programming (AOP)&lt;/a&gt;. Since the conference I have had a chance to look at his &lt;a href=&#34;http://www.sharpcrafters.com/postsharp&#34;&gt;Postsharp AOP product for .NET&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I decided to do a quick spike project for a tender I have been working on, the requirement is to add a security model to an existing .NET assembly. Usually this would have entailed adding some security logic at the start of each public method to implement the security model. Using AOP I hoped I would be able to get the same effect by adding an attribute to the classes/methods, hopefully making the changes easier to read and quicker to develop.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/10/07/thoughts-on-the-software-craftsmanship-2010.aspx">Software Craftsmanship 2010 conference</a> I met Gael Fraiteur of <a href="http://www.sharpcrafters.com/">Sharpcrafters</a>, who had given a talk on <a href="http://en.wikipedia.org/wiki/Aspect-oriented_programming">Aspect Oriented Programming (AOP)</a>. Since the conference I have had a chance to look at his <a href="http://www.sharpcrafters.com/postsharp">Postsharp AOP product for .NET</a>.</p>
<p>I decided to do a quick spike project for a tender I have been working on; the requirement is to add a security model to an existing .NET assembly. Usually this would have entailed adding some security logic at the start of each public method to implement the security model. Using AOP I hoped I would be able to get the same effect by adding an attribute to the classes/methods, hopefully making the changes easier to read and quicker to develop.</p>
<p>So I have the following business logic to which I wish to add security. All I did was add the [Security] attribute to the business logic method:</p>
<pre tabindex="0"><code>public class BusinessLogic
{
    IDataProvider data;

    public BusinessLogic(IDataProvider data)
    {
        this.data = data;
    }

    [Security]
    public DataRecord GetItem(int customerId)
    {
        Debug.WriteLine("BusinessLogic.GetItem");
        return this.data.GetItemFromDB(customerId);
    }
}
</code></pre>
<p>So what is in the AOP attribute? Basically I use the AOP framework to intercept the method call; before the method is invoked I call a factory method to get an instance of the security provider and check whether I have the rights to run the method.</p>
<pre tabindex="0"><code>[Serializable]
public class SecurityAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Debug.WriteLine("SecurityAttribute.OnInvoke");

        // this assumes we know the type and order of the arguments
        if (MembershipProviderFactory.GetProvider().CanCurrentUserViewThisItem((int)args.Arguments[0]) == true)
        {
            Debug.WriteLine("SecurityAttribute.OnInvoke: We have rights to view");
            base.OnInvoke(args);
        }
        else
        {
            Debug.WriteLine("SecurityAttribute.OnInvoke: We don't have rights to view");
        }
    }
}
</code></pre>
<p>As it was a spike project I did not bother to write the security provider (or the DB provider for that matter). I used <a href="http://www.typemock.com/index-b.php">Typemock Isolator</a> to fake it all, so my tests were as shown below. I found this way of working much quicker for my purposes.</p>
<pre tabindex="0"><code>/// &lt;summary&gt;
/// test for both the success and failure paths of the attribute
/// &lt;/summary&gt;
[TestClass]
public class Tests
{
    [Isolated]
    [TestMethod]
    public void When_the_membership_provider_gives_access_the_data_is_returned()
    {
        // arrange

        // create fake objects
        var fakeIMembershipProvider = Isolate.Fake.Instance&lt;IMembershipProvider&gt;();
        var fakeISqlProvider = Isolate.Fake.Instance&lt;ISqlProvider&gt;();

        // create real objects
        var fakeData = new DataRecord();
        var bl = new BusinessLogic(fakeISqlProvider);

        // Set that when we call the factory method we get the fake membership system
        Isolate.WhenCalled(() =&gt; MembershipProviderFactory.GetProvider()).WillReturn(fakeIMembershipProvider);
        // Set that when we call the DB layer we get the fake object
        Isolate.WhenCalled(() =&gt; fakeISqlProvider.GetItemFromDB(0)).WillReturn(fakeData);
        // Set that we are allowed to see the item
        Isolate.WhenCalled(() =&gt; fakeIMembershipProvider.CanCurrentUserViewThisItem(0)).WillReturn(true);

        // act
        var actual = bl.GetItem(1);

        // assert
        Assert.AreEqual(fakeData, actual);
        Isolate.Verify.WasCalledWithExactArguments(() =&gt; fakeISqlProvider.GetItemFromDB(1));
    }

    [Isolated]
    [TestMethod]
    public void When_the_membership_provider_does_not_give_access_no_data_is_returned()
    {
        // arrange

        // create fake objects
        var fakeIMembershipProvider = Isolate.Fake.Instance&lt;IMembershipProvider&gt;();
        var fakeISqlProvider = Isolate.Fake.Instance&lt;ISqlProvider&gt;();

        // create real objects
        var fakeData = new DataRecord();
        var bl = new BusinessLogic(fakeISqlProvider);

        // Set that when we call the factory method we get the fake membership system
        Isolate.WhenCalled(() =&gt; MembershipProviderFactory.GetProvider()).WillReturn(fakeIMembershipProvider);
        // Set that when we call the DB layer we get the fake object
        Isolate.WhenCalled(() =&gt; fakeISqlProvider.GetItemFromDB(0)).WillReturn(fakeData);
        // Set that we are not allowed to see the item
        Isolate.WhenCalled(() =&gt; fakeIMembershipProvider.CanCurrentUserViewThisItem(0)).WillReturn(false);

        // act
        var actual = bl.GetItem(1);

        // assert
        Assert.AreEqual(null, actual);
        Isolate.Verify.WasNotCalled(() =&gt; fakeISqlProvider.GetItemFromDB(1));
    }
}
</code></pre>
<p>This all worked beautifully, and I have to say it was nice and straightforward to code. The code looks clean, and inspecting the generated code with <a href="http://www.red-gate.com/products/reflector/">Reflector</a>, it looks OK too.</p>
<p>My only worries are:</p>
<ol>
<li>Performance; but after looking at the generated code I can’t see that the AOP framework’s output is any less efficient than me adding security method calls to every business method by hand, and using Postsharp certainly requires much less repetitive coding. In my spike the security provider factory strikes me as the bottleneck, but this is my problem, not the framework’s, and it can be addressed with a better design pattern to make sure the provider is not created on every method call.</li>
<li>I can see complexity appearing in handling the parameters passed between the attribute and the method being invoked. In my spike I needed to know the order of the parameters so I could pass the correct one to my security methods; however, again I don’t see this as a major stumbling block: either the framework provides something I am unaware of, or I just need to write a few forms of the security aspect constructor.</li>
</ol>
<p>So will I be using Postsharp? I suppose in the short term it depends on whether we win this tender, but I have to say I like what I saw from this first usage.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Typemock Isolator&#43;&#43; Video</title>
      <link>https://blog.richardfennell.net/posts/typemock-isolator-video/</link>
      <pubDate>Mon, 25 Oct 2010 12:36:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/typemock-isolator-video/</guid>
      <description>&lt;p&gt;I mentioned at the &lt;a href=&#34;http://dotnetuk.net/&#34;&gt;Developer Group meeting&lt;/a&gt; I was speaking at last week at the Typemock had a new C++ version of their product. &lt;a href=&#34;http://blog.typemock.com/2010/10/video-slides-easier-unit-testing-with.html?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A&amp;#43;Typemock&amp;#43;%28The&amp;#43;Typemock&amp;#43;Insider%29&#34;&gt;They have now published the video of their webinar on this product.&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I mentioned at the <a href="http://dotnetuk.net/">Developer Group meeting</a> I was speaking at last week that Typemock had a new C++ version of their product. <a href="http://blog.typemock.com/2010/10/video-slides-easier-unit-testing-with.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A&#43;Typemock&#43;%28The&#43;Typemock&#43;Insider%29">They have now published the video of their webinar on this product.</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Video interviews from Software Craftsman 2010</title>
      <link>https://blog.richardfennell.net/posts/video-interviews-from-software-craftsman-2010/</link>
      <pubDate>Fri, 22 Oct 2010 21:25:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/video-interviews-from-software-craftsman-2010/</guid>
      <description>&lt;p&gt;Gael Fraiteur of &lt;a href=&#34;http://www.sharpcrafters.com/&#34;&gt;Sharpcrafters&lt;/a&gt; recorded some interviews at the Software Craftsmen conference a couple of weeks ago. The interviewees include Ben Hall, Jason Gorman, Sandro Mancuso, Zi Makki and myself. &lt;a href=&#34;http://www.youtube.com/user/sharpcrafters&#34;&gt;They are all up on Youtube&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Gael Fraiteur of <a href="http://www.sharpcrafters.com/">Sharpcrafters</a> recorded some interviews at the Software Craftsmanship conference a couple of weeks ago. The interviewees include Ben Hall, Jason Gorman, Sandro Mancuso, Zi Makki and myself. <a href="http://www.youtube.com/user/sharpcrafters">They are all up on YouTube</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I am speaking at NEBytes in November on Mocking with Typemock Isolator</title>
      <link>https://blog.richardfennell.net/posts/i-am-speaking-at-nebytes-in-november-on-mocking-with-typemock-isolator-2/</link>
      <pubDate>Tue, 19 Oct 2010 20:51:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-am-speaking-at-nebytes-in-november-on-mocking-with-typemock-isolator-2/</guid>
      <description>&lt;p&gt;On the 17th of November I will be speaking in Newcastle at the &lt;a href=&#34;http://www.nebytes.net/&#34;&gt;NEBytes&lt;/a&gt; user group on the subject ‘Using Typemock Isolator to enable testing of both well designed code and nasty legacy systems’.&lt;/p&gt;
&lt;p&gt;NEBytes meetings have an interesting format of a developer and an IT Pro talk at each meeting. The IT Pro session in November is to be given by another member of Black Marble staff, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx&#34;&gt;Rik Hepworth&lt;/a&gt;, and is on Sharepoint I think.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On the 17th of November I will be speaking in Newcastle at the <a href="http://www.nebytes.net/">NEBytes</a> user group on the subject ‘Using Typemock Isolator to enable testing of both well designed code and nasty legacy systems’.</p>
<p>NEBytes meetings have an interesting format of a developer and an IT Pro talk at each meeting. The IT Pro session in November is to be given by another member of Black Marble staff, <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx">Rik Hepworth</a>, and is on Sharepoint I think.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Should I buy a Kindle?</title>
      <link>https://blog.richardfennell.net/posts/should-i-buy-a-kindle/</link>
      <pubDate>Tue, 19 Oct 2010 11:05:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/should-i-buy-a-kindle/</guid>
      <description>&lt;p&gt;I have always read a lot of novels, and I like to have a book with me for those unexpected moments when I get a chance to read. Of late this has meant I use the &lt;a href=&#34;http://www.microsoft.com/reader/&#34;&gt;Microsoft Reader&lt;/a&gt; on my phone. It is not too bad an experience, using &lt;a href=&#34;http://www.gutenberg.org/wiki/Main_Page&#34;&gt;Project Gutenberg&lt;/a&gt; I can get a book (fiddle a bit in Word) and export to the Reader format. However I would like a slicker experience and be able to read new releases, so the &lt;a href=&#34;http://www.amazon.co.uk/Kindle-Store/b/ref=topnav_storetab_kinh?ie=UTF8&amp;amp;node=341677031&#34;&gt;Kindle&lt;/a&gt; seems just the job.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have always read a lot of novels, and I like to have a book with me for those unexpected moments when I get a chance to read. Of late this has meant I use the <a href="http://www.microsoft.com/reader/">Microsoft Reader</a> on my phone. It is not too bad an experience: using <a href="http://www.gutenberg.org/wiki/Main_Page">Project Gutenberg</a> I can get a book (fiddle a bit in Word) and export it to the Reader format. However, I would like a slicker experience and to be able to read new releases, so the <a href="http://www.amazon.co.uk/Kindle-Store/b/ref=topnav_storetab_kinh?ie=UTF8&amp;node=341677031">Kindle</a> seems just the job.</p>
<p>As a device it seems perfect: about the size and weight of a paperback, excellent battery life (as power is only used to turn/display the page, not to view pages), excellent in natural light, and now the price has dropped to the point that if you did lose it you would be pissed off, but not bankrupt. Oh, and dropping it in the bath, though it might ruin the device, will not electrocute you!</p>
<p>My problem is the price of new books; take <a href="http://www.amazon.co.uk/gp/product/B003ZUXXBA?pf_rd_p=215862067&amp;pf_rd_s=center-4&amp;pf_rd_t=101&amp;pf_rd_i=341677031&amp;pf_rd_m=A3P5ROKL5A1OLE&amp;pf_rd_r=064BSH6QGZF5H8YDSWY8">William Gibson’s Zero History</a> as an example. On Amazon this is £12.29 in hardback but £9.99 for the Kindle edition. So from this we assume the production costs, shipping, warehousing etc. for the physical copy total £2.30, which seems a bit low to me! How is the £9.99 justified? There will be the writer’s royalty, the file production costs and the marketing and other publishing overheads, but £9.99 seems steep, especially given the royalty rate that I know friends who are writers get for their novels. Someone is making a tidy profit, and it is not the writer.</p>
<p>If we look at one of Gibson’s older books, <a href="http://www.amazon.co.uk/Spook-Country/dp/B002TJLF3Q/ref=pd_sim_kinc_1">Spook Country</a>, now in paperback for £2.99, we see the Kindle price is £2.84. So it seems the Kindle price is set at (very roughly) 10% below the lowest physical edition cost.</p>
<p>So I am being asked to buy an eBook at almost the same cost as I can get a paper copy, when the publisher/supply chain do not have to make the physical copy or ship it. I get the convenience that I can carry around 3500 books at a time, but I can only read one! Also I cannot lend a book to a friend, thus, I admit, reducing the potential royalties of a writer, but also removing any viral marketing opportunities.</p>
<p>So should I buy a Kindle? At this price for the eBooks I think not. I will stick to buying my new books on paper and keep a selection of out-of-copyright classics on my phone. I will wait until the publishing industry reviews its sales model for these editions, maybe increasing the writers’ royalties to reflect that it is their efforts that are being purchased, not examples of the bookbinder’s art!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Do you work with C&#43;&#43;? Typemock Isolator&#43;&#43; webinar</title>
      <link>https://blog.richardfennell.net/posts/do-you-work-with-c-typemock-isolator-webinar/</link>
      <pubDate>Tue, 19 Oct 2010 10:20:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/do-you-work-with-c-typemock-isolator-webinar/</guid>
      <description>&lt;p&gt;I don’t work with C++, but if you do you might be interested in Typemock’s free webinar this Thursday (21st October) on Isolator++. It will cover:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The basic API design principles of Isolator++ (from AAA to recursive fakes, loose fakes, sticky behaviour, etc.)&lt;/li&gt;
&lt;li&gt;What can Isolator++ do that others can’t?&lt;/li&gt;
&lt;li&gt;Code examples using Google Test framework and Isolator++, to test nasty untestable C++ code&lt;/li&gt;
&lt;li&gt;And as it is being given by Roy Osherove, possibly a short song, suitable for the occasion, performed live.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Also, as an added incentive in honor of the first release of Isolator++, Typemock are giving away &lt;strong&gt;a free Isolator++ license to all webinar attendees&lt;/strong&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I don’t work with C++, but if you do you might be interested in Typemock’s free webinar this Thursday (21st October) on Isolator++. It will cover:</p>
<ul>
<li>The basic API design principles of Isolator++ (from AAA to recursive fakes, loose fakes, sticky behaviour, etc.)</li>
<li>What can Isolator++ do that others can’t?</li>
<li>Code examples using Google Test framework and Isolator++, to test nasty untestable C++ code</li>
<li>And as it is being given by Roy Osherove, possibly a short song, suitable for the occasion, performed live.</li>
</ul>
<p>Also, as an added incentive in honor of the first release of Isolator++, Typemock are giving away <strong>a free Isolator++ license to all webinar attendees</strong>.</p>
<p><a href="https://www2.gotomeeting.com/register/973812899">To attend the webinar, you can register here.</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>All our futures behind us?</title>
      <link>https://blog.richardfennell.net/posts/all-our-futures-behind-us/</link>
      <pubDate>Mon, 18 Oct 2010 19:58:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/all-our-futures-behind-us/</guid>
      <description>&lt;p&gt;I had a strangely thought-provoking weekend: I took my son to do the &lt;a href=&#34;http://www.manchesterairport.co.uk/manweb.nsf/Content/ConcordeInformation&#34;&gt;tour of Concorde at Manchester Airport&lt;/a&gt;, and whilst in the area popped into &lt;a href=&#34;http://www.jb.man.ac.uk/visitorcentre/&#34;&gt;Jodrell Bank to look at the Radio Telescope and the arboretum&lt;/a&gt;. Two great technological achievements, well worth a visit, but I felt both seemed to be in our past. I remember Concorde, I remember Apollo (just) and I remember sitting in a room at school to watch the first Shuttle launch, but where is the equivalent today? I started to feel that this ‘thrusting to the future’ style of project no longer exists; there seem to be few children saying ‘I want to be an engineer’ or ‘an astronaut’. I fear they are too often now saying ‘I just want to be famous’.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had a strangely thought-provoking weekend: I took my son to do the <a href="http://www.manchesterairport.co.uk/manweb.nsf/Content/ConcordeInformation">tour of Concorde at Manchester Airport</a>, and whilst in the area popped into <a href="http://www.jb.man.ac.uk/visitorcentre/">Jodrell Bank to look at the Radio Telescope and the arboretum</a>. Two great technological achievements, well worth a visit, but I felt both seemed to be in our past. I remember Concorde, I remember Apollo (just) and I remember sitting in a room at school to watch the first Shuttle launch, but where is the equivalent today? I started to feel that this ‘thrusting to the future’ style of project no longer exists; there seem to be few children saying ‘I want to be an engineer’ or ‘an astronaut’. I fear they are too often now saying ‘I just want to be famous’.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2C1412E9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1CED0E42.png" title="image"></a></p>
<p>But then I thought a bit more, and I think these projects are still there; we had the <a href="http://www.lhc.ac.uk/">LHC</a> switched on last year, and just last week BBC News covered the breakthrough of the <a href="http://www.bbc.co.uk/news/world-europe-11548845?utm_source=twitterfeed&amp;utm_medium=twitter&amp;utm_campaign=News&amp;utm_term=Private&amp;utm_content=World&#43;News">Gotthard Rail Tunnel</a>. Big science/technology is still a news story, but I have to say more usually not in the positive sense; too many stories are presented in the ‘science gone mad’ category. We (or should I say the media) have lost the awe for big science and replaced it with fear, or at least mistrust.</p>
<p>Maybe I am just looking at the past with rose-tinted spectacles; Jodrell Bank was about 10x over budget, and people complained ‘why send men to the moon when people are starving on earth’, so maybe the coverage was the same. The current mainstream tone of reporting could just be a factor of living in a less deferential age. For me there is no question it is good to question the value of science, but this has to be done from an informed position; you have to at least start to understand the question to give a reasonable opinion (or even ask a reasonable question in the first place).</p>
<p>What worries me is that this lack of awe and excitement in science will drive children away from wanting to be involved in science and technology. At least we are seeing a return to accessible science on the BBC (worth every penny of the licence fee) in <a href="http://www.bbc.co.uk/bang/">Bang Goes the Theory</a>, the <a href="http://www.bbc.co.uk/bbcone/wallaceandgromit/">World of Invention</a> and the new archive of <a href="http://www.bbc.co.uk/archive/great_egg_race/">The Great Egg Race</a> (proper 1970s mad scientists; I doubt you would get a 30-minute programme today with people fiddling with bits of string and rubber bands whilst wearing wing-collar nylon shirts – think of the health and safety static risk alone!).</p>
<p>So I guess my initial fear is unfounded; the sense of wonder is still out there, maybe we just have to make more of an effort to go and find it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>ALM Rangers ship SCOM Management pack for TFS2010</title>
      <link>https://blog.richardfennell.net/posts/alm-rangers-ship-scom-management-pack-for-tfs2010/</link>
      <pubDate>Fri, 15 Oct 2010 20:26:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alm-rangers-ship-scom-management-pack-for-tfs2010/</guid>
      <description>&lt;p&gt;Want to monitor the health of your TFS system? Then wait no longer, the &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/ee358786.aspx&#34;&gt;ALM Rangers&lt;/a&gt; have just shipped a SCOM Management pack for Visual Studio 2010.&lt;/p&gt;
&lt;p&gt;The management pack provides availability and configuration monitoring, performance data collection, and default thresholds.  So if you use any SKU of &lt;a href=&#34;http://technet.microsoft.com/en-us/evalcenter/bb738014.aspx&#34;&gt;SCOM 2007&lt;/a&gt; and TFS 2010 why not &lt;a href=&#34;http://www.microsoft.com/downloads/en/details.aspx?FamilyID=97ca3b31-3653-4d60-bdad-3f2017febdc3&amp;amp;displaylang=en&#34;&gt;download it&lt;/a&gt; and have a look.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Want to monitor the health of your TFS system? Then wait no longer, the <a href="http://msdn.microsoft.com/en-us/vstudio/ee358786.aspx">ALM Rangers</a> have just shipped a SCOM Management pack for Visual Studio 2010.</p>
<p>The management pack provides availability and configuration monitoring, performance data collection, and default thresholds.  So if you use any SKU of <a href="http://technet.microsoft.com/en-us/evalcenter/bb738014.aspx">SCOM 2007</a> and TFS 2010 why not <a href="http://www.microsoft.com/downloads/en/details.aspx?FamilyID=97ca3b31-3653-4d60-bdad-3f2017febdc3&amp;displaylang=en">download it</a> and have a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Leeds Hack Day 2010 registration open</title>
      <link>https://blog.richardfennell.net/posts/leeds-hack-day-2010-registration-open/</link>
      <pubDate>Thu, 14 Oct 2010 14:08:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/leeds-hack-day-2010-registration-open/</guid>
      <description>&lt;p&gt;To quote their site….&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Leeds Hack is a 24 hour hack day in the city of Leeds (surprise!). 100+ people in a room – Feed them, water them, take them out for walks every now and then and let them create some amazing things.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Best of all… It’s free..&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;For more details have a look at &lt;a href=&#34;http://leedshack.com/&#34; title=&#34;http://leedshack.com/&#34;&gt;http://leedshack.com/&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>To quote their site….</p>
<p><em>Leeds Hack is a 24 hour hack day in the city of Leeds (surprise!). 100+ people in a room – Feed them, water them, take them out for walks every now and then and let them create some amazing things.</em></p>
<p><em>Best of all… It’s free..</em></p>
<p>For more details have a look at <a href="http://leedshack.com/" title="http://leedshack.com/">http://leedshack.com/</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the Developer Group on Typemock Isolator on the 20th of October</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-developer-group-on-typemock-isolator-on-the-20th-of-october/</link>
      <pubDate>Thu, 07 Oct 2010 21:46:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-developer-group-on-typemock-isolator-on-the-20th-of-october/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://dotnetuk.net/meetings/20101020.pdf&#34;&gt;detailed agenda for the Developer Group Meeting on the 20th of October in London&lt;/a&gt; has been published. I am speaking in the afternoon on ‘Using Typemock Isolator to enable testing of both well designed code and nasty legacy systems’.&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://dotnetuk.net/meetings/20101020.pdf">detailed agenda for the Developer Group Meeting on the 20th of October in London</a> has been published. I am speaking in the afternoon on ‘Using Typemock Isolator to enable testing of both well designed code and nasty legacy systems’.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Thoughts on the Software Craftsmanship 2010</title>
      <link>https://blog.richardfennell.net/posts/thoughts-on-the-software-craftsmanship-2010/</link>
      <pubDate>Thu, 07 Oct 2010 21:38:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thoughts-on-the-software-craftsmanship-2010/</guid>
      <description>&lt;p&gt;I have reached my hotel after the day’s events at &lt;a href=&#34;http://parlezuml.com/softwarecraftsmanship/&#34;&gt;Software Craftsmanship 2010&lt;/a&gt; at &lt;a href=&#34;http://www.bletchleypark.org.uk/&#34;&gt;Bletchley Park&lt;/a&gt; and got a chance to write up my thoughts. I had planned to &lt;a href=&#34;http://twitter.com/richardfennell&#34;&gt;Tweet&lt;/a&gt; during the day, but just never got round to it – too busy.&lt;/p&gt;
&lt;p&gt;So how was the event? I have to say I don’t think I enjoyed it as much as &lt;a href=&#34;https://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/02/26/intent-is-the-key-thoughts-on-the-way-home-form-software-craftsmanship-2009.aspx&#34;&gt;last year’s event&lt;/a&gt;. That is not to say this was not a good event, but this year’s seemed to focus on hands-on programming tasks. This is great, but I always feel that I could have been doing this at home, or in a dojo session in the office (not that I do this often enough). Last year the sessions were more in the &lt;a href=&#34;http://codingdojo.org/cgi-bin/wiki.pl?back=RandoriKata&#34;&gt;Randori Kata&lt;/a&gt; format, and the group discussion this engendered I found very useful. I think conferences like this are probably at their best when they open your mind to new techniques and ideas; yes, coding kata can do this, but I feel that a conference needs to focus on the meta level of ‘why do this kata’ and ‘how best to run a kata’ and leave the actual kata sessions to user group meetings.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have reached my hotel after the day’s events at <a href="http://parlezuml.com/softwarecraftsmanship/">Software Craftsmanship 2010</a> at <a href="http://www.bletchleypark.org.uk/">Bletchley Park</a> and got a chance to write up my thoughts. I had planned to <a href="http://twitter.com/richardfennell">Tweet</a> during the day, but just never got round to it – too busy.</p>
<p>So how was the event? I have to say I don’t think I enjoyed it as much as <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/02/26/intent-is-the-key-thoughts-on-the-way-home-form-software-craftsmanship-2009.aspx">last year’s event</a>. That is not to say this was not a good event, but this year’s seemed to focus on hands-on programming tasks. This is great, but I always feel that I could have been doing this at home, or in a dojo session in the office (not that I do this often enough). Last year the sessions were more in the <a href="http://codingdojo.org/cgi-bin/wiki.pl?back=RandoriKata">Randori Kata</a> format, and the group discussion this engendered I found very useful. I think conferences like this are probably at their best when they open your mind to new techniques and ideas; yes, coding kata can do this, but I feel that a conference needs to focus on the meta level of ‘why do this kata’ and ‘how best to run a kata’ and leave the actual kata sessions to user group meetings.</p>
<p>Maybe there should be more sessions discussing how best to evangelise software craftsmanship. Today’s event was full of people who have already decided a craftsmanship approach is the answer for them and our industry; the big question is how to bring more people with us, especially the group who don’t attend conferences, user groups or even try to keep up with current trends. I suppose it is down to all of us who do attend such groups to spread the word, so why not get down to your local group and help improve our industry? If you are in Yorkshire why not try <a href="http://www.agileyorkshire.org/">Agile Yorkshire next week</a>? If you can’t make it, why not try setting up a lunchtime coding dojo – there are loads of ideas out there on the web.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Error –4002 on Access services on Sharepoint 2010</title>
      <link>https://blog.richardfennell.net/posts/error-4002-on-access-services-on-sharepoint-2010/</link>
      <pubDate>Mon, 04 Oct 2010 09:45:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/error-4002-on-access-services-on-sharepoint-2010/</guid>
      <description>&lt;p&gt;We have had an internal timesheeting system written in Access Services running without any problems for the past few months. At the end of last week, when people tried to submit their timesheets, they started getting a -4002 error saying the macro (that saves the weekly sheet) could not be started.&lt;/p&gt;
&lt;p&gt;Checking the server event logs, Sharepoint logs and Access Services log tables showed nothing. So, as all good IT staff do, we tried the traditional IISRESET command (on both our Sharepoint web servers) and it all leapt back into life. The only change on our server in the past week has been the &lt;a href=&#34;http://weblogs.asp.net/scottgu/archive/2010/09/28/asp-net-security-update-now-available.aspx&#34;&gt;ASP.NET security fix&lt;/a&gt;, and associated reboot, but I cannot see why this should affect Access Services; it looks as if Access Services simply failed to restart fully after the server reboot.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have had an internal timesheeting system written in Access Services running without any problems for the past few months. At the end of last week, when people tried to submit their timesheets, they started getting a -4002 error saying the macro (that saves the weekly sheet) could not be started.</p>
<p>Checking the server event logs, Sharepoint logs and Access Services log tables showed nothing. So, as all good IT staff do, we tried the traditional IISRESET command (on both our Sharepoint web servers) and it all leapt back into life. The only change on our server in the past week has been the <a href="http://weblogs.asp.net/scottgu/archive/2010/09/28/asp-net-security-update-now-available.aspx">ASP.NET security fix</a>, and associated reboot, but I cannot see why this should affect Access Services; it looks as if Access Services simply failed to restart fully after the server reboot.</p>
<p>One to keep an eye on.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD Dublin 2010 Sessions published</title>
      <link>https://blog.richardfennell.net/posts/ddd-dublin-2010-sessions-published/</link>
      <pubDate>Wed, 22 Sep 2010 08:53:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-dublin-2010-sessions-published/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://developerdeveloperdeveloper.com/dddie10/Schedule.aspx&#34;&gt;results of the DDD Dublin vote&lt;/a&gt; are in, and I am sorry to say I did not make it through the selection process. I must come up with at least some more interesting titles and abstracts (of course, I can leave the sessions as dull as ever, as you will have voted for them by then – just like politicians and general elections really).&lt;/p&gt;
&lt;p&gt;The schedule looks really good, but after a bit of thought I have decided not to attend the event; I am speaking or attending events both the week before and  the week after DDD Dublin so I am going to take the chance to reduce my carbon footprint and have a weekend at home.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://developerdeveloperdeveloper.com/dddie10/Schedule.aspx">results of the DDD Dublin vote</a> are in, and I am sorry to say I did not make it through the selection process. I must come up with at least some more interesting titles and abstracts (of course, I can leave the sessions as dull as ever, as you will have voted for them by then – just like politicians and general elections really).</p>
<p>The schedule looks really good, but after a bit of thought I have decided not to attend the event; I am speaking or attending events both the week before and  the week after DDD Dublin so I am going to take the chance to reduce my carbon footprint and have a weekend at home.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD9 Announced</title>
      <link>https://blog.richardfennell.net/posts/ddd9-announced/</link>
      <pubDate>Mon, 20 Sep 2010 11:41:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd9-announced/</guid>
      <description>&lt;p&gt;DDD9  will be on the 29th Jan 2011 at TVP, for more details and to submit sessions see &lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd9/&#34;&gt;http://developerdeveloperdeveloper.com/ddd9/&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>DDD9  will be on the 29th Jan 2011 at TVP, for more details and to submit sessions see <a href="http://developerdeveloperdeveloper.com/ddd9/">http://developerdeveloperdeveloper.com/ddd9/</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Upcoming speaking engagements</title>
      <link>https://blog.richardfennell.net/posts/upcoming-speaking-engagements-2/</link>
      <pubDate>Thu, 16 Sep 2010 12:17:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upcoming-speaking-engagements-2/</guid>
      <description>&lt;ul&gt;
&lt;li&gt;21 Sep – &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=From%20Express%20to%20Ultimate:%20Improving%20Productivity%20with%20Visual%20Studio%202010%20-%2021%20Sept&#34;&gt;From Express to Ultimate: Improving Productivity with Visual Studio 2010&lt;/a&gt; (Black Marble Bradford)&lt;/li&gt;
&lt;li&gt;29 Sep – &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&#34;&gt;Microsoft ALM day&lt;/a&gt; (Microsoft Reading) [Cancelled]&lt;/li&gt;
&lt;li&gt;5 Oct – &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&#34;&gt;Microsoft ALM day&lt;/a&gt; (ICCI London) [Cancelled]&lt;/li&gt;
&lt;li&gt;7 Oct - Software Craftsmanship Conference (not speaking but attending, and it is workshop based so I doubt I will be silent all day!)&lt;/li&gt;
&lt;li&gt;9 Oct - DDD Dublin – The &lt;a href=&#34;http://developerdeveloperdeveloper.com/dddie10/ProposedSessions.aspx&#34;&gt;vote is still open&lt;/a&gt; so not sure if I am presenting here yet or just attending [Did not get through the vote so decided not to attend]&lt;/li&gt;
&lt;li&gt;13 Oct - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Mix%20in%20the%20North&#34;&gt;Mix in the North&lt;/a&gt; (Holiday Inn, Tong)&lt;/li&gt;
&lt;li&gt;14 Oct – Integrating Java Developers into TFS with Team Explorer Everywhere (Black Marble Bradford)&lt;/li&gt;
&lt;li&gt;20 Oct - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=The%20Developers%20Group%20Meeting&#34; title=&#34;The Developers Group Meeting&#34;&gt;The Developers Group Meeting&lt;/a&gt; Using Typemock Isolator (EMC Office London)&lt;/li&gt;
&lt;li&gt;28 Oct - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=VBug%20Autumn%20Conference%20%E2%80%93%20The%20Step%20Up%20to%20ALM&#34;&gt;VBug Autumn Conference – The Step Up to ALM&lt;/a&gt; (Loughborough) Postponed to the spring&lt;/li&gt;
&lt;li&gt;16 Nov - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=From%20Express%20to%20Ultimate:%20Improving%20Productivity%20with%20Visual%20Studio%202010%20-%2016%20Nov&#34;&gt;From Express to Ultimate: Improving Productivity with Visual Studio 2010&lt;/a&gt; (Black Marble Bradford)&lt;/li&gt;
&lt;li&gt;17 Nov - Provisionally booked for a session on using Typemock Isolator (NEBytes Newcastle)&lt;/li&gt;
&lt;li&gt;8 Dec 2010 - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Architecture%20Forum%20in%20the%20North%202010&#34;&gt;Architecture Forum in the North 2010&lt;/a&gt; (Holiday Inn, Tong)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and that is just this year&lt;/p&gt;</description>
      <content:encoded><![CDATA[<ul>
<li>21 Sep – <a href="http://www.blackmarble.co.uk/events.aspx?event=From%20Express%20to%20Ultimate:%20Improving%20Productivity%20with%20Visual%20Studio%202010%20-%2021%20Sept">From Express to Ultimate: Improving Productivity with Visual Studio 2010</a> (Black Marble Bradford)</li>
<li>29 Sep – <a href="http://www.microsoft.com/visualstudio/en-gb/visual-studio-events">Microsoft ALM day</a> (Microsoft Reading) [Cancelled]</li>
<li>5 Oct – <a href="http://www.microsoft.com/visualstudio/en-gb/visual-studio-events">Microsoft ALM day</a> (ICCI London) [Cancelled]</li>
<li>7 Oct - Software Craftsmanship Conference (not speaking but attending, and it is workshop based so I doubt I will be silent all day!)</li>
<li>9 Oct - DDD Dublin – The <a href="http://developerdeveloperdeveloper.com/dddie10/ProposedSessions.aspx">vote is still open</a> so not sure if I am presenting here yet or just attending [Did not get through the vote so decided not to attend]</li>
<li>13 Oct - <a href="http://www.blackmarble.co.uk/events.aspx?event=Mix%20in%20the%20North">Mix in the North</a> (Holiday Inn, Tong)</li>
<li>14 Oct – Integrating Java Developers into TFS with Team Explorer Everywhere (Black Marble Bradford)</li>
<li>20 Oct - <a href="http://www.blackmarble.co.uk/events.aspx?event=The%20Developers%20Group%20Meeting" title="The Developers Group Meeting">The Developers Group Meeting</a> Using Typemock Isolator (EMC Office London)</li>
<li>28 Oct - <a href="http://www.blackmarble.co.uk/events.aspx?event=VBug%20Autumn%20Conference%20%E2%80%93%20The%20Step%20Up%20to%20ALM">VBug Autumn Conference – The Step Up to ALM</a> (Loughborough) Postponed to the spring</li>
<li>16 Nov - <a href="http://www.blackmarble.co.uk/events.aspx?event=From%20Express%20to%20Ultimate:%20Improving%20Productivity%20with%20Visual%20Studio%202010%20-%2016%20Nov">From Express to Ultimate: Improving Productivity with Visual Studio 2010</a> (Black Marble Bradford)</li>
<li>17 Nov - Provisionally booked for a session on using Typemock Isolator (NEBytes Newcastle)</li>
<li>8 Dec 2010 - <a href="http://www.blackmarble.co.uk/events.aspx?event=Architecture%20Forum%20in%20the%20North%202010">Architecture Forum in the North 2010</a> (Holiday Inn, Tong)</li>
</ul>
<p>and that is just this year</p>
<p>[Updated 22nd &amp; 29th Sep 2010]</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upcoming speaking engagements</title>
      <link>https://blog.richardfennell.net/posts/upcoming-speaking-engagements/</link>
      <pubDate>Thu, 16 Sep 2010 12:17:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upcoming-speaking-engagements/</guid>
      <description>&lt;ul&gt;
&lt;li&gt;21 Sep – &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=From%20Express%20to%20Ultimate:%20Improving%20Productivity%20with%20Visual%20Studio%202010%20-%2021%20Sept&#34;&gt;From Express to Ultimate: Improving Productivity with Visual Studio 2010&lt;/a&gt; (Black Marble Bradford)&lt;/li&gt;
&lt;li&gt;29 Sep – &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&#34;&gt;Microsoft ALM day&lt;/a&gt; (Microsoft Reading) [Cancelled]&lt;/li&gt;
&lt;li&gt;5 Oct – &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/visual-studio-events&#34;&gt;Microsoft ALM day&lt;/a&gt; (ICCI London) [Cancelled]&lt;/li&gt;
&lt;li&gt;7 Oct - Software Craftsmanship Conference (not speaking but attending, and it is workshop based so I doubt I will be silent all day!)&lt;/li&gt;
&lt;li&gt;9 Oct - DDD Dublin – The &lt;a href=&#34;http://developerdeveloperdeveloper.com/dddie10/ProposedSessions.aspx&#34;&gt;vote is still open&lt;/a&gt; so not sure if I am presenting here yet or just attending [Did not get through the vote so decided not to attend]&lt;/li&gt;
&lt;li&gt;13 Oct - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Mix%20in%20the%20North&#34;&gt;Mix in the North&lt;/a&gt; (Holiday Inn, Tong)&lt;/li&gt;
&lt;li&gt;14 Oct – Integrating Java Developers into TFS with Team Explorer Everywhere (Black Marble Bradford)&lt;/li&gt;
&lt;li&gt;20 Oct - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=The%20Developers%20Group%20Meeting&#34; title=&#34;The Developers Group Meeting&#34;&gt;The Developers Group Meeting&lt;/a&gt; Using Typemock Isolator (EMC Office London)&lt;/li&gt;
&lt;li&gt;28 Oct - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=VBug%20Autumn%20Conference%20%E2%80%93%20The%20Step%20Up%20to%20ALM&#34;&gt;VBug Autumn Conference – The Step Up to ALM&lt;/a&gt; (Loughborough) Postponed to the spring&lt;/li&gt;
&lt;li&gt;16 Nov - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=From%20Express%20to%20Ultimate:%20Improving%20Productivity%20with%20Visual%20Studio%202010%20-%2016%20Nov&#34;&gt;From Express to Ultimate: Improving Productivity with Visual Studio 2010&lt;/a&gt; (Black Marble Bradford)&lt;/li&gt;
&lt;li&gt;17 Nov - Provisionally booked for a session on using Typemock Isolator (NEBytes Newcastle)&lt;/li&gt;
&lt;li&gt;8 Dec 2010 - &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Architecture%20Forum%20in%20the%20North%202010&#34;&gt;Architecture Forum in the North 2010&lt;/a&gt; (Holiday Inn, Tong)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;and that is just this year&lt;/p&gt;</description>
      <content:encoded><![CDATA[<ul>
<li>21 Sep – <a href="http://www.blackmarble.co.uk/events.aspx?event=From Express to Ultimate: Improving Productivity with Visual Studio 2010 - 21 Sept">From Express to Ultimate: Improving Productivity with Visual Studio 2010</a> (Black Marble Bradford)</li>
<li>29 Sep – <a href="http://www.microsoft.com/visualstudio/en-gb/visual-studio-events">Microsoft ALM day</a> (Microsoft Reading) [Cancelled]</li>
<li>5 Oct – <a href="http://www.microsoft.com/visualstudio/en-gb/visual-studio-events">Microsoft ALM day</a> (ICCI London) [Cancelled]</li>
<li>7 Oct - Software Craftsmanship Conference (not speaking but attending and it is workshop based so I doubt I will be silent all day!)</li>
<li>9 Oct - DDD Dublin – The <a href="http://developerdeveloperdeveloper.com/dddie10/ProposedSessions.aspx">vote is still open</a> so I am not sure whether I am presenting here yet or just attending [Did not get through the vote so decided not to attend]</li>
<li>13 Oct - <a href="http://www.blackmarble.co.uk/events.aspx?event=Mix in the North">Mix in the North</a> (Holiday Inn, Tong)</li>
<li>14 Oct – Integrating Java Developers into TFS with Team Explorer Everywhere (Black Marble Bradford)</li>
<li>20 Oct - <a href="http://www.blackmarble.co.uk/events.aspx?event=The Developers Group Meeting" title="The Developers Group Meeting">The Developers Group Meeting</a> Using Typemock Isolator (EMC Office London)</li>
<li>28 Oct - <a href="http://www.blackmarble.co.uk/events.aspx?event=VBug Autumn Conference – The Step Up to ALM">VBug Autumn Conference – The Step Up to ALM</a> (Loughborough) Postponed to the spring</li>
<li>16 Nov - <a href="http://www.blackmarble.co.uk/events.aspx?event=From Express to Ultimate: Improving Productivity with Visual Studio 2010 - 16 Nov">From Express to Ultimate: Improving Productivity with Visual Studio 2010</a> (Black Marble Bradford)</li>
<li>17 Nov - Provisionally booked for a session on using Typemock Isolator (NEBytes, Newcastle)</li>
<li>8 Dec 2010 - <a href="http://www.blackmarble.co.uk/events.aspx?event=Architecture Forum in the North 2010">Architecture Forum in the North 2010</a> (Holiday Inn, Tong)</li>
</ul>
<p>and that is just this year</p>
<p>[Updated 22nd &amp; 29th Sep 2010]</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD Dublin voting has opened</title>
      <link>https://blog.richardfennell.net/posts/ddd-dublin-voting-as-opened/</link>
      <pubDate>Tue, 14 Sep 2010 13:05:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-dublin-voting-as-opened/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://developerdeveloperdeveloper.com/dddie10/Users/VoteForSessions.aspx&#34;&gt;vote has opened for DDD Ireland&lt;/a&gt;, get in there and vote for the sessions you would like to see; there is a nice selection.&lt;/p&gt;
&lt;p&gt;May I draw your attention to the two I have submitted&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;How can I add my own custom step to a TFS 2010 build, or do I even need to try? (Oooh a new one!)&lt;/li&gt;
&lt;li&gt;Developing Testable Web Parts for SharePoint (Could also be called Using Typemock with SharePoint)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;No pressure…….&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://developerdeveloperdeveloper.com/dddie10/Users/VoteForSessions.aspx">vote has opened for DDD Ireland</a>, get in there and vote for the sessions you would like to see; there is a nice selection.</p>
<p>May I draw your attention to the two I have submitted</p>
<ul>
<li>How can I add my own custom step to a TFS 2010 build, or do I even need to try? (Oooh a new one!)</li>
<li>Developing Testable Web Parts for SharePoint (Could also be called Using Typemock with SharePoint)</li>
</ul>
<p>No pressure…….</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences running multiple instances of 2010 build service on a single VM</title>
      <link>https://blog.richardfennell.net/posts/experiences-running-multiple-instances-of-2010-build-service-on-a-single-vm/</link>
      <pubDate>Mon, 13 Sep 2010 11:50:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-running-multiple-instances-of-2010-build-service-on-a-single-vm/</guid>
      <description>&lt;p&gt;I think my biggest issue with TFS2010 is that a build controller is tied to a single Team Project Collection (TPC). For a company like mine, where we run a TPC for each client, this means we have had to start to generate a good number of virtualised build controllers/agents. It is especially irritating as I know that the volume of builds on any given controller is low.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I think my biggest issue with TFS2010 is that a build controller is tied to a single Team Project Collection (TPC). For a company like mine, where we run a TPC for each client, this means we have had to start to generate a good number of virtualised build controllers/agents. It is especially irritating as I know that the volume of builds on any given controller is low.</p>
<p>A while ago <a href="http://blogs.msdn.com/b/jimlamb/archive/2010/04/13/configuring-multiple-tfs-build-services-on-one-machine.aspx">Jim Lamb blogged about how you could define multiple build services on a single box</a>, but the post was full of caveats on how it was not supported/recommended etc. Since that post there has been some discussion on this technique, and I think the general feeling is: yes, it is not supported, but there is no reason it will not function perfectly well as long as you consider some basic limitations:</p>
<ol>
<li>The two build controllers don’t know about each other, so you can easily have two builds running at the same time; this will have an unpredictable effect on performance.</li>
<li>You have to make sure that the two instances don’t share any workspace disk locations, else they will potentially start overwriting each other.</li>
<li>Remember that building code is usually IO bound, not CPU bound, so when creating your build system think a lot about the disk; throwing memory and CPU at it will have little effect. The fact we run our build services on VMs, and these use a SAN, should mitigate much of this potential issue.</li>
<li>The default when you install a controller/agent on a box is for one agent to be created for each core on the box. This rule is still a good idea, but if you are installing two controller/agent sets on a box make sure you don’t define more agents than cores (for me this means my build VM has 2 virtual CPUs, as I am running 2 controller/agent pairs)</li>
</ol>
<p>Jim’s instructions are straightforward, but I did hit a couple of snags:</p>
<ul>
<li>When you enter the command line to create the instance, make sure there are spaces after the equals signs for the parameters, else you get an error</li>
</ul>
<blockquote>
<p>sc.exe create buildMachine-collection2 binpath= &ldquo;C:\Program Files\Microsoft Team Foundation Server 2010\Tools\TfsBuildServiceHost.exe /NamedInstance:buildMachine-collection2&rdquo; DisplayName= &ldquo;Visual Studio Team Foundation Build Service Host (Collection2)&rdquo;</p></blockquote>
<ul>
<li>I cannot stress enough how important it is to give the new instances sensible names, especially as their numbers grow. Jim suggested naming after the TPC they service; for me this is a bad move, as at any given time we are working for a fairly small number of clients, but the list changes as projects start and stop. It is therefore easier for me to name a controller for the machine it is hosted on, as controllers will be reassigned between TPCs based on need. So I settled on names in the form ‘build1-collection2’, not TPC-based ones. These are easy to associate with the VMs in use when you see them in VS2010</li>
<li>When I first tried to get this all up and running and ran the admin console from the command prompt, I got the error shown below</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image6_0A46952D.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image6_thumb_639808DF.png" title="image"></a></p></blockquote>
<blockquote>
<p>After a bit of retyping this went away. I think it was down to stray spaces at the end of the SET variable, but I am not 100% sure about this. I would just make sure your strings match if you see this problem.</p>
<p><strong>[Updated 26 Nov 2010]</strong> The batch file to start the management console is in the form</p>
<p>      set TFSBUILDSERVICEHOST=buildMachine-collection2 <br>
      &ldquo;C:\Program Files\Microsoft Team Foundation Server 2010\Tools\tfsmgmt.exe&rdquo;</p>
<p>Make sure that you run this batch file as administrator (right click, run as admin); if you don&rsquo;t, the management console picks up the default instance</p></blockquote>
<ul>
<li>Also it is a good idea to go into the PC’s services and make sure your new build service instance is set to auto-start, to avoid surprises on a reboot.</li>
<li>When you configure the new instance make sure you alter the port it runs on (red box below). I am just incrementing it for each new instance e.g. 9191 –&gt; 9192. If you don’t alter this the service will not start, as its endpoint will already be in use.</li>
<li>Also remember to set the identity the build service runs as (green box), usually [Domain]\TFSBuild; this is too easy to forget as you click through the create dialogs.</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_67A256B1.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_673623BC.png" title="image"></a></p></blockquote>
<p>Once this is set you can start the service and configure the controller and agent(s) exactly as normal.</p>
<p>You might want to consider how the workspace is mapped for your multiple controllers, so that they use different root directories, but that is your call. Thus far leaving it all as it was when I was using a separate VM for each build is working fine for me.</p>
<p>We shall see how many services I can put onto a single VM, but it is certainly something I don’t want to push too hard. That said, if you are like us, with a relatively low load on the build system, this has to be worth looking at to avoid a proliferation of build VMs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Talking about mocking with Typemock at The Developers Group in October</title>
      <link>https://blog.richardfennell.net/posts/talking-about-mocking-with-typemock-a-the-of-the-developers-group-in-october/</link>
      <pubDate>Mon, 13 Sep 2010 08:40:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/talking-about-mocking-with-typemock-a-the-of-the-developers-group-in-october/</guid>
      <description>&lt;p&gt;On the 20th October I will be &lt;a href=&#34;http://www.richplum.co.uk/meetings/20101020.pdf&#34;&gt;speaking at an afternoon event hosted by The Developer Group at EMC offices in London.&lt;/a&gt; My subject will be using Typemock Isolator to address testing problems in both well-designed code (nice IoC patterns etc) and in nasty legacy code where you have to use all the tricks Isolator allows.&lt;/p&gt;
&lt;p&gt;Hope to see you there&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On the 20th October I will be <a href="http://www.richplum.co.uk/meetings/20101020.pdf">speaking at an afternoon event hosted by The Developer Group at EMC offices in London.</a> My subject will be using Typemock Isolator to address testing problems in both well-designed code (nice IoC patterns etc) and in nasty legacy code where you have to use all the tricks Isolator allows.</p>
<p>Hope to see you there</p>
]]></content:encoded>
    </item>
    <item>
      <title>Stupid mistake over Javascript parameters</title>
      <link>https://blog.richardfennell.net/posts/stupid-mistake-over-javascript-parameters/</link>
      <pubDate>Fri, 10 Sep 2010 16:19:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/stupid-mistake-over-javascript-parameters/</guid>
      <description>&lt;p&gt;I have been using the &lt;a href=&#34;http://code.google.com/apis/maps/&#34;&gt;Google Maps JavaScript API&lt;/a&gt; today. I lost too much time over a really stupid error. I was trying to set the zoom level on a map using the call&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;map.setZoom(&lt;number&gt;);&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;I had set my initial zoom level to 5 (the scale is 1-17 I think) in the map load; when I called setZoom to 11 all was fine, but if I set it to any other number it reverted to 5. This different effect for different numbers was a real red herring. The problem was down to how I was handling the variable containing the zoom level prior to passing it to the setZoom method. When it was set to 11 it was set explicitly e.g.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been using the <a href="http://code.google.com/apis/maps/">Google Maps JavaScript API</a> today. I lost too much time over a really stupid error. I was trying to set the zoom level on a map using the call</p>
<blockquote>
<p>map.setZoom(<number>);</p></blockquote>
<p>I had set my initial zoom level to 5 (the scale is 1-17 I think) in the map load; when I called setZoom to 11 all was fine, but if I set it to any other number it reverted to 5. This different effect for different numbers was a real red herring. The problem was down to how I was handling the variable containing the zoom level prior to passing it to the setZoom method. When it was set to 11 it was set explicitly e.g.</p>
<blockquote>
<p>var zoomNumber = 11;</p></blockquote>
<p>However when it was any other value it was being pulled from the value property of a combo box, so was actually a string. My problem was that setZoom does not return an error if you pass in something it does not understand; it just reverts to its initial value.</p>
<p>The solution was simple: parse the value to an integer and it works as expected</p>
<blockquote>
<p>map.setZoom(parseInt(zoomNumber, 10));</p></blockquote>
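<p>To make the trap concrete, here is a small standalone sketch (not from the original post; the variable names are hypothetical) showing that a combo box&rsquo;s value is always a string, and that parseInt with an explicit radix converts it into the number an API like setZoom expects:</p>

```javascript
// Hypothetical names: zoomFromCombo stands in for comboBox.value.
// A form control's value property is always a string, even when it shows "7".
var zoomFromCombo = "7";   // what reading a combo box's value returns
var zoomExplicit = 11;     // an explicitly set numeric literal

console.log(typeof zoomFromCombo); // "string"
console.log(typeof zoomExplicit);  // "number"

// parseInt with an explicit radix of 10 turns the string into an integer,
// so an API that silently ignores non-numeric input gets a real number.
var parsed = parseInt(zoomFromCombo, 10);
console.log(parsed === 7); // true
```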
]]></content:encoded>
    </item>
    <item>
      <title>Problem faking multiple SPLists with Typemock Isolator in a single test</title>
      <link>https://blog.richardfennell.net/posts/problem-faking-multiple-splists-with-typemock-isolator-in-a-single-test/</link>
      <pubDate>Fri, 10 Sep 2010 15:20:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problem-faking-multiple-splists-with-typemock-isolator-in-a-single-test/</guid>
      <description>&lt;p&gt;I have found a problem with repeated calls to indexed SharePoint Lists with Typemock Isolator 6.0.3. This is what I am trying to do…&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I am using Typemock Isolator to allow me to develop a SharePoint Webpart outside of the SharePoint environment &lt;a href=&#34;http://typemock.squarespace.com/learn-from-the-experts/&#34;&gt;(there is a video about this on the Typemock site)&lt;/a&gt;. My SharePoint Webpart uses data drawn from a pair of SharePoint lists to draw a map using the Google Maps API; so in my test harness web site page I have the following code in the constructor that fakes out the two SPLists and populates them with test content.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have found a problem with repeated calls to indexed SharePoint Lists with Typemock Isolator 6.0.3. This is what I am trying to do…</p>
<p><strong>The Problem</strong></p>
<p>I am using Typemock Isolator to allow me to develop a SharePoint Webpart outside of the SharePoint environment <a href="http://typemock.squarespace.com/learn-from-the-experts/">(there is a video about this on the Typemock site)</a>. My SharePoint Webpart uses data drawn from a pair of SharePoint lists to draw a map using the Google Maps API; so in my test harness web site page I have the following code in the constructor that fakes out the two SPLists and populates them with test content.</p>
<pre tabindex="0"><code>public partial class TestPage : System.Web.UI.Page
{
    public TestPage()
    {
        var fakeWeb = Isolate.Fake.Instance&lt;SPWeb&gt;();
        Isolate.WhenCalled(() =&gt; SPControl.GetContextWeb(null)).WillReturn(fakeWeb);

        // return value for 1st call
        Isolate.WhenCalled(() =&gt; fakeWeb.Lists[&#34;Centre Locations&#34;].Items).WillReturnCollectionValuesOf(CreateCentreList());
        // return value for all other calls
        Isolate.WhenCalled(() =&gt; fakeWeb.Lists[&#34;Map Zoom Areas&#34;].Items).WillReturnCollectionValuesOf(CreateZoomAreaList());
    }

    private static List&lt;SPListItem&gt; CreateZoomAreaList()
    {
        var fakeZoomAreas = new List&lt;SPListItem&gt;();
        fakeZoomAreas.Add(CreateZoomAreaSPListItem(&#34;London&#34;, 51.49275, -0.137722222, 2, 14));
        return fakeZoomAreas;
    }

    private static List&lt;SPListItem&gt; CreateCentreList()
    {
        var fakeSites = new List&lt;SPListItem&gt;();
        fakeSites.Add(CreateCentreSPListItem(&#34;Aberdeen &#34;, &#34;1 The Road, Aberdeen &#34;, &#34;Aberdeen@test.com&#34;, &#34;www.Aberdeen.test.com&#34;, &#34;1111&#34;, &#34;2222&#34;, 57.13994444, -2.113333333));
        fakeSites.Add(CreateCentreSPListItem(&#34;Altrincham &#34;, &#34;1 The Road, Altrincham &#34;, &#34;Altrincham@test.com&#34;, &#34;www.Altrincham.test.com&#34;, &#34;3333&#34;, &#34;4444&#34;, 53.38977778, -2.349916667));
        return fakeSites;
    }

    private static SPListItem CreateCentreSPListItem(string title, string address, string email, string url, string telephone, string fax, double lat, double lng)
    {
        var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Title&#34;]).WillReturn(title);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Address&#34;]).WillReturn(address);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Email Address&#34;]).WillReturn(email);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Site URL&#34;]).WillReturn(url);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Telephone&#34;]).WillReturn(telephone);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Fax&#34;]).WillReturn(fax);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Latitude&#34;]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Longitude&#34;]).WillReturn(lng.ToString());
        return fakeItem;
    }

    private static SPListItem CreateZoomAreaSPListItem(string areaName, double lat, double lng, double radius, int zoom)
    {
        var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Title&#34;]).WillReturn(areaName);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Latitude&#34;]).WillReturn(lat.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Longitude&#34;]).WillReturn(lng.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Radius&#34;]).WillReturn(radius.ToString());
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Zoom&#34;]).WillReturn(zoom.ToString());
        return fakeItem;
    }
}
</code></pre>
<p>The problem is that if I place the following logic in my Webpart:</p>
<pre tabindex="0"><code>SPWeb web = SPControl.GetContextWeb(Context);
Debug.WriteLine(web.Lists[&#34;Centre Locations&#34;].Items.Count);
Debug.WriteLine(web.Lists[&#34;Map Zoom Areas&#34;].Items.Count);
</code></pre>
<p>I would expect this code to return</p>
<blockquote>
<p>2<br>
1</p></blockquote>
<p>But I get</p>
<blockquote>
<p>1<br>
1</p></blockquote>
<p>If I reverse the two Isolate.WhenCalled lines in the constructor I get</p>
<blockquote>
<p>2<br>
2</p></blockquote>
<p>So basically only the last Isolate.WhenCalled is being used. This is not what I expect from the <a href="http://www.typemock.com/Docs/UserGuide/newGuide/Documentation/SettingBehaviorAAA.html">Typemock documentation</a>, which states that, worst case, the first Isolate.WhenCalled should be used for the first call and the second for all subsequent calls, and actually the index string should be used to differentiate anyway. This is obviously not working. I also tried using null in place of both the index strings and got the same result.</p>
<p><strong>A Workaround</strong></p>
<p>I have managed to work around this problem with a refactor of my code. In my web part I moved all the SPList logic into a pair of methods:</p>
<pre tabindex="0"><code>private List&lt;GISPoint&gt; LoadFixedMarkersFromSharepoint(SPWeb web, string listName)
{
    var points = new List&lt;GISPoint&gt;();

    foreach (SPListItem listItem in web.Lists[listName].Items)
    {
        points.Add(new GISPoint(
            listItem[&#34;title&#34;],
            listItem[&#34;address&#34;],
            listItem[&#34;email address&#34;],
            listItem[&#34;site Url&#34;],
            listItem[&#34;telephone&#34;],
            listItem[&#34;fax&#34;],
            listItem[&#34;latitude&#34;],
            listItem[&#34;longitude&#34;]));
    }
    return points;
}

private List&lt;ZoomArea&gt; LoadZoomAreasFromSharepoint(SPWeb web, string listName)
{
    var points = new List&lt;ZoomArea&gt;();

    foreach (SPListItem listItem in web.Lists[listName].Items)
    {
        points.Add(new ZoomArea(
            listItem[&#34;title&#34;],
            listItem[&#34;latitude&#34;],
            listItem[&#34;longitude&#34;],
            listItem[&#34;radius&#34;],
            listItem[&#34;zoom&#34;]));
    }
    return points;
}
</code></pre>
<p>I then used Isolator to intercept the calls to these methods. This can be done by using the Members.CallOriginal flag to wrapper the actual class and intercept the calls to the private methods. Note that I am using different helper methods to create lists of my own data objects as opposed to List&lt;SPListItem&gt;:</p>
<pre tabindex="0"><code>var controlWrapper = Isolate.Fake.Instance&lt;LocationMap&gt;(Members.CallOriginal);
Isolate.Swap.NextInstance&lt;LocationMap&gt;().With(controlWrapper);

Isolate.NonPublic.WhenCalled(controlWrapper, &#34;LoadFixedMarkersFromSharepoint&#34;).WillReturn(CreateCentreListAsGISPoint());
Isolate.NonPublic.WhenCalled(controlWrapper, &#34;LoadZoomAreasFromSharepoint&#34;).WillReturn(CreateZoomAreaListAsZoomItems());
</code></pre>
<p>My workaround, in my opinion, is a weaker test, as I am not testing my conversion of SPListItems to my internal data types, but at least it works.</p>
<p>I have had to go down this route due to a bug in Typemock Isolator (which has been logged and recreated by Typemock, so I am sure we can expect a fix soon). However, it does show how powerful Isolator can be when you have restrictions on the changes you can make to a code base. Wrapping a class with Isolator can open up a whole range of options.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My Typemock videos have been published</title>
      <link>https://blog.richardfennell.net/posts/my-typemock-video-have-been-published/</link>
      <pubDate>Thu, 09 Sep 2010 12:36:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-typemock-video-have-been-published/</guid>
      <description>&lt;p&gt;I have a couple of new videos on the &lt;a href=&#34;http://typemock.squarespace.com/learn-from-the-experts/&#34;&gt;Typemock Experts site&lt;/a&gt;; why not take a look. The titles are&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Using Typemock Isolator to enable rapid development&lt;/li&gt;
&lt;li&gt;Using Typemock Isolator and Ivonna for unit testing the UI&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>I have a couple of new videos on the <a href="http://typemock.squarespace.com/learn-from-the-experts/">Typemock Experts site</a>; why not take a look. The titles are</p>
<ul>
<li>Using Typemock Isolator to enable rapid development</li>
<li>Using Typemock Isolator and Ivonna for unit testing the UI</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>I am speaking at the VBug Autumn Conference</title>
      <link>https://blog.richardfennell.net/posts/i-am-speaking-at-the-vbug-autumn-conference/</link>
      <pubDate>Thu, 09 Sep 2010 11:29:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-am-speaking-at-the-vbug-autumn-conference/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.vbug.co.uk/Conference/VBUG-Autumn-Conference-2010.aspx&#34;&gt;VBug Autumn Conference&lt;/a&gt; is on Wed 27th &amp;amp; Thurs 28th October at Holywell Park, Loughborough. The published agenda thus far is&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Day One (Wed 27th Oct):&lt;/strong&gt;&lt;br&gt;
Top of the Pops with SharePoint 2010 – Dave McMahon&lt;br&gt;
Cache Out with Windows Server AppFabric – Phil Pursglove&lt;br&gt;
Mapping the Cloud – How far can we stretch it? Johannes Kebeck&lt;br&gt;
Silverlight Development on Windows Phone 7 – Andy Wigley&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Day Two (Thurs 28th Oct):&lt;/strong&gt;&lt;br&gt;
TFS (actual title TBC) – Richard Fennell&lt;br&gt;
Silverlight 4 (actual title TBC) – Richard Costall&lt;br&gt;
Others to be confirmed…including Azure, Expression Blend &amp;amp; BI&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.vbug.co.uk/Conference/VBUG-Autumn-Conference-2010.aspx">VBug Autumn Conference</a> is on Wed 27th &amp; Thurs 28th October at Holywell Park, Loughborough. The published agenda thus far is</p>
<p><strong>Day One (Wed 27th Oct):</strong><br>
Top of the Pops with SharePoint 2010 – Dave McMahon<br>
Cache Out with Windows Server AppFabric – Phil Pursglove<br>
Mapping the Cloud – How far can we stretch it? Johannes Kebeck<br>
Silverlight Development on Windows Phone 7 – Andy Wigley</p>
<p><strong>Day Two (Thurs 28th Oct):</strong><br>
TFS (actual title TBC) – Richard Fennell<br>
Silverlight 4 (actual title TBC) – Richard Costall<br>
Others to be confirmed…including Azure, Expression Blend &amp; BI</p>
<p>Well, I can give a bit more detail on what I will be talking about: it will be a TFS session focused on the benefits for people moving from Visual SourceSafe, so work item tracking, automated build etc.</p>
<p>Hope to see you there</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running Android on my HTC Diamond2</title>
      <link>https://blog.richardfennell.net/posts/running-android-on-my-htc-diamond2/</link>
      <pubDate>Wed, 08 Sep 2010 09:09:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-android-on-my-htc-diamond2/</guid>
      <description>&lt;p&gt;I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/11/why-don-t-i-love-my-phone.aspx&#34;&gt;posted in the past that I am not a huge fan of my phone&lt;/a&gt;, so I read &lt;a href=&#34;http://blog.hinshelwood.com/archive/2010/09/02/running-android-2.2-frodo-on-your-hd2.aspx?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A&amp;#43;MartinHinshelwood&amp;#43;%28Martin&amp;#43;Hinshelwood%27s&amp;#43;ALM&amp;#43;Blog%29&#34;&gt;Martin Hinshelwood’s experiences with putting Android on his HTC HD with interest&lt;/a&gt;. I have tended to leave my phone as a consumer device i.e. it does what it does, I tend to not fiddle too much as long as it will make calls, sync email and allow me to get the cricket scores on the BBC website, the essentials of life. However, I had a spare SD card and a bit of time. So it was off to the &lt;a href=&#34;http://forum.xda-developers.com/showthread.php?t=509493&#34;&gt;Diamond2(Topaz) XDADevelopers forums&lt;/a&gt; to see what I needed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/11/why-don-t-i-love-my-phone.aspx">posted in the past that I am not a huge fan of my phone</a>, so I read <a href="http://blog.hinshelwood.com/archive/2010/09/02/running-android-2.2-frodo-on-your-hd2.aspx?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A&#43;MartinHinshelwood&#43;%28Martin&#43;Hinshelwood%27s&#43;ALM&#43;Blog%29">Martin Hinshelwood’s experiences with putting Android on his HTC HD with interest</a>. I have tended to leave my phone as a consumer device i.e. it does what it does, I tend to not fiddle too much as long as it will make calls, sync email and allow me to get the cricket scores on the BBC website, the essentials of life. However, I had a spare SD card and a bit of time. So it was off to the <a href="http://forum.xda-developers.com/showthread.php?t=509493">Diamond2(Topaz) XDADevelopers forums</a> to see what I needed.</p>
<p>The process is as follows:</p>
<ul>
<li>Download the July 2010 Froyo 2.2 build <a href="http://htcandroid.xland.cz/XDANDROID.2.2.AOSP.29.7.10.RC2.1.7z" title="XDANDROID.2.2.AOSP.29.7.10.RC2.1.7z">XDANDROID.2.2.AOSP.29.7.10.RC2.1.7z</a></li>
<li>Download the Sep 2010 XDAndroid 2.2 update <a href="http://htcandroid.xland.cz/XDANDROID.2.2.AOSP.3.9.10.RC.2.2.system.7z" title="XDANDROID.2.2.AOSP.3.9.10.RC.2.2.system.7z">XDANDROID.2.2.AOSP.3.9.10.RC.2.2.system.7z</a></li>
<li>Download the current kernel <a href="http://zimages.googlecode.com/files/htc-msm-linux-20100830_123544-package.tar.bz2" title="htc-msm-linux-20100830_123544-package.tar.bz2">htc-msm-linux-20100830_123544-package.tar.bz2</a> (without this you get no sound)</li>
<li>Unpack these with WinRAR; basically, use the July build as the basic directory structure, but replace system.ext2 with the one from the Sep file, and the zImage and modules with those from the kernel download. The details of the installation process are in the FAQ on the first page of the <a href="http://forum.xda-developers.com/showthread.php?t=509493">Diamond2(Topaz) XDADevelopers forum</a></li>
</ul>
<p>So does it work? Yes. The issues I had were as follows:</p>
<ul>
<li>I had to go into Settings, Wireless and networks, Mobile networks and select my network operator. This got me phone services working. But I then also had to (in the same menu area) select the Access Point Name (APN) and pick the correct one for my provider; it was defaulting to the wrong one. Once this was done I also got 3G data working.</li>
<li>I seem to have similar battery issues to other forum users; you can’t trust the meter, and as the operating system is running on the SD card it draws power faster</li>
<li>The clock seems to lose a bit of time</li>
<li>It is not 100% stable, but not too bad, again in line with other users’ experiences; the phone can go into a deep sleep that is hard to exit.</li>
</ul>
<p>I will give it a go for a few days and see how it goes. It is not as if I cannot switch back to Windows Mobile 6.5 with a reboot if I get problems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Does anyone need a VS2010 Custom Build Activity for StyleCop?</title>
      <link>https://blog.richardfennell.net/posts/does-anyone-need-a-vs2010-custom-build-activity-for-stylecop/</link>
      <pubDate>Fri, 03 Sep 2010 13:49:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/does-anyone-need-a-vs2010-custom-build-activity-for-stylecop/</guid>
      <description>&lt;p&gt;There is a noticeable omission from the tooling for &lt;a href=&#34;http://stylecop.codeplex.com/&#34;&gt;StyleCop&lt;/a&gt;: you cannot integrate it into a TFS 2010 Build directly. There is a custom task to do it as part of &lt;a href=&#34;http://msbuildextensionpack.codeplex.com/&#34;&gt;MSBuild Extension pack&lt;/a&gt;, but this is designed for TFS 2008.&lt;/p&gt;
&lt;p&gt;So when I had to wire in StyleCop to our 2010 build process I hit a problem. I could:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Edit the .csproj files for each project to wire in StyleCop&lt;/li&gt;
&lt;li&gt;Use an MSBuild activity in my 2010 build&lt;/li&gt;
&lt;li&gt;Write my own custom activity.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I decided to follow the final option, it did not look too bad as the source needed is provided in the &lt;a href=&#34;http://stylecop.codeplex.com/releases/view/44839#DownloadId=129599&#34;&gt;StyleCop SDK&lt;/a&gt;. Well after a good deal of fiddling I have it basically working. Again I found the process of &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx&#34;&gt;developing a custom activity a little irritating&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a noticeable omission from the tooling for <a href="http://stylecop.codeplex.com/">StyleCop</a>: you cannot integrate it into a TFS 2010 Build directly. There is a custom task to do it as part of <a href="http://msbuildextensionpack.codeplex.com/">MSBuild Extension pack</a>, but this is designed for TFS 2008.</p>
<p>So when I had to wire in StyleCop to our 2010 build process I hit a problem. I could:</p>
<ol>
<li>Edit the .csproj files for each project to wire in StyleCop</li>
<li>Use an MSBuild activity in my 2010 build</li>
<li>Write my own custom activity.</li>
</ol>
<p>I decided to follow the final option, it did not look too bad as the source needed is provided in the <a href="http://stylecop.codeplex.com/releases/view/44839#DownloadId=129599">StyleCop SDK</a>. Well after a good deal of fiddling I have it basically working. Again I found the process of <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx">developing a custom activity a little irritating</a>.</p>
<p>I cannot state enough times that the way to do this type of development is:</p>
<ol>
<li>Create the activity, unit testing if possible</li>
<li>Create a WF console application to host it</li>
<li>For each activity property wire it to an argument on the workflow WF console application</li>
<li>Create a test project to run unit tests against this workflow</li>
<li>Only when this all works go down the painful route of getting it linked into a real build</li>
</ol>
<p>The main problem I had was that StyleCop was running fine, but finding no violations when I knew some were in my test data. This all turned out to be down to where the Microsoft.StyleCop.dll assembly was being loaded from, and whether there were any rules assemblies in the same directory. I have to admit I only got to the bottom of this after getting all the source for StyleCop so I could debug into it!</p>
<p>So I now have a first pass at a StyleCop custom activity for TFS 2010; it needs some more testing, but I think I am just about there. I intend to get it up on <a href="http://tfsbuildextensions.codeplex.com/" title="http://tfsbuildextensions.codeplex.com/">http://tfsbuildextensions.codeplex.com/</a>; I just need to make sure it meets all the guidelines.</p>
<p>So is there any interest in this activity?</p>
]]></content:encoded>
    </item>
    <item>
      <title>New book on Refactoring with Visual Studio 2010 from Packt Publishing</title>
      <link>https://blog.richardfennell.net/posts/new-book-on-refactoring-with-visual-studio-2010-from-packt-publishing/</link>
      <pubDate>Fri, 03 Sep 2010 08:43:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-book-on-refactoring-with-visual-studio-2010-from-packt-publishing/</guid>
      <description>&lt;p&gt;Recently &lt;a href=&#34;http://www.packtpub.com&#34;&gt;Packt Publishing&lt;/a&gt; sent me a copy of &lt;a href=&#34;https://www.packtpub.com/refactoring-with-microsoft-visual-studio-2010/book?utm_source=blogs.blackmarble.co.uk&amp;amp;utm_medium=bookrev&amp;amp;utm_content=blog&amp;amp;utm_campaign=mdb_004412&#34;&gt;‘Refactoring with Microsoft Visual Studio 2010’ by Peter Ritchie&lt;/a&gt;. I have to say I have been rather impressed by it.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://www.packtpub.com/refactoring-with-microsoft-visual-studio-2010/book?utm_source=blogs.blackmarble.co.uk&amp;amp;utm_medium=bookrev&amp;amp;utm_content=blog&amp;amp;utm_campaign=mdb_004412&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_2F08E5CF.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;My only major issue with it is the title: this book covers much more than the refactoring features of 2010. It provides a very clear example driven discussion of the use and application of both refactoring patterns and design patterns. I think this would be an excellent book for a developer who wants to start to apply design patterns to their C# code. The examples are more real world than &lt;a href=&#34;http://www.amazon.co.uk/Head-First-Design-Patterns-Freeman/dp/0596007124/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1283502417&amp;amp;sr=8-1-spell&#34;&gt;Head First’s ‘Design Patterns&lt;/a&gt;’ (and the examples are in C# as opposed to Java), and the book is a far easier read than the classic &lt;a href=&#34;http://www.amazon.co.uk/Design-patterns-elements-reusable-object-oriented/dp/0201633612/ref=sr_1_1?s=books&amp;amp;ie=UTF8&amp;amp;qid=1283502470&amp;amp;sr=1-1&#34;&gt;‘Design patterns : elements of reusable object-oriented software’&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently <a href="http://www.packtpub.com">Packt Publishing</a> sent me a copy of <a href="https://www.packtpub.com/refactoring-with-microsoft-visual-studio-2010/book?utm_source=blogs.blackmarble.co.uk&amp;utm_medium=bookrev&amp;utm_content=blog&amp;utm_campaign=mdb_004412">‘Refactoring with Microsoft Visual Studio 2010’ by Peter Ritchie</a>. I have to say I have been rather impressed by it.</p>
<p><a href="https://www.packtpub.com/refactoring-with-microsoft-visual-studio-2010/book?utm_source=blogs.blackmarble.co.uk&amp;utm_medium=bookrev&amp;utm_content=blog&amp;utm_campaign=mdb_004412"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_2F08E5CF.png" title="image"></a></p>
<p>My only major issue with it is the title: this book covers much more than the refactoring features of 2010. It provides a very clear example driven discussion of the use and application of both refactoring patterns and design patterns. I think this would be an excellent book for a developer who wants to start to apply design patterns to their C# code. The examples are more real world than <a href="http://www.amazon.co.uk/Head-First-Design-Patterns-Freeman/dp/0596007124/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1283502417&amp;sr=8-1-spell">Head First’s ‘Design Patterns</a>’ (and the examples are in C# as opposed to Java), and the book is a far easier read than the classic <a href="http://www.amazon.co.uk/Design-patterns-elements-reusable-object-oriented/dp/0201633612/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1283502470&amp;sr=1-1">‘Design patterns : elements of reusable object-oriented software’</a>.</p>
<p>It is telling how much the book’s contents differ from the title that one of the most common sentences seems to be ‘this is not a built-in automated refactoring in Visual Studio 2010’. Even though this appears fairly regularly, the author goes on to explain how each limitation can be addressed in nice practical ways, interestingly choosing not to mention third-party refactoring tools until virtually the last page of the book.</p>
<p>Basically this book discusses theory in a nice accessible manner. It is not a simple ‘click here to do this’ tooling reference; don’t let the title fool you. It is well worth the read, and you can see a sample <a href="https://www.packtpub.com/sites/default/files/0103-chapter-6-improving-class-quality.pdf">&lsquo;Chapter No 6: Improving Class Quality&rsquo;</a> online at Packt.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD Dublin call for speakers is open</title>
      <link>https://blog.richardfennell.net/posts/ddd-dublin-call-for-speakers-is-open/</link>
      <pubDate>Tue, 31 Aug 2010 11:52:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-dublin-call-for-speakers-is-open/</guid>
      <description>&lt;p&gt;There is a call for speakers for DDD in Dublin on the 9th of October; that is not that far away, is it!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;[Updated 1 Sep 2010]&lt;/strong&gt; The link would help &lt;a href=&#34;http://developerdeveloperdeveloper.com/dddie10/&#34; title=&#34;http://developerdeveloperdeveloper.com/dddie10/&#34;&gt;http://developerdeveloperdeveloper.com/dddie10/&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a call for speakers for DDD in Dublin on the 9th of October; that is not that far away, is it!</p>
<p><strong>[Updated 1 Sep 2010]</strong> The link would help <a href="http://developerdeveloperdeveloper.com/dddie10/" title="http://developerdeveloperdeveloper.com/dddie10/">http://developerdeveloperdeveloper.com/dddie10/</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>What is an .xesc file?</title>
      <link>https://blog.richardfennell.net/posts/what-is-an-xesc-file/</link>
      <pubDate>Fri, 27 Aug 2010 15:10:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/what-is-an-xesc-file/</guid>
      <description>&lt;p&gt;Test Professional, after the &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?displaylang=en&amp;amp;FamilyID=8406ef19-35a3-4c03-a145-08ba982f3cef&amp;amp;utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed:&amp;#43;MicrosoftDownloadCenter&amp;#43;%28Microsoft&amp;#43;Download&amp;#43;Center%29&#34;&gt;Lab Management update&lt;/a&gt;, now uses &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/dd997558.aspx&#34;&gt;Expression Encoder 4.0 to create its video of screen activity&lt;/a&gt;. This means that when you run a test and record a video you end up with an attachment called ScreenCapture.xesc.&lt;/p&gt;
&lt;p&gt;Now my PC did not have the Expression Encoder 4.0 installed, so it did not know what to do with an .xesc file created within our Lab Management environment. To address this, the answer is simple. On any PC that might want to view the video either:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Test Professional, after the <a href="http://www.microsoft.com/downloads/details.aspx?displaylang=en&amp;FamilyID=8406ef19-35a3-4c03-a145-08ba982f3cef&amp;utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed:&#43;MicrosoftDownloadCenter&#43;%28Microsoft&#43;Download&#43;Center%29">Lab Management update</a>, now uses <a href="http://msdn.microsoft.com/en-us/library/dd997558.aspx">Expression Encoder 4.0 to create its video of screen activity</a>. This means that when you run a test and record a video you end up with an attachment called ScreenCapture.xesc.</p>
<p>Now my PC did not have the Expression Encoder 4.0 installed, so it did not know what to do with an .xesc file created within our Lab Management environment. To address this, the answer is simple. On any PC that might want to view the video either:</p>
<ol>
<li>Install the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=75402be0-c603-4998-a79c-becdd197aa79&amp;displaylang=en">Expression Encoder 4</a> </li>
<li>or install just the <a href="http://www.microsoft.com/downloads/details.aspx?displaylang=en&amp;FamilyID=71895f93-d804-4b70-8440-6b726ea0f12c">Screen Capture Codec</a></li>
</ol>
<p>Once either of these is done, Media Player can play the .xesc file.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot run CodeUI tests in Lab Management – getting a ‘Build directory of the test run is not specified or does not exist’</title>
      <link>https://blog.richardfennell.net/posts/cannot-run-codeui-tests-in-lab-management-getting-a-build-directory-of-the-test-run-is-not-specified-or-does-not-exist/</link>
      <pubDate>Fri, 27 Aug 2010 14:33:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-run-codeui-tests-in-lab-management-getting-a-build-directory-of-the-test-run-is-not-specified-or-does-not-exist/</guid>
      <description>&lt;p&gt;Interesting &amp;lsquo;user too stupid&amp;rsquo; error today whilst adding some CodeUI tests to a Lab Management deployment scenario.&lt;/p&gt;
&lt;p&gt;I added the Test Case and associated it with Coded UI test in Visual Studio&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_57482B45.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_0F86A25E.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I made sure my deployment build had the tests selected&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_566FC55B.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_1D58E859.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I then ran my Lab Deployment build, but got the error&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Build directory of the test run is not specified or does not exist.&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This normally means the test VM cannot see the share containing the build. I checked that the agent login on the test VM could view the drop location, and that was OK, but when I looked, the assembly containing my coded UI tests was just not there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Interesting &lsquo;user too stupid&rsquo; error today whilst adding some CodeUI tests to a Lab Management deployment scenario.</p>
<p>I added the Test Case and associated it with Coded UI test in Visual Studio</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_57482B45.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0F86A25E.png" title="image"></a></p>
<p>I made sure my deployment build had the tests selected</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_566FC55B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1D58E859.png" title="image"></a></p>
<p>I then ran my Lab Deployment build, but got the error</p>
<blockquote>
<p><em>Build directory of the test run is not specified or does not exist.</em></p></blockquote>
<p>This normally means the test VM cannot see the share containing the build. I checked that the agent login on the test VM could view the drop location, and that was OK, but when I looked, the assembly containing my coded UI tests was just not there.</p>
<p>Then I remembered…</p>
<p>The Lab build can take loads of snapshots and do a sub-build of the actual product. This is all very good for production scenarios, but when you are learning about Lab Management or debugging scripts it can be really slow. To speed up the process I had told my Deploy build not to take snapshots and to use the last compile/build drop it could find. I had just forgotten to rebuild my application on the build server after I had added the coded UI tests. So I rebuilt that and tried again, but I got the same problem.</p>
<p>It turns out that though I was missing the assembly, the error occurred before it was required. The real error was not about who the various agents were running as, but the account the test controller was running as. The key was to check the test run log. This can be accessed from the Test Run results (I seemed to have a blind spot looking for these results).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2163362B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1639ABE1.png" title="image"></a></p>
<p>This showed the problem: I had selected the default ‘Network Service’ account for the test controller and had not granted it rights to the drop location.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_41120FF3.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_07FB32F1.png" title="image"></a></p>
<p>I changed the account to my tfs210lab account as used by the agents and all was OK.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_20F70336.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_273DD9C4.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Don’t hardcode that build option</title>
      <link>https://blog.richardfennell.net/posts/dont-hardcode-that-build-option/</link>
      <pubDate>Thu, 26 Aug 2010 13:56:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dont-hardcode-that-build-option/</guid>
      <description>&lt;p&gt;I have been using the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx&#34;&gt;ExternalTestRunner 2010 Build activity I wrote&lt;/a&gt;. I realised that at least one of the parameters I need to set, the &lt;strong&gt;ProjectCollection&lt;/strong&gt; used to publish the test results, was hard coded in my sample. It was set in the form&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;http://myserver:8080/tfs/MyCollection&lt;/strong&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This is not that sensible, as this value is available using the build API as&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString()&lt;/strong&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;It makes no sense to hard code the name of the server if the build system already knows it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been using the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx">ExternalTestRunner 2010 Build activity I wrote</a>. I realised that at least one of the parameters I need to set, the <strong>ProjectCollection</strong> used to publish the test results, was hard coded in my sample. It was set in the form</p>
<blockquote>
<p><strong>http://myserver:8080/tfs/MyCollection</strong></p></blockquote>
<p>This is not that sensible, as this value is available using the build API as</p>
<blockquote>
<p><strong>BuildDetail.BuildServer.TeamProjectCollection.Uri.ToString()</strong></p></blockquote>
<p>It makes no sense to hard code the name of the server if the build system already knows it.</p>
<p>This simple change means that the build templates can be far more easily passed between Team Project Collections.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Detailed post on using Access Services and Event Receivers</title>
      <link>https://blog.richardfennell.net/posts/detailed-posted-on-using-access-services-and-event-receivers/</link>
      <pubDate>Wed, 25 Aug 2010 08:26:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/detailed-posted-on-using-access-services-and-event-receivers/</guid>
      <description>&lt;p&gt;Access MVP Ben Clothier has posted an article and video on &lt;a href=&#34;http://blogs.msdn.com/b/access/archive/2010/08/24/going-beyond-web-macros-using-event-receivers-amp-net-with-access-services.aspx&#34;&gt;Going beyond Web Macros: Using Event Receivers &amp;amp; .NET with Access Services&lt;/a&gt;. This takes some of the techniques &lt;a href=&#34;http://blogs.msdn.com/b/access/archive/2009/10/21/net-developer-blogs-about-access-2010.aspx&#34;&gt;I was proposing for .NET/Access integration&lt;/a&gt; and adds to them using Ben’s extensive experience in the Access space.&lt;/p&gt;
&lt;p&gt;Well worth a look if you want to make Access a RAD front end to legacy systems.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Access MVP Ben Clothier has posted an article and video on <a href="http://blogs.msdn.com/b/access/archive/2010/08/24/going-beyond-web-macros-using-event-receivers-amp-net-with-access-services.aspx">Going beyond Web Macros: Using Event Receivers &amp; .NET with Access Services</a>. This takes some of the techniques <a href="http://blogs.msdn.com/b/access/archive/2009/10/21/net-developer-blogs-about-access-2010.aspx">I was proposing for .NET/Access integration</a> and adds to them using Ben’s extensive experience in the Access space.</p>
<p>Well worth a look if you want to make Access a RAD front end to legacy systems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>&amp;quot;Program too big to fit in memory&amp;quot; when installing a TFS 2010 Test Controller</title>
      <link>https://blog.richardfennell.net/posts/program-too-big-to-fit-in-memory-when-installing-a-tfs-2010-test-controller/</link>
      <pubDate>Mon, 23 Aug 2010 14:43:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/program-too-big-to-fit-in-memory-when-installing-a-tfs-2010-test-controller/</guid>
      <description>&lt;p&gt;Just spent a while battling a problem whilst installing the &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/dd648127.aspx#TestControllers&#34;&gt;TFS 2010 Test Controller&lt;/a&gt;. When I launched the install setup program off the .ISO I could select the Test Controller installer, but then a command prompt flashed up and exited with no obvious error. If I went into the TestControllers directory on the mounted .ISO and ran the setup from a command prompt I saw the error &amp;ldquo;program too big to fit in memory&amp;rdquo;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just spent a while battling a problem whilst installing the <a href="http://msdn.microsoft.com/en-us/library/dd648127.aspx#TestControllers">TFS 2010 Test Controller</a>. When I launched the install setup program off the .ISO I could select the Test Controller installer, but then a command prompt flashed up and exited with no obvious error. If I went into the TestControllers directory on the mounted .ISO and ran the setup from a command prompt I saw the error &ldquo;program too big to fit in memory&rdquo;.</p>
<p>As the box I was trying to use only had 1Gb of memory (below the recommended minimum), I upped it to 2Gb and then to 4Gb but still got the same error.</p>
<p>Turns out the problem was a corrupt .ISO; once I had downloaded it again, and dropped my target VM back to 2Gb of memory, all was fine.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Great iPlayer Media Center Plugin</title>
      <link>https://blog.richardfennell.net/posts/great-iplayer-media-center-plugin/</link>
      <pubDate>Wed, 18 Aug 2010 12:25:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/great-iplayer-media-center-plugin/</guid>
      <description>&lt;p&gt;I am very impressed with the &lt;a href=&#34;http://www.xpmediacentre.com.au/community/windows-media-center-plugins-addons/40492-new-iplayer-plugins.html&#34;&gt;iPlayer Media Center Plugin I found on the Australian Media Center Community&lt;/a&gt;. I, like most people, found it installed fine but that it failed to add itself to the start menu. However, this was easily fixed using &lt;a href=&#34;http://adventmediacenter.com/&#34;&gt;Media Center Studio&lt;/a&gt; once I got my head around Media Center Studio’s user interface. The basic process is:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Load Media Center Studio&lt;/li&gt;
&lt;li&gt;Go onto the Start Menu tab&lt;/li&gt;
&lt;li&gt;At the bottom of the screen look for the entry points section (needs expanding usually)&lt;/li&gt;
&lt;li&gt;In here you should find the iPlayer application (it has a reddish play button logo)&lt;/li&gt;
&lt;li&gt;Drag it onto whichever part of the start menu you want, but be aware there are some limitations as to where it can go; Extensions worked fine for me&lt;/li&gt;
&lt;li&gt;Save the changes&lt;/li&gt;
&lt;li&gt;Restart Media Center&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Once this is done you should be able to view WMV based iPlayer content from within Media Center. I have seen it take a while to start buffering content, but other than that it seems to work well and certainly looks the part.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am very impressed with the <a href="http://www.xpmediacentre.com.au/community/windows-media-center-plugins-addons/40492-new-iplayer-plugins.html">iPlayer Media Center Plugin I found on the Australian Media Center Community</a>. I, like most people, found it installed fine but that it failed to add itself to the start menu. However, this was easily fixed using <a href="http://adventmediacenter.com/">Media Center Studio</a> once I got my head around Media Center Studio’s user interface. The basic process is:</p>
<ol>
<li>Load Media Center Studio</li>
<li>Go onto the Start Menu tab</li>
<li>At the bottom of the screen look for the entry points section (needs expanding usually)</li>
<li>In here you should find the iPlayer application (it has a reddish play button logo)</li>
<li>Drag it onto whichever part of the start menu you want, but be aware there are some limitations as to where it can go; Extensions worked fine for me</li>
<li>Save the changes</li>
<li>Restart Media Center</li>
</ol>
<p>Once this is done you should be able to view WMV based iPlayer content from within Media Center. I have seen it take a while to start buffering content, but other than that it seems to work well and certainly looks the part.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running MSDeploy to a remote box from inside a TFS 2010 Build (Part 2)</title>
      <link>https://blog.richardfennell.net/posts/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build-part-2/</link>
      <pubDate>Fri, 13 Aug 2010 14:36:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build-part-2/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/06/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build.aspx&#34;&gt;Another follow up post, this time to the one on MSDeploy&lt;/a&gt;. As I said in that post, a better way to trigger the MSDeploy PowerShell script would be as part of the build workflow, as opposed to a post build action in the MSBuild phase. Doing it this way means that if the build fails testing after MSBuild completes, you can still choose not to run MSDeploy.&lt;/p&gt;
&lt;p&gt;I have implemented this using an InvokeProcess call in my build workflow, which I have placed just before the gated check-in logic at the end of the process template.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/06/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build.aspx">Another follow up post, this time to the one on MSDeploy</a>. As I said in that post, a better way to trigger the MSDeploy PowerShell script would be as part of the build workflow, as opposed to a post build action in the MSBuild phase. Doing it this way means that if the build fails testing after MSBuild completes, you can still choose not to run MSDeploy.</p>
<p>I have implemented this using an InvokeProcess call in my build workflow, which I have placed just before the gated check-in logic at the end of the process template.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_05B895CB.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_131EA8D1.png" title="image"></a></p>
<p>The if statement is there so I only deploy if a deploy location is set and all the tests passed</p>
<blockquote>
<p>BuildDetail.TestStatus = Microsoft.TeamFoundation.Build.Client.BuildPhaseStatus.Succeeded And<br>
String.IsNullOrEmpty(DeployLocation) = False</p></blockquote>
<p>The InvokeProcess filename property is</p>
<blockquote>
<p>BuildDetail.DropLocation &amp; &ldquo;_PublishedWebsites&rdquo; &amp; WebSiteAssemblyName &amp; &ldquo;_Package&rdquo; &amp; WebSiteAssemblyName &amp; &ldquo;.deploy.cmd&rdquo;</p></blockquote>
<p>Where “WebSiteAssemblyName” is a build argument holding the name of the project that has been published (I have not found a way to detect it automatically), e.g. BlackMarble.MyWebSite. This obviously has to be set as an argument for the build if the deploy is to work.</p>
<p>The arguments property is set to</p>
<blockquote>
<p>&ldquo;/M:http://&rdquo; &amp; DeployLocation &amp; &ldquo;/MSDEPLOYAGENTSERVICE /Y&rdquo;</p></blockquote>
<p>Again, “DeployLocation” is a build argument that is the name of the server to deploy to, e.g. MyServer.</p>
<p>The Result property is set to an Integer build variable, so any error code can be reported via the WriteBuildError activity.</p>
<p>This seems to work for me and I think it is neater than the previous solution.</p>
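<p>Purely as an illustration, the two workflow expressions above compose a file name and an argument string. The sketch below mirrors that composition in Python; the server, drop path and project names are invented, and the backslash separators are an assumption (the expressions as shown above render without them):</p>

```python
# Hypothetical stand-ins for the TFS build properties and arguments used above.
drop_location = r"\\buildserver\drops\WebDeploy_20100813.1"  # BuildDetail.DropLocation
web_site_assembly_name = "BlackMarble.MyWebSite"             # WebSiteAssemblyName build argument
deploy_location = "MyServer"                                 # DeployLocation build argument

# Rough equivalent of the InvokeProcess FileName expression: the MSDeploy
# package script sits under _PublishedWebsites in the drop folder.
file_name = "\\".join([
    drop_location,
    "_PublishedWebsites",
    web_site_assembly_name + "_Package",
    web_site_assembly_name + ".deploy.cmd",
])

# Rough equivalent of the Arguments expression: point deploy.cmd at the
# remote MSDeploy agent service, with /Y to actually perform the deploy.
arguments = "/M:http://" + deploy_location + "/MSDEPLOYAGENTSERVICE /Y"

print(file_name)
print(arguments)
```

<p>With these sample values, InvokeProcess would run the generated .deploy.cmd from the drop location against the MSDeploy agent on the target server.</p>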
]]></content:encoded>
    </item>
    <item>
      <title>How to edit a TFS 2010 build template when it contains custom activities.</title>
      <link>https://blog.richardfennell.net/posts/how-to-edit-a-tfs-2010-build-template-when-it-contains-custom-activities/</link>
      <pubDate>Fri, 13 Aug 2010 12:29:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-edit-a-tfs-2010-build-template-when-it-contains-custom-activities/</guid>
      <description>&lt;p&gt;I posted a while ago on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx&#34;&gt;using my Typemock TMockRunner Custom Activity for Team Build 2010&lt;/a&gt;. I left that post with the problem that if you wished to customise a template after you had added the custom activity you had to use the somewhat complex branching model to edit the XAML.&lt;/p&gt;
&lt;p&gt;If you just followed the process in my post to put the build template in a new team project and tried to edit the XAML you got the following errors: an import namespace error and the associated inability to render part of the workflow&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted a while ago on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx">using my Typemock TMockRunner Custom Activity for Team Build 2010</a>. I left that post with the problem that if you wished to customise a template after you had added the custom activity you had to use the somewhat complex branching model to edit the XAML.</p>
<p>If you just followed the process in my post to put the build template in a new team project and tried to edit the XAML you got the following errors: an import namespace error and the associated inability to render part of the workflow</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4B3CF512.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_434552B0.png" title="image"></a></p>
<p>The best answer I have been able to find is to put the custom activity into the GAC on the PC where you wish to edit the template, and just there, nowhere else; the method in the previous post is fine for build agents. So I gave the custom activity assembly a strong name, used GACUTIL to put it in my GAC and was then able to load the template without any other alterations. I was also able to add it to my Visual Studio toolbox so that I could drop new instances of the external test runner onto the workflow.</p>
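<p>The steps above can be sketched as the following commands, run in a Visual Studio command prompt on the PC used to edit the template (the key and assembly names are illustrative, and the activity project must be rebuilt after the strong-name key is added to it):</p>

```shell
:: Create a strong-name key pair, add it to the activity project, rebuild
sn.exe -k MyActivities.snk
:: Install the strong-named assembly into the GAC on the editing PC
gacutil.exe /i MyCompany.TeamBuild.Activities.dll
```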
]]></content:encoded>
    </item>
    <item>
      <title>And the first official release of the TFS Integration Platform</title>
      <link>https://blog.richardfennell.net/posts/and-the-first-official-release-of-the-tfs-integration-platform/</link>
      <pubDate>Fri, 13 Aug 2010 10:39:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-the-first-official-release-of-the-tfs-integration-platform/</guid>
      <description>&lt;p&gt;The last of the recent batch of  announcements is that of the first official release of the TFS Integration Platform. This provides tools to allow synchronisation between TFS 2010 and …..&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;TFS 2008&lt;/li&gt;
&lt;li&gt;TFS 2010&lt;/li&gt;
&lt;li&gt;ClearCase&lt;/li&gt;
&lt;li&gt;ClearQuest&lt;/li&gt;
&lt;li&gt;File System&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Again more &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2010/08/09/tfs-integration-platform-updated.aspx&#34;&gt;details on Brian Harry’s blog&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The last of the recent batch of  announcements is that of the first official release of the TFS Integration Platform. This provides tools to allow synchronisation between TFS 2010 and …..</p>
<ul>
<li>TFS 2008</li>
<li>TFS 2010</li>
<li>ClearCase</li>
<li>ClearQuest</li>
<li>File System</li>
</ul>
<p>Again more <a href="http://blogs.msdn.com/b/bharry/archive/2010/08/09/tfs-integration-platform-updated.aspx">details on Brian Harry’s blog</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New tool for configuring Sharepoint MOSS for TFS dashboards</title>
      <link>https://blog.richardfennell.net/posts/new-tool-for-configuring-sharepoint-moss-for-tfs-dashboards/</link>
      <pubDate>Fri, 13 Aug 2010 10:33:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-tool-for-configuring-sharepoint-moss-for-tfs-dashboards/</guid>
      <description>&lt;p&gt;Another recent announcement from &lt;a href=&#34;http://blogs.msdn.com/b/bharry/archive/2010/08/04/configuring-microsoft-office-sharepoint-server-with-tfs-2010.aspx&#34;&gt;Brian Harry’s blog&lt;/a&gt; was about the release of the &lt;a href=&#34;http://visualstudiogallery.msdn.microsoft.com/en-us/db469790-5e3e-42f3-906e-411a73795a1b&#34;&gt;Microsoft Team Foundation Server 2010 MOSS Configuration Tool&lt;/a&gt;. I had a look at this tool during its beta testing and it certainly can help to get all the settings right to allow Excel Services to provide your charting for TFS projects; it worked for us.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Another recent announcement from <a href="http://blogs.msdn.com/b/bharry/archive/2010/08/04/configuring-microsoft-office-sharepoint-server-with-tfs-2010.aspx">Brian Harry’s blog</a> was about the release of the <a href="http://visualstudiogallery.msdn.microsoft.com/en-us/db469790-5e3e-42f3-906e-411a73795a1b">Microsoft Team Foundation Server 2010 MOSS Configuration Tool</a>. I had a look at this tool during its beta testing and it certainly can help to get all the settings right to allow Excel Services to provide your charting for TFS projects; it worked for us.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2010 Lab Management release announced</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2010-lab-management-released-announced/</link>
      <pubDate>Fri, 13 Aug 2010 09:49:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2010-lab-management-released-announced/</guid>
      <description>&lt;p&gt;In the &lt;a href=&#34;http://vslive.com/Home.aspx&#34;&gt;VSLive! keynote&lt;/a&gt; Microsoft announced that Lab Management will be RTM’d later this month and, best of all, that it will be included as part of the benefits of the Visual Studio 2010 Ultimate with MSDN and Visual Studio Test Professional 2010 with MSDN SKUs. &lt;a href=&#34;http://blogs.msdn.com/b/briankel/archive/2010/08/04/visual-studio-2010-lab-management-is-coming-later-this-month.aspx&#34;&gt;You can read more detail on Brian Keller’s blog&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I think this is a great move on licensing; we had expected it to be a purchasable addition to Visual Studio. With this change it is now consistent with TFS, i.e. if you have the right SKU of Visual Studio and MSDN you get the feature. This greatly lowers the barrier to entry for this technology.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In the <a href="http://vslive.com/Home.aspx">VSLive! keynote</a> Microsoft announced that Lab Management will be RTM’d later this month and, best of all, that it will be included as part of the benefits of the Visual Studio 2010 Ultimate with MSDN and Visual Studio Test Professional 2010 with MSDN SKUs. <a href="http://blogs.msdn.com/b/briankel/archive/2010/08/04/visual-studio-2010-lab-management-is-coming-later-this-month.aspx">You can read more detail on Brian Keller’s blog</a></p>
<p>I think this is a great move on licensing; we had expected it to be a purchasable addition to Visual Studio. With this change it is now consistent with TFS, i.e. if you have the right SKU of Visual Studio and MSDN you get the feature. This greatly lowers the barrier to entry for this technology.</p>
<p>I look forward to having a forthright discussion with our IT manager over Hyper-V cluster resources in the near future</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running MSDeploy to a remote box from inside a TFS 2010 Build</title>
      <link>https://blog.richardfennell.net/posts/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build/</link>
      <pubDate>Fri, 06 Aug 2010 13:27:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build/</guid>
      <description>&lt;p&gt;[Also see &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/13/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build-part-2.aspx&#34;&gt;Running MSDeploy to a remote box from inside a TFS 2010 Build (Part 2)&lt;/a&gt;] &lt;/p&gt;
&lt;p&gt;A fellow MVP, Ewald Hofman, wrote a &lt;a href=&#34;http://www.ewaldhofman.nl/post/2010/04/12/Auto-deployment-of-my-web-application-with-Team-Build-2010-to-add-Interactive-Testing.aspx&#34;&gt;great post on getting a Web Application to deploy as part of a TFS 2010 build&lt;/a&gt;. I have been using this technique but found a few points that were not right in the original post; I assume these are down to RC/RTM issues and the fact I was trying to access a remote system.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>[Also see <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/13/running-msdeploy-to-a-remote-box-from-inside-a-tfs-2010-build-part-2.aspx">Running MSDeploy to a remote box from inside a TFS 2010 Build (Part 2)</a>] </p>
<p>A fellow MVP, Ewald Hofman, wrote a <a href="http://www.ewaldhofman.nl/post/2010/04/12/Auto-deployment-of-my-web-application-with-Team-Build-2010-to-add-Interactive-Testing.aspx">great post on getting a Web Application to deploy as part of a TFS 2010 build</a>. I have been using this technique but found a few points that were not right in the original post; I assume these are down to RC/RTM issues and the fact I was trying to access a remote system.</p>
<p>This is what I had to do:</p>
<ol>
<li>
<p>Follow the details referenced by Ewald in the post to <a href="http://vishaljoshi.blogspot.com/2009/05/web-1-click-publish-with-vs-2010.html">create an MSDeploy package</a>; for me the default settings were fine as all I wanted to deploy was a basic web site that had no need for any SQL</p>
</li>
<li>
<p>Test that this package works from your development PC to the target web server. When I viewed this publish profile in VS2010 I saw the following</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_3F80C5E4.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4633CF67.png" title="image"></a></p>
</li>
<li>
<p>Now all these details are stored in a &lt;ProjectName&gt;.Publish.xml file in the project root directory. However, this file is not source controlled; it is only for use on the development PC.</p>
</li>
<li>
<p>You need to set the Site/Application name that will be used when the build server rebuilds this file. This is done by going into the project properties and onto the Package/Publish Web tab. At the bottom of the page set the IIS web application. If you don’t set this the package built will default to the <strong>Default Site/&lt;Project Name&gt; Deploy</strong> virtual directory. In my case I just wanted to point to the root of a dedicated IIS6-hosted web site called Test3.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2618C2AA.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1AEF3860.png" title="image"></a><br>
 </p>
</li>
<li>
<p>You now need to go back to Ewald’s post. In the build definition you need to add the <strong>/p:DeployOnBuild=True</strong> MSBuild parameter to cause the package to be created</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4CE6D8EA.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7AD42BA2.png" title="image"></a></p>
</li>
<li>
<p>As he says this causes the package to be created, but not deployed. To do that you need to add a post build event to the project (or you could edit the Build Process Template to add a PowerShell step at the end, which might be a better option; I have found that as I am using a gated check-in I get the deployment before the build does its testing, so potentially I publish something that builds but whose tests fail).</p>
<p>Now this is where I found the most obvious difference in the post, and that is the path. My post build step had the following</p>
<p>if &ldquo;$(ConfigurationName)&rdquo; == &ldquo;Release&rdquo; &ldquo;$(TargetDir)_PublishedWebsites\$(TargetName)_Package\$(TargetName).deploy.cmd&rdquo;  /M:<a href="http://hunter/MSDEPLOYAGENTSERVICE">http://hunter/MSDEPLOYAGENTSERVICE</a>  /Y</p>
<p>The first point is that the generated file path and file name are different from those in Ewald’s post. The second point is that he was trying to deploy locally to allow a CodedUI test to run; I wanted the build to be deployed to a simple IIS server (not a clever Lab Management environment) so I also needed the /M: parameter.</p>
<p>Also remember that the build agent service user (in my case TFSBuild) must have admin rights on the box you are trying to deploy to, else the process fails</p>
</li>
</ol>
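<p>As a quick sanity check for step 2, the generated package can be exercised from a command prompt before wiring it into the build (the package and server names here are illustrative):</p>

```shell
:: Trial run of the generated MSDeploy package against a remote agent
:: (/T = what-if run; package and server names are illustrative)
BlackMarble.MyWebSite.deploy.cmd /T /M:http://targetserver/MSDEPLOYAGENTSERVICE
:: When the trial output looks right, repeat with /Y to deploy for real
```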
]]></content:encoded>
    </item>
    <item>
      <title>Getting a .NET 4 WCF service running on IIS7</title>
      <link>https://blog.richardfennell.net/posts/getting-a-net-4-wcf-service-running-on-iis7/</link>
      <pubDate>Fri, 06 Aug 2010 11:33:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-a-net-4-wcf-service-running-on-iis7/</guid>
      <description>&lt;p&gt;Now I know this should be simple and obvious but I had a few problems today publishing a web service to a new IIS7 host. These are the steps I had to follow to get around all my errors:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Take a patched Windows Server 2008 R2, this had the File Server and IIS roles installed.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;I installed MSDeploy (&lt;a href=&#34;http://blog.iis.net/msdeploy&#34;&gt;http://blog.iis.net/msdeploy&lt;/a&gt;) onto the server to manage my deployment; this is a tool I am becoming a big fan of lately.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Now I know this should be simple and obvious but I had a few problems today publishing a web service to a new IIS7 host. These are the steps I had to follow to get around all my errors:</p>
<ol>
<li>
<p>Take a patched Windows Server 2008 R2, this had the File Server and IIS roles installed.</p>
</li>
<li>
<p>I installed MSDeploy (<a href="http://blog.iis.net/msdeploy">http://blog.iis.net/msdeploy</a>) onto the server to manage my deployment; this is a tool I am becoming a big fan of lately.</p>
</li>
<li>
<p>Make sure the MSDeploy service has started; it isn’t started by default.</p>
</li>
<li>
<p>In IIS manager</p>
</li>
<li>
<p>Create a new AppPool (I needed to set it to .NET 4 for my application)</p>
</li>
<li>
<p>Create a new Web Site, pointing at the new AppPool</p>
</li>
<li>
<p>In Visual Studio 2010 create an MSDeploy profile to send to the new server and web site. This deployed OK</p>
</li>
<li>
<p><strong>AND THIS IS WHERE THE PROBLEMS STARTED</strong></p>
</li>
<li>
<p>When I browsed to my WCF web service e.g. http://mysite:8080/myservice.svc whilst on the server I got a ‘500.24 Integrated Pipeline Issue’ error. This was fixed by swapping my AppPool’s pipeline mode to Classic, as I did need to use impersonation for this service.</p>
</li>
<li>
<p>Next I got a ‘404.3 Not Found’ error. This was because the WCF Activation feature was not installed on the box. This is added via Server 2008: Server Manager -&gt; Add Features -&gt; .NET Framework 3.x Features -&gt; WCF Activation</p>
</li>
<li>
<p>Next it was a ‘404.17 Not Found Static Handler’ error. Looking in IIS Manager, Feature View, Handler Mappings I only saw mention of 2.0 versions of files. So I reran aspnet_regiis /i from the 4.0 framework directory and both 2.0 and 4.0 versions were then shown in the Handler list</p>
</li>
<li>
<p>Next it was a ‘404.2 Not Found. Description: The page you are requesting cannot be served because of the ISAPI and CGI <em>Restriction list settings</em> on the Web server’ error. In IIS Manager, at the server level, I had to enable the .NET 4 handlers in the ISAPI and CGI Restrictions section</p>
</li>
<li>
<p>I could then get to see the WSDL for the WCF service</p>
</li>
<li>
<p>And finally I had to open port 8080 on the box to allow my clients to see it.</p>
</li>
</ol>
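<p>For reference, the two command-line fixes above look like this when run in an elevated prompt on the server (the framework path is the standard .NET 4 install location; the firewall rule name is illustrative):</p>

```shell
:: Re-register the ASP.NET 4.0 handler mappings with IIS (fixes the 404.17)
%windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i
:: Open TCP port 8080 so clients can reach the service
netsh advfirewall firewall add rule name="WCF service 8080" dir=in action=allow protocol=TCP localport=8080
```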
<p>Now that was straightforward, wasn&rsquo;t it?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting code coverage working on Team Build 2010</title>
      <link>https://blog.richardfennell.net/posts/getting-code-coverage-working-on-team-build-2010/</link>
      <pubDate>Thu, 05 Aug 2010 16:08:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-code-coverage-working-on-team-build-2010/</guid>
      <description>&lt;p&gt;If you have VS2010 Premium or Ultimate [Professional corrected error in original post] you have code coverage built into the test system. When you look at your test results there is a button to see the code coverage&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_7271AF2D.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_7D2F0682.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You would think that there is an easy way to use the code coverage in your automated build process using Team Build 2010; well it can be done but you have to do a bit of work.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you have VS2010 Premium or Ultimate [Professional corrected error in original post] you have code coverage built into the test system. When you look at your test results there is a button to see the code coverage</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7271AF2D.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7D2F0682.png" title="image"></a></p>
<p>You would think that there is an easy way to use the code coverage in your automated build process using Team Build 2010; well it can be done but you have to do a bit of work.</p>
<p><strong>What’s on the build box?</strong></p>
<p>Firstly if your build PC has only an operating system and the Team Build Agent (with or without the Build Controller service) then stop here. This is enough to build many things but not to get code coverage. The only way to get code coverage to work is to have VS2010 Premium or Ultimate also installed on the build box.</p>
<p>Now there is some confusion in blog posts over whether installing the Visual Studio 2010 Test Agents gives you code coverage; the answer for our purposes is no. The agents will allow remote code coverage in a Lab Environment via a Test Controller, but they do not provide the bits needed to allow code coverage to be run locally during a build/unit test cycle.</p>
<p><strong>Do I have a .TestSettings file?</strong></p>
<p>Code Coverage is managed using your solution’s .TestSettings file. My project did not have one of these, so I had to add one via ‘Add New Item’ on a right click on the solution items.</p>
<p>The reason I had no .TestSettings file was because I started with an empty solution and added projects to it, if you start with a project, such as a web application, and let the solution be created for you automatically then there should be a .TestSettings file created.</p>
<p>In the test settings you need to look at the Data &amp; Diagnostics tab and enable code coverage and then press the configure button, this is important.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_23FD1CC3.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1CDDE04B.png" title="image"></a></p>
<p>On the configuration dialog you will see a list of your projects and assemblies. In my case I initially only saw the first and the last rows in the graphic below. I selected the first row, the project containing my production code, and tried a build.</p>
<p>THIS DID NOT WORK – I had to add the actual production assembly as opposed to the web site project (the middle row shown below). I think this was the key step to getting it going.</p>
<p>The error I got before I did this was <strong>Empty results generated: none of the instrumented binary was used. Look at test run details for any instrumentation problems.</strong>  So if you see this message in the build report check what assemblies are flagged for code coverage.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_11B45601.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_43ABF68B.png" title="image"></a></p>
<p><strong>Does my build definition know about the .TestSettings file?</strong></p>
<p>You now need to make sure that the build knows the .TestSettings file exists. Again this should be done automatically when you create a build (if the file exists), but on my build I had to add it manually as I created the file after the build.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0A951989.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_2390E9CE.png" title="image"></a></p>
<p>So when all this is done you get to see a build with test results and code coverage.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_6A7A0CCB.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_31632FC9.png" title="image"></a></p>
<p>Easy wasn’t it!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Next weeks Agile Yorkshire meeting: Some things about testing that everyone should know, ...... but were afraid to ask, in case somebody told them.</title>
      <link>https://blog.richardfennell.net/posts/next-weeks-agile-yorkshire-meeting-some-things-about-testing-that-everyone-should-know-but-were-afraid-to-ask-in-case-somebody-told-them/</link>
      <pubDate>Thu, 05 Aug 2010 08:10:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-weeks-agile-yorkshire-meeting-some-things-about-testing-that-everyone-should-know-but-were-afraid-to-ask-in-case-somebody-told-them/</guid>
      <description>&lt;p&gt;It is Agile Yorkshire time again; it is a real shame that due to the move of the meeting from the 2nd Wednesday to the 2nd Tuesday I really struggle to make the events. Particularly irritating this month as this one looks really interesting and the speaker, Ralph Williams, on past evidence is always entertaining. To quote the Agile Yorkshire site the session will..&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“The presentation will focus on the techniques that testers use to identify their tests, whether working from a requirements specification or on agile teams.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is Agile Yorkshire time again; it is a real shame that due to the move of the meeting from the 2nd Wednesday to the 2nd Tuesday I really struggle to make the events. Particularly irritating this month as this one looks really interesting and the speaker, Ralph Williams, on past evidence is always entertaining. To quote the Agile Yorkshire site the session will..</p>
<p><em>“The presentation will focus on the techniques that testers use to identify their tests, whether working from a requirements specification or on agile teams.</em></p>
<p><em>Agile testing books mostly focus on the agile aspects or the technology so this area often gets glossed over. The main sections would be:</em></p>
<ul>
<li><em>Equivalence Classes and Boundary Conditions</em></li>
<li><em>Decision Tables</em></li>
<li><em>Classification Trees</em></li>
<li><em>User Focused Testing</em></li>
</ul>
<p><em>There will be a group exercise looking at how these techniques can be applied to the testing of a well known website.</em></p>
<p><em>As a group we will go through the process of identifying the testing that is required and in the process explain various test techniques that might be useful to people back in their day jobs.”</em></p>
<p><em>For full details see <a href="http://www.agileyorkshire.org/event-announcements/10Aug2010" title="http://www.agileyorkshire.org/event-announcements/10Aug2010">http://www.agileyorkshire.org/event-announcements/10Aug2010</a></em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Scale out TFS2010 issues and fixes</title>
      <link>https://blog.richardfennell.net/posts/scale-out-tfs2010-issues-and-fixes/</link>
      <pubDate>Mon, 26 Jul 2010 09:55:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/scale-out-tfs2010-issues-and-fixes/</guid>
      <description>&lt;p&gt;One of our clients has been working on a nice scale out implementation of TFS2010 (migrating from an existing TFS2005). They are using AT load balancing, a failover SQL cluster and Search Server Express to unify search. As you might expect this has been an interesting learning experience for all.&lt;/p&gt;
&lt;p&gt;You can find out about some of the problems they have had and their solutions at &lt;a href=&#34;http://www.rancidswan.com/?cat=19&#34; title=&#34;http://www.rancidswan.com/?cat=19&#34;&gt;http://www.rancidswan.com/?cat=19&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One of our clients has been working on a nice scale out implementation of TFS2010 (migrating from an existing TFS2005). They are using AT load balancing, a failover SQL cluster and Search Server Express to unify search. As you might expect this has been an interesting learning experience for all.</p>
<p>You can find out about some of the problems they have had and their solutions at <a href="http://www.rancidswan.com/?cat=19" title="http://www.rancidswan.com/?cat=19">http://www.rancidswan.com/?cat=19</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Once more with feeling…watching yourself on video</title>
      <link>https://blog.richardfennell.net/posts/once-more-with-feelingwatching-yourself-on-video/</link>
      <pubDate>Sun, 25 Jul 2010 10:11:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/once-more-with-feelingwatching-yourself-on-video/</guid>
      <description>&lt;p&gt;I have been meaning to watch back the video taken of my recent presentation for a while. This weekend I have had the chance to sample the highlights! In the past I have always found it hard to watch myself on recordings, though after the initial cringe this time it was not too bad. I must be getting used to the critical process.&lt;/p&gt;
&lt;p&gt;So what have I learnt&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I say ‘err’ too ‘err’ much ‘err’&lt;/li&gt;
&lt;li&gt;I move my hands about much like &lt;a href=&#34;http://en.wikipedia.org/wiki/Magnus_Pyke&#34;&gt;Magnus Pyke&lt;/a&gt;, a childhood 1970/80s TV popular science reference as seen in &lt;a href=&#34;http://www.youtube.com/watch?v=EoJKWR3DIuA&#34;&gt;this Thomas Dolby video&lt;/a&gt; and this &lt;a href=&#34;http://www.youtube.com/watch?v=6Hpd5M8Vfdg&#34;&gt;Mitchell and Webb sketch&lt;/a&gt;, for those not old enough to remember. I don’t think it is too irritating do you? But to carry it off fully I do need madder hair.&lt;/li&gt;
&lt;li&gt;I tend to wander about the stage; I know it is considered good presenting practice not to talk and walk at the same time. But does this actually get on the audience’s nerves? It certainly does mine when I watch myself jump up and down during a demo. I think I know why I am doing it: I heard about an idea from &lt;a href=&#34;http://en.wikipedia.org/wiki/Neuro-linguistic_programming&#34;&gt;NLP&lt;/a&gt; for presenting/teaching to always deliver specific types of information from the same physical location e.g. funny anecdote by the windows, summary points by the lectern etc. The idea is it subconsciously primes the audience what to expect and how to treat it e.g. oh here he is by the lectern, I must remember this as it is important. I guess I just need to be a bit more sparing in the application of this technique.&lt;/li&gt;
&lt;li&gt;I still use too many slides; I especially need to focus on having fewer bullet-pointed ones. &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/fun-presenting-last-night-at-bcs.aspx&#34;&gt;When presenting at the BCS recently the projector&lt;/a&gt; broke 5 minutes into my session and I did the rest of the session without slides (that the audience could see). All I missed was a couple of images of Scrum and Kanban (and wild hand waving and pointing at a white wall got me round the problems). The feedback I got was good. However a key factor was I could see my bullet-pointed slides on my laptop. I realised the slides are my speaker notes. So next time I intend to try very few audience slides but have a stack much like my normal one running on PowerPoint on my phone. Let’s see how that works.&lt;/li&gt;
&lt;li&gt;…..Oh and I have a Birmingham accent, someone could have mentioned it!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I would heartily recommend anyone presenting to have a look at videos of your sessions, even if they are just ones taken with a camcorder or phone from the back of the room. You may be surprised how you appear, sessions often look very different to the audience as opposed to how you felt it went from the stage.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been meaning to watch back the video taken of my recent presentation for a while. This weekend I have had the chance to sample the highlights! In the past I have always found it hard to watch myself on recordings, though after the initial cringe this time it was not too bad. I must be getting used to the critical process.</p>
<p>So what have I learnt</p>
<ul>
<li>I say ‘err’ too ‘err’ much ‘err’</li>
<li>I move my hands about much like <a href="http://en.wikipedia.org/wiki/Magnus_Pyke">Magnus Pyke</a>, a childhood 1970/80s TV popular science reference as seen in <a href="http://www.youtube.com/watch?v=EoJKWR3DIuA">this Thomas Dolby video</a> and this <a href="http://www.youtube.com/watch?v=6Hpd5M8Vfdg">Mitchell and Webb sketch</a>, for those not old enough to remember. I don’t think it is too irritating do you? But to carry it off fully I do need madder hair.</li>
<li>I tend to wander about the stage; I know it is considered good presenting practice not to talk and walk at the same time. But does this actually get on the audience’s nerves? It certainly does mine when I watch myself jump up and down during a demo. I think I know why I am doing it: I heard about an idea from <a href="http://en.wikipedia.org/wiki/Neuro-linguistic_programming">NLP</a> for presenting/teaching to always deliver specific types of information from the same physical location e.g. funny anecdote by the windows, summary points by the lectern etc. The idea is it subconsciously primes the audience what to expect and how to treat it e.g. oh here he is by the lectern, I must remember this as it is important. I guess I just need to be a bit more sparing in the application of this technique.</li>
<li>I still use too many slides; I especially need to focus on having fewer bullet-pointed ones. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/fun-presenting-last-night-at-bcs.aspx">When presenting at the BCS recently the projector</a> broke 5 minutes into my session and I did the rest of the session without slides (that the audience could see). All I missed was a couple of images of Scrum and Kanban (and wild hand waving and pointing at a white wall got me round the problems). The feedback I got was good. However a key factor was I could see my bullet-pointed slides on my laptop. I realised the slides are my speaker notes. So next time I intend to try very few audience slides but have a stack much like my normal one running on PowerPoint on my phone. Let’s see how that works.</li>
<li>…..Oh and I have a Birmingham accent, someone could have mentioned it!</li>
</ul>
<p>I would heartily recommend that anyone presenting has a look at videos of their sessions, even if they are just ones taken with a camcorder or phone from the back of the room. You may be surprised how you appear; how a session looked to the audience is often very different from how it felt from the stage.</p>
<p>If you want to check out my performances just search for ‘Fennell’ on conference media sites:</p>
<ul>
<li>Microsoft UK Techdays 2010 - <a href="http://www.microsoft.com/uk/techdays/resources.aspx" title="http://www.microsoft.com/uk/techdays/resources.aspx">http://www.microsoft.com/uk/techdays/resources.aspx</a> a session on Lab Manager</li>
<li>Norwegian Developers Conference 2010 - <a href="http://streaming.ndc2010.no/tcs/" title="http://streaming.ndc2010.no/tcs/">http://streaming.ndc2010.no/tcs/</a> three sessions: one on Lab Manager (the Techdays one is better), one on Manual Testing with VS2010 and finally one on Testing SharePoint web parts with Typemock</li>
</ul>
<p>Enjoy……</p>
]]></content:encoded>
    </item>
    <item>
      <title>Software Craftsmanship 2010</title>
      <link>https://blog.richardfennell.net/posts/software-craftsmanship-2010/</link>
      <pubDate>Thu, 22 Jul 2010 11:34:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/software-craftsmanship-2010/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.softwarecraftsmanship.org.uk/&#34;&gt;Software Craftsmanship 2010 conference&lt;/a&gt; has been announced and is open for registration. The &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/02/26/intent-is-the-key-thoughts-on-the-way-home-form-software-craftsmanship-2009.aspx&#34;&gt;last Software Craftsmanship conference was one of the most useful events I have ever attended&lt;/a&gt;, so this year’s should be well worth attending, even though it has gone from being a free event to having a small charge.&lt;/p&gt;
&lt;p&gt;As a bonus it is at &lt;a href=&#34;http://www.bletchleypark.org.uk/&#34;&gt;Bletchley Park&lt;/a&gt;, in itself &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/07/your-support-can-keep-our-industries-history-alive.aspx&#34;&gt;worth a trip&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Get in quick, spaces are very limited.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.softwarecraftsmanship.org.uk/">Software Craftsmanship 2010 conference</a> has been announced and is open for registration. The <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/02/26/intent-is-the-key-thoughts-on-the-way-home-form-software-craftsmanship-2009.aspx">last Software Craftsmanship conference was one of the most useful events I have ever attended</a>, so this year’s should be well worth attending, even though it has gone from being a free event to having a small charge.</p>
<p>As a bonus it is at <a href="http://www.bletchleypark.org.uk/">Bletchley Park</a>, in itself <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/07/your-support-can-keep-our-industries-history-alive.aspx">worth a trip</a>.</p>
<p>Get in quick, spaces are very limited.</p>
]]></content:encoded>
    </item>
    <item>
      <title>All Leds flashing on a Netgear GS108 Switch</title>
      <link>https://blog.richardfennell.net/posts/all-leds-flashing-on-a-netgear-gs108-switch/</link>
      <pubDate>Tue, 20 Jul 2010 09:52:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/all-leds-flashing-on-a-netgear-gs108-switch/</guid>
      <description>&lt;p&gt;I came back from holiday to find a Netgear GS108 switch with all its LEDs flashing and passing no data. This is &lt;a href=&#34;http://www.mccambridge.org/blog/2008/04/howto-fix-a-broken-netgear-gs108/&#34;&gt;exactly the same set of symptoms as in this post&lt;/a&gt;. My fix did not involve a soldering iron as detailed in that post; I just swapped the PSU and it started working fine. I have seen this before: the PSU shows its age before the device itself. Good job I have a big box of miscellaneous PSUs from devices down the years.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I came back from holiday to find a Netgear GS108 switch with all its LEDs flashing and passing no data. This is <a href="http://www.mccambridge.org/blog/2008/04/howto-fix-a-broken-netgear-gs108/">exactly the same set of symptoms as in this post</a>. My fix did not involve a soldering iron as detailed in that post; I just swapped the PSU and it started working fine. I have seen this before: the PSU shows its age before the device itself. Good job I have a big box of miscellaneous PSUs from devices down the years.</p>
<p><strong>Update 22 July 2010</strong>: I spoke too soon; I came in today to the same problem. Time to swap the capacitors.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running SPDisposeCheck as part of a 2010 CI Build</title>
      <link>https://blog.richardfennell.net/posts/running-spdisposecheck-as-part-of-a-2010-ci-build/</link>
      <pubDate>Sat, 03 Jul 2010 19:47:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-spdisposecheck-as-part-of-a-2010-ci-build/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://code.msdn.microsoft.com/SPDisposeCheck&#34;&gt;SPDisposeCheck&lt;/a&gt; is a great tool for SharePoint developers to make sure that they are disposing of resources correctly. The problem is that it is a bit slow to run, which means developers will tend not to run it as often as they should. A good solution is to run it as part of the continuous integration process. There are posts on how to do this &lt;a href=&#34;http://stephenvick.wordpress.com/2010/01/06/run-spdisposecheck-as-build-task-and-automated-unit-test/&#34;&gt;via unit test&lt;/a&gt;s and as an &lt;a href=&#34;http://www.sharepointdevwiki.com/display/SPPodCasts/2010/01/24/SPWebCast&amp;#43;009&amp;#43;-&amp;#43;SharePoint&amp;#43;2007&amp;#43;Development&amp;#43;with&amp;#43;Continuous&amp;#43;Integration&amp;#43;%2841&amp;#43;mins%29&#34;&gt;MSBuild task&lt;/a&gt;, but I wanted to use a TFS 2010 style build. It turns out this is reasonably straightforward, without the need to write a custom activity.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://code.msdn.microsoft.com/SPDisposeCheck">SPDisposeCheck</a> is a great tool for SharePoint developers to make sure that they are disposing of resources correctly. The problem is that it is a bit slow to run, which means developers will tend not to run it as often as they should. A good solution is to run it as part of the continuous integration process. There are posts on how to do this <a href="http://stephenvick.wordpress.com/2010/01/06/run-spdisposecheck-as-build-task-and-automated-unit-test/">via unit test</a>s and as an <a href="http://www.sharepointdevwiki.com/display/SPPodCasts/2010/01/24/SPWebCast&#43;009&#43;-&#43;SharePoint&#43;2007&#43;Development&#43;with&#43;Continuous&#43;Integration&#43;%2841&#43;mins%29">MSBuild task</a>, but I wanted to use a TFS 2010 style build. It turns out this is reasonably straightforward, without the need to write a custom activity.</p>
<ul>
<li>I created a build template based on the Default one.</li>
<li>After the compile and before the test step I added an InvokeProcess activity</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_4C10C422.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6029E0AB.png" title="image"></a></p></blockquote>
<ul>
<li>
<p>I set the InvokeProcess properties as shown below; the edited settings are:</p>
</li>
<li>
<p>Arguments: String.Format("""{0}""", outputDirectory) (remember you need the enclosing " characters if your path could have spaces in it)</p>
</li>
<li>
<p>Filename: To the location of the SPDisposeCheck.exe file</p>
</li>
<li>
<p>Result: A previously created build variable of type Int32</p>
</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_3ECA3B0F.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_68CA3937.png" title="image"></a></p></blockquote>
<ul>
<li>I also <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/02/23/logging-results-from-invokeprocess-in-a-vs2010-team-build.aspx">dropped in some write message activities to make sure I got the console output in the build log file</a></li>
<li>SPDisposeCheck has a great feature: the number of errors found is returned as the exit code of the command line exe, so the InvokeProcess activity is able to store it in the SPDisposeOutput variable. We are able to use this to fail the build</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_7F3CD7BE.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5A3F1745.png" title="image"></a></p></blockquote>
<ul>
<li>This is done with a simple if check. If there are any errors found I write a build error message and set the TestStatus to failed. You might choose to set the build status to fail or any other flag you wish. The potential problem with my solution is that the TestStatus value could be reset by the tests that follow in the build process, but for a basic example of using the tool this is fine.</li>
</ul>
<p>So it is easy to add a command line tool to the build. The key reason it is so easy is that SPDisposeCheck returns its error count as a number we can use to see if the check passed or failed, hence we did not need to parse any text or XML results file. I wish more tools did this.</p>
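<p>The exit-code pattern generalises beyond Team Build. Here is a minimal sketch of the same idea in Python (the function names are mine, invented for illustration; the assumed convention, as with SPDisposeCheck, is that the tool's exit code is the number of issues it found):</p>

```python
import subprocess
import sys

def run_check(cmd):
    """Run an external checker and return its exit code.

    Assumed convention (as with SPDisposeCheck): the exit code is the
    number of issues found, so zero means a clean pass."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    # Echo the tool's console output into the build log.
    print(result.stdout, end="")
    return result.returncode

def gate_build(cmd):
    """Return False (i.e. fail the build step) if the checker found issues."""
    issues = run_check(cmd)
    if issues != 0:
        print(f"Check failed: {issues} issue(s) found", file=sys.stderr)
    return issues == 0
```

<p>A build script would then do something like <code>sys.exit(0 if gate_build(["SPDisposeCheck.exe", output_dir]) else 1)</code> so the CI server sees a failing step whenever issues are reported.</p>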
]]></content:encoded>
    </item>
    <item>
      <title>Survey on what you think of the Visual Studio web site</title>
      <link>https://blog.richardfennell.net/posts/survey-on-what-you-think-of-the-visual-studio-web-site/</link>
      <pubDate>Sat, 03 Jul 2010 18:24:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/survey-on-what-you-think-of-the-visual-studio-web-site/</guid>
      <description>&lt;p&gt;Do you use the Visual Studio web site? Whether you do or you don’t, &lt;a href=&#34;https://www.surveymonkey.com/s/KTY6NB9&#34;&gt;Microsoft are interested in finding out why, via a quick survey&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Do you use the Visual Studio web site? Whether you do or you don’t, <a href="https://www.surveymonkey.com/s/KTY6NB9">Microsoft are interested in finding out why, via a quick survey</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>BCS EGM results</title>
      <link>https://blog.richardfennell.net/posts/bcs-egm-results/</link>
      <pubDate>Thu, 01 Jul 2010 20:46:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bcs-egm-results/</guid>
      <description>&lt;p&gt;The BCS has had its &lt;a href=&#34;http://yourfuture.bcs.org/server.php?show=nav.13833&#34;&gt;EGM today and the results are out&lt;/a&gt;. Basically the vote was about 75% in support of the status quo, which does not surprise me. What I am really pleased to see is that the Trustees withdrew the special resolution to change the number of people required to call an EGM from 50 people to 2% of the membership.&lt;/p&gt;
&lt;p&gt;I really do hope that this whole EGM process has been a warning to the Trustees and board that they must be more open, and help promote and continue the dialogue that the EGM has created.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The BCS has had its <a href="http://yourfuture.bcs.org/server.php?show=nav.13833">EGM today and the results are out</a>. Basically the vote was about 75% in support of the status quo, which does not surprise me. What I am really pleased to see is that the Trustees withdrew the special resolution to change the number of people required to call an EGM from 50 people to 2% of the membership.</p>
<p>I really do hope that this whole EGM process has been a warning to the Trustees and board that they must be more open, and help promote and continue the dialogue that the EGM has created.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Re-awarded as a Visual Studio ALM MVP</title>
      <link>https://blog.richardfennell.net/posts/re-awarded-as-a-visual-studio-alm-mvp/</link>
      <pubDate>Thu, 01 Jul 2010 14:58:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/re-awarded-as-a-visual-studio-alm-mvp/</guid>
      <description>&lt;p&gt;I have just found out I have been re-awarded as a Visual Studio ALM MVP for the third year (though it was called Team System the first two times). It is a privilege to get to work with such a great group of people as I have met via the MVP programme.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just found out I have been re-awarded as a Visual Studio ALM MVP for the third year (though it was called Team System the first two times). It is a privilege to get to work with such a great group of people as I have met via the MVP programme.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using my Typemock TMockRunner Custom Activity for Team Build 2010</title>
      <link>https://blog.richardfennell.net/posts/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010/</link>
      <pubDate>Thu, 01 Jul 2010 10:25:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010/</guid>
      <description>&lt;p&gt;[Also see &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/13/how-to-edit-a-tfs-2010-build-template-when-it-contains-custom-activities.aspx&#34;&gt;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/13/how-to-edit-a-tfs-2010-build-template-when-it-contains-custom-activities.aspx&lt;/a&gt; ]&lt;/p&gt;
&lt;p&gt;A couple of months ago I wrote and &lt;a href=&#34;http://teambuild2010contrib.codeplex.com/wikipage?title=TeamBuild%202010%20Activity%20to%20run%20Typemock%20Isolator%20based%20tests&amp;amp;referringTitle=Home&#34;&gt;published a custom activity for Team Build 2010 to allow Typemock tests to be run within the build process&lt;/a&gt;. Whilst setting up a new build I needed to use this activity, so I thought I would see if there was an easier way to use it in a build &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx&#34;&gt;without all the branching and fuss required for development&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is what I had to do&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>[Also see <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/13/how-to-edit-a-tfs-2010-build-template-when-it-contains-custom-activities.aspx">http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/08/13/how-to-edit-a-tfs-2010-build-template-when-it-contains-custom-activities.aspx</a> ]</p>
<p>A couple of months ago I wrote and <a href="http://teambuild2010contrib.codeplex.com/wikipage?title=TeamBuild%202010%20Activity%20to%20run%20Typemock%20Isolator%20based%20tests&amp;referringTitle=Home">published a custom activity for Team Build 2010 to allow Typemock tests to be run within the build process</a>. Whilst setting up a new build I needed to use this activity, so I thought I would see if there was an easier way to use it in a build <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx">without all the branching and fuss required for development</a>.</p>
<p>This is what I had to do</p>
<ul>
<li>Get <a href="http://www.typemock.com/files/Addons/VS2010%20TypemockBuildActivity%201.0.0.0.zip">the Zip file</a> which contains all the source and binaries for the custom activity</li>
<li>In your Team Project’s Source Control Explorer go to the <strong>BuildProcessTemplates</strong> folder. You should see (at least) the three standard templates: Default, Upgrade and Lab. Add the <strong>TypemockBuildProcessTemplate.xaml</strong> template from the <strong>SourceTypemockBuildActivity</strong> folder in the zip to the <strong>BuildProcessTemplates</strong> folder</li>
<li>Under the <strong>BuildProcessTemplates</strong> folder create <strong>Custom Activities</strong> folder and into this add the <strong>TypemockBuildActivity.dll</strong> file from the root of the zip</li>
<li>Check in all these added files</li>
<li>On your Build Controller (login to the build PC and open the Team System Administration console) set its custom assemblies path to point to the folder where the custom activity DLL is stored</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_659D8BC7.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0C6BA208.png" title="image"></a> </p></blockquote>
<ul>
<li>
<ul>
<li>
<p>You can now edit an existing build, or create a new build, to make use of the new template. You should see the new template in the list of templates; if not, just use the new template option then browse to find an existing template. It is up to you if you wish to use the original or make a copy</p>
</li>
<li>
<p>As the template takes the same options as the default template it can be used as a direct replacement</p>
</li>
<li>
<p>AND THIS IS WHERE YOU HIT A PROBLEM</p>
</li>
<li>
<p>The process template in the zip has parameters set for the custom activity that were correct for the test harness used for development and the source control system it was meant to upload results to. Even for me, working on the same network, these are wrong for my new project. These parameters need to be edited.</p>
</li>
<li>
<p>Usually this editing would be done using the graphical build editing tool in Visual Studio. However, as detailed in my original post, getting the custom assembly into a location where it is correctly registered with Visual Studio involves some messy branching, and if this is not done you get an error block in the graphical editor.</p>
</li>
<li>
<p>The only way I know to avoid this is to do the editing in a text editor. The parameters that needed editing for me were</p>
</li>
<li>
<p>ProjectCollection – needs to point to the right TPC</p>
</li>
<li>
<p>TestRunnerExecutable – the location of the Typemock TMockRunner program, which differs between a 32-bit PC and a 64-bit build machine</p>
</li>
<li>
<p>You may need to edit more, but it is easy to see what is wrong if you try a build and look at the build log, all the parameters passed on the command line are listed, as well as any error messages.</p>
</li>
<li>
<p>Once I had completed my edits I checked the edited build template back into TFS</p>
</li>
<li>
<p>I had one further problem: MSTEST reported in the log that it could not run due to a missing /Platform parameter. The custom activity did not pass this parameter to MSTEST, as in a default 2010 build it is not set. Once I explicitly set it in my build (as shown below) it built, the tests ran and the results were published</p>
</li>
</ul>
<blockquote>
<p><a href="/wp-content/uploads/sites/2/historic/image_074C377C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_06E00487.png" title="image"></a></p></blockquote>
</li>
</ul>
<p>Hope this post makes adoption of the activity a little easier for you</p>
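<p>For reference, the parameter edits described above look something like this in the template XAML. This is a hypothetical fragment: the element name and the values are invented for illustration, but ProjectCollection and TestRunnerExecutable are the real argument names that needed changing:</p>

```xml
<!-- Hypothetical fragment of TypemockBuildProcessTemplate.xaml.
     ProjectCollection and TestRunnerExecutable are the arguments that
     needed editing; the activity element name and the example values
     here are illustrative, not taken from the real template. -->
<tba:TypemockTestRunActivity
    ProjectCollection="http://myserver:8080/tfs/MyCollection"
    TestRunnerExecutable="C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" />
```

<p>Editing these values in a text editor, then checking the template back in, avoids the graphical editor (and its custom-assembly registration requirements) entirely.</p>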
]]></content:encoded>
    </item>
    <item>
      <title>Fun presenting last night at BCS</title>
      <link>https://blog.richardfennell.net/posts/fun-presenting-last-night-at-bcs/</link>
      <pubDate>Thu, 01 Jul 2010 09:00:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fun-presenting-last-night-at-bcs/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session last night at the West Yorkshire BCS on Agile and Lean development process. The projector failing after only a few minutes meant I had to adopt a good agile approach to the session. It was nice that so many people came up afterwards to say they enjoyed the lack of PowerPoint.&lt;/p&gt;
&lt;p&gt;This got me thinking, as I enjoyed not having it as well. All I really missed was a couple of slides: one that showed a Kanban board and another that diagrammatically showed the Scrum process. I got round the lack of both by pointing wildly at the blank projector screen and asking people to imagine. So if I run that session again I think I will just have that pair of slides and lose the rest. I just need to find a nice means of seeing the text slides, which I used as my speaking notes. I have never been a fan of postcard-style notes when presenting, too easy to drop, and when I tried using my phone in the past it was awkward, but maybe it is time to try again now my phone has a larger screen.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session last night at the West Yorkshire BCS on Agile and Lean development process. The projector failing after only a few minutes meant I had to adopt a good agile approach to the session. It was nice that so many people came up afterwards to say they enjoyed the lack of PowerPoint.</p>
<p>This got me thinking, as I enjoyed not having it as well. All I really missed was a couple of slides: one that showed a Kanban board and another that diagrammatically showed the Scrum process. I got round the lack of both by pointing wildly at the blank projector screen and asking people to imagine. So if I run that session again I think I will just have that pair of slides and lose the rest. I just need to find a nice means of seeing the text slides, which I used as my speaking notes. I have never been a fan of postcard-style notes when presenting, too easy to drop, and when I tried using my phone in the past it was awkward, but maybe it is time to try again now my phone has a larger screen.</p>
<p>If you do want to see the slides you missed they will be going up on the <a href="http://www.westyorkshire.bcs.org/">West Yorkshire BCS site</a> ASAP</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the BCS this week on Agile</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-bcs-this-week-on-agile/</link>
      <pubDate>Mon, 28 Jun 2010 21:12:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-bcs-this-week-on-agile/</guid>
      <description>&lt;p&gt;Just a reminder I am speaking at the West Yorkshire BCS meeting this Wednesday on the subject ‘A&lt;a href=&#34;http://www.westyorkshire.bcs.org/2010/05/31/agile-is-so-old-hat-all-the-cool-kids-are-doing-lean-now/&#34;&gt;gile is so old hat all the cool kids are doing lean now’&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The meeting starts at 5:45 for refreshments, with my session at 6:30. The venue is the Old Broadcasting House, 148 Woodhouse Lane, Leeds LS2 9EN (&lt;a href=&#34;http://www.ntileeds.co.uk&#34;&gt;www.ntileeds.co.uk&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;Hope to see you there&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just a reminder I am speaking at the West Yorkshire BCS meeting this Wednesday on the subject ‘A<a href="http://www.westyorkshire.bcs.org/2010/05/31/agile-is-so-old-hat-all-the-cool-kids-are-doing-lean-now/">gile is so old hat all the cool kids are doing lean now’</a>.</p>
<p>The meeting starts at 5:45 for refreshments, with my session at 6:30. The venue is the Old Broadcasting House, 148 Woodhouse Lane, Leeds LS2 9EN (<a href="http://www.ntileeds.co.uk">www.ntileeds.co.uk</a>)</p>
<p>Hope to see you there</p>
]]></content:encoded>
    </item>
    <item>
      <title>Professional Scrum Developer with TFS course in the UK</title>
      <link>https://blog.richardfennell.net/posts/professional-scrum-developer-with-tfs-course-in-the-uk/</link>
      <pubDate>Fri, 25 Jun 2010 15:16:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/professional-scrum-developer-with-tfs-course-in-the-uk/</guid>
      <description>&lt;p&gt;My fellow UK ALM MVP, &lt;a href=&#34;http://geekswithblogs.net/hinshelm/archive/2010/06/18/professional-scrum-developer-.net-training-in-london.aspx&#34;&gt;Martin Hinshelwood, is running a Professional Scrum Developer course in London at the end of next month&lt;/a&gt;. I understand there are still some places left, and some good discounts available. So if you were looking to implement a Scrum process on TFS it could be well worth a look.&lt;/p&gt;
&lt;p&gt;If you want to know more about the whole &lt;a href=&#34;http://msdn.microsoft.com/en-gb/vstudio/ff433643.aspx&#34;&gt;PSD programme have a look at the Microsoft site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My fellow UK ALM MVP, <a href="http://geekswithblogs.net/hinshelm/archive/2010/06/18/professional-scrum-developer-.net-training-in-london.aspx">Martin Hinshelwood, is running a Professional Scrum Developer course in London at the end of next month</a>. I understand there are still some places left, and some good discounts available. So if you were looking to implement a Scrum process on TFS it could be well worth a look.</p>
<p>If you want to know more about the whole <a href="http://msdn.microsoft.com/en-gb/vstudio/ff433643.aspx">PSD programme have a look at the Microsoft site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TF246062: Two or more database are in conflict when upgrading to TFS 2010</title>
      <link>https://blog.richardfennell.net/posts/tf246062-two-or-more-database-are-in-conflict-when-upgrading-to-tfs-2010/</link>
      <pubDate>Fri, 25 Jun 2010 10:21:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf246062-two-or-more-database-are-in-conflict-when-upgrading-to-tfs-2010/</guid>
      <description>&lt;p&gt;Whilst upgrading a TFS2010 Beta2 server to RTM I saw the following error when I ran the verify step of the upgrade.&lt;/p&gt;
&lt;p&gt;TF246062: Two or more databases are in conflict because they are each designated as the owners of the following schema: Framework. The schema is for the host with the following name and ID: CollectionName, 8aace481-2471-49c8-da74-77ee3da4ce29. The databases with this conflict are: Data Source=SQLInstance1;Initial Catalog=Tfs_CollectionName;Integrated Security=True, Data Source=SQLInstance1;Initial Catalog=Tfs_Production_CollectionName;Integrated Security=True. You must specify one and only one database owner for the schema in the searchable SQL Server instances for this deployment of Team Foundation Server.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst upgrading a TFS2010 Beta2 server to RTM I saw the following error when I ran the verify step of the upgrade.</p>
<p>TF246062: Two or more databases are in conflict because they are each designated as the owners of the following schema: Framework. The schema is for the host with the following name and ID: CollectionName, 8aace481-2471-49c8-da74-77ee3da4ce29. The databases with this conflict are: Data Source=SQLInstance1;Initial Catalog=Tfs_CollectionName;Integrated Security=True, Data Source=SQLInstance1;Initial Catalog=Tfs_Production_CollectionName;Integrated Security=True. You must specify one and only one database owner for the schema in the searchable SQL Server instances for this deployment of Team Foundation Server.</p>
<p>This error is caused by having two Team Project Collection (TPC) DBs with the same unique GUID. So how can this happen, and where do these GUIDs come from?</p>
<p>When you create a TPC it gets a GUID. It keeps this GUID if you disconnect it from a TFS server and move it to another server. <a href="http://msdn.microsoft.com/en-us/library/dd936138.aspx">The only time this GUID can change is if you clone the TPC</a>. When the cloned TPC DB is attached, TFS spots it is a clone of a TPC it already hosts and issues a new GUID.</p>
<p>So how did I get two DBs with the same GUID? The answer is that prior to the upgrade some tests had been done using another TFS server, to see if the client wished to do an in-place upgrade or a disaster recovery style one onto new hardware. When doing a DR style upgrade the server does not issue a new GUID, as the TPC is unique to the new server; that server knows nothing of the original TFS server. This meant, as the two servers shared a SQL cluster, that we had two copies of the same DB (but with different names) on the same SQL instance, so when the TFS upgrade program asked for the DB by GUID it got back two DBs, hence the error.</p>
<p>The fix was to delete the DB created during the previous tests.</p>
<p>Note: You can see a similar effect if for any reason you replicate any of the TFS DBs on a single SQL instance, such as making a second copy of the warehouse DB for some special reporting purpose.</p>
]]></content:encoded>
    </item>
    <item>
      <title>IDD Building a breakfast comment to a become process – now there is a leap</title>
      <link>https://blog.richardfennell.net/posts/idd-building-a-breakfast-comment-to-a-become-process-now-there-is-a-leap/</link>
      <pubDate>Sun, 20 Jun 2010 07:48:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/idd-building-a-breakfast-comment-to-a-become-process-now-there-is-a-leap/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blog.typemock.com/2010/06/and-suddenly-there-was-idd-by-gil.html&#34;&gt;Gil at Typemock has been posting&lt;/a&gt; about some ideas we discussed over breakfast at the Typemock Partner conference a while ago. I have been a bit slow at commenting, so I thought I had better add to the conversation. Though Typemock is an excellent mocking framework, for me basic mocking is not its biggest win. All the ‘classic auto mocking’ of interfaces to speed up TDD style working is great, but I can do that with any of the .NET mocking frameworks. All they do is mean I don’t have to write my own test stubs and mocks, saving me time, which is good but not the key win I was looking for.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blog.typemock.com/2010/06/and-suddenly-there-was-idd-by-gil.html">Gil at Typemock has been posting</a> about some ideas we discussed over breakfast at the Typemock Partner conference a while ago. I have been a bit slow at commenting, so I thought I had better add to the conversation. Though Typemock is an excellent mocking framework, for me basic mocking is not its biggest win. All the ‘classic auto mocking’ of interfaces to speed up TDD style working is great, but I can do that with any of the .NET mocking frameworks. All they do is mean I don’t have to write my own test stubs and mocks, saving me time, which is good but not the key win I was looking for.</p>
<p>For me there is another way to save much more time, and that is to reduce my ‘build, deploy, use’ cycle. In the land of SharePoint this is a significant time saving, or at least it has been for us. It has meant that I can replace build, create WSP, deploy WSP, let SharePoint/IIS restart and then view a web part, with a build and a view in an ASP.NET page that uses Typemock to fake out all the SharePoint calls. This is what Gil has termed <a href="http://en.wikipedia.org/wiki/Isolation_driven_development">Isolation Driven Development (IDD)</a>. Now, isn’t a three letter _DD name going a bit far? I am not even sure there is enough in it for me to write a book!</p>
<p>That said, this is a solid technique which can be applied to any complex environment where developers or testers need a means to mock out significant, costly, or just slow components to ease their daily work process, often enabling some manual testing process and thus making them more productive. If you read the <a href="http://www.amazon.co.uk/Toyota-Production-System-Beyond-Large-scale/dp/0915299143/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1277020056&amp;sr=1-1">TPS books</a> they mention a lot how workers should optimise their work space to reduce the wasted time they spend moving between machines or roles; this is just such a move.</p>
<p>So <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">if you want to use the technique for SharePoint have a look at my post</a>; I hope it will save you time whether on SP2007 or 2010, or maybe you can apply the same technique to other technologies.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Post NDC2010 thoughts – a great event</title>
      <link>https://blog.richardfennell.net/posts/post-ndc2010-thoughts-a-great-event/</link>
      <pubDate>Sun, 20 Jun 2010 07:33:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-ndc2010-thoughts-a-great-event/</guid>
      <description>&lt;p&gt;What a great event the Norwegian Developers conference is. It is a nice size so there is a good selection of tracks, but not so big that you feel lost; also the speakers and attendees were all mixing freely, which I think always makes for a good atmosphere. This was all enhanced by the excellent organisation of the event: what can I say, the wifi worked faultlessly, the food was good, and the overflow screens (so from one place you could view the video and audio of any current session) were a brilliant idea, so much so that I know some people took to using them in preference to going to the actual session. You can see the overflow screens at the top of the photo below.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>What a great event the Norwegian Developers conference is. It is a nice size so there is a good selection of tracks, but not so big that you feel lost; also the speakers and attendees were all mixing freely, which I think always makes for a good atmosphere. This was all enhanced by the excellent organisation of the event: what can I say, the wifi worked faultlessly, the food was good, and the overflow screens (so from one place you could view the video and audio of any current session) were a brilliant idea, so much so that I know some people took to using them in preference to going to the actual session. You can see the overflow screens at the top of the photo below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/IMAG0258_51C5EF63.jpg"><img alt="IMAG0258" loading="lazy" src="/wp-content/uploads/sites/2/historic/IMAG0258_thumb_6211414F.jpg" title="IMAG0258"></a></p>
<p>On the subject of food, it was good to have a breakfast, a ‘lunch bite’ and then another small meal around 4pm, just when you are flagging. It was noticeable that the sponsors on the expo had basically got rid of all the swag and replaced it with a selection of snacks and coffee machines. For me it is far more acceptable to get a nice ice cream or a hot dog rather than yet another USB pen drive or fluffy toy with a corporate logo on it. I even enjoyed the <a href="http://www.myspace.com/ralphmyerz">band at the ‘legendary NDC party</a>’; the organisers seemed to get all the points right.</p>
<p>Also, what was nice was that this was not a single-vendor conference, e.g. a TechEd or PDC. Now these can (but not always) be great for new product knowledge, but they can constrain the subject matter. At NDC there was a good wide range of subjects, with particularly strong Agile process and Ruby tracks. I for one want to go to conferences to be exposed to things I have not used before, both technologies and concepts. This I feel you can do far better at a conference with a wider scope of subject matter than at a vendor conference. Now the danger is that the range becomes too wide and dilutes the content, but at NDC again I think they got it about right.</p>
<p>Remember all the sessions at NDC were videoed in HD, and they are rapidly being uploaded to <a href="http://streaming.ndc2010.no/tcs/" title="http://streaming.ndc2010.no/tcs/">http://streaming.ndc2010.no/tcs/</a>, so if you are interested in seeing my or any other sessions from this ‘amazing line-up!’ take a look.</p>
<p><a href="/wp-content/uploads/sites/2/historic/IMAG0253_50DD63AC.jpg"><img alt="IMAG0253" loading="lazy" src="/wp-content/uploads/sites/2/historic/IMAG0253_thumb_05AA1D50.jpg" title="IMAG0253"></a></p>
<p>I hope any sessions I submit for NDC2011 in May next year get accepted, I would love to go back.</p>
]]></content:encoded>
    </item>
    <item>
      <title>When roving what I hate most about windows mobile 6 is not knowing what connection it will try to use.</title>
      <link>https://blog.richardfennell.net/posts/when-roving-what-i-hate-most-about-windows-mobile-6-is-not-knowing-what-connection-it-will-try-to-use/</link>
      <pubDate>Thu, 17 Jun 2010 09:50:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/when-roving-what-i-hate-most-about-windows-mobile-6-is-not-knowing-what-connection-it-will-try-to-use/</guid>
      <description>&lt;p&gt;When travelling abroad what I hate most about Windows Mobile 6 is that I have no idea if the phone is going to try to use a local WiFi or 3G. Mobile Outlook seems the worst culprit, it loves 3G over everything else!&lt;/p&gt;
&lt;p&gt;On Mobile 6.5 there are just too many places where you might need to set which data connection to use. This means, for fear of a nightmare phone bill, I tend not to use my phone for data. And you just don’t realise how much you use it at home until you are away.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When travelling abroad what I hate most about Windows Mobile 6 is that I have no idea if the phone is going to try to use a local WiFi or 3G. Mobile Outlook seems the worst culprit, it loves 3G over everything else!</p>
<p>On Mobile 6.5 there are just too many places where you might need to set which data connection to use. This means, for fear of a nightmare phone bill, I tend not to use my phone for data. And you just don’t realise how much you use it at home until you are away.</p>
<p>I hope the connection management is more straightforward on version 7.</p>
]]></content:encoded>
    </item>
    <item>
      <title>All set for Oslo – Off to NDC 2010</title>
      <link>https://blog.richardfennell.net/posts/all-set-for-olso-off-to-ndc-2010/</link>
      <pubDate>Tue, 15 Jun 2010 13:43:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/all-set-for-olso-off-to-ndc-2010/</guid>
      <description>&lt;p&gt;I have sorted all the bits for my presentations so I am ready for my trip to &lt;a href=&#34;http://www.ndc2010.no/&#34;&gt;NDC 2010 in Oslo&lt;/a&gt; tomorrow. Really looking forward to it, the agenda looks great.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have sorted all the bits for my presentations so I am ready for my trip to <a href="http://www.ndc2010.no/">NDC 2010 in Oslo</a> tomorrow. Really looking forward to it, the agenda looks great.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Video of my Microsoft Techdays session on Lab Management</title>
      <link>https://blog.richardfennell.net/posts/video-of-my-microsoft-techdays-session-on-lab-management/</link>
      <pubDate>Fri, 11 Jun 2010 15:15:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/video-of-my-microsoft-techdays-session-on-lab-management/</guid>
      <description>&lt;p&gt;I had not checked the site for a while, but &lt;a href=&#34;http://www.microsoft.com/uk/techdays/resources.aspx&#34;&gt;my Microsoft Techdays session on Lab Management is now available as a video&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I cannot make it embed here, some Community Server issue I assume, so look for it on the developer track on the second day. It is called &amp;lsquo;Putting some testing in your TFS build process&amp;rsquo;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had not checked the site for a while, but <a href="http://www.microsoft.com/uk/techdays/resources.aspx">my Microsoft Techdays session on Lab Management is now available as a video</a></p>
<p>I cannot make it embed here, some Community Server issue I assume, so look for it on the developer track on the second day. It is called &lsquo;Putting some testing in your TFS build process&rsquo;.</p>
]]></content:encoded>
    </item>
    <item>
      <title>How little do you have to do to run a VS/TFS2008 build on a TFS2010 server?</title>
      <link>https://blog.richardfennell.net/posts/how-little-do-you-have-to-do-to-run-a-vstfs2008-build-on-a-tfs2010-server/</link>
      <pubDate>Thu, 10 Jun 2010 17:10:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-little-do-you-have-to-do-to-run-a-vstfs2008-build-on-a-tfs2010-server/</guid>
      <description>&lt;p&gt;As do many people, I have a good number of TFS2008-style builds for legacy applications. I will, in the end, move these over to VS2010 and hence upgrade their build formats to the new 2010 workflow-based one, but for now it would be nice to be able to run them with the minimum of effort. To this end I have done some work to see the minimum I can get away with to allow these builds to run, the aim being to leave the build box as close to a virgin TFS2010 one as possible.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As do many people, I have a good number of TFS2008-style builds for legacy applications. I will, in the end, move these over to VS2010 and hence upgrade their build formats to the new 2010 workflow-based one, but for now it would be nice to be able to run them with the minimum of effort. To this end I have done some work to see the minimum I can get away with to allow these builds to run, the aim being to leave the build box as close to a virgin TFS2010 one as possible.</p>
<p><strong>Basic Setup</strong><br>
My basic build system was</p>
<ul>
<li>Windows Server 2008 32bit with SP2</li>
<li>TFS 2010 build</li>
</ul>
<p><strong>A VS2010 Test</strong><br>
Just to make sure all was OK, I created a basic VS2010 MVC2 project with its default tests. I then created a default build for this. This failed as my build machine did not have the targets to build a web application. I copied the directory</p>
<blockquote>
<p><em>C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications</em></p></blockquote>
<p>up from a development PC to the build box and tried again. The build still failed as it was missing MVC2 assemblies, I downloaded the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=c9ba1fe1-3ba8-439a-9e21-def90a8615a9&amp;displaylang=en">AspNetMVC2_VS2008.exe installer</a> to get these assemblies on my build box. Once this was all done the build worked and the tests passed</p>
<p><strong>A VS2008 Test</strong><br>
So I knew I had a 2010 build system that could build and test a 2010 solution. I next took an existing VS2008 solution’s build; this build had been upgraded as part of the TFS2008-&gt;2010 server upgrade. The build failed completely. This was not surprising as, as well as being for a different version of VS, I knew it was missing a number of custom build tasks.</p>
<p>First I had to install all the custom tasks, for me this was</p>
<ul>
<li><a href="http://msbuildextensionpack.codeplex.com/">Version 4 of the MSBuild extension Pack</a></li>
<li><a href="http://stylecop.codeplex.com/">StyleCop 4</a> remembering to install the MSBuild extensions. Also remember you need to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/10/15/using-stylecop-in-tfs-team-build.aspx">copy the MSBuild.ExtensionPack.StyleCop.dll assembly from the extension directory to the StyleCop one</a> else you get an error.</li>
<li>Typemock 2010</li>
<li><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/12/22/update-in-using-stylecop-in-tfs-team-build.aspx">My own extensions that stores StyleCop data in the build summary</a>. Note this did not initially work as it needed to be rebuilt against the TFS 2010 API, just upgrading the solution to VS2010 seemed to be enough.</li>
</ul>
<p>Once all these imports were present the build tried to compile, and failed with the error.</p>
<p><em>C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(133,11): error MSB4064: The &ldquo;Retries&rdquo; parameter is not supported by the &ldquo;Copy&rdquo; task. Verify the parameter exists on the task, and it is a settable public instance property.<br>
C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(131,5): error MSB4063: The &ldquo;Copy&rdquo; task could not be initialized with its input parameters.  [c:\builds\Moorcroft Website\Debt Collection</em></p>
<p>I spent a lot of time fiddling here, if I replaced my</p>
<p><em>C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets</em></p>
<p>with one form my PCs V9.0 directory all was OK, but then I found <a href="http://codepolice.net/2010/02/22/problems-with-msbuild-tasks-after-playing-with-visual-studio-2010/" title="http://codepolice.net/2010/02/22/problems-with-msbuild-tasks-after-playing-with-visual-studio-2010/">this post that sugested to just removed the offending parameters in the targets file, they are just retry and timouts</a>! Once I did this the build attempted to compile the solution (and I checked it still worked for my VS2010 MVC2 solution).</p>
<p>I now got another set of missing assembly errors. This was due to the lack of MVC1 on the build box, so this was <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=53289097-73ce-43bf-b6a6-35e00103cb4b&amp;displaylang=en">downloaded and installed</a>.</p>
<p>So now my MVC1 web site built, but the associated test project failed. This was because V9 of <em>Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll</em> could not be found; the build box had V10. To address this I changed the assembly reference in the test project from the GAC to a copy of the assembly I included under source control, so it could be pulled to the build server.</p>
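<p>For anyone making the same change, in the test project’s .csproj this means replacing the bare GAC reference with one that has a hint path to the checked-in copy; something like the sketch below (the <em>lib</em> folder name is just an example, use wherever you keep the assembly in source control):</p>
<p><em>&lt;Reference Include="Microsoft.VisualStudio.QualityTools.UnitTestFramework"&gt;<br>
  &lt;HintPath&gt;..\lib\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll&lt;/HintPath&gt;<br>
&lt;/Reference&gt;</em></p>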
<p>Everything now built. However, the tests did not run; there was an error that MSTest could not be found. I edited the tfsbuild.proj to add the path to the MSTest executable</p>
<p><em>&lt;PropertyGroup&gt;<br>
    &lt;!-- TEST ARGUMENTS<br>
     If the RunTest property is set to true, then particular tests within a<br>
     metadata file or test container may be specified here.  This is<br>
     equivalent to the /test switch on mstest.exe.<br>
<br>
     &lt;TestNames&gt;BVT;HighPriority&lt;/TestNames&gt;<br>
    --&gt;<br>
    &lt;TestToolsTaskToolPath&gt;C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE&lt;/TestToolsTaskToolPath&gt;<br>
&lt;/PropertyGroup&gt;</em></p>
<p>If I looked at the build log I could see that MSTest now ran, and I could see all my 100+ tests, but they all failed and were not published to the TFS server. I copied the .TRX results file locally from the build box and opened it in VS2010. I could then see that all the tests failed because <em>Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll</em> could not be seen. I am not sure it is a great solution, but I just dropped a copy into the same directory as MSTest.exe, <em>C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE</em>. The other option I could think of would be to put it in the GAC.</p>
<p>Once this was done most of the tests executed and passed OK, but some still did not. Again checking the .TRX file (as it still did not publish the results) showed this was due to problems with TestContext. It seems the V9 assembly passes TestContext parameters in a different way to the V10 one; the tests that passed did not have any TestContext declared. So, as I did not need TestContext for these tests, I just commented them out.</p>
<p>All my tests now ran, passed, and the results were published. <a href="http://social.msdn.microsoft.com/Forums/en/tfsbuild/thread/7895b3bb-cc90-48ab-9549-241f4106c2a7">The problem was that if a test failed the results were not published, but this seems to be a known issue</a> so there is nothing I can do about that now.</p>
<p>So I now have builds that mostly work, probably well enough for now. I have not needed to install VS2008 or VS2010 on the build box, which is nice. I just have to see how well the builds hold up in production.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF31002: Altering the URL that your TFS 2010 Web Client uses to talk to the AT</title>
      <link>https://blog.richardfennell.net/posts/tf31002-altering-the-url-that-your-tfs-2010-web-client-uses-to-talk-to-the-at/</link>
      <pubDate>Wed, 09 Jun 2010 10:34:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf31002-altering-the-url-that-your-tfs-2010-web-client-uses-to-talk-to-the-at/</guid>
      <description>&lt;p&gt;The Web Client for TFS 2010 (what was called Team System Web Access, TSWA) is now installed as a core part of the Application Tier. It is no longer a separate install as it was in previous versions. This means it is easy to implement, it is just there by default. However, this can raise some problems if you intend to expose the TFS AT via a firewall to the Internet, or use an alias for your TFS AT. This is because, by default, the Web Client uses its host server name as the AT name to connect to; it assumes the AT is local.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The Web Client for TFS 2010 (what was called Team System Web Access, TSWA) is now installed as a core part of the Application Tier. It is no longer a separate install as it was in previous versions. This means it is easy to implement, it is just there by default. However, this can raise some problems if you intend to expose the TFS AT via a firewall to the Internet, or use an alias for your TFS AT. This is because, by default, the Web Client uses its host server name as the AT name to connect to; it assumes the AT is local.</p>
<p>So for example, if you install your AT on SERVER1 you can set the server so that it responds to calls to the name TFS.DOMAIN.COM (after suitable DNS registration and disabling of local loopback checks on the server). So all your TFS clients should be able to access the server via <a href="http://tfs.domain.com:8080/tfs">http://tfs.domain.com:8080/tfs</a>. However, if a user tries to access the server via the Web Client URL of <a href="http://tfs.domain.com:8080/tfs/web">http://tfs.domain.com:8080/tfs/web</a> they will get an error that the inferred local AT (<a href="http://server1:8080/tfs">http://server1:8080/tfs</a>) cannot be resolved from where they are, outside the firewall.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5A0FDB68.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_79BEB530.png" title="image"></a></p>
<p><em>TF31002: Unable to connect to this Team Foundation Server: <a href="https://server1/tfs">https://server1/tfs</a>. Team Foundation Server Url: <a href="https://server1/tfs">https://server1/tfs</a>. Possible reasons for failure include: - The name, port number, or protocol for the Team Foundation Server is incorrect. - The Team Foundation Server is offline. - The password has expired or is incorrect. Technical information (for administrator): The request failed with HTTP status 404: Not Found.</em></p>
<p>This is easily addressed: edit the <em>C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Access\Web\web.config</em> file and explicitly name the AT to be used for the Web Client. This is in the block</p>
<p><em>&lt;tfServers&gt;<br>
     &lt;!-- &lt;add name="http://server:8080" /&gt; --&gt;<br>
&lt;/tfServers&gt;</em></p>
<p>Note here the sample URL is wrong; when you are done it should look something like</p>
<p><em>&lt;tfServers&gt;<br>
  &lt;add name="http://server1:8080/tfs" /&gt;<br>
&lt;/tfServers&gt;</em></p>
<p>Interestingly this fixes the issue in the screenshot above for the lower instance of the error, but not the upper one. However, that is enough to get you into the client and you don’t see the error after that.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Thoughts on the BCS EGM</title>
      <link>https://blog.richardfennell.net/posts/thoughts-on-the-bcs-egm/</link>
      <pubDate>Sun, 06 Jun 2010 20:54:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thoughts-on-the-bcs-egm/</guid>
      <description>&lt;p&gt;I got the paperwork for the &lt;a href=&#34;http://yourfuture.bcs.org/server.php?show=nav.13831&#34;&gt;British Computer Society EGM&lt;/a&gt; this week. This EGM raises some interesting issues; the &lt;a href=&#34;http://www.computerweekly.com/blogs/read-all-about-it/2010/04/the-egm-debate-bcs-v-len-keigh.html&#34;&gt;best overview of the issues I have found seems to be on the Computer Weekly site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So does the EGM supporters’ argument, led by former BCS trustee Len Keighley, have any merit? Yes, it does. Enough to vote out the current BCS management and trustees? Well, I am not so sure.&lt;/p&gt;
&lt;p&gt;I was a student member of the BCS in the late 80s, when I was working for a small PC and LAN dealership. It was not a good experience; I left the BCS with a feeling they did not care for anything bar old mainframe-style IT and had no interest in anything newer than the late 70s.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I got the paperwork for the <a href="http://yourfuture.bcs.org/server.php?show=nav.13831">British Computer Society EGM</a> this week. This EGM raises some interesting issues; the <a href="http://www.computerweekly.com/blogs/read-all-about-it/2010/04/the-egm-debate-bcs-v-len-keigh.html">best overview of the issues I have found seems to be on the Computer Weekly site</a>.</p>
<p>So does the EGM supporters’ argument, led by former BCS trustee Len Keighley, have any merit? Yes, it does. Enough to vote out the current BCS management and trustees? Well, I am not so sure.</p>
<p>I was a student member of the BCS in the late 80s, when I was working for a small PC and LAN dealership. It was not a good experience; I left the BCS with a feeling they did not care for anything bar old mainframe-style IT and had no interest in anything newer than the late 70s.</p>
<p>I rejoined the BCS in the late 90s when I formed Black Marble. This was initially purely as a means to gain my CEng to help in getting more work. I was able to do this due to the number of years experience I had by then. However, I still had to provide references, a very detailed CV and attend a rather daunting panel interview.</p>
<p>At this time I also started to attend <a href="http://www.westyorkshire.bcs.org/">my local branch, West Yorkshire</a>, and I am happy to say that I found it a far more relevant and friendly organisation. I remember attending a local meeting soon after <a href="http://yourfuture.bcs.org/server.php?show=conWebDoc.35581">David Clarke became the BCS CEO</a>, when he was touring the branches to introduce himself, and being impressed by his forward-looking views and plans for the society, which he seems to have acted upon.</p>
<p>So has it all gone too far?</p>
<p>Well, I am not too impressed by the <a href="http://www.bcs.org/server.php?show=nav.10972">CITP</a> qualification; it strikes me as far too easy to get. I thought my CEng application process was fairly light compared to the process required for friends who gained their CEng via the IEE or who work in other chartered engineering professions such as structural or mechanical engineering.</p>
<p>We all know that qualifications can become devalued; there has been no end of these in the realm of IT vendor qualifications such as MCP, CNE etc. The vendors regularly force re-qualification (and often with serious re-banding to a new qualification) with greatly increased difficulty; a tactic that cannot be used for ‘pass once, hold for life’ qualifications such as a CEng. For CITP, I think it arrived devalued. At this time I see little or no respect for the CITP inside the industry or recognition in the world beyond. It certainly does not rank even vaguely equivalent to a ‘real’ CEng. It looks like a tool, as said by the EGM supporters, to drive membership alone.</p>
<p>Also I am not a fan of the <a href="http://www.bcs.org/server.php?show=nav.7849">BCS flagship product SFIAPlus</a>; this for me became a huge barrier to completing my <a href="http://www.bcs.org/server.php?show=nav.11987">CPD</a> record. I used to find it funny that the CPD process for the BCS was to fill out a Word document online, print it out and post it in. I welcomed the move to the online SFIA model, until I had to use it. It seems again designed for the large-company 70s BCS core membership: a nightmare of complexity. It has reached the point where I cannot be bothered to complete it; it would take far too much of my valuable time. It must be said that as I don’t complete it now, there is a chance it has changed (maybe even improved) since I last tried it.</p>
<p>All this said, I think the BCS is far more relevant now than it was 10 years ago. Though I don’t wholeheartedly welcome the move to offering business services, I can see the reason behind it. However, I think the changes are still far from moving the BCS into the same space as other professional societies such as the Royal societies and the BMA, which must be the long-term goal.</p>
<p>So how will I vote at the EGM? Like many, I am still undecided. As I said, the ideas behind the EGM have merit, but an EGM seems a poor way to address the questions; then again, maybe this is the only way to do it if the current management is as closed as Len Keighley makes out.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Hanging Humax PVT9200T PVR</title>
      <link>https://blog.richardfennell.net/posts/hanging-humax-pvt9200t-pvr/</link>
      <pubDate>Sat, 05 Jun 2010 20:46:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/hanging-humax-pvt9200t-pvr/</guid>
      <description>&lt;p&gt;Recently my Humax PVT9200T has started to hang. When I started it up, within a minute or two (usually much less) the picture froze. Prior to this it had been working fine for the six months I had owned it. This seems to be a &lt;a href=&#34;http://www.wirefresh.com/humax-pvt-9200t-pvr-problems-slow-performance-and-freezes/&#34;&gt;common problem for the PVT9200T&lt;/a&gt; of late, there is much &lt;a href=&#34;http://www.wirefresh.com/humax-freeview-pvr-issues-beta-fix-offered/&#34;&gt;talk of a patch on the way&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;For me the immediate solution seems to have been simpler; I just reset it to factory defaults and let it rescan the channels. It has now been working fine for a couple of days. Maybe this simple solution will help other owners of the PVT9200T.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Recently my Humax PVT9200T has started to hang. When I started it up, within a minute or two (usually much less) the picture froze. Prior to this it had been working fine for the six months I had owned it. This seems to be a <a href="http://www.wirefresh.com/humax-pvt-9200t-pvr-problems-slow-performance-and-freezes/">common problem for the PVT9200T</a> of late, there is much <a href="http://www.wirefresh.com/humax-freeview-pvr-issues-beta-fix-offered/">talk of a patch on the way</a>.</p>
<p>For me the immediate solution seems to have been simpler; I just reset it to factory defaults and let it rescan the channels. It has now been working fine for a couple of days. Maybe this simple solution will help other owners of the PVT9200T.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Can you mock out .NET extension methods with Typemock? – the answer is yes</title>
      <link>https://blog.richardfennell.net/posts/can-you-mock-out-net-extension-methods-with-typemock-the-answer-is-yes/</link>
      <pubDate>Sat, 05 Jun 2010 19:54:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/can-you-mock-out-net-extension-methods-with-typemock-the-answer-is-yes/</guid>
      <description>&lt;p&gt;In &lt;a href=&#34;http://dddsouthwest.com/Agenda/tabid/55/Default.aspx&#34;&gt;Kevin Jones’ session at today’s DDD South West on ‘Testing ASP.NET MVC Applications&lt;/a&gt;’ the question was asked if it was possible to mock out .NET extension methods using any of the mocking frameworks. Kevin said he did not think it could be done with Rhino Mocks, but thought it might be possible with Typemock.&lt;/p&gt;
&lt;p&gt;Given my experience with Typemock I could see no reason why you would not be able to fake an extension method with Typemock, though I had never had need to do it. A quick search when I got home provided this &lt;a href=&#34;http://bloggingabout.net/blogs/dennis/archive/2009/02/21/faking-extension-methods.aspx&#34;&gt;post by Dennis van der Stelt which explains how to do it&lt;/a&gt;. It is just as easy as you would expect.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In <a href="http://dddsouthwest.com/Agenda/tabid/55/Default.aspx">Kevin Jones’ session at today’s DDD South West on ‘Testing ASP.NET MVC Applications</a>’ the question was asked if it was possible to mock out .NET extension methods using any of the mocking frameworks. Kevin said he did not think it could be done with Rhino Mocks, but thought it might be possible with Typemock.</p>
<p>Given my experience with Typemock I could see no reason why you would not be able to fake an extension method with Typemock, though I had never had need to do it. A quick search when I got home provided this <a href="http://bloggingabout.net/blogs/dennis/archive/2009/02/21/faking-extension-methods.aspx">post by Dennis van der Stelt which explains how to do it</a>. It is just as easy as you would expect.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Today’s DDD South West</title>
      <link>https://blog.richardfennell.net/posts/todays-ddd-south-west/</link>
      <pubDate>Sat, 05 Jun 2010 18:32:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/todays-ddd-south-west/</guid>
      <description>&lt;p&gt;Thanks to everyone who turned up for my session at DDD South West, and to the organisers for putting the event on so well.&lt;/p&gt;
&lt;p&gt;As my session was basically a 1 hour demo of the testing tools in VS2010 there are no slides for me to upload, but if you have any questions ping me an email. I would say that for a good overview of the subject have a look at the book ‘&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/05/17/looking-for-a-good-overview-of-visual-studio-2010-and-alm.aspx&#34;&gt;Professional Application Lifecycle Management with Visual Studio 2010: with Team Foundation Server 2010&lt;/a&gt;’&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who turned up for my session at DDD South West, and to the organisers for putting the event on so well.</p>
<p>As my session was basically a 1 hour demo of the testing tools in VS2010 there are no slides for me to upload, but if you have any questions ping me an email. I would say that for a good overview of the subject have a look at the book ‘<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/05/17/looking-for-a-good-overview-of-visual-studio-2010-and-alm.aspx">Professional Application Lifecycle Management with Visual Studio 2010: with Team Foundation Server 2010</a>’</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF50609 error when creating a new team project in TFS 2010</title>
      <link>https://blog.richardfennell.net/posts/tf50609-error-when-creating-a-new-team-project-in-tfs-2010/</link>
      <pubDate>Fri, 04 Jun 2010 11:23:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf50609-error-when-creating-a-new-team-project-in-tfs-2010/</guid>
      <description>&lt;p&gt;After upgrading a TFS 2010 RC server (which was previously upgraded from Beta1 to Beta2) to RTM I hit a problem when trying to create a new team project. The error I saw was&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Event Description: TF30162: Task &amp;ldquo;GroupCreation1&amp;rdquo; from Group &amp;ldquo;Groups&amp;rdquo; failed&lt;br&gt;
Exception Type: Microsoft.TeamFoundation.Client.PcwException&lt;br&gt;
Exception Message: TF50609: Unable to retrieve information for action ADMINISTER_TEST_ENVIRONMENTS, it does not exist.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A quick search shows this is a known problem with TFS 2010 Beta 1 to Beta 2 upgrades; strange that it did not show itself on our servers until we went to RTM. &lt;a href=&#34;http://blogs.msdn.com/b/granth/archive/2009/08/31/tfs2010-replacing-beta1-process-templates-with-beta2-versions.aspx&#34;&gt;Grant Holiday provides the solution for Beta 1 to Beta 2&lt;/a&gt;, but the command line required for RTM is slightly different to the one he had to use. The fix involves using both parts of this post; the TF50609 error is a symptom of the underlying TF237091 error.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After upgrading a TFS 2010 RC server (which was previously upgraded from Beta1 to Beta2) to RTM I hit a problem when trying to create a new team project. The error I saw was</p>
<p><em>Event Description: TF30162: Task &ldquo;GroupCreation1&rdquo; from Group &ldquo;Groups&rdquo; failed<br>
Exception Type: Microsoft.TeamFoundation.Client.PcwException<br>
Exception Message: TF50609: Unable to retrieve information for action ADMINISTER_TEST_ENVIRONMENTS, it does not exist.</em></p>
<p>A quick search shows this is a known problem with TFS 2010 Beta 1 to Beta 2 upgrades; strangely, it did not show itself on our servers until we went to RTM. <a href="http://blogs.msdn.com/b/granth/archive/2009/08/31/tfs2010-replacing-beta1-process-templates-with-beta2-versions.aspx">Grant Holiday provides the solution for Beta 1 to Beta 2</a>, but the command line required for RTM is slightly different to the one he had to use. The fix involves using both parts of this post; the TF50609 error is a symptom of the underlying TF237091 error</p>
<ul>
<li>Make sure you have exported a good working process template from another TPC</li>
<li>Delete all the old (Beta 2 process templates) from problem TPC</li>
<li><strong>ALSO</strong> delete the new RTM process templates, in my case MSF Agile V5 (if you don’t do this and then reload the template the witadmin edit seems to have no effect)</li>
<li>Run the witadmin command (note there is no /P: option, unlike the command Grant used, and it ends with ‘dimension’ not ‘dim’)</li>
</ul>
<blockquote>
<p><em>witadmin changefield /collection:http://myserver:8080/tfs/myprojectcollection /n:Microsoft.VSTS.TCM.AutomationStatus /reportingtype:dimension</em></p></blockquote>
<ul>
<li>Import the known good process template (if you had tried to do this before the witadmin edit you would have got the error ‘TF237091: Actual reporting settings for the field Microsoft.VSTS.TCM.AutomationStatus are different from those specified in the XML. Changing these settings is prohibited.’)</li>
<li>You should now be able to create new team projects</li>
<li>You need to repeat this for each TPC you have that shows the problem</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>TF261007 error message when attaching a collection</title>
      <link>https://blog.richardfennell.net/posts/tf261007-error-message-when-attaching-a-collection/</link>
      <pubDate>Thu, 03 Jun 2010 09:22:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf261007-error-message-when-attaching-a-collection/</guid>
      <description>&lt;p&gt;Whist moving Team Project Collections (TPC) from our old TFS2010 Beta/RC server to new production hardware I got a &lt;a href=&#34;http://blogs.infosupport.com/blogs/marcelv/archive/2010/03/19/tfs-2010-tf261007-error-message-when-attaching-a-collection.aspx&#34;&gt;TF261007, exactly as detailed in Marcel de Vries post&lt;/a&gt;. However, I was in the better position than he was as I still had full access to my original server, so I thought I could go back, re-attached the TPC on the old server, run the command&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;tfsconfig lab /delete /collectionName:MyCollection /External&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;re-detach and all would be OK.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whist moving Team Project Collections (TPC) from our old TFS2010 Beta/RC server to new production hardware I got a <a href="http://blogs.infosupport.com/blogs/marcelv/archive/2010/03/19/tfs-2010-tf261007-error-message-when-attaching-a-collection.aspx">TF261007, exactly as detailed in Marcel de Vries post</a>. However, I was in the better position than he was as I still had full access to my original server, so I thought I could go back, re-attached the TPC on the old server, run the command</p>
<blockquote>
<p><em>tfsconfig lab /delete /collectionName:MyCollection /External</em></p></blockquote>
<p>re-detach and all would be OK.</p>
<p>This was not the case; you still get the same TF261007 error. It seems that if you have a TPC that has ever been on a TFS 2010 RC server with Lab Management configured, then it can only be moved to a new server that also has Lab Management configured, or at least has the System Center VMM console installed and pointed at a VMM host.</p>
<p>I checked this with Microsoft and was told it is a known issue with the Lab Management RC that will be fixed in the RTM; for now there is no workaround other than the one Marcel detailed.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the West Yorkshire BCS meeting on Agile Methods</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-west-yorkshire-bcs-meeting-on-agile-methods/</link>
      <pubDate>Sat, 29 May 2010 09:56:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-west-yorkshire-bcs-meeting-on-agile-methods/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/08/25/welcome-to-the-past-of-software-development.aspx&#34;&gt;After my comments on the QDD session&lt;/a&gt; at a past BCS meeting I have been asked to return and speak at the &lt;a href=&#34;http://www.westyorkshire.bcs.org/&#34;&gt;West Yorkshire BCS&lt;/a&gt; meeting on the 30th of June to give an overview about Agile methods.&lt;/p&gt;
&lt;p&gt;This will be very much a management overview for people who are unaware of Agile methods, hence I will touch on XP, Scrum, Crystal Clear and Kanban and try to compare and contrast.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/08/25/welcome-to-the-past-of-software-development.aspx">After my comments on the QDD session</a> at a past BCS meeting I have been asked to return and speak at the <a href="http://www.westyorkshire.bcs.org/">West Yorkshire BCS</a> meeting on the 30th of June to give an overview about Agile methods.</p>
<p>This will be very much a management overview for people who are unaware of Agile methods, hence I will touch on XP, Scrum, Crystal Clear and Kanban and try to compare and contrast.</p>
<p>But if you can’t wait to find out about Agile, why not come to the <a href="http://www.agileyorkshire.org/">next Agile Yorkshire meeting on the 8th</a>?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Great book on Kanban</title>
      <link>https://blog.richardfennell.net/posts/great-book-on-kanban/</link>
      <pubDate>Sat, 29 May 2010 09:45:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/great-book-on-kanban/</guid>
      <description>&lt;p&gt;I won’t bother to repeat what &lt;a href=&#34;http://gojko.net/2010/05/18/finally-an-authoritative-source-on-the-kanban-method/&#34;&gt;Gojko has said&lt;/a&gt; on &lt;a href=&#34;http://www.amazon.co.uk/Kanban-David-J-Anderson/dp/0984521402/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1275125846&amp;amp;sr=1-1&#34;&gt;David J, Anderson’s book on Kanban&lt;/a&gt;, other than to say I agree wholeheartedly.&lt;/p&gt;
&lt;p&gt;I too have been looking for a good introduction book on Kanban as applied to software development (so &lt;strong&gt;K&lt;/strong&gt;anban as opposed to kanban, note the capital K). This is exactly what this book does. OK, I know this information is out there on the web (&lt;a href=&#34;http://www.limitedwipsociety.org/&#34; title=&#34;http://www.limitedwipsociety.org/&#34;&gt;http://www.limitedwipsociety.org/&lt;/a&gt;); but there are times, especially when introducing potentially non-technical people to development methodologies, when you want to point people at an easily accessible introduction they can dip into. This is that book; have a look, it is worth it&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I won’t bother to repeat what <a href="http://gojko.net/2010/05/18/finally-an-authoritative-source-on-the-kanban-method/">Gojko has said</a> on <a href="http://www.amazon.co.uk/Kanban-David-J-Anderson/dp/0984521402/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1275125846&amp;sr=1-1">David J, Anderson’s book on Kanban</a>, other than to say I agree wholeheartedly.</p>
<p>I too have been looking for a good introduction book on Kanban as applied to software development (so <strong>K</strong>anban as opposed to kanban, note the capital K). This is exactly what this book does. OK, I know this information is out there on the web (<a href="http://www.limitedwipsociety.org/" title="http://www.limitedwipsociety.org/">http://www.limitedwipsociety.org/</a>); but there are times, especially when introducing potentially non-technical people to development methodologies, when you want to point people at an easily accessible introduction they can dip into. This is that book; have a look, it is worth it</p>
<p><a href="http://www.amazon.co.uk/Kanban-David-J-Anderson/dp/0984521402/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1275125846&amp;sr=1-1"><img loading="lazy" src="http://gojko.s3.amazonaws.com/kanban.jpg"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Next Agile Yorkshire meeting is on the 8th of June on Agile BI Testing</title>
      <link>https://blog.richardfennell.net/posts/next-agile-yorkshire-meeting-is-on-the-8th-of-june-on-agile-bi-testing/</link>
      <pubDate>Wed, 26 May 2010 11:22:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-agile-yorkshire-meeting-is-on-the-8th-of-june-on-agile-bi-testing/</guid>
      <description>&lt;p&gt;June’s Agile Yorkshire meeting is on the experiences of the BI development team at Skipton Building Society on bringing automated testing to data warehousing using Fitnesse and dbFit.&lt;/p&gt;
&lt;p&gt;For more details &lt;a href=&#34;http://www.agileyorkshire.org/event-announcements/08Jun2010&#34;&gt;check out the Agile Yorkshire site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>June’s Agile Yorkshire meeting is on the experiences of the BI development team at Skipton Building Society on bringing automated testing to data warehousing using Fitnesse and dbFit.</p>
<p>For more details <a href="http://www.agileyorkshire.org/event-announcements/08Jun2010">check out the Agile Yorkshire site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Setting the display value in databound combo boxes on Access 2010 web forms</title>
      <link>https://blog.richardfennell.net/posts/setting-the-display-value-in-databound-combo-boxes-on-access-2010-web-forms/</link>
      <pubDate>Fri, 21 May 2010 10:13:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/setting-the-display-value-in-databound-combo-boxes-on-access-2010-web-forms/</guid>
      <description>&lt;p&gt;When you drop a combo box on a Access 2010 web form and databind to a query or table with a value column and display column (e.g. the query select id, description from table1) you don’t get what you would expect. It shows just the value e.g. a set of integers in the combo.&lt;/p&gt;
&lt;p&gt;This is not the case if you are using a standard Access client form: the wizard that gets run sorts it all out for you, and if it does not, you just set the ‘Bound Column’ property to sort it out.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you drop a combo box on a Access 2010 web form and databind to a query or table with a value column and display column (e.g. the query select id, description from table1) you don’t get what you would expect. It shows just the value e.g. a set of integers in the combo.</p>
<p>This is not the case if you are using a standard Access client form: the wizard that gets run sorts it all out for you, and if it does not, you just set the ‘Bound Column’ property to sort it out.</p>
<p>On web forms the fix is simple, but not obvious.</p>
<ol>
<li>Databind the combo as you normally do to the query/table.</li>
<li>Go to the combo’s properties format tab</li>
<li>Set the column count to 2</li>
<li>Set the column widths to 0cm;2cm (basically hide the first column and show the second)</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_568926C8.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7D573D08.png" title="image"></a></p>
<p>Once this is done it works fine</p>
]]></content:encoded>
    </item>
    <item>
      <title>More on my Vodafone 403 errors – an explanation at least</title>
      <link>https://blog.richardfennell.net/posts/more-on-my-vodafone-403-errors-an-explanation-at-least/</link>
      <pubDate>Fri, 21 May 2010 09:51:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-on-my-vodafone-403-errors-an-explanation-at-least/</guid>
      <description>&lt;p&gt;The next chapter of &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/05/13/stupid-support-suggestions-from-vodafone-about-403-server-proxy-errors.aspx&#34;&gt;the ongoing saga&lt;/a&gt;. I had the 403 errors again today when using my phone as a modem to test out firewall. As I was in the office I had time to call Vodafone support. As expect I had to speak to a succession of people:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Person 1 – call centre advisor, passed me straight onto ‘support’&lt;/li&gt;
&lt;li&gt;Person 2 – support advisor, but seemed lost when I spoke about web protocols; I think it was more ‘making a phone ring’ support&lt;/li&gt;
&lt;li&gt;Person 3 – 2nd line support, success, as soon as I explained my problem they said ‘yes that happens’.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The problem is down, as I had suspected, to congestion in their WAN network (not the 3G network, though the density of cells is a factor). When you try to use HTTP, Vodafone routes a request to their authentication server to see if your account is allowed to connect to the site. By default they block a list of adult/premium web sites (this is a service you can switch on or off on your account). The problem is that at busy times this validation service is overloaded, so their systems get no response as to whether the site is allowed, assume the site you asked for is restricted, and return the 403 error. Once this happens you seem to have to make a new 3G data connection (reset the phone, move cell or let the connection time out) to get it to try again.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The next chapter of <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/05/13/stupid-support-suggestions-from-vodafone-about-403-server-proxy-errors.aspx">the ongoing saga</a>. I had the 403 errors again today when using my phone as a modem to test out firewall. As I was in the office I had time to call Vodafone support. As expect I had to speak to a succession of people:</p>
<ul>
<li>Person 1 – call centre advisor, passed me straight onto ‘support’</li>
<li>Person 2 – support advisor, but seemed lost when I spoke about web protocols; I think it was more ‘making a phone ring’ support</li>
<li>Person 3 – 2nd line support, success, as soon as I explained my problem they said ‘yes that happens’.</li>
</ul>
<p>The problem is down, as I had suspected, to congestion in their WAN network (not the 3G network, though the density of cells is a factor). When you try to use HTTP, Vodafone routes a request to their authentication server to see if your account is allowed to connect to the site. By default they block a list of adult/premium web sites (this is a service you can switch on or off on your account). The problem is that at busy times this validation service is overloaded, so their systems get no response as to whether the site is allowed, assume the site you asked for is restricted, and return the 403 error. Once this happens you seem to have to make a new 3G data connection (reset the phone, move cell or let the connection time out) to get it to try again.</p>
<p>I have now had the restricted service removed from my account; we will see if this helps. I fear it will not, as any connection still has to ask the authentication server whether the site is restricted, even if my account is set to restrict no sites.</p>
<p>So it seems the volume of smartphones is really hurting Vodafone&rsquo;s internal network. They seem to have the 3G/Edge capacity; shame they do not yet have enough server/WAN capacity to back it up.</p>
<p>What I don’t understand is why this has taken me so long to get a sensible answer. Can’t they publish a statement on this problem on their support forums or web site?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Agile Yorkshire moves to the 2nd Tuesday in the month</title>
      <link>https://blog.richardfennell.net/posts/agile-yorkshire-moves-to-the-2nd-tuesday-in-the-month/</link>
      <pubDate>Wed, 19 May 2010 14:38:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agile-yorkshire-moves-to-the-2nd-tuesday-in-the-month/</guid>
      <description>&lt;p&gt;Don’t be a muppet like me and turn up for an Agile Yorkshire meeting on the 2nd Wednesday of the month out of habit. Due to the move to the great new location of the &lt;a href=&#34;http://www.ntileeds.co.uk/find-us/&#34;&gt;Old Broadcasting House&lt;/a&gt; the regular date has had to moved to the 2nd Tuesday.&lt;/p&gt;
&lt;p&gt;Bit of a pain for me as I struggle to make Tuesdays.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Don’t be a muppet like me and turn up for an Agile Yorkshire meeting on the 2nd Wednesday of the month out of habit. Due to the move to the great new location of the <a href="http://www.ntileeds.co.uk/find-us/">Old Broadcasting House</a> the regular date has had to moved to the 2nd Tuesday.</p>
<p>Bit of a pain for me as I struggle to make Tuesdays.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading from Office 2010 RC to RTM</title>
      <link>https://blog.richardfennell.net/posts/upgrading-from-office-2010-rc-to-rtm/</link>
      <pubDate>Wed, 19 May 2010 09:25:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-from-office-2010-rc-to-rtm/</guid>
      <description>&lt;p&gt;I got this commonly seen version error when I tried to upgrade my RC version of Office 2010 to RTM&lt;/p&gt;
&lt;p&gt;Setup is unable to proceed due to the following error(s):&lt;br&gt;
Microsoft Office 2010 does not support upgrading from a prerelease version of Microsoft Office 2010. You must first uninstall any prerelease versions of Microsoft Office 2010 products and associated technologies. Correct the issue(s) listed above and re-run setup.&lt;/p&gt;
&lt;p&gt;I had already removed my RC Office and Visio. Turns out the problem was that I had the Outlook Hotmail Connector (Beta) installed too. Once this was removed the install worked fine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I got this commonly seen version error when I tried to upgrade my RC version of Office 2010 to RTM</p>
<p>Setup is unable to proceed due to the following error(s):<br>
Microsoft Office 2010 does not support upgrading from a prerelease version of Microsoft Office 2010. You must first uninstall any prerelease versions of Microsoft Office 2010 products and associated technologies. Correct the issue(s) listed above and re-run setup.</p>
<p>I had already removed my RC Office and Visio. Turns out the problem was that I had the Outlook Hotmail Connector (Beta) installed too. Once this was removed the install worked fine.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Looking for a good overview of Visual Studio 2010 and ALM?</title>
      <link>https://blog.richardfennell.net/posts/looking-for-a-good-overview-of-visual-studio-2010-and-alm/</link>
      <pubDate>Mon, 17 May 2010 19:09:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/looking-for-a-good-overview-of-visual-studio-2010-and-alm/</guid>
      <description>&lt;p&gt;Visual Studio 2010 provides many new features to aid Application Lifecycle Management. Learning its capabilities can be a bit daunting, especially if you are new to Team Foundation Server.&lt;/p&gt;
&lt;p&gt;So enter the new book ‘&lt;a href=&#34;http://www.amazon.co.uk/gp/product/0470484268/ref=s9_simh_gw_p14_i1?pf_rd_m=A3P5ROKL5A1OLE&amp;amp;pf_rd_s=center-1&amp;amp;pf_rd_r=0FRJRHN25S0K5RY50YC5&amp;amp;pf_rd_t=101&amp;amp;pf_rd_p=467198433&amp;amp;pf_rd_i=468294&#34;&gt;Professional Application Lifecycle Management with Visual Studio 2010: with Team Foundation Server 2010&lt;/a&gt;’ by &lt;strong&gt;Mickey Gousset, Brian Keller, Ajoy Krishnamoorthy and Martin Woodward&lt;/strong&gt;. This provides a great overview of all the key features of Visual Studio and TFS 2010, presented with a view of how TFS gives end-to-end delivery of an ALM process.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Visual Studio 2010 provides many new features to aid Application Lifecycle Management. Learning its capabilities can be a bit daunting, especially if you are new to Team Foundation Server.</p>
<p>So enter the new book ‘<a href="http://www.amazon.co.uk/gp/product/0470484268/ref=s9_simh_gw_p14_i1?pf_rd_m=A3P5ROKL5A1OLE&amp;pf_rd_s=center-1&amp;pf_rd_r=0FRJRHN25S0K5RY50YC5&amp;pf_rd_t=101&amp;pf_rd_p=467198433&amp;pf_rd_i=468294">Professional Application Lifecycle Management with Visual Studio 2010: with Team Foundation Server 2010</a>’ by <strong>Mickey Gousset, Brian Keller, Ajoy Krishnamoorthy and Martin Woodward</strong>. This provides a great overview of all the key features of Visual Studio and TFS 2010, presented with a view of how TFS gives end-to-end delivery of an ALM process.</p>
<p>Don’t go expecting this book to tell you everything about TFS; even at 600 pages it cannot be that detailed, as it is a huge product (you think SharePoint is big? It is just a subset of TFS!). To address this, the book contains many links off to relevant sources, both on MSDN and blogs, to fill in the extra detail you are bound to need concerning TFS and general development processes. Think of it as a “get you started” book, or a “what’s new in this release”, not a day-to-day administrator’s guide.</p>
<p>My one complaint, and it is very minor, is that it reads a bit like a collection of articles as opposed to a single book. However, given it has four authors and the scope of the subject it covers, this is forgivable.</p>
<p>So if you are considering TFS 2010, whether as a developer, a sys-admin or a manager, this book will give you a good introduction to what you can achieve in your ALM process. Well worth a read.</p>
<p><a href="http://www.amazon.co.uk/gp/product/0470484268/ref=s9_simh_gw_p14_i1?pf_rd_m=A3P5ROKL5A1OLE&amp;pf_rd_s=center-1&amp;pf_rd_r=0FRJRHN25S0K5RY50YC5&amp;pf_rd_t=101&amp;pf_rd_p=467198433&amp;pf_rd_i=468294"><img alt="51GBrWYIk1L__BO2,204,203,200_PIsitb-sticker-arrow-click,TopRight,35,-76_AA300_SH20_OU02_" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/51GBrWYIk1L__BO2204203200_PIsitbstickerarrowclickTopRight3576_AA300_SH20_OU02__3B0C6D48.jpg" title="51GBrWYIk1L__BO2,204,203,200_PIsitb-sticker-arrow-click,TopRight,35,-76_AA300_SH20_OU02_"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Notes from our TFS 2010 upgrade</title>
      <link>https://blog.richardfennell.net/posts/notes-from-our-tfs-2010-upgrade/</link>
      <pubDate>Mon, 17 May 2010 17:39:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/notes-from-our-tfs-2010-upgrade/</guid>
      <description>&lt;p&gt;Got time today to start our proper internal TFS 2010 rollout. This involves linking up the new TFS to our SharePoint 2010 farm and merging in the contents from 2008 TFS and 2010 RC servers.&lt;/p&gt;
&lt;p&gt;All went fairly well; our SharePoint farm caused a few problems with its rather nice redirection feature. This allows you to gradually move SharePoint content from an older server to a new one: a page is requested from the new server; if it is there it is used, and if it is not, the content on the old server is used. This caused a couple of problems during the install of TFS: content was not on the new SharePoint server, so it redirected to the old one, when the correct response was that the content was not there and TFS needed to install or configure it. Once we realised this was going on there were no major issues with the TFS 2010 installation.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Got time today to start our proper internal TFS 2010 rollout. This involves linking up the new TFS to our SharePoint 2010 farm and merging in the contents from 2008 TFS and 2010 RC servers.</p>
<p>All went fairly well; our SharePoint farm caused a few problems with its rather nice redirection feature. This allows you to gradually move SharePoint content from an older server to a new one: a page is requested from the new server; if it is there it is used, and if it is not, the content on the old server is used. This caused a couple of problems during the install of TFS: content was not on the new SharePoint server, so it redirected to the old one, when the correct response was that the content was not there and TFS needed to install or configure it. Once we realised this was going on there were no major issues with the TFS 2010 installation.</p>
<p>So we had a nice new TFS 2010 install. The next task was to move over content from the 2008 server. Our 2008 server was virtualised, so I snapshotted it, made sure we had a good SQL backup and did an in-place upgrade, and this is where I hit a problem. Our new TFS 2010 install and the 2008 server were using the same central SQL server. Our new TFS 2010 install had created a <strong>TFS_Configuration</strong> DB; I had had the option to put in a prefix, giving a name such as <strong>TFS_2010_Configuration</strong>, but decided I did not need it. This choice came back to bite me. When you do an in-place upgrade of 2008 to 2010 it rolls up all the old TFS DBs into the new structure, creating a new DB with, you guessed it, the name <strong>TFS_Configuration</strong>. If you are doing a new install you can alter this DB name, but this is not the case for an upgrade: you have to use the standard name. So I was a bit stuck. The solution was to:</p>
<ol>
<li>take my new TFS 2010 server temporarily off line</li>
<li>rename the TFS_Configuration DB</li>
<li>do the in-place upgrade of the 2008 server to 2010, so it creates a new TFS_Configuration DB</li>
<li>detach the newly upgraded Team Project Collection in the TFS admin console</li>
<li>take my upgraded TFS 2008 server off line</li>
<li>rename its TFS_Configuration DB (if really confident you could delete it)</li>
<li>rename the original TFS_Configuration DB back</li>
<li>restart the main TFS 2010 server</li>
<li>attach the upgraded TPC in the TFS admin console</li>
</ol>
<p>Simple really!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Stupid support suggestions from Vodafone about 403 server proxy errors</title>
      <link>https://blog.richardfennell.net/posts/stupid-support-suggestions-from-vodafone-about-403-server-proxy-errors/</link>
      <pubDate>Thu, 13 May 2010 19:45:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/stupid-support-suggestions-from-vodafone-about-403-server-proxy-errors/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/getting-sick-of-seeing-the-vodafone-error-http-error-403-the-service-you-requested-is-restricted.aspx&#34;&gt;A while ago I blogged about the problems I was having with 403 error when trying to use my HTC phone as a 3G modem or using Opera or IE on the phone itself&lt;/a&gt;. When I initially complained Vodafone refunded my data fee for the month and said engineers were onto this known intermittent issue. No grief, Ok I did not want to see 403 errors but I was happy with the customer service, fast and no quibble.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/getting-sick-of-seeing-the-vodafone-error-http-error-403-the-service-you-requested-is-restricted.aspx">A while ago I blogged about the problems I was having with 403 error when trying to use my HTC phone as a 3G modem or using Opera or IE on the phone itself</a>. When I initially complained Vodafone refunded my data fee for the month and said engineers were onto this known intermittent issue. No grief, Ok I did not want to see 403 errors but I was happy with the customer service, fast and no quibble.</p>
<p>But the 403 errors have not gone away; in fact I am seeing them more often, and not just when in the Square Mile in London. When I complained again I got a different level of service: the ‘it is your handset’ response. This was even though I had clearly said FTP and HTTPS (to an Exchange server) were fine; it was only HTTP, via the phone’s two browsers and from a PC using the phone as a 3G modem, i.e. traffic that they proxy.</p>
<p>They outdid themselves today with this suggestion</p>
<p><em>1. Switch off your phone<br>
2. Remove battery and SIM<br>
3. Clean it with clean cotton cloth<br>
4. Insert battery and SIM after 15 to 20 minutes</em></p>
<p>How exactly is this meant to help with a server-side error?</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF237195 error when creating a new team project</title>
      <link>https://blog.richardfennell.net/posts/tf237195-error-when-creating-a-new-team-project/</link>
      <pubDate>Thu, 13 May 2010 19:14:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf237195-error-when-creating-a-new-team-project/</guid>
      <description>&lt;p&gt;After upgrade TFS 2010 RC to RTM you might see a problem when you try to create a new team project. The creation fails with error:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Module: Engine&lt;br&gt;
Event Description: TF30162: Task &amp;ldquo;Queries&amp;rdquo; from Group &amp;ldquo;WorkItemTracking&amp;rdquo; failed&lt;br&gt;
Exception Type: Microsoft.TeamFoundation.Client.PcwException&lt;br&gt;
Exception Message: TF237195: The following user name is not supported: [SERVER]$$PROJECTCOLLECTIONADMINGROUP$$&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The answer is simple, but I had missed it!&lt;/p&gt;
&lt;p&gt;Make sure that you are creating your new team project from a copy of the 2010 RTM version of Team Explorer. If you use the RC version you will see the error.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After upgrade TFS 2010 RC to RTM you might see a problem when you try to create a new team project. The creation fails with error:</p>
<p><em>Module: Engine<br>
Event Description: TF30162: Task &ldquo;Queries&rdquo; from Group &ldquo;WorkItemTracking&rdquo; failed<br>
Exception Type: Microsoft.TeamFoundation.Client.PcwException<br>
Exception Message: TF237195: The following user name is not supported: [SERVER]$$PROJECTCOLLECTIONADMINGROUP$$</em></p>
<p>The answer is simple, but I had missed it!</p>
<p>Make sure that you are creating your new team project from a copy of the 2010 RTM version of Team Explorer. If you use the RC version you will see the error.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My article on Typemock Insider Blog</title>
      <link>https://blog.richardfennell.net/posts/my-article-on-typemock-insider-blog/</link>
      <pubDate>Tue, 11 May 2010 09:32:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-article-on-typemock-insider-blog/</guid>
      <description>&lt;p&gt;I have an article posted on the &lt;a href=&#34;http://blog.typemock.com/2010/05/so-how-did-first-typemock-partner.html?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A&amp;#43;Typemock&amp;#43;%28The&amp;#43;Typemock&amp;#43;Insider%29&#34;&gt;Typemock Insider blog&lt;/a&gt; about the Typemock Academy I attended.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have an article posted on the <a href="http://blog.typemock.com/2010/05/so-how-did-first-typemock-partner.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A&#43;Typemock&#43;%28The&#43;Typemock&#43;Insider%29">Typemock Insider blog</a> about the Typemock Academy I attended.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2010 RC –&amp;gt; RTM upgrade problem with the warehouse</title>
      <link>https://blog.richardfennell.net/posts/tfs-2010-rc-rtm-upgrade-problem-with-the-warehouse/</link>
      <pubDate>Mon, 10 May 2010 12:46:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2010-rc-rtm-upgrade-problem-with-the-warehouse/</guid>
      <description>&lt;p&gt;I recently did an in-place upgrade of a TFS 2010 RC  to RTM box. All appeared to be going fine, all the verification in the upgrade wizard passed, but when I hit configure the first step failed. It said there was a missing  dbo.sysobjects in the warehouse DB. I then had to resort the configuration DB as it was partly upgraded to try again as the DB was left partially upgraded.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently did an in-place upgrade of a TFS 2010 RC to RTM box. All appeared to be going fine, and all the verification checks in the upgrade wizard passed, but when I hit configure the first step failed, saying there was a missing dbo.sysobjects in the warehouse DB. I then had to restore the configuration DB, as it had been left partially upgraded, before I could try again.</p>
<p>I got round this problem by telling the upgrade to ignore the reporting side of TFS. The upgrade then completed without issue. I was able to re-add the reporting via the TFS administration console afterwards (optional configuration of features in 2010 is really useful!). I actually chose to recreate the warehouse DBs from scratch to make sure I was bringing over no corruption. Once TFS had rebuilt all the DBs and cubes, all was working fine.</p>
<p>So the technical tip: if you are upgrading TFS 2010 and hit a problem with the reporting sub-system, consider bypassing its upgrade and just rebuilding that system when you are finished.</p>
<p><em>[Update 19th May 2010]</em></p>
<p><em>Saw the problem again today on another site doing an RC-&gt;RTM upgrade. The exact error message you get is:</em></p>
<p><em>Exception Message: TF255356: The following error occurred when configuring the Team Foundation databases: Error occurred while executing servicing step Upgrade Warehouse for component Tfs2010WarehouseBeta2ToRTM during Tfs2010Beta2ToRTM: Errors in the metadata manager. The dimension with the name of &lsquo;Work Item Tree&rsquo; already exists in the &lsquo;Tfs_Analysis&rsquo; database.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>TF246064 upgrading TFS 2010RC to RTM</title>
      <link>https://blog.richardfennell.net/posts/tf246064-upgrading-tfs-2010rc-to-rtm/</link>
      <pubDate>Tue, 04 May 2010 10:53:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf246064-upgrading-tfs-2010rc-to-rtm/</guid>
      <description>&lt;p&gt;I have a basic installation of TFS 2010 running on my Windows 7 laptop. This is really useful for testing build customisation and the like, a great new feature for 2010. Today I got around to trying to upgrade it from RC to RTM, but on the verification test I got the error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;[ Configuration Database ] TF255407: An error occurred while attempting to validate the configuration database. Error message: TF246064: No database could be found for the following host: TestCollection. The host has the following ID: eaf3c572-8657-4268-9852-3d73a799cdf5. To fix this problem, use the TFSConfig RemapDBs command-line tool and make sure that you specify the SQL Server instance that contains this database.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have a basic installation of TFS 2010 running on my Windows 7 laptop. This is really useful for testing build customisation and the like, a great new feature for 2010. Today I got around to trying to upgrade it from RC to RTM, but on the verification test I got the error</p>
<p><em>[ Configuration Database ] TF255407: An error occurred while attempting to validate the configuration database. Error message: TF246064: No database could be found for the following host: TestCollection. The host has the following ID: eaf3c572-8657-4268-9852-3d73a799cdf5. To fix this problem, use the TFSConfig RemapDBs command-line tool and make sure that you specify the SQL Server instance that contains this database.</em></p>
<p>Turns out I had been a bit stupid. I had created a couple of test team project collections (TPCs) whilst testing some attach/detach scenarios for a client. I had deleted the underlying SQL DBs, but not the TPCs in TFS, I guess as part of some test, but for the life of me I cannot remember why.</p>
<p>So as the error message suggested I tried to run the TFSConfig command</p>
<p><em>C:\Program Files\Microsoft Team Foundation Server 2010\Tools&gt;tfsconfig remapdbs /sqlinstances:Typhoon\sqlexpress /databaseName:typhoon\sqlexpress;TFs_configuration<br>
Logging sent to file C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs\CFG_CFG_AT_0504_090019.log<br>
Command: remapDBs<br>
TfsConfig - Team Foundation Server Configuration Tool<br>
Copyright (c) Microsoft Corporation. All rights reserved.<br>
The Team Foundation Server configuration could not be reconfigured. The following errors were encountered:</em></p>
<p><em>TF246064: No database could be found for the following host: test1. The host has the following ID: 4e9b737a-b666-48f4-9411-20249aed7ae0. To fix this problem, use the TFSConfig RemapDBs command-line tool and make sure that you specify the SQL Server instance that contains this database.<br>
TF246064: No database could be found for the following host: TestCollection. The host has the following ID: eaf3c572-8657-4268-9852-3d73a799cdf5. To fix this problem, use the TFSConfig RemapDBs command-line tool and make sure that you specify the SQL Server instance that contains this database.</em></p>
<p>So, the same error as the upgrade wizard. I also tried detaching and deleting the TPC, but got:</p>
<p><em>C:\Program Files\Microsoft Team Foundation Server 2010\Tools&gt;tfsconfig collection /delete /collectionName:TestCollection<br>
Logging sent to file C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs\CFG_TPC_AT_0504_090828.log<br>
Command: collection<br>
TfsConfig - Team Foundation Server Configuration Tool<br>
Copyright (c) Microsoft Corporation. All rights reserved.<br>
Could not find file &lsquo;C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config&rsquo;.</em></p>
<p>This failed because I had already removed the 2010 RC instance, so there was no web.config to read.</p>
<p>So what to do?</p>
<p>I opened SQL Management Studio, connected to my local SQL instance, opened the Tfs_Configuration DB and found the tbl_ServiceHost table. I then removed the rows that referenced the TPCs whose DBs I had deleted. Of course, I made sure I had a backup of the DB before I started.</p>
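<p>As a sketch, the clean-up was something along these lines. The host ID column name shown here is an assumption from memory, so check the actual tbl_ServiceHost schema on your TFS version before running anything, and back up Tfs_Configuration first:</p>
<pre tabindex="0"><code>-- Back up Tfs_Configuration before touching it!
-- Remove the service host rows for the orphaned collections
-- (HostId column name is an assumption; verify against your schema)
DELETE FROM tbl_ServiceHost
WHERE HostId IN (&#39;4e9b737a-b666-48f4-9411-20249aed7ae0&#39;,  -- test1
                 &#39;eaf3c572-8657-4268-9852-3d73a799cdf5&#39;); -- TestCollection
</code></pre>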
<p>Once this was done the upgrade wizard passed verification and completed without error.</p>
<p>Now, I would not recommend this as a good way to work, but it did get me out of a hole. In my case it was only a test system, so if I lost it, it was not that important. However, it is good to know there is a reasonably simple way to address problems with missing TPC DBs.</p>
<p><strong>Technical Tip</strong>: Make sure all your TPCs are valid before you start the upgrade process, so you don’t see this issue in the first place!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Slides from my Typemock Academy session</title>
      <link>https://blog.richardfennell.net/posts/slides-from-my-typemock-academy-session/</link>
      <pubDate>Tue, 27 Apr 2010 14:34:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/slides-from-my-typemock-academy-session/</guid>
      <description>&lt;p&gt;A PDF version of my slides for my session at the Typemock Academy on SharePoint testing with Typemock are available &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2010/Typemock%20Academy%20Oslo%20-%20Increasing%20quality%20and%20reducing%20development%20time%20for%20SharePoint%20Webparts.pdf&#34;&gt;on the Black Marble site&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A PDF version of my slides for my session at the Typemock Academy on SharePoint testing with Typemock are available <a href="http://www.blackmarble.co.uk/ConferencePapers/2010/Typemock%20Academy%20Oslo%20-%20Increasing%20quality%20and%20reducing%20development%20time%20for%20SharePoint%20Webparts.pdf">on the Black Marble site</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Busy week, off to see Typemock in Oslo</title>
      <link>https://blog.richardfennell.net/posts/busy-week-off-to-see-typemock-in-oslo/</link>
      <pubDate>Sun, 25 Apr 2010 19:47:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/busy-week-off-to-see-typemock-in-oslo/</guid>
      <description>&lt;p&gt;I am off to Oslo, for the first time, for the &lt;a href=&#34;http://site.typemock.com/typemock-academy&#34;&gt;Typemock Academy&lt;/a&gt; this week. The event has managed to survive the disruption of the Icelandic volcano.&lt;/p&gt;
&lt;p&gt;A great chance to meet a pile of people I have only spoken to on the phone or via email in the past. Looking forward to it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am off to Oslo, for the first time, for the <a href="http://site.typemock.com/typemock-academy">Typemock Academy</a> this week. The event has managed to survive the disruption of the Icelandic volcano.</p>
<p>A great chance to meet a pile of people I have only spoken to on the phone or via email in the past. Looking forward to it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Mocking Sharepoint for Testing</title>
      <link>https://blog.richardfennell.net/posts/mocking-sharepoint-for-testing/</link>
      <pubDate>Thu, 22 Apr 2010 10:47:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mocking-sharepoint-for-testing/</guid>
      <description>&lt;p&gt;In my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx&#34;&gt;previous post&lt;/a&gt; I talked about using Isolator to mock Sharepoint to aid the speed of the development process. I find this a productive way of working, but it does not really help in the realm of automated testing. You need a way to programmatically explore a webpart, preferably outside of SharePoint to check its correctness.&lt;/p&gt;
&lt;p&gt;You could use the methods in my previous post and some form of automated web test, but this does mean you need to spin up a web server of some description (IIS, Cassini etc.) and deploy to it. An alternative is to look at the &lt;a href=&#34;http://www.sm-art.biz/Ivonna.aspx&#34;&gt;Typemock add-in Ivonna&lt;/a&gt;. This creates a fake web server to load your page and tools to explore it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">previous post</a> I talked about using Isolator to mock Sharepoint to aid the speed of the development process. I find this a productive way of working, but it does not really help in the realm of automated testing. You need a way to programmatically explore a webpart, preferably outside of SharePoint to check its correctness.</p>
<p>You could use the methods in my previous post and some form of automated web test, but this does mean you need to spin up a web server of some description (IIS, Cassini etc.) and deploy to it. An alternative is to look at the <a href="http://www.sm-art.biz/Ivonna.aspx">Typemock add-in Ivonna</a>. This creates a fake web server to load your page and tools to explore it.</p>
<p>I will describe how to use this technique using the same example as my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">previous post</a>.</p>
<p>Previously I had placed all the code to fake out SharePoint in the Page_Load event of the test harness page. As I am now trying to write a unit/integration test, I think it better to move this into the test itself, so I would delete the code I placed in the Page_Load event other than any property settings on the actual webpart. I would refactor the lines creating the fake URL context and fake SPSite into some helper methods and call them from my new test. I would then load the page in Ivonna and check its values.</p>
<p>I have tried to show this below, using a couple of techniques to show how to get to components in the page.</p>
<pre tabindex="0"><code>[TestMethod, Isolated]
public void LoadWebPage_SpSimpleWebPart_3EntriesInList()
{
    // Arrange
    TestHelpers.CreateFakeSPSite();
    TestHelpers.CreateFakeURL();

    TestSession session = new TestSession(); // Start each test with this
    WebRequest request = new WebRequest(&#34;/SpSimpleTest.aspx&#34;); // Create a WebRequest object

    // Act
    WebResponse response = session.ProcessRequest(request); // Process the request

    // Assert
    // Check the page loaded
    Assert.IsNotNull(response.Page);

    // Use the Ivonna extension method to find the control
    var wp = response.Page.FindRecursive&lt;DemoWebParts.SpSimpleWebPart&gt;(&#34;wp1&#34;);
    Assert.IsNotNull(wp);

    // We have to dig into the control tree knowing its format:
    // webpart/table/row/cell/control
    var label = ((TableRow)wp.Controls[0].Controls[0]).Cells[1].Controls[0] as Label;
    Assert.IsNotNull(label);
    Assert.AreEqual(&#34;http://mockedsite.com&#34;, label.Text);

    var list = ((TableRow)wp.Controls[0].Controls[1]).Cells[1].Controls[0] as DropDownList;
    Assert.IsNotNull(list);
    Assert.AreEqual(3, list.Items.Count);
}
</code></pre><p>Now I have to say I had high hopes for this technique, but it has not been as useful as I had hoped. I suspect this is due to the rapidly changing UI designs of clients&rsquo; webparts making these tests too brittle. We have found the &lsquo;mark 1 eyeball&rsquo; more appropriate in many cases, as is so often true for UI testing.</p>
<p>However, I do see this as being a great option for smoke testing in long-running projects with fairly stable designs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Mocking Sharepoint for Design with Typemock Isolator</title>
      <link>https://blog.richardfennell.net/posts/mocking-sharepoint-for-design-with-typemock-isolator/</link>
      <pubDate>Thu, 22 Apr 2010 09:39:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mocking-sharepoint-for-design-with-typemock-isolator/</guid>
      <description>&lt;p&gt;I have found the most productive use of Typemock Isolator with SharePoint is to use it to reduce the time of the F5 cycle (build/deploy/use). If you are using a VPC of some type to do your SharePoint development, as many do, this process can easily take a couple of minutes, and these minutes add up.&lt;/p&gt;
&lt;p&gt;In my experience webparts usually make fairly simple use of the underlying SharePoint site, by this I mean that they get some data from an SPList(s) or remote data source and render it in some way. Or the reverse, they gather data that they drop to an SPList(s) or remote data source.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have found the most productive use of Typemock Isolator with SharePoint is to use it to reduce the time of the F5 cycle (build/deploy/use). If you are using a VPC of some type to do your SharePoint development, as many do, this process can easily take a couple of minutes, and these minutes add up.</p>
<p>In my experience webparts usually make fairly simple use of the underlying SharePoint site, by this I mean that they get some data from an SPList(s) or remote data source and render it in some way. Or the reverse, they gather data that they drop to an SPList(s) or remote data source.</p>
<p>So why not remove the requirement for SharePoint during development from the equation? Mock it out with Isolator and an ASP.NET test site.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_6B7E03D9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_62ADFB8D.png" title="image"></a></p>
<p>Consider this scenario…</p>
<ul>
<li>You have a part that lists people name and email address in a combo box</li>
<li>This data is stored in an SPList</li>
<li>The webpart must also list the URL of the site it is hosted on.</li>
</ul>
<p>All fairly straightforward, but how to implement the wrapper around it?</p>
<ul>
<li>First we create an ASP.NET web application and an ASP.NET web page in the same solution as the webparts class library.</li>
<li>In this web application, reference the webpart project.</li>
<li>Edit the .ASPX to register the webpart assembly and declare an instance of the webpart within a WebPartZone on the page. It is best to do this declaratively to avoid any issues with the ASP.NET personalisation system.</li>
</ul>
<pre tabindex="0"><code> 1: &lt;%@ Page Language=&#34;C#&#34; AutoEventWireup=&#34;true&#34; CodeBehind=&#34;SpSimpleTest.aspx.cs&#34; Inherits=&#34;TestWebSite.SpSimpleTest&#34; %&gt;
</code></pre><p>2: &lt;%@ Register Assembly=&ldquo;DemoWebParts&rdquo; Namespace=&ldquo;DemoWebParts&rdquo; TagPrefix=&ldquo;wp&rdquo; %&gt;</p>
<pre tabindex="0"><code> 3: &lt;!DOCTYPE html PUBLIC &#34;-//W3C//DTD XHTML 1.0 Transitional//EN&#34; &#34;http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd&#34;\&gt;
</code></pre><p>4: &lt;html xmlns=&ldquo;<a href="http://www.w3.org/1999/xhtml%22">http://www.w3.org/1999/xhtml"</a>&gt;</p>
<pre tabindex="0"><code> 5: &lt;head runat\=&#34;server&#34;\&gt;
</code></pre><p>6:     &lt;title&gt;Untitled Page&lt;/title&gt;</p>
<pre tabindex="0"><code> 7: &lt;/head\&gt;
</code></pre><p>8: &lt;body&gt;</p>
<pre tabindex="0"><code> 9:     &lt;form id\=&#34;form1&#34; runat\=&#34;server&#34;\&gt;
</code></pre><p>10:     &lt;div&gt;</p>
<pre tabindex="0"><code> 11:         &lt;asp:WebPartManager ID\=&#34;WebPartManager1&#34; runat\=&#34;server&#34;\&gt;
</code></pre><p>12:         &lt;/asp:WebPartManager&gt;</p>
<pre tabindex="0"><code> 13:         &lt;asp:WebPartZone ID\=&#34;WebPartZone1&#34; runat\=&#34;server&#34; \&gt;
</code></pre><p>14:             &lt;ZoneTemplate&gt;</p>
<pre tabindex="0"><code> 15:                 &lt;wp:SpSimpleWebPart ID\=&#34;wp1&#34; runat\=&#34;server&#34; /&gt;
</code></pre><p>16:             &lt;/ZoneTemplate&gt;</p>
<pre tabindex="0"><code> 17:         &lt;/asp:WebPartZone\&gt;
</code></pre><p>18:     &lt;/div&gt;</p>
<pre tabindex="0"><code> 19:     &lt;/form\&gt;
</code></pre><p>20: &lt;/body&gt;</p>
<pre tabindex="0"><code> 21: &lt;/html\&gt;
```

*   If you browse to this page you will see a null object exception as it loads the web part but it fails when it calls to SharePoint. This is because we have not mocked that yet. Seeing this error does rely on the fact you have a nice big global Try/Catch in the webparts Render() and CreateChildControls() methods. Note a technique I normally recommend but I think vital for this type of component else errors get swallowed by SharePoint

[![image](/wp-content/uploads/sites/2/historic/image_thumb_0CADF9B6.png &#34;image&#34;)](/wp-content/uploads/sites/2/historic/image_4226BBDB.png)

*   Add a reference to the SharePoint assemblies and Typemock Isolator (which of course you need a licensed copy of, or the 30 day demo) in the Test web application
*   In the Page\_Load event you now need to add the code to do the faking, I think the comments below explain what is going on. (There is a good argument to refactor much of this into a TestHelper class so it is easy to reuse).

```
 1: using System;
</code></pre><p>2: using System.Collections.Generic;</p>
<pre tabindex="0"><code> 3: using System.Linq;
</code></pre><p>4: using System.Web;</p>
<pre tabindex="0"><code> 5: using System.Web.UI;
</code></pre><p>6: using System.Web.UI.WebControls;</p>
<pre tabindex="0"><code> 7: using TypeMock.ArrangeActAssert;
</code></pre><p>8: using Microsoft.SharePoint;</p>
<pre tabindex="0"><code> 9:  
</code></pre><p>10: namespace TestWebSite</p>
<pre tabindex="0"><code> 11: {
</code></pre><p>12:     public partial class SpSimpleTestWithMockedSP : System.Web.UI.Page</p>
<pre tabindex="0"><code> 13:     {
</code></pre><p>14:         protected void Page_Load(object sender, EventArgs e)</p>
<pre tabindex="0"><code> 15:         {
</code></pre><p>16: </p>
<pre tabindex="0"><code> 17:             // set the name of the list to read data from
</code></pre><p>18:             wp1.DataList = &ldquo;ListName&rdquo;;</p>
<pre tabindex="0"><code> 19:  
</code></pre><p>20:             // set the fake return value for the currently running context</p>
<pre tabindex="0"><code> 21:             // we can us null as the current parameter as this is what this web page will return
</code></pre><p>22:             Isolate.WhenCalled(() =&gt; Microsoft.SharePoint.WebControls.SPControl.GetContextSite(null).Url).WillReturn(&ldquo;<a href="http://mockedsite.com">http://mockedsite.com</a>&rdquo;);</p>
<pre tabindex="0"><code> 23:  
</code></pre><p>24: </p>
<pre tabindex="0"><code> 25:             // create the mock SP Site we are using
</code></pre><p>26:             SPSite fakeSite = Isolate.Fake.Instance<SPSite>();</p>
<pre tabindex="0"><code> 27:             Isolate.Swap.NextInstance&lt;SPSite&gt;().With(fakeSite);
</code></pre><p>28: </p>
<pre tabindex="0"><code> 29:             // create a fke collection to hold our test data
</code></pre><p>30:             var itemCollection = new List<SPListItem>();</p>
<pre tabindex="0"><code> 31:             for (int i = 0; i &lt; 3; i++)
</code></pre><p>32:             {</p>
<pre tabindex="0"><code> 33:                 var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();
</code></pre><p>34:                 itemCollection.Add(fakeItem);</p>
<pre tabindex="0"><code> 35:  
</code></pre><p>36:                 Isolate.WhenCalled(() =&gt; fakeItem[&ldquo;Title&rdquo;]).WillReturn(string.Format(&ldquo;Title {0}&rdquo;, i));</p>
<pre tabindex="0"><code> 37:                 Isolate.WhenCalled(() =&gt; fakeItem\[&#34;Email Address&#34;\]).WillReturn(string.Format(&#34;email{0}@email.com&#34;, i));
</code></pre><p>38: </p>
<pre tabindex="0"><code> 39:  
</code></pre><p>40:             }</p>
<pre tabindex="0"><code> 41:  
</code></pre><p>42:             // set what is returned when a call is made for the list</p>
<pre tabindex="0"><code> 43:             Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists\[&#34;ListName&#34;\].Items).WillReturnCollectionValuesOf(itemCollection);
</code></pre><p>44:         }</p>
<pre tabindex="0"><code> 45:     }
</code></pre><p>46: }</p>
<pre tabindex="0"><code>
*   Once this code is entered you can browse to the page again and you should see a working webpart that thinks it is talking to a real SharePoint site.

[![image](/wp-content/uploads/sites/2/historic/image_thumb_13610339.png &#34;image&#34;)](/wp-content/uploads/sites/2/historic/image_337C0FF6.png)

So now you have a way to run you SharePoint dependant webpart outside of SharePoint.This means the F5 cycle is reduced to seconds as opposed to minutes, and debugging is loads easier.

What I find this particularly useful for is sorting CSS and JavaScript issues, where is loads of tiny edits to text files.

**THIS DOES NOT MEAN YOU DON’T NEED TO TEST IN SHAREPOINT,** but it does means you can avoid much of it. The SharePoint phase become far more a test/QA/UAT process as opposed a development one. This model as a developer productivity enabling one.

In my next post I will talk about Mocking Sharepoint for Test with Typemock Isolator
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>Post SharePoint Evolution thoughts</title>
      <link>https://blog.richardfennell.net/posts/post-sharepoint-evolution-thoughts/</link>
      <pubDate>Wed, 21 Apr 2010 16:01:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-sharepoint-evolution-thoughts/</guid>
      <description>&lt;p&gt;I am on the way home from my whistle stop visit to the &lt;a href=&#34;http://www.sharepointevolutionconference.com/&#34;&gt;SharePoint Evolution conference&lt;/a&gt;. I must say congratulations to the organisers for putting on such a successful event given all the problems they have had related to speakers and air travel, well done.&lt;/p&gt;
&lt;p&gt;My slides, on Testing Webparts with Typemock, will appear on the conference site soon, but I thought it a good idea to link here to previous posts I have done on the subject. Imagine how surprised I was to find I had never written them! If you search my blog you will find links to older versions of today&amp;rsquo;s slide stack, but no coding samples.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am on the way home from my whistle stop visit to the <a href="http://www.sharepointevolutionconference.com/">SharePoint Evolution conference</a>. I must say congratulations to the organisers for putting on such a successful event given all the problems they have had related to speakers and air travel, well done.</p>
<p>My slides, on Testing Webparts with Typemock, will appear on the conference site soon, but I thought it a good idea to link here to previous posts I have done on the subject. Imagine how surprised I was to find I had never written them! If you search my blog you will find links to older versions of today&rsquo;s slide stack, but no coding samples.</p>
<p>So I will do a couple of posts as quick as I can that go through the demos I did today, so expect one on ‘<a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-design-with-typemock-isolator.aspx">Mocking Sharepoint for Design’</a> and another for <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/04/22/mocking-sharepoint-for-testing.aspx">‘Mocking Sharepoint for Tests’</a></p>
<p><strong>Updated 22 Apr 2010 – added links to the posts as I wrote them</strong></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Developer Day South West</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-developer-day-south-west/</link>
      <pubDate>Fri, 16 Apr 2010 14:03:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-developer-day-south-west/</guid>
      <description>&lt;p&gt;I have just heard I will be speaking at Developer Day South West on June the 5th. My subject is &lt;strong&gt;Using the new Developer and Test features of VS 2010 to track down and fix bugs;&lt;/strong&gt; this is basically the same session as I gave at our TechDays fringe event yesterday.&lt;/p&gt;
&lt;p&gt;Hope to see you there&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://dddsouthwest.com/&#34;&gt;&lt;img alt=&#34;DDDSouthWest2BadgeSmall[1]&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWest2BadgeSmall1_1F0950EA.png&#34; title=&#34;DDDSouthWest2BadgeSmall[1]&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just heard I will be speaking at Developer Day South West on June the 5th. My subject is <strong>Using the new Developer and Test features of VS 2010 to track down and fix bugs;</strong> this is basically the same session as I gave at our TechDays fringe event yesterday.</p>
<p>Hope to see you there</p>
<p><a href="http://dddsouthwest.com/"><img alt="DDDSouthWest2BadgeSmall[1]" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWest2BadgeSmall1_1F0950EA.png" title="DDDSouthWest2BadgeSmall[1]"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Todays #UKTechDays Leeds Fringe event</title>
      <link>https://blog.richardfennell.net/posts/todays-uktechdays-leeds-fringe-event/</link>
      <pubDate>Thu, 15 Apr 2010 13:28:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/todays-uktechdays-leeds-fringe-event/</guid>
      <description>&lt;p&gt;Thanks to everyone who turned up for Matt Nunn’s and my sessions in Leeds today. All seemed to go well and were well received.&lt;/p&gt;
&lt;p&gt;As my slide stack consisted of a welcome screen and an agenda, I don’t really see the point of posting them on the web. If you want to see the end-to-end story of VS2010 ALM, I would suggest looking at the &lt;a href=&#34;http://channel9.msdn.com/shows/VS2010Launch/&#34;&gt;videos on Channel 9&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who turned up for Matt Nunn’s and my sessions in Leeds today. All seemed to go well and were well received.</p>
<p>As my slide stack consisted of a welcome screen and an agenda, I don’t really see the point of posting them on the web. If you want to see the end-to-end story of VS2010 ALM, I would suggest looking at the <a href="http://channel9.msdn.com/shows/VS2010Launch/">videos on Channel 9</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Video of MVP at Microsoft Techdays events</title>
      <link>https://blog.richardfennell.net/posts/video-of-mvp-at-microsoft-techdays-events/</link>
      <pubDate>Thu, 15 Apr 2010 13:19:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/video-of-mvp-at-microsoft-techdays-events/</guid>
      <description>&lt;p&gt;Whilst at the UK TechDays I did a video with &lt;a href=&#34;http://www.robmiles.com&#34;&gt;Rob Miles&lt;/a&gt;. &lt;a href=&#34;http://blogs.msdn.com/mvpawardprogram/archive/2010/04/15/watch-mvps-help-lauch-visual-studio-2010.aspx&#34;&gt;You can see the results here; we are near the end.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;A full video of my session should be available on the Microsoft UK Techdays site soon&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst at the UK TechDays I did a video with <a href="http://www.robmiles.com">Rob Miles</a>. <a href="http://blogs.msdn.com/mvpawardprogram/archive/2010/04/15/watch-mvps-help-lauch-visual-studio-2010.aspx">You can see the results here; we are near the end.</a></p>
<p>A full video of my session should be available on the Microsoft UK Techdays site soon</p>
]]></content:encoded>
    </item>
    <item>
      <title>Useful tool to move Reporting Services reports between servers</title>
      <link>https://blog.richardfennell.net/posts/useful-tool-to-move-reporting-services-reports-between-servers/</link>
      <pubDate>Wed, 14 Apr 2010 13:44:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/useful-tool-to-move-reporting-services-reports-between-servers/</guid>
      <description>&lt;p&gt;When you are moving TFS content around between servers, as many people will be doing as they implement new 2010 servers and want to make use of Team Project Collections, you often have to move Reporting Services reports. In many cases you find people have lost the customised .RDL files they uploaded in the first place, and don’t want to restore the whole Reporting Services DB.&lt;/p&gt;
&lt;p&gt;So how to extract an RDL file from a Reporting Services instance and moving it to a new Reporting Services instance?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you are moving TFS contents around between servers, as many people will be doing as they implement new 2010 servers and want to make use of Team Project Collections, you often have to move Reporting Services reports. In many cases you find people have lost the customised .RDL files they uploaded in the first place, and don’t want to restore the whole Reporting Services DB.</p>
<p>So how do you extract an RDL file from one Reporting Services instance and move it to another?</p>
<p>Well, have a look at <a href="http://www.sqldbatips.com/showarticle.asp?ID=62">Reporting Services Scripter</a>, which makes the process fairly straightforward.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at the Typemock Academy in Oslo in a couple of weeks</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-the-typemock-academy-in-olso-in-a-couple-of-weeks/</link>
      <pubDate>Wed, 14 Apr 2010 11:54:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-the-typemock-academy-in-olso-in-a-couple-of-weeks/</guid>
      <description>&lt;p&gt;I am really pleased to say I will be speaking at the &lt;a href=&#34;http://site.typemock.com/typemock-academy&#34;&gt;first Typemock Partner Academy&lt;/a&gt; in Oslo the week after next. I will be talking about how we at Black Marble have used Isolator to improve the speed and quality of our SharePoint development.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really pleased to say I will be speaking at the <a href="http://site.typemock.com/typemock-academy">first Typemock Partner Academy</a> in Oslo the week after next. I will be talking about how we at Black Marble have used Isolator to improve the speed and quality of our SharePoint development.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Follow up on my TechDays Lab Management session about the cost.</title>
      <link>https://blog.richardfennell.net/posts/follow-up-on-my-techdays-lab-management-session-about-the-cost/</link>
      <pubDate>Tue, 13 Apr 2010 20:09:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-up-on-my-techdays-lab-management-session-about-the-cost/</guid>
      <description>&lt;p&gt;I have seen &lt;a href=&#34;http://twitter.com/#search?q=%23uktechdays%20lab&#34;&gt;a few tweets&lt;/a&gt; after my Techdays session on Lab Management that it sounded expensive. It is true that Lab Management is an extra license to purchase over and above Visual Studio. However, under the new licensing model for VS you have to remember that as an MSDN subscriber you are already licensed for a TFS CAL and Server, and if you have VS2010 Ultimate you are licensed for the Test Professional tools. With this collection of tools you can do many of the things I showed in my session:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have seen <a href="http://twitter.com/#search?q=%23uktechdays%20lab">a few tweets</a> after my Techdays session on Lab Management that it sounded expensive. It is true that Lab Management is an extra license to purchase over and above Visual Studio. However, under the new licensing model for VS you have to remember that as an MSDN subscriber you are already licensed for a TFS CAL and Server, and if you have VS2010 Ultimate you are licensed for the Test Professional tools. With this collection of tools you can do many of the things I showed in my session:</p>
<ul>
<li>You can have a build server that runs MSTest (or nUnit) tests</li>
<li>You can create Gated Check Ins</li>
<li>You can use the new WF based build process (which you can extend yourself) that can include deployment to test boxes</li>
</ul>
<p>Personally I think you can do this far more easily than with a CC.NET/NAnt/SVN CI stack. That is not to say this stack is not really powerful, but you need to make more of a learning/time investment to get these separate tools running together.</p>
<p>As I said in my session, I think the key advantage of VS/TFS is the end-to-end ALM story: you can track and report on everything. The various components might not do everything their competitors do, but in my opinion they are, in this release, very close in functionality.</p>
<p>As to Lab Manager, as soon as you look at any comprehensive test environment, virtual or not, there are going to be costs. Irrespective of how you build it you are going to need tin and operating system licenses. It is not going to be a cheap option, but any serious test lab strategy will need some investment, and I think the Lab Management license itself will not be the major part of the cost.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Goodbye Team System hello Visual Studio ALM</title>
      <link>https://blog.richardfennell.net/posts/goodbye-team-system-hello-visual-studio-alm/</link>
      <pubDate>Tue, 13 Apr 2010 08:44:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/goodbye-team-system-hello-visual-studio-alm/</guid>
      <description>&lt;p&gt;Now that &lt;a href=&#34;http://weblogs.asp.net/scottgu/archive/2010/04/12/visual-studio-2010-and-net-4-released.aspx&#34;&gt;VS2010 has launched&lt;/a&gt; the branding of ‘Team System’ has gone; so I am no longer a MVP (Team System) but a MVP (Visual Studio ALM).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Now that <a href="http://weblogs.asp.net/scottgu/archive/2010/04/12/visual-studio-2010-and-net-4-released.aspx">VS2010 has launched</a> the branding of ‘Team System’ has gone; so I am no longer a MVP (Team System) but a MVP (Visual Studio ALM).</p>
]]></content:encoded>
    </item>
    <item>
      <title>Down in London for TechDays</title>
      <link>https://blog.richardfennell.net/posts/down-in-london-for-techdays/</link>
      <pubDate>Mon, 12 Apr 2010 17:42:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/down-in-london-for-techdays/</guid>
      <description>&lt;p&gt;I am down in London for the first two days of the Microsoft TechDays event. The first day has been very interesting; there seems to be a good buzz over the new testing features. So it looks good for my session tomorrow, when I will be presenting on Lab Manager. Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am down in London for the first two days of the Microsoft TechDays event. The first day has been very interesting; there seems to be a good buzz over the new testing features. So it looks good for my session tomorrow, when I will be presenting on Lab Manager. Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Error running Ivonna tests with ASP.NET 4</title>
      <link>https://blog.richardfennell.net/posts/error-running-ivonna-tests-with-asp-net-4/</link>
      <pubDate>Mon, 12 Apr 2010 17:26:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/error-running-ivonna-tests-with-asp-net-4/</guid>
      <description>&lt;p&gt;When I tried to run a working Ivonna test, previously targeted at .NET 3.5, against .NET 4 I found my test failing with the error&lt;/p&gt;
&lt;p&gt;------ Test started: Assembly: Webpart.Tests.dll ------&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;Bad Request&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
Setup information
Physical Web path: C:ProjectsTestTypeMockSampleTestWebSite
Actual path: C:\Users\fez\AppData\Local\Temp\Temporary ASP.NET Files\root\156567f2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Turns out that the fix is simple: you have to use an absolute path, i.e. the / in front of BasicTest.aspx is vital&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;[TestMethod, Isolated]
public void LoadWebPage_HelloWorldWebPart_NoError()
{
    TestSession session = new TestSession(); // Start each test with this
    WebRequest request = new WebRequest(&amp;#34;/BasicTest.aspx&amp;#34;); // Create a WebRequest object
    WebResponse response = session.ProcessRequest(request); // Process the request
    System.Web.UI.Page page = response.Page;

    // Check the page loaded
    Assert.IsNotNull(page);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So this is a temporary workaround for now; for me it is not an issue having to use an absolute path. I understand from the &lt;a href=&#34;http://www.sm-art.biz/Ivonna.aspx&#34;&gt;writer of Ivonna&lt;/a&gt; that this issue will be investigated further now that .NET 4.0 has RTM’d&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When I tried to run a working Ivonna test, previously targeted at .NET 3.5, against .NET 4 I found my test failing with the error</p>
<p>------ Test started: Assembly: Webpart.Tests.dll ------</p>
<pre tabindex="0"><code>&lt;html&gt;&lt;body&gt;Bad Request&lt;/body&gt;&lt;/html&gt;
Setup information
Physical Web path: C:ProjectsTestTypeMockSampleTestWebSite
Actual path: C:\Users\fez\AppData\Local\Temp\Temporary ASP.NET Files\root\156567f2
</code></pre>
<p>Turns out that the fix is simple: you have to use an absolute path, i.e. the / in front of BasicTest.aspx is vital</p>
<pre tabindex="0"><code>[TestMethod, Isolated]
public void LoadWebPage_HelloWorldWebPart_NoError()
{
    TestSession session = new TestSession(); // Start each test with this
    WebRequest request = new WebRequest(&#34;/BasicTest.aspx&#34;); // Create a WebRequest object
    WebResponse response = session.ProcessRequest(request); // Process the request
    System.Web.UI.Page page = response.Page;

    // Check the page loaded
    Assert.IsNotNull(page);
}
</code></pre>
<p>So this is a temporary workaround for now; for me it is not an issue having to use an absolute path. I understand from the <a href="http://www.sm-art.biz/Ivonna.aspx">writer of Ivonna</a> that this issue will be investigated further now that .NET 4.0 has RTM’d</p>
]]></content:encoded>
    </item>
    <item>
      <title>Experiences upgrading to Lab Manager 2010 RC</title>
      <link>https://blog.richardfennell.net/posts/experiences-upgrading-to-lab-manager-2010-rc/</link>
      <pubDate>Thu, 01 Apr 2010 21:49:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/experiences-upgrading-to-lab-manager-2010-rc/</guid>
      <description>&lt;p&gt;Whilst preparing for my session at &lt;a href=&#34;http://www.microsoft.com/uk/techdays/daydev.aspx&#34;&gt;Techdays&lt;/a&gt; I have upgraded &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/02/05/at-last-my-creature-it-lives.aspx&#34;&gt;my 2010 Beta2 Lab Manager&lt;/a&gt; to RC. I am pleased to say &lt;a href=&#34;http://blogs.msdn.com/lab_management/archive/2010/02/08/lab-management-2010-beta2-to-rc-upgrade-guide.aspx&#34;&gt;the process&lt;/a&gt; is far more straightforward than the initial install. Again I used the Lab Manager team blog as my guide; they have revised the ‘Getting started with Lab Management 2010 RC’ Parts &lt;a href=&#34;http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-1.aspx&#34;&gt;1&lt;/a&gt;, &lt;a href=&#34;http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-2.aspx&#34;&gt;2&lt;/a&gt;, &lt;a href=&#34;http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-3.aspx&#34;&gt;3&lt;/a&gt; and &lt;a href=&#34;http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-4.aspx&#34;&gt;4&lt;/a&gt; posts to help.&lt;/p&gt;
&lt;p&gt;I was able to skip through the initial OS/VMM setup as this has not altered. I chose to throw away my TestVM (with its Beta agents) and create a new one. I upgraded my TFS 2010 instance to RC. The only awkward bit was that I had to extract the RC version of the Lab Build Template from a newly created RC Team Project Collection and load it over my existing Beta2 version. I then recreated the Lab E2E build – and it just worked. My basic sample created for Beta2 built and tested OK.&lt;/p&gt;</description>
&lt;p&gt;I was able to skip through the initial OS/VMM setup as this has not altered. I chose to throw away my TestVM (with its Beta agents) and create a new one. I upgraded my TFS 2010 instance to RC. The only awkward bit was that I had to extract the RC version of the Lab Build Template from a newly created RC Team Project Collection and load it over my existing Beta2 version. I then recreated the Lab E2E build – and it just worked. My basic sample created for Beta2 build and tested OK.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst preparing for my session at <a href="http://www.microsoft.com/uk/techdays/daydev.aspx">Techdays</a> I have upgraded <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/02/05/at-last-my-creature-it-lives.aspx">my 2010 Beta2 Lab Manager</a> to RC. I am pleased to say <a href="http://blogs.msdn.com/lab_management/archive/2010/02/08/lab-management-2010-beta2-to-rc-upgrade-guide.aspx">the process</a> is far more straightforward than the initial install. Again I used the Lab Manager team blog as my guide; they have revised the ‘Getting started with Lab Management 2010 RC’ Parts <a href="http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-1.aspx">1</a>, <a href="http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-2.aspx">2</a>, <a href="http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-3.aspx">3</a> and <a href="http://blogs.msdn.com/lab_management/archive/2010/02/16/getting-started-with-lab-management-vs2010-rc-part-4.aspx">4</a> posts to help.</p>
<p>I was able to skip through the initial OS/VMM setup as this has not altered. I chose to throw away my TestVM (with its Beta agents) and create a new one. I upgraded my TFS 2010 instance to RC. The only awkward bit was that I had to extract the RC version of the Lab Build Template from a newly created RC Team Project Collection and load it over my existing Beta2 version. I then recreated the Lab E2E build – and it just worked. My basic sample created for Beta2 built and tested OK.</p>
<p>I got confident then and so decided to build my own application with Coded UI tests, and surprise, surprise, it worked. OK, this was after some reconfiguring of the Test VM to allow interactive UI testing, and a few dead ends, but basically the underlying system worked and I think I now have a working understanding of it.</p>
<p>The whole process is just much slicker than it was, and the online <a href="http://msdn.microsoft.com/en-us/library/dd380687%28v=VS.100%29.aspx">MSDN documentation is much more useful too</a>. This is certainly becoming a more accessible product, but you still need a good mixture of ITPro and Developer skills that I bet many teams are going to struggle to find in a single person. The best thing I can recommend is to build your own system step by step (following the blog posts) so you know how all the moving parts interact. Once you do this you will find it less daunting.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Post AIC 2010 Thoughts</title>
      <link>https://blog.richardfennell.net/posts/post-aic-2010-thoughts/</link>
      <pubDate>Thu, 01 Apr 2010 17:52:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-aic-2010-thoughts/</guid>
      <description>&lt;p&gt;I was at the &lt;a href=&#34;http://msdn.microsoft.com/en-gb/architecture/ee959240.aspx&#34;&gt;AIC 2010 conference yesterday&lt;/a&gt;, which I enjoyed &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/05/09/a-day-at-the-architect-insight-conference.aspx&#34;&gt;more than last year&lt;/a&gt;. The most interesting session was &lt;a href=&#34;http://en.wikipedia.org/wiki/Ivar_Jacobson&#34;&gt;Ivar Jacobson&lt;/a&gt;. He discussed how immature our industry is, with its disconnect between software engineers and academic computer scientists. Something &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/pages/232.aspx&#34;&gt;I have commented on before when discussing if our industry should be licensed.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;He discussed how we need to build our industry upon a commonly agreed kernel of simple best practice activities, not upon some currently fashionable process, whether it be Waterfall, CMMI, Agile, Lean etc. We must identify the best practices from all processes (and all processes have something to offer) and teach them to students at university as this kernel of activities, as well as teaching how to compose them into more complex practices and hence whole development processes. This would provide new entrants to our industry with a base of reusable techniques that they can use for their whole career, irrespective of fashion. The key idea is that any current process model can be built by composing this kernel of basic activities.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was at the <a href="http://msdn.microsoft.com/en-gb/architecture/ee959240.aspx">AIC 2010 conference yesterday</a>, which I enjoyed <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/05/09/a-day-at-the-architect-insight-conference.aspx">more than last year</a>. The most interesting session was <a href="http://en.wikipedia.org/wiki/Ivar_Jacobson">Ivar Jacobson</a>. He discussed how immature our industry is, with its disconnect between software engineers and academic computer scientists. Something <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/pages/232.aspx">I have commented on before when discussing if our industry should be licensed.</a></p>
<p>He discussed how we need to build our industry upon a commonly agreed kernel of simple best practice activities, not upon some currently fashionable process, whether it be Waterfall, CMMI, Agile, Lean etc. We must identify the best practices from all processes (and all processes have something to offer) and teach them to students at university as this kernel of activities, as well as teaching how to compose them into more complex practices and hence whole development processes. This would provide new entrants to our industry with a base of reusable techniques that they can use for their whole career, irrespective of fashion. The key idea is that any current process model can be built by composing this kernel of basic activities.</p>
<p>So if this sounds interesting, and it does to me, have a look at <a href="http://www.semat.org">www.semat.org</a>. The signatories aim to have results within the year; if they achieve these aims this could well be the first step to putting Software Engineering on a par with other chartered engineering disciplines such as Structural or Mechanical Engineering, where there is a long-term set of industry accepted best practices that people can be judged against.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running Fitnesse.NET tests using MSTest - Revisited</title>
      <link>https://blog.richardfennell.net/posts/running-fitnesse-net-tests-using-mstest-revisited/</link>
      <pubDate>Mon, 29 Mar 2010 21:35:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-fitnesse-net-tests-using-mstest-revisited/</guid>
      <description>&lt;p&gt;In 2008 I wrote a post &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/07/18/running-fitnesse-net-tests-using-mstest.aspx&#34;&gt;Running Fitnesse.NET tests using MSTest&lt;/a&gt;. Recently I needed to use this technique on a VS2010 project and, as is so often the case, the code that worked then does not seem to work now. All I can assume is that the Fitnesse API has altered, but I thought I was using the same assemblies!&lt;/p&gt;
&lt;p&gt;So I pulled down the code from &lt;a href=&#34;http://sourceforge.net/projects/fitnessedotnet/&#34; title=&#34;http://sourceforge.net/projects/fitnessedotnet/&#34;&gt;http://sourceforge.net/projects/fitnessedotnet/&lt;/a&gt; and had a poke about. Basically I had been using the command line switches incorrectly. The listing below shows what I found to be the working usage:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In 2008 I wrote a post <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/07/18/running-fitnesse-net-tests-using-mstest.aspx">Running Fitnesse.NET tests using MSTest</a>. Recently I needed to use this technique on a VS2010 project and, as is so often the case, the code that worked then does not seem to work now. All I can assume is that the Fitnesse API has altered, but I thought I was using the same assemblies!</p>
<p>So I pulled down the code from <a href="http://sourceforge.net/projects/fitnessedotnet/" title="http://sourceforge.net/projects/fitnessedotnet/">http://sourceforge.net/projects/fitnessedotnet/</a> and had a poke about. Basically I had been using the command line switches incorrectly. The listing below shows what I found to be the working usage:</p>
<pre tabindex="0"><code>[TestMethod]
public void WorkFlowSwitchOnName_DataViaFitnesse_Success()
{
    fit.Runner.FolderRunner runner = new fit.Runner.FolderRunner(new fit.Runner.ConsoleReporter());
    var errorCount = runner.Run(new string[] {
        &#34;-i&#34;, @&#34;WF Workflow&#34;,     // the directory that holds the HTM test files
        &#34;-a&#34;, @&#34;TestProject.dll&#34;, // we have the fit fixtures in this assembly
        &#34;-o&#34;, @&#34;results&#34;});       // the directory the results are dumped into as HTML

    // fit can fail silently giving no failures as no tests are run, so check for exceptions
    Assert.AreEqual(false, Regex.IsMatch(runner.Results, &#34;^0.+?0.+?0.+?0.+?$&#34;), &#34;No tests appear to have been run&#34;);
    // look for expected errors
    Assert.AreEqual(0, errorCount, runner.Results);
}
</code></pre>
<p>The –i option is the one that was wrong. I had been specifying a single HTM file. What I should have done was specify a directory that contains one or more HTM files. I handled this as a sub directory in my project with its contents set to copy to the output directory.</p>
<p>Once I made these edits I had my tests running again as expected.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF30046 Error when trying to create new team project collection using an existing empty DB</title>
      <link>https://blog.richardfennell.net/posts/tf30046-error-when-trying-to-create-new-team-project-collection-using-an-existing-empty-db/</link>
      <pubDate>Mon, 29 Mar 2010 15:05:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf30046-error-when-trying-to-create-new-team-project-collection-using-an-existing-empty-db/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/25/tfs-2010-database-label-not-use-for-team-project-collection-dbs.aspx&#34;&gt;In my previous post I discussed how the DB label was not used for TPC DBs in 2010&lt;/a&gt;. As I was working on a setup where a central SQL box was the DT for two virtualised TFS AT instances, I therefore needed to create my TPC databases manually if I wanted TPCs of the same name on each TFS instance.&lt;/p&gt;
&lt;p&gt;I won’t go over the old post again, but in essence I was trying to create a TPC with the name &lt;strong&gt;ABC&lt;/strong&gt; on a TFS instance with a database label of &lt;strong&gt;RC&lt;/strong&gt;. So I tried to create the DB &lt;strong&gt;TFS_RC_ABC&lt;/strong&gt; manually and pointed the TPC create process at it. It passed the verify stage but I then got a&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/25/tfs-2010-database-label-not-use-for-team-project-collection-dbs.aspx">In my previous post I discussed how the DB label was not used for TPC DBs in 2010</a>. As I was working on a setup where a central SQL box was the DT for two virtualised TFS AT instances, I therefore needed to create my TPC databases manually if I wanted TPCs of the same name on each TFS instance.</p>
<p>I won’t go over the old post again, but in essence I was trying to create a TPC with the name <strong>ABC</strong> on a TFS instance with a database label of <strong>RC</strong>. So I tried to create the DB <strong>TFS_RC_ABC</strong> manually and pointed the TPC create process at it. It passed the verify stage but I then got a</p>
<blockquote>
<p><strong>TF30046: The instance information does not match</strong></p></blockquote>
<p>error during the core stage of the TPC creation. Basically the empty DB was found and used, but the wizard, checking the IDs of the DB and the TPC, found they did not match.</p>
<p>It seems the name of the DB is the problem for pre-created DBs. I tried altering the prefix from <strong>TFS_RC_</strong>, changing it to just <strong>RC_</strong>, removing the <strong>_</strong>, and eventually removing the whole prefix, but to no effect. However, when I altered the end of the DB name so it did not match the collection name it worked.</p>
<p>So the workaround is: if I create an empty DB called <strong>ABCRCTFS</strong> and then create a collection called <strong>ABC</strong> (using <strong>ABCRCTFS</strong> as the empty DB) all seems to be OK, including that all the URLs for the collection and SP sites include the name <strong>ABC</strong>; it is just that the DB has an unexpected name.</p>
<p>I will post again if I get a better solution or an explanation in the future.</p>
]]></content:encoded>
    </item>
    <item>
      <title>And finally my personal domains</title>
      <link>https://blog.richardfennell.net/posts/and-finally-my-personal-domains/</link>
      <pubDate>Mon, 29 Mar 2010 14:59:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-finally-my-personal-domains/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/default.aspx&#34;&gt;When I said all our internet bits were back&lt;/a&gt; I was not quite right. I had forgotten that I had not re-pointed my own richardfennell domains to point to my blog. These are now done, so all search results for my blog should resolve OK.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/default.aspx">When I said all our internet bits were back</a> I was not quite right. I had forgotten that I had not re-pointed my own richardfennell domains to point to my blog. These are now done, so all search results for my blog should resolve OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Mixed mode assembly is built against version &#39;v2.0.50727&#39; error using .NET 4 Development Web Server</title>
      <link>https://blog.richardfennell.net/posts/mixed-mode-assembly-is-built-against-version-v2-0-50727-error-using-net-4-development-web-server/</link>
      <pubDate>Sat, 27 Mar 2010 12:38:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mixed-mode-assembly-is-built-against-version-v2-0-50727-error-using-net-4-development-web-server/</guid>
      <description>&lt;p&gt;If your application has a dependency on an assembly built in .NET 2 you will see the error below if you try to run your application when it has been built in .NET 4.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Mixed mode assembly is built against version &amp;lsquo;v2.0.50727&amp;rsquo; of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This can be important in VS2010 testing, as test projects must be built as .NET 4; there is no option to build with an older runtime. I suffered this problem when trying to do some development where I hosted a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/25/a-fix-to-run-typemock-isolator-inside-the-page-load-for-an-aspx-page-on-vs2010-net4.aspx&#34;&gt;webpart that makes calls into SharePoint (that was faked out with Typemock Isolator) inside an ASP.NET 4.0 test harness&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If your application has a dependency on an assembly built in .NET 2 you will see the error below if you try to run your application when it has been built in .NET 4.</p>
<p><em><strong>Mixed mode assembly is built against version &lsquo;v2.0.50727&rsquo; of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.</strong></em></p>
<p>This can be important in VS2010 testing, as test projects must be built as .NET 4; there is no option to build with an older runtime. I suffered this problem when trying to do some development where I hosted a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/25/a-fix-to-run-typemock-isolator-inside-the-page-load-for-an-aspx-page-on-vs2010-net4.aspx">webpart that makes calls into SharePoint (that was faked out with Typemock Isolator) inside an ASP.NET 4.0 test harness</a>.</p>
<p>The answer to this problem is well documented: you need to add the <strong>useLegacyV2RuntimeActivationPolicy</strong> attribute to a .CONFIG file, but which one? It is not the <strong>web.config</strong> file you might suspect, but the <strong>C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.exe.config</strong> file. The revised config file should read as follows:</p>
<pre tabindex="0"><code>&lt;?xml version=&#34;1.0&#34; encoding=&#34;utf-8&#34; ?&gt;
&lt;configuration&gt;
  &lt;startup useLegacyV2RuntimeActivationPolicy=&#34;true&#34;&gt;
    &lt;supportedRuntime version=&#34;v4.0&#34; /&gt;
  &lt;/startup&gt;
  &lt;runtime&gt;
    &lt;generatePublisherEvidence enabled=&#34;false&#34; /&gt;
  &lt;/runtime&gt;
&lt;/configuration&gt;
</code></pre>
<p>Note: don’t add a <code>&lt;supportedRuntime version=&#34;v2.0.50727&#34; /&gt;</code> element as well; this causes the web server to crash on start-up.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2010 Database Label not use for Team Project Collection DBs</title>
      <link>https://blog.richardfennell.net/posts/tfs-2010-database-label-not-use-for-team-project-collection-dbs/</link>
      <pubDate>Thu, 25 Mar 2010 17:54:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2010-database-label-not-use-for-team-project-collection-dbs/</guid>
      <description>&lt;p&gt;Found out something I did not know today: the TFS 2010 database label is only used for the server’s own primary configuration databases, not for the DBs of the TPCs it creates.&lt;/p&gt;
&lt;p&gt;For example, on a 2010 RC install the database label was set to &lt;strong&gt;RC&lt;/strong&gt; during the installation. When I try to create a new TPC (called ABC) it tries to create a DB named &lt;strong&gt;TFS_ABC&lt;/strong&gt;, even though the database label for the TFS server can be seen to be &lt;strong&gt;TFS_RC_&lt;/strong&gt; on the admin console and the instance’s primary databases are called &lt;strong&gt;TFS_RC_Configuration, TFS_RC_Warehouse&lt;/strong&gt; and &lt;strong&gt;TFS_RC_Analysis&lt;/strong&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Found out something I did not know today: the TFS 2010 database label is only used for the server’s own primary configuration databases, not for the DBs of the TPCs it creates.</p>
<p>For example, on a 2010 RC install the database label was set to <strong>RC</strong> during the installation. When I tried to create a new TPC (called ABC) it tried to create a DB named <strong>TFS_ABC</strong>, even though the database label for the TFS server can be seen to be <strong>TFS_RC_</strong> on the admin console and the instance’s primary databases are called <strong>TFS_RC_Configuration, TFS_RC_Warehouse</strong> and <strong>TFS_RC_Analysis</strong>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_53B1A423.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_076974A8.png" title="image"></a></p>
<p>I would have expected the new DB to be called <strong>TFS_RC_ABC</strong>.</p>
<p>This can become interesting if, like me, you are working with a number of TFS instances that all share the same central SQL Server instance. If you want a TPC of the same name on two or more TFS servers that share a data tier (DT), you must manually create the empty DBs to avoid a name clash.</p>
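<p>As an illustrative sketch only (the collection name ABC and the distinguishing prefix here are hypothetical), the workaround is to pre-create an empty DB with a unique name on the shared instance and point the TPC creation at it:</p>
<pre><code>-- Hypothetical names: the wizard's default would have been TFS_ABC on both
-- servers, so pre-create an empty DB with a distinguishing prefix instead
CREATE DATABASE [TFS_ServerA_ABC];</code></pre>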
<p>So not a major issue but confusing if you don’t know it is a problem.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Announcing Typemock TMockRunner Custom Activity for Team Build 2010</title>
      <link>https://blog.richardfennell.net/posts/announcing-typemock-tmockrunner-custom-activity-for-team-build-2010/</link>
      <pubDate>Thu, 25 Mar 2010 17:18:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcing-typemock-tmockrunner-custom-activity-for-team-build-2010/</guid>
      <description>&lt;p&gt;My &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx&#34;&gt;Team Build 2010 custom activity for Typemock Isolator&lt;/a&gt; is now available at &lt;a href=&#34;http://site.typemock.com/add-ons/&#34; title=&#34;http://site.typemock.com/add-ons/&#34;&gt;http://site.typemock.com/add-ons/&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It is packaged as a &lt;a href=&#34;http://www.typemock.com/files/Addons/VS2010%20TypemockBuildActivity%201.0.0.0.zip&#34;&gt;zip&lt;/a&gt; which includes all the source and a compiled assembly, as well as a document detailing usage and why the solution is constructed as it is.&lt;/p&gt;
&lt;p&gt;Hope you find it useful&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/08/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build.aspx">Team Build 2010 custom activity for Typemock Isolator</a> is now available at <a href="http://site.typemock.com/add-ons/" title="http://site.typemock.com/add-ons/">http://site.typemock.com/add-ons/</a>.</p>
<p>It is packaged as a <a href="http://www.typemock.com/files/Addons/VS2010%20TypemockBuildActivity%201.0.0.0.zip">zip</a> which includes all the source and a compiled assembly, as well as a document detailing usage and why the solution is constructed as it is.</p>
<p>Hope you find it useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A fix to run Typemock Isolator inside the Page_Load for an ASPX page on VS2010/.NET4</title>
      <link>https://blog.richardfennell.net/posts/a-fix-to-run-typemock-isolator-inside-the-page_load-for-an-aspx-page-on-vs2010-net4/</link>
      <pubDate>Thu, 25 Mar 2010 15:23:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-fix-to-run-typemock-isolator-inside-the-page_load-for-an-aspx-page-on-vs2010-net4/</guid>
      <description>&lt;p&gt;I recently tried to use Typemock Isolator inside an ASP.NET page load event, to fake out some SharePoint SPLists e.g.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;protected void Page_Load(object sender, EventArgs e)&lt;br&gt;
{&lt;br&gt;
     SPSite fakeSite = Isolate.Fake.Instance&lt;SPSite&gt;();&lt;br&gt;
     ……..&lt;br&gt;
}&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;This has worked in the past but using VS2010RC and Isolator 6.0.1.0 it failed.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If running VS2010 as Admin I got the error &lt;em&gt;&amp;ldquo;Could not load file or assembly &amp;lsquo;TypeMock, Version=0.12900.25133.13153, Culture=neutral, PublicKeyToken=37342d316331342d&amp;rsquo; or one of its dependencies. The located assembly&amp;rsquo;s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)&amp;rdquo;:&amp;ldquo;TypeMock, Version=0.12900.25133.13153, Culture=neutral, PublicKeyToken=37342d316331342d&amp;rdquo;}&amp;quot;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;If running as a non admin user I got the error &lt;em&gt;&amp;ldquo;Attempted to read or write protected memory. This is often an indication that other memory is corrupt.&amp;rdquo;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Typemock Support told me that I needed to edit the Typemock Isolator’s &lt;strong&gt;namespaces.dat&lt;/strong&gt; file (in its program directory); this is the file that lists valid test runners. I needed to add &lt;strong&gt;WebDev.WebServer40.exe&lt;/strong&gt; to this file, as this is the program that makes the call to Isolator to do the faking.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently tried to use Typemock Isolator inside an ASP.NET page load event, to fake out some SharePoint SPLists e.g.</p>
<blockquote>
<pre><code>protected void Page_Load(object sender, EventArgs e)
{
    SPSite fakeSite = Isolate.Fake.Instance&lt;SPSite&gt;();
    ……..
}</code></pre></blockquote>
<p>This has worked in the past but using VS2010RC and Isolator 6.0.1.0 it failed.</p>
<ul>
<li>If running VS2010 as Admin I got the error <em>&ldquo;Could not load file or assembly &lsquo;TypeMock, Version=0.12900.25133.13153, Culture=neutral, PublicKeyToken=37342d316331342d&rsquo; or one of its dependencies. The located assembly&rsquo;s manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)&rdquo;:&ldquo;TypeMock, Version=0.12900.25133.13153, Culture=neutral, PublicKeyToken=37342d316331342d&rdquo;}&quot;</em></li>
<li>If running as a non admin user I got the error <em>&ldquo;Attempted to read or write protected memory. This is often an indication that other memory is corrupt.&rdquo;</em></li>
</ul>
<p>Typemock Support told me that I needed to edit the Typemock Isolator’s <strong>namespaces.dat</strong> file (in its program directory); this is the file that lists valid test runners. I needed to add <strong>WebDev.WebServer40.exe</strong> to this file, as this is the program that makes the call to Isolator to do the faking.</p>
<p>So this is an immediate fix; it is unclear at the moment whether the filename <strong>WebDev.WebServer40.exe</strong> will be the final one for the RTM of 2010, so be prepared for another edit in the future.</p>
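<p>The change itself is a one-line addition: append the web server’s process name to the end of <strong>namespaces.dat</strong> on a line of its own, leaving the existing entries untouched:</p>
<pre><code>WebDev.WebServer40.exe</code></pre>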
]]></content:encoded>
    </item>
    <item>
      <title>Internet problems should be over</title>
      <link>https://blog.richardfennell.net/posts/internet-problems-should-be-over/</link>
      <pubDate>Thu, 25 Mar 2010 07:10:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/internet-problems-should-be-over/</guid>
      <description>&lt;p&gt;Our new Internet line has now been commissioned fully and hopefully any problems you may have had accessing my blog via the blackmarble.com or co.uk domains should be over.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Our new Internet line has now been commissioned fully and hopefully any problems you may have had accessing my blog via the blackmarble.com or co.uk domains should be over.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF254024 error when upgrading TFS 2010 Beta2 to RC</title>
      <link>https://blog.richardfennell.net/posts/tf254024-error-when-upgrading-tfs-2010-beta2-to-rc/</link>
      <pubDate>Wed, 24 Mar 2010 14:04:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf254024-error-when-upgrading-tfs-2010-beta2-to-rc/</guid>
      <description>&lt;p&gt;Whilst upgrading a single server instance of TFS 2010 Beta2 to the RC I got a TF254024 error. This occurred at the point in the upgrade wizard where it tries to list the databases available to upgrade.&lt;/p&gt;
&lt;p&gt;The reason for the error was the account I was logged in as (the domain administrator in my case) did not have rights on the SQL instance to see any TFS DBs. Once this was sorted, by granting owner rights to all the TFS DBs, all was OK.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst upgrading a single server instance of TFS 2010 Beta2 to the RC I got a TF254024 error. This occurred at the point in the upgrade wizard where it tries to list the databases available to upgrade.</p>
<p>The reason for the error was the account I was logged in as (the domain administrator in my case) did not have rights on the SQL instance to see any TFS DBs. Once this was sorted, by granting owner rights to all the TFS DBs, all was OK.</p>
<p>I think it had got into this state as this was a ‘standard’ TFS install using an automatically installed SQLExpress instance, so rights had not been explicitly assigned during the TFS setup. By installing <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=08E52AC2-1D62-45F6-9A4A-4B76A8564A2B&amp;displaylang=en">SQL 2008 Management Studio Express</a> and logging in as the server’s local administrator I was able to grant the rights needed.</p>
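<p>As a sketch of the sort of grant involved (the database name shown is from a default TFS 2010 install and the account is hypothetical; substitute your own), from a Management Studio query window:</p>
<pre><code>-- Hypothetical account; repeat for each TFS database the upgrade wizard needs to see
USE [Tfs_Configuration];
CREATE USER [MYDOMAIN\Administrator] FOR LOGIN [MYDOMAIN\Administrator];
EXEC sp_addrolemember N'db_owner', N'MYDOMAIN\Administrator';</code></pre>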
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at NDC in Oslo in June</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-ndc-in-oslo-in-june/</link>
      <pubDate>Mon, 22 Mar 2010 09:55:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-ndc-in-oslo-in-june/</guid>
      <description>&lt;p&gt;&lt;em&gt;Updated 25th Mar when I found I have three sessions&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I have just heard that I had &lt;del&gt;two&lt;/del&gt; three sessions accepted for the &lt;a href=&#34;http://www.ndc2010.no&#34;&gt;Norwegian Developers Conference&lt;/a&gt;. They are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Putting Some Testing Into Your TFS Build&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Making Manual Testing a Part of Your Development Process&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developing Testable Web Parts for SharePoint.&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So off to Oslo, never been there before, looking forward to it already&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.ndc2010.no&#34;&gt;&lt;img alt=&#34;clip_image002&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_46962510.gif&#34; title=&#34;clip_image002&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>Updated 25th Mar when I found I have three sessions</em></p>
<p>I have just heard that I had <del>two</del> three sessions accepted for the <a href="http://www.ndc2010.no">Norwegian Developers Conference</a>. They are:</p>
<ul>
<li><strong>Putting Some Testing Into Your TFS Build</strong> </li>
<li><strong>Making Manual Testing a Part of Your Development Process</strong></li>
<li><strong>Developing Testable Web Parts for SharePoint.</strong></li>
</ul>
<p>So off to Oslo, never been there before, looking forward to it already</p>
<p><a href="http://www.ndc2010.no"><img alt="clip_image002" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_46962510.gif" title="clip_image002"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>VHD boot and c00002e2 Errors</title>
      <link>https://blog.richardfennell.net/posts/vhd-boot-and-c00002e2-errors/</link>
      <pubDate>Thu, 18 Mar 2010 19:58:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vhd-boot-and-c00002e2-errors/</guid>
      <description>&lt;p&gt;For some reason that is beyond me now I did not setup my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/02/05/at-last-my-creature-it-lives.aspx&#34;&gt;Lab Manager test system&lt;/a&gt; to be a VHD boot. So before installing the 2010 RC version I decided to P2V this system (on the same hardware) to make backups easier whilst testing. All seemed to go well&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I used IMAGEX to create a WIM of the disk&lt;/li&gt;
&lt;li&gt;Created an empty VHD&lt;/li&gt;
&lt;li&gt;Used IMAGEX to apply the WIM to the VHD&lt;/li&gt;
&lt;li&gt;Formatted the PC with a default Windows 7 install&lt;/li&gt;
&lt;li&gt;Added a VHD boot Windows Server 2008R2 to the PC, tested this all booted OK&lt;/li&gt;
&lt;li&gt;Replaced the test VHD with my own and rebooted&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;…. and it just went into a reboot cycle. Pressing F8 and stopping the reboot on error I saw I had a “c00002e2 Directory Services could not start” error. I managed to get into the PC by pressing F8 and using the AD recovery mode (safe mode did not work). After much fiddling around I eventually noticed that my boot drive was drive D: not C: as I would have expected. My VHD and parent drive had reversed letter assignments. So when the AD services tried to start they look on the parent Windows 7 partition (C:) for their data and hence failed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For some reason that is beyond me now I did not setup my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/02/05/at-last-my-creature-it-lives.aspx">Lab Manager test system</a> to be a VHD boot. So before installing the 2010 RC version I decided to P2V this system (on the same hardware) to make backups easier whilst testing. All seemed to go well</p>
<ol>
<li>I used IMAGEX to create a WIM of the disk</li>
<li>Created an empty VHD</li>
<li>Used IMAGEX to apply the WIM to the VHD</li>
<li>Formatted the PC with a default Windows 7 install</li>
<li>Added a VHD boot Windows Server 2008R2 to the PC, tested this all booted OK</li>
<li>Replaced the test VHD with my own and rebooted</li>
</ol>
<p>…. and it just went into a reboot cycle. Pressing F8 and stopping the reboot on error I saw I had a “c00002e2 Directory Services could not start” error. I managed to get into the PC by pressing F8 and using the AD recovery mode (safe mode did not work). After much fiddling around I eventually noticed that my boot drive was drive D: not C: as I would have expected. My VHD and parent drive had reversed letter assignments. So when the AD services tried to start they looked on the parent Windows 7 partition (C:) for their data and hence failed.</p>
<p>I think the root cause was the way I had attached the empty VHD in order to use IMAGEX. I had not done it using WinPE, but had just created the VHD in my Windows 7 instance and attached it as drive D: before applying the WIM.</p>
<p>So my revised method was:</p>
<ol>
<li>I used IMAGEX to create a WIM of the disk (actually used the one I already had as there was nothing wrong with it, which was a good job as I had formatted the disk)</li>
<li>Formatted the PC with a default Windows 7 install</li>
<li>Added a VHD boot Windows Server 2008R2 to the PC, tested this all booted OK</li>
<li>Copied my WIM file to the same directory as my newly created W2k8R2.VHD</li>
<li>Copied IMAGEX to this directory</li>
<li>Booted off a Win7 DVD</li>
<li>Pressed Shift F10 to get a prompt at the first opportunity
<ol>
<li>Ran DISKPART</li>
<li>Select Disk 1</li>
<li>Select Part 1</li>
<li>Detail Part – this was the 100Mb system partition Windows 7 creates and it was assigned as drive C: (note: when you boot Windows 7 the drive letters get reassigned, just to confuse you, so looking at this you would expect your Windows 7 boot drive to be D:)</li>
<li>Assign Letter = Q – this set the system partition to be drive Q, but any unused letter would do</li>
<li>Select vdisk file=d:\vhd\w2k8r2.vhd</li>
<li>attach vdisk – this loaded the VHD and assigned it the letter C: as this was now not in use</li>
<li>list disk</li>
<li>Select disk 2</li>
<li>Select Part 1</li>
<li>detail Part – checked the drive letter was correct</li>
<li>I then exited DISKPART and from the same command prompt ran IMAGEX to put the WIM on this new drive C:</li>
</ol>
</li>
<li>Rebooted and it worked</li>
</ol>
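<p>For reference, the interactive DISKPART steps above boil down to a script like the following (the disk numbers and VHD path are from this particular machine and will differ on yours; it can be run from the WinPE command prompt with <strong>diskpart /s</strong>):</p>
<pre><code>rem Park the 100Mb system partition on an unused letter to free up C:
select disk 1
select partition 1
assign letter=Q
rem Attach the target VHD; its partition picks up the now-unused C:
select vdisk file=d:\vhd\w2k8r2.vhd
attach vdisk
rem Confirm the VHD partition got the expected letter before running IMAGEX
select disk 2
select partition 1
detail partition</code></pre>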
<p>So the technical tip is: make sure your drive letter assignments are what you think they are; it may not be as obvious as you expect.</p>
]]></content:encoded>
    </item>
    <item>
      <title>NEBytes last night</title>
      <link>https://blog.richardfennell.net/posts/nebytes-last-night/</link>
      <pubDate>Thu, 18 Mar 2010 08:24:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/nebytes-last-night/</guid>
      <description>&lt;p&gt;Hope everyone enjoyed my session on VS2010 at NEBytes last night. I don&amp;rsquo;t know about you, but I think that quick end-to-end demo of build, manual test and IntelliTrace debug worked very nicely. That was the first time I have done it as a single demo and I think it works better than three smaller ones. It truly shows the integrated team story for VS2010.&lt;/p&gt;
&lt;p&gt;Anyway, as the session was demo led there are no slides to download, but if you have follow-up questions post a comment on this post or email me.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Hope everyone enjoyed my session on VS2010 at NEBytes last night. I don&rsquo;t know about you, but I think that quick end-to-end demo of build, manual test and IntelliTrace debug worked very nicely. That was the first time I have done it as a single demo and I think it works better than three smaller ones. It truly shows the integrated team story for VS2010.</p>
<p>Anyway, as the session was demo led there are no slides to download, but if you have follow-up questions post a comment on this post or email me.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at NEBytes tomorrow</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-nebytes-tomorrow/</link>
      <pubDate>Tue, 16 Mar 2010 10:43:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-nebytes-tomorrow/</guid>
      <description>&lt;p&gt;Just a reminder &lt;a href=&#34;http://www.nebytes.net/page/events.aspx&#34;&gt;I am speaking at NEBytes in Newcastle&lt;/a&gt; tomorrow on the new features of VS2010. I will be covering project management, architect, dev and test features so something there for everyone.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just a reminder <a href="http://www.nebytes.net/page/events.aspx">I am speaking at NEBytes in Newcastle</a> tomorrow on the new features of VS2010. I will be covering project management, architect, dev and test features so something there for everyone.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Cannot access TFS over HTTPS after upgrade from 2010 Beta 2 to RC</title>
      <link>https://blog.richardfennell.net/posts/cannot-access-tfs-over-https-after-upgrade-from-2010-beta-2-to-rc/</link>
      <pubDate>Mon, 15 Mar 2010 13:13:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cannot-access-tfs-over-https-after-upgrade-from-2010-beta-2-to-rc/</guid>
      <description>&lt;p&gt;I upgraded our Beta2 2010 TFS last week, and after a quick local test all appeared to be working OK, so I rushed out of the office like you do. However, whilst I was out of the office it was spotted that though it was working within the office using HTTP and the NetBIOS server name (&lt;strong&gt;TFSSERVER&lt;/strong&gt;), it could not be accessed outside the firewall or over HTTPS inside the office, so &lt;strong&gt;&lt;a href=&#34;https://tfsserver.mydomain.com:8443/tfs&#34;&gt;https://tfsserver.mydomain.com:8443/tfs&lt;/a&gt;&lt;/strong&gt; did not work.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I upgraded our Beta2 2010 TFS last week, and after a quick local test all appeared to be working OK, so I rushed out of the office like you do. However, whilst I was out of the office it was spotted that though it was working within the office using HTTP and the NetBIOS server name (<strong>TFSSERVER</strong>), it could not be accessed outside the firewall or over HTTPS inside the office, so <strong><a href="https://tfsserver.mydomain.com:8443/tfs">https://tfsserver.mydomain.com:8443/tfs</a></strong> did not work.</p>
<p>Turns out the problem was twofold; both parts, it seems, were caused by the TFS in-place upgrade process.</p>
<p><strong>IIS7 Bindings</strong></p>
<p>The 2010 upgrade configuration wizard appeared to remove the non-port-8080 bindings on the IIS7 TFS site instance. I had to re-add the binding on 8443 that we use to access the TFS web services (this did not happen with the related SharePoint web site, which was still happily on ports 80 and 443).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_03B83BA9.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4382222E.png" title="image"></a></p>
<p>Once this was re-added I could access the server from the console on port 8443 using the Url <strong><a href="https://tfsserver.mydomain.com:8443/tfs">https://tfsserver.mydomain.com:8443/tfs</a></strong>; however, other clients still could not access it.</p>
<p><strong>Windows Firewall with Advanced Security</strong></p>
<p>On the firewall I noticed that though the rule to allow port 8443 was there, its profile was set to public/private only (I can’t remember how it used to be set).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0A6B452C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3C62E5B6.png" title="image"></a> <a href="/wp-content/uploads/sites/2/historic/image_034C08B4.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4A352BB1.png" title="image"></a></p>
<p>Once I added domain (or set it to any) I was able to access the upgraded TFS server from other clients using the Url <strong><a href="https://tfsserver.mydomain.com:8443/tfs">https://tfsserver.mydomain.com:8443/tfs</a></strong></p>
]]></content:encoded>
    </item>
    <item>
      <title>Post QCon thoughts</title>
      <link>https://blog.richardfennell.net/posts/post-qcon-thoughts/</link>
      <pubDate>Sat, 13 Mar 2010 12:10:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-qcon-thoughts/</guid>
      <description>&lt;p&gt;Interesting time at &lt;a href=&#34;http://qconlondon.com/london-2010/&#34;&gt;QCon&lt;/a&gt; yesterday,  shame I was only there one day, I do like the events that are not limited to a single vendor or technology. The multi presenter session I was involved in on &lt;a href=&#34;http://qconlondon.com/london-2010/presentation/The&amp;#43;Interoperable&amp;#43;Platform&#34;&gt;Microsoft interoperability&lt;/a&gt; seemed to go well, there is talk of repeating at other events or podcasting. It is a nice format if you can get the sub-sessions linking nicely, like themed grok talks. &lt;/p&gt;
&lt;p&gt;Due to chatting to people (but that why you go really isn&amp;rsquo;t it?), I only managed to get to one other session, but I was the one I wanted to see, &lt;a href=&#34;http://weblogs.asp.net/rosherove/&#34;&gt;Roy Osherove’s&lt;/a&gt; on using &lt;a href=&#34;http://www.codeplex.com/CThru&#34;&gt;CThru to enable testing of monolithic frameworks such as Silverlight&lt;/a&gt;. It got a few things clearer in my mind over using CThu, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/06/22/logging-everything-that-is-going-on-when-an-assembly-loads-using-cthru.aspx&#34;&gt;a tool I have tried to use in the past but not had as much success as I hoped&lt;/a&gt;. So I think I will have another go at trying to build a SharePoint workflow testing framework, the problem has rested on the back burner too long. I think I just need to persist longer in digging to the eventing model to see why my workflows under test do not start. Roy’s comment that there is no short cut for this type of problem to avoid an archaeological excavation into the framework under test, I think is the key here.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Interesting time at <a href="http://qconlondon.com/london-2010/">QCon</a> yesterday; shame I was only there one day, as I do like the events that are not limited to a single vendor or technology. The multi-presenter session I was involved in on <a href="http://qconlondon.com/london-2010/presentation/The&#43;Interoperable&#43;Platform">Microsoft interoperability</a> seemed to go well; there is talk of repeating it at other events or podcasting it. It is a nice format if you can get the sub-sessions linking nicely, like themed grok talks.</p>
<p>Due to chatting to people (but that is why you go really, isn&rsquo;t it?), I only managed to get to one other session, but it was the one I wanted to see: <a href="http://weblogs.asp.net/rosherove/">Roy Osherove’s</a> on using <a href="http://www.codeplex.com/CThru">CThru to enable testing of monolithic frameworks such as Silverlight</a>. It made a few things clearer in my mind about using CThru, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/06/22/logging-everything-that-is-going-on-when-an-assembly-loads-using-cthru.aspx">a tool I have tried to use in the past but not had as much success as I hoped</a>. So I think I will have another go at trying to build a SharePoint workflow testing framework; the problem has rested on the back burner too long. I think I just need to persist longer in digging into the eventing model to see why my workflows under test do not start. Roy’s comment that for this type of problem there is no shortcut to avoid an archaeological excavation into the framework under test is, I think, the key here.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at SharePoint Evolution Conference</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-sharepoint-evolution-conference/</link>
      <pubDate>Tue, 09 Mar 2010 20:35:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-sharepoint-evolution-conference/</guid>
      <description>&lt;p&gt;I am pleased to be able to announce that I am speaking at the &lt;a href=&#34;http://www.sharepointevolutionconference.com/Agenda.html&#34;&gt;SharePoint 2010 Evolution Conference&lt;/a&gt; in London on the 21st of April.&lt;/p&gt;
&lt;p&gt;I will be talking about my experiences during the &lt;a href=&#34;http://www.blackmarble.co.uk/CaseStudy.aspx?study=Prontaprint%27s&#34;&gt;Prontaprint project&lt;/a&gt; (Prontaprint&#39;s website with SharePoint 2007), developing testable components for SharePoint using Typemock Isolator and other techniques to speed development and improve quality.&lt;/p&gt;</description>
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/SP2010EvoBanner_Square_6D408626.png&#34;&gt;&lt;img alt=&#34;SP2010EvoBanner_Square&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/SP2010EvoBanner_Square_thumb_662149AE.png&#34; title=&#34;SP2010EvoBanner_Square&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am pleased to be able to announce that I am speaking at the <a href="http://www.sharepointevolutionconference.com/Agenda.html">SharePoint 2010 Evolution Conference</a> in London on the 21st of April.</p>
<p>I will be talking about my experiences during the <a href="http://www.blackmarble.co.uk/CaseStudy.aspx?study=Prontaprint%27s">Prontaprint project</a> (Prontaprint’s website with SharePoint 2007), developing testable components for SharePoint using Typemock Isolator and other techniques to speed development and improve quality.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/SP2010EvoBanner_Square_6D408626.png"><img alt="SP2010EvoBanner_Square" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/SP2010EvoBanner_Square_thumb_662149AE.png" title="SP2010EvoBanner_Square"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Agile Yorkshire Martin Fowler event is full</title>
      <link>https://blog.richardfennell.net/posts/agile-yorkshire-martin-fowler-event-is-full/</link>
      <pubDate>Mon, 08 Mar 2010 11:57:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agile-yorkshire-martin-fowler-event-is-full/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.agileyorkshire.org/event-announcements/17Mar2010&#34;&gt;Next week’s Agile Yorkshire meeting&lt;/a&gt; with Martin Fowler is now full (and the venue is twice the size of our usual one!). I said it would be popular.&lt;/p&gt;
&lt;p&gt;There is a reserve list but I would not hold out too much hope.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.agileyorkshire.org/event-announcements/17Mar2010">Next week’s Agile Yorkshire meeting</a> with Martin Fowler is now full (and the venue is twice the size of our usual one!). I said it would be popular.</p>
<p>There is a reserve list but I would not hold out too much hope.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Lessons learnt building a custom activity to run Typemock Isolator in VS2010 Team Build</title>
      <link>https://blog.richardfennell.net/posts/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build/</link>
      <pubDate>Mon, 08 Mar 2010 09:46:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/lessons-learnt-building-a-custom-activity-to-run-typemock-isolator-in-vs2010-team-build/</guid>
      <description>&lt;p&gt;Updated 25th March 2010 - All the source is now available at the &lt;a href=&#34;http://www.typemock.com/files/Addons/VS2010%20TypemockBuildActivity%201.0.0.0.zip&#34;&gt;Typemock Add-in site&lt;/a&gt; &lt;br&gt;
Updated 2nd July 2010 - &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx&#34;&gt;Some usage notes posted&lt;/a&gt;&lt;br&gt;
Updated 26th July 2011 - &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/07/26/more-tips-and-tricks-using-my-typemock-custom-build-activity-with-tfs-2010-build.aspx&#34;&gt;More usage notes&lt;/a&gt;&lt;br&gt;
Updated 21st Nov 2011 - Typemock Isolator now has direct support for TFS 2010 Build, &lt;a href=&#34;http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/TFSBuild.html&#34;&gt;see usage notes&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;I have previously &lt;a href=&#34;https://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/22/running-typemock-isolator-based-tests-in-tfs-2010-team-build.aspx&#34;&gt;posted on how you can run Typemock Isolator based tests within a VS2010 team build using the InvokeMethod activity&lt;/a&gt;. After this post &lt;a href=&#34;http://www.nablasoft.com/alkampfer&#34;&gt;Gian Maria Ricci, a fellow Team System MVP&lt;/a&gt;, suggested it would be better to put this functionality in a custom code activity, and provided the basis of the solution. I have taken this base sample and worked it up to be a functional activity, and boy have I learnt a few things doing it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Updated 25th March 2010 - All the source is now available at the <a href="http://www.typemock.com/files/Addons/VS2010%20TypemockBuildActivity%201.0.0.0.zip">Typemock Add-in site</a> <br>
Updated 2nd July 2010 - <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/07/01/using-my-typemock-tmockrunner-custom-activity-for-team-build-2010.aspx">Some usage notes posted</a><br>
Updated 26th July 2011 - <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2011/07/26/more-tips-and-tricks-using-my-typemock-custom-build-activity-with-tfs-2010-build.aspx">More usage notes</a><br>
Updated 21st Nov 2011 - Typemock Isolator now has direct support for TFS 2010 Build, <a href="http://docs.typemock.com/Isolator/#%23typemock.chm/Documentation/TFSBuild.html">see usage notes</a> </p>
<p>I have previously <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/22/running-typemock-isolator-based-tests-in-tfs-2010-team-build.aspx">posted on how you can run Typemock Isolator based tests within a VS2010 team build using the InvokeMethod activity</a>. After this post <a href="http://www.nablasoft.com/alkampfer">Gian Maria Ricci, a fellow Team System MVP</a> suggested it would be better to put this functionality in a custom code activity, and provided the basis of the solution. I have taken this base sample and worked it up to be a functional activity, and boy have I learnt a few things doing it.</p>
<h3 id="getting-the-custom-activity-into-a-team-build">Getting the custom activity into a team build</h3>
<p>Coding up a custom Team Build activity is not easy; there are a good few posts on the subject (<a href="http://blogs.msdn.com/jimlamb/archive/2009/11/18/How-to-Create-a-Custom-Workflow-Activity-for-TFS-Build-2010.aspx">Jim Lamb’s is a good place to start</a>). The problem is not writing the code but getting the activity into the VS toolbox. All the documentation gives basically the same complex manual process; there is no way of avoiding it. Hopefully this will be addressed in a future release of Visual Studio, but for now the basic process is this:</p>
<ol>
<li>Create a Class Library project in your language of choice</li>
<li>Code up your activity inheriting it from the CodeActivity&lt;T&gt; class</li>
<li>Branch the build workflow, that you wish to use for testing, into the folder of the class library project</li>
<li>Add the build workflow’s .XAML file to the class library project, then set its properties: “Build Action” to None and “Copy to Output Directory” to Do Not Copy</li>
<li>Open the .XAML file (in VS2010); the new activity should appear in the toolbox, from where it can be dropped onto the workflow. Set the properties required.</li>
<li>Check in the .XAML file</li>
<li>Merge the .XAML file back to the original location. If you get conflicts, simply tell the merge tool to take the new version, effectively overwriting the original with the version edited in the project.</li>
<li>Check in the merged original .XAML file that now contains the modifications.</li>
<li>Take the .DLL containing the new activity and place it in a folder under source control (usually under the <strong>BuildProcessTemplates</strong> folder)</li>
<li>Set the Build Controller’s custom assemblies path to point to this folder (so your custom activity can be loaded) </li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_659D8BC7.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0C6BA208.png" title="image"></a></p>
<ol start="11">
<li>Run the build and all should be fine</li>
</ol>
<p>But of course it wasn’t. I kept getting this error when I ran a build:</p>
<p><em>TF215097: An error occurred while initializing a build for build definition Typemock TestBuildTest Branch: Cannot create unknown type &lsquo;{clr-namespace:TypemockBuildActivity}ExternalTestRunner&rsquo;.</em></p>
<p>This was because I had not followed the procedure correctly. I had tried to be clever. Instead of following step 6 onwards I had had an idea: I created a new build that referenced the branched copy of the .XAML file in the class library project directly. I thought this would save me a good deal of tedious merging while I debugged my process. It did do this, but it introduced other issues.</p>
<p>The problem was that when I inspected the .XAML in my trusty copy of Notepad, I saw that there was no namespace declared for my assembly (as the TF215097 error suggested). If I looked at the actual activity call in the file it was declared as <strong>&lt;local:ExternalTestRunner  …… /&gt;</strong>, the <strong>local:</strong> replacing the namespace reference I would expect. This is obviously down to the way I was editing the .XAML file in the VS2010 IDE.</p>
<p>The fix is easy: using Notepad I added a namespace declaration to the Activity block</p>
<p><em>&lt;Activity ……    xmlns:t=&quot;clr-namespace:TypemockBuildActivity;assembly=TypemockBuildActivity&quot; &gt;</em></p>
<p>and then edited the references from local: to t: (the alias for my namespace) for any classes called from the custom assembly e.g.</p>
<p><em>&lt;t:ExternalTestRunner ResultsFileRoot=&quot;{x:Null}&quot; BuildNumber=&quot;[BuildDetail.Uri.ToString()]&quot; Flavor=&quot;[platformConfiguration.Configuration]&quot; sap:VirtualizedContainerService.HintSize=&quot;200,22&quot; MsTestExecutable=&quot;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe&quot; Platform=&quot;[platformConfiguration.Platform]&quot; ProjectCollection=&quot;http://typhoon:8080/tfs/DefaultCollection&quot; Result=&quot;[ExternalTestRunnerResult]&quot; ResultsFile=&quot;ExternalTestRunner.Trx&quot; SearchPathRoot=&quot;[outputDirectory]&quot; TeamProjectName=&quot;[BuildDetail.TeamProject]&quot; TestAssemblyNames=&quot;[testAssemblies.ToArray()]&quot; TestRunnerExecutable=&quot;C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe&quot; TestSettings=&quot;[localTestSettings]&quot; /&gt;</em></p>
<p>Once this was done I could use my custom activity in a Team Build, though I had to make this manual edit every time I edited the branched .XAML file in the VS2010 IDE. So I had swapped repeated merges for repeated editing; you can take your own view as to which is worse.</p>
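<p>Putting the two hand edits together, the edited .XAML ends up with this general shape (almost all attributes omitted for clarity):</p>
<pre tabindex="0"><code>&lt;Activity ……
          xmlns:t=&quot;clr-namespace:TypemockBuildActivity;assembly=TypemockBuildActivity&quot;&gt;
  ……
  &lt;t:ExternalTestRunner …… /&gt;
&lt;/Activity&gt;
</code></pre>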
<h3 id="so-what-is-in-my-typemock-external-test-runner-custom-activity">So what is in my Typemock external test runner custom activity?</h3>
<p>The activity is basically the same as the one suggested by Gian Maria: it takes all the same parameters as the MSTest team build activity and then executes TMockRunner to wrap MSTest. What I have done is add a couple of parameters that were missing in the original sample, and also added some more error traps and logging.</p>
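<p>For reference, the sort of command line the activity ends up assembling looks like this (the paths and test container name here are purely illustrative):</p>
<pre tabindex="0"><code>TMockRunner.exe &quot;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe&quot; /nologo  /testcontainer:&quot;MyTests.dll&quot; /resultsfile:&quot;ExternalTestRunner.Trx&quot;
</code></pre>
<p>TMockRunner takes the program to run as its first argument, so everything after the MSTest path is passed straight through to MSTest itself.</p>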
<pre tabindex="0"><code>using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Activities;
using System.IO;
using System.Diagnostics;
using Microsoft.TeamFoundation.Build.Workflow.Activities;
using Microsoft.TeamFoundation.Build.Client;
using System.Text.RegularExpressions;

namespace TypemockBuildActivity
{
    public enum ExternalTestRunnerReturnCode { Unknown = 0, NotRun, Passed, Failed };

    [BuildExtension(HostEnvironmentOption.Agent)]
    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class ExternalTestRunner : CodeActivity&lt;ExternalTestRunnerReturnCode&gt;
    {
        // Define the activity input arguments

        /// &lt;summary&gt;
        /// The name of the wrapper application, usually tmockrunner.exe
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; TestRunnerExecutable { get; set; }

        /// &lt;summary&gt;
        /// The name of the application that actually runs the tests, defaults to MSTest.exe if not set
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; MsTestExecutable { get; set; }

        /// &lt;summary&gt;
        /// The project collection to publish to e.g. http://tfs2010:8080/tfs/DefaultCollection
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; ProjectCollection { get; set; }

        /// &lt;summary&gt;
        /// The build ID to publish to e.g. vstfs:///Build/Build/91
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; BuildNumber { get; set; }

        /// &lt;summary&gt;
        /// The project name to publish to e.g. &quot;Typemock Test&quot;
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; TeamProjectName { get; set; }

        /// &lt;summary&gt;
        /// The platform name to publish to e.g. Any CPU
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; Platform { get; set; }

        /// &lt;summary&gt;
        /// The flavour (configuration) to publish to e.g. &quot;Debug&quot;
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; Flavor { get; set; }

        /// &lt;summary&gt;
        /// Array of assembly names to test
        /// &lt;/summary&gt;
        public InArgument&lt;string[]&gt; TestAssemblyNames { get; set; }

        /// &lt;summary&gt;
        /// Where to search for assemblies under test
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; SearchPathRoot { get; set; }

        /// &lt;summary&gt;
        /// A single named results file
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; ResultsFile { get; set; }

        /// &lt;summary&gt;
        /// A directory to store results in (tends not to be used if the ResultsFile is set)
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; ResultsFileRoot { get; set; }

        /// &lt;summary&gt;
        /// The file that details how the tests should be run
        /// &lt;/summary&gt;
        public InArgument&lt;string&gt; TestSettings { get; set; }

        // As this activity returns a value it derives from CodeActivity&lt;TResult&gt;
        // and returns the value from the Execute method.
        protected override ExternalTestRunnerReturnCode Execute(CodeActivityContext context)
        {
            String msTestOutput = string.Empty;
            ExternalTestRunnerReturnCode exitMessage = ExternalTestRunnerReturnCode.NotRun;

            if (CheckFileExists(TestRunnerExecutable.Get(context)) == false)
            {
                LogError(context, string.Format(&quot;TestRunner not found {0}&quot;, TestRunnerExecutable.Get(context)));
            }
            else
            {
                String mstest = MsTestExecutable.Get(context);
                if (CheckFileExists(mstest) == false)
                {
                    mstest = GetDefaultMsTestPath();
                }

                String testrunner = TestRunnerExecutable.Get(context);

                var arguments = new StringBuilder();
                arguments.Append(string.Format(&quot;\&quot;{0}\&quot;&quot;, mstest));
                arguments.Append(&quot; /nologo &quot;);

                // the files to test
                foreach (string name in TestAssemblyNames.Get(context))
                {
                    arguments.Append(AddParameterIfNotNull(&quot;testcontainer&quot;, name));
                }

                // settings about what to test
                arguments.Append(AddParameterIfNotNull(&quot;searchpathroot&quot;, SearchPathRoot.Get(context)));
                arguments.Append(AddParameterIfNotNull(&quot;testSettings&quot;, TestSettings.Get(context)));

                // now the publish bits
                if (string.IsNullOrEmpty(ProjectCollection.Get(context)) == false)
                {
                    arguments.Append(AddParameterIfNotNull(&quot;publish&quot;, ProjectCollection.Get(context)));
                    arguments.Append(AddParameterIfNotNull(&quot;publishbuild&quot;, BuildNumber.Get(context)));
                    arguments.Append(AddParameterIfNotNull(&quot;teamproject&quot;, TeamProjectName.Get(context)));
                    arguments.Append(AddParameterIfNotNull(&quot;platform&quot;, Platform.Get(context)));
                    arguments.Append(AddParameterIfNotNull(&quot;flavor&quot;, Flavor.Get(context)));
                }

                // where the results go, tend to use one of these, not both
                arguments.Append(AddParameterIfNotNull(&quot;resultsfile&quot;, ResultsFile.Get(context)));
                arguments.Append(AddParameterIfNotNull(&quot;resultsfileroot&quot;, ResultsFileRoot.Get(context)));

                LogMessage(context, string.Format(&quot;Call MSTest with wrapper [{0}] and arguments [{1}]&quot;, testrunner, arguments.ToString()), BuildMessageImportance.Normal);

                using (System.Diagnostics.Process process = new System.Diagnostics.Process())
                {
                    process.StartInfo.FileName = testrunner;
                    process.StartInfo.WorkingDirectory = SearchPathRoot.Get(context);
                    process.StartInfo.WindowStyle = ProcessWindowStyle.Normal;
                    process.StartInfo.UseShellExecute = false;
                    process.StartInfo.ErrorDialog = false;
                    process.StartInfo.CreateNoWindow = true;
                    process.StartInfo.RedirectStandardOutput = true;
                    process.StartInfo.Arguments = arguments.ToString();
                    try
                    {
                        process.Start();
                        msTestOutput = process.StandardOutput.ReadToEnd();
                        process.WaitForExit();
                        // For TMockRunner wrapping MSTest the exit code always seems to be 1,
                        // so it does not help tell if the tests passed or not.
                        // In general you can detect test failures by simply checking whether mstest.exe returned 0 or not.
                        // I say in general because there is a known bug where on certain OSes mstest.exe sometimes returns 128 whether
                        // successful or not, so mstest.exe 10.0 added a new command-line option /usestderr which causes it to write
                        // something to standard error on failure.
                        //
                        // If (error data received)
                        //    FAIL
                        // Else If (exit code != 0 AND exit code != 128)
                        //    FAIL
                        // Else If (exit code == 128)
                        //    Write warning about weird error code, but SUCCEED
                        // Else
                        //    SUCCEED

                        //int exitCode = process.ExitCode;
                        LogMessage(context, string.Format(&quot;Output of ExternalTestRunner: {0}&quot;, msTestOutput), BuildMessageImportance.High);
                    }
                    catch (InvalidOperationException ex)
                    {
                        LogError(context, &quot;ExternalTestRunner InvalidOperationException: &quot; + ex.Message);
                    }

                    exitMessage = ParseResultsForSummary(msTestOutput);
                }
            }
            LogMessage(context, string.Format(&quot;ExternalTestRunner exiting with message [{0}]&quot;, exitMessage), BuildMessageImportance.High);
            return exitMessage;
        }

        /// &lt;summary&gt;
        /// Adds a parameter to the MSTest command line; extracted so the IsNullOrEmpty check is done in one place
        /// &lt;/summary&gt;
        /// &lt;param name=&quot;parameterName&quot;&gt;The name of the parameter&lt;/param&gt;
        /// &lt;param name=&quot;value&quot;&gt;The string value&lt;/param&gt;
        /// &lt;returns&gt;If the value is present a formatted block is returned&lt;/returns&gt;
        private static string AddParameterIfNotNull(string parameterName, string value)
        {
            var returnValue = string.Empty;
            if (string.IsNullOrEmpty(value) == false)
            {
                returnValue = string.Format(&quot; /{0}:\&quot;{1}\&quot;&quot;, parameterName, value);
            }
            return returnValue;
        }

        /// &lt;summary&gt;
        /// Checks the output for the success or failure message.
        /// This is a rough way to do it, but is more reliable than the MSTest exit codes.
        /// It returns an enum rather than an exit code so the workflow can branch on the result.
        /// Note this will not work if the /usestderr flag is used
        /// &lt;/summary&gt;
        /// &lt;param name=&quot;output&quot;&gt;The output from the test run&lt;/param&gt;
        /// &lt;returns&gt;A single value summary&lt;/returns&gt;
        private static ExternalTestRunnerReturnCode ParseResultsForSummary(String output)
        {
            ExternalTestRunnerReturnCode exitMessage = ExternalTestRunnerReturnCode.NotRun;
            if (Regex.IsMatch(output, &quot;Test Run Failed&quot;))
            {
                exitMessage = ExternalTestRunnerReturnCode.Failed;
            }
            else if (Regex.IsMatch(output, &quot;Test Run Completed&quot;))
            {
                exitMessage = ExternalTestRunnerReturnCode.Passed;
            }
            else
            {
                exitMessage = ExternalTestRunnerReturnCode.Unknown;
            }

            return exitMessage;
        }

        /// &lt;summary&gt;
        /// Handles finding MSTest, checking both the 32 and 64 bit paths
        /// &lt;/summary&gt;
        /// &lt;returns&gt;The path to MSTest.exe&lt;/returns&gt;
        private static string GetDefaultMsTestPath()
        {
            String mstest = @&quot;C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\mstest.exe&quot;;
            if (CheckFileExists(mstest) == false)
            {
                mstest = @&quot;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe&quot;;
                if (CheckFileExists(mstest) == false)
                {
                    throw new System.IO.FileNotFoundException(&quot;MSTest file cannot be found&quot;);
                }
            }
            return mstest;
        }

        /// &lt;summary&gt;
        /// Helper method so we log in both the team build and debugger modes
        /// &lt;/summary&gt;
        /// &lt;param name=&quot;context&quot;&gt;The workflow context&lt;/param&gt;
        /// &lt;param name=&quot;message&quot;&gt;Our message&lt;/param&gt;
        /// &lt;param name=&quot;logLevel&quot;&gt;Team build importance level&lt;/param&gt;
        private static void LogMessage(CodeActivityContext context, string message, BuildMessageImportance logLevel)
        {
            TrackingExtensions.TrackBuildMessage(context, message, logLevel);
            Debug.WriteLine(message);
        }

        /// &lt;summary&gt;
        /// Helper method so we log errors in both the team build and debugger modes
        /// &lt;/summary&gt;
        /// &lt;param name=&quot;context&quot;&gt;The workflow context&lt;/param&gt;
        /// &lt;param name=&quot;message&quot;&gt;Our message&lt;/param&gt;
        private static void LogError(CodeActivityContext context, string message)
        {
            TrackingExtensions.TrackBuildError(context, message);
            Debug.WriteLine(message);
        }

        /// &lt;summary&gt;
        /// Helper to check a file name to make sure it is not null and that the named file is present
        /// &lt;/summary&gt;
        /// &lt;param name=&quot;fileName&quot;&gt;The file name to check&lt;/param&gt;
        /// &lt;returns&gt;True if the file exists&lt;/returns&gt;
        private static bool CheckFileExists(string fileName)
        {
            return !((string.IsNullOrEmpty(fileName) == true) || (File.Exists(fileName) == false));
        }
    }
}
</code></pre>
<p>This activity does need a good bit of configuring to use it in a real build. However, as said previously, the options it takes are basically those needed for the MSTest activity, so just replacing the existing calls to the MSTest activity, as shown in the graph below, should be enough.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_3744061A.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1728F95D.png" title="image"></a></p>
<p><strong>Note:</strong> The version of the ExternalTestRunner activity in this post does not handle tests based on Metadata parameters (blue box above), but should be OK for all other usages (it is just that these parameters have not been wired through yet). The red box shows the new activity in place (this is the path taken if the tests are controlled by a test settings file) and the green box contains an MSTest activity waiting to be swapped out (this is the path taken if no test settings or metadata files are provided).</p>
<p>The parameters on the activity in the red box are as follows; as said before, they are basically the same as the parameters for the standard MSTest activity.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_1009BCE5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6FEEB027.png" title="image"></a></p>
<p>The Result parameter (the Execute() method return value) does need to be associated with a variable declared in the workflow, in my case <strong>ExternalTestRunnerResult</strong>. This is defined at the sequence scope; the scope it is defined at must be such that it can be read by any other steps in the workflow that require the value. It is declared as being of the enum type <strong>ExternalTestRunnerReturnCode</strong> defined in the custom activity.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_36D7D325.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7DC0F622.png" title="image"></a></p>
<p>Further on in the workflow you need to edit the if statement that branches on whether the tests passed or not to use this <strong>ExternalTestRunnerResult</strong> value.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_44AA1920.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5DA5E965.png" title="image"></a></p>
<p>Once all this is done you should have all your MSTest tests running inside a Typemock’ed wrapper and all the results should be shown correctly in the build summary.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0F9D89F0.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0473FFA6.png" title="image"></a></p>
<p>And the log of the build should show you all the parameters that got passed through to the MSTest program.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_366BA030.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_2B4215E6.png" title="image"></a></p>
<h3 id="is-there-a-better-way-to-test-a-custom-activity-project">Is there a better way to test a custom activity project?</h3>
<p>Whilst sorting out the logic for the custom activity I did not want to have to go through the whole process of running the team build to test the activity; it just took too long. To speed up this process I did the following:</p>
<ol>
<li>In my solution I created a new Console Workflow project</li>
<li>I referenced my custom activity project from this new workflow project</li>
<li>I added my custom activity as the only item in my workflow</li>
<li>For each parameter of the custom activity I created a matching argument for the workflow and wired the two together.<br>
<a href="/wp-content/uploads/sites/2/historic/image_763586B5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6F164A3D.png" title="image"></a></li>
<li>I then created a Test Project that referenced the workflow project and the custom activity project.</li>
<li>In this I could write unit tests (well, more integration tests really) that exercise many of the options in the custom activity. To help in this process I created some simple Test Project assemblies that contained just passing tests, just failing tests and a mixture of both.</li>
<li>A sample test is shown below:
<pre tabindex="0"><code>[TestMethod]
public void RunTestWithTwoNamedAssembly_OnePassingOneFailingTestsNoPublishNoMstestSpecified_FailMessage()
{
    // make sure we have no results file, MSTest fails if the file is present
    File.Delete(Directory.GetCurrentDirectory() + @&quot;\TestResult.trx&quot;);

    var wf = new Workflow1();

    Dictionary&lt;string, object&gt; wfParams = new Dictionary&lt;string, object&gt;
    {
        { &quot;BuildNumber&quot;, string.Empty },
        { &quot;Flavour&quot;, &quot;Debug&quot; },
        { &quot;MsTestExecutable&quot;, string.Empty },
        { &quot;Platform&quot;, &quot;Any CPU&quot; },
        { &quot;ProjectCollection&quot;, string.Empty },
        { &quot;TeamProjectName&quot;, string.Empty },
        { &quot;TestAssemblyNames&quot;, new string[] {
            Directory.GetCurrentDirectory() + @&quot;\TestProjectWithPassingTest.dll&quot;,
            Directory.GetCurrentDirectory() + @&quot;\TestProjectWithfailingTest.dll&quot;
        }},
        { &quot;TestRunnerExecutable&quot;, @&quot;C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe&quot; },
        { &quot;ResultsFile&quot;, &quot;TestResult.trx&quot; }
    };

    var results = WorkflowInvoker.Invoke(wf, wfParams);

    Assert.AreEqual(TypemockBuildActivity.ExternalTestRunnerReturnCode.Failed, results[&quot;ResultSummary&quot;]);
}
</code></pre></li>
<li>The only real limit here is that some of the options (the publishing ones) need a TFS server to be tested. You have to make a choice as to whether this type of publishing test is worth the effort of filling your local TFS server with test runs from the test project, or whether you want to test these features manually in a real build environment, especially given the issues I mention in my past post.</li>
</ol>
<h3 id="summary">Summary</h3>
<p>So I have a working implementation of a custom activity that makes it easy to run Typemock based tests without losing any of the other features of a Team Build. But, as I learnt, getting around the deployment issues can be a real pain.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting sick of seeing the Vodafone error “HTTP Error 403: The service you requested is restricted”</title>
      <link>https://blog.richardfennell.net/posts/getting-sick-of-seeing-the-vodafone-error-http-error-403-the-service-you-requested-is-restricted/</link>
      <pubDate>Mon, 08 Mar 2010 09:26:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-sick-of-seeing-the-vodafone-error-http-error-403-the-service-you-requested-is-restricted/</guid>
      <description>&lt;p&gt;Of late I keep getting ‘HTTP Error 403: The service you requested is restricted’ when I try to use my HTC Diamond2 mobile phone on Vodafone. I see the problem whether browsing the internet with mobile IE or Opera and also when using the phone as a 3G modem  from my Windows 7 laptop. Seems to happen every other day of late.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_7D184995.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_7CAC16A0.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Interestingly, when I got the error shown above whilst trying to browse to Bing, a connection, on the same 3G link, from my PC to our Exchange server was working fine.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Of late I keep getting ‘HTTP Error 403: The service you requested is restricted’ when I try to use my HTC Diamond2 mobile phone on Vodafone. I see the problem whether browsing the internet with mobile IE or Opera and also when using the phone as a 3G modem  from my Windows 7 laptop. Seems to happen every other day of late.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_7D184995.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7CAC16A0.png" title="image"></a></p>
<p>Interestingly, when I got the error shown above whilst trying to browse to Bing, a connection, on the same 3G link, from my PC to our Exchange server was working fine.</p>
<p>This error seems to be a recurring problem judging from posts on the various Vodafone forums. In most cases the problem just seems to go away if you wait long enough, which is my experience. I can find nothing I can do or set on the phone that makes a difference. My guess is a proxy error in the Vodafone cloud. <strong>They really need to get it sorted out.</strong></p>
<p>I have been a happy Vodafone customer for years, since I first got a mobile phone in the early 90s, but this is happening too often and it is seriously making me think of moving. Mobile internet is becoming too important to have as such an unreliable service.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The Teamprise Eclipse plug in for TFS gets a new name</title>
      <link>https://blog.richardfennell.net/posts/the-teamprise-eclipse-plug-in-for-tfs-gets-a-new-name/</link>
      <pubDate>Mon, 08 Mar 2010 09:22:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-teamprise-eclipse-plug-in-for-tfs-gets-a-new-name/</guid>
      <description>&lt;p&gt;As I am sure you remember a &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2009/11/09/microsoft-has-acquired-the-teamprise-client-suite.aspx&#34;&gt;few months ago Microsoft bought Teamprise&lt;/a&gt; and their Java clients for TFS. Well the team has got out their first Microsoft branded release, details can be found on &lt;a href=&#34;http://www.woodwardweb.com/teamprise/whats_in_a_name.html&#34;&gt;Martin Woodward’s&lt;/a&gt; and &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2010/03/04/microsoft-visual-studio-team-explorer-2010.aspx&#34;&gt;Brian Harry’s&lt;/a&gt; blogs. &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?displaylang=en&amp;amp;FamilyID=3c9454e0-523a-4ee1-b436-5c6fc2110b34&#34;&gt;This beta provides the first support for TFS2010&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This release is very timely as I will be talking on the Java integration via the Eclipse plug-in at &lt;a href=&#34;http://qconlondon.com/london-2010/presentation/The&amp;#43;Interoperable&amp;#43;Platform&#34;&gt;QCON next week&lt;/a&gt; and at the &lt;a href=&#34;http://msdn.microsoft.com/en-gb/architecture/ee959240.aspx&#34;&gt;Architect Insight Conference&lt;/a&gt; at the end of the month. This  “Eaglestone” release means I can hopefully do my demos against TFS2010.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As I am sure you remember a <a href="http://blogs.msdn.com/bharry/archive/2009/11/09/microsoft-has-acquired-the-teamprise-client-suite.aspx">few months ago Microsoft bought Teamprise</a> and their Java clients for TFS. Well the team has got out their first Microsoft branded release, details can be found on <a href="http://www.woodwardweb.com/teamprise/whats_in_a_name.html">Martin Woodward’s</a> and <a href="http://blogs.msdn.com/bharry/archive/2010/03/04/microsoft-visual-studio-team-explorer-2010.aspx">Brian Harry’s</a> blogs. <a href="http://www.microsoft.com/downloads/details.aspx?displaylang=en&amp;FamilyID=3c9454e0-523a-4ee1-b436-5c6fc2110b34">This beta provides the first support for TFS2010</a></p>
<p>This release is very timely as I will be talking on the Java integration via the Eclipse plug-in at <a href="http://qconlondon.com/london-2010/presentation/The&#43;Interoperable&#43;Platform">QCON next week</a> and at the <a href="http://msdn.microsoft.com/en-gb/architecture/ee959240.aspx">Architect Insight Conference</a> at the end of the month. This  “Eaglestone” release means I can hopefully do my demos against TFS2010.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Martin Fowler is speaking at Agile Yorkshire this month</title>
      <link>https://blog.richardfennell.net/posts/martin-fowler-is-speaking-at-agile-yorkshire-this-month/</link>
      <pubDate>Tue, 02 Mar 2010 12:56:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/martin-fowler-is-speaking-at-agile-yorkshire-this-month/</guid>
      <description>&lt;p&gt;A slight change to the usually arrangement this month at Agile Yorkshire. The speaker will &lt;a href=&#34;http://martinfowler.com/&#34;&gt;Martin Fowler&lt;/a&gt; who will be talking on “Software Design in the 21st Century”, but the meeting will be on the 17th March (which is a week later than usual) and at a new venue &lt;a href=&#34;http://theadelphi.co.uk/&#34;&gt;The Adelphi in Leeds&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This promises to be a very interesting session. I saw Martin speak at a JavaOne conference, a good few years ago now, and it is still one of the best presentations I have seen at any conference. He is an engaging speaker with much to say about our industry.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A slight change to the usual arrangement this month at Agile Yorkshire. The speaker will be <a href="http://martinfowler.com/">Martin Fowler</a>, who will be talking on “Software Design in the 21st Century”, but the meeting will be on the 17th March (which is a week later than usual) and at a new venue, <a href="http://theadelphi.co.uk/">The Adelphi in Leeds</a>.</p>
<p>This promises to be a very interesting session. I saw Martin speak at a JavaOne conference, a good few years ago now, and it is still one of the best presentations I have seen at any conference. He is an engaging speaker with much to say about our industry.</p>
<p>So I am sad to say I will not be able to make the event, as <a href="http://www.nebytes.net/post/March-Event-e2809cVisual-Studio-2010e2809d-and-e2809cSystem-Center-in-the-R2-Wavee2809d.aspx">I am speaking at NEBytes</a> that night on VS2010. What a shame, and after I had arranged my trip to Newcastle not to clash with the usual Agile Yorkshire night.</p>
<p>Anyway if you are interested in seeing Martin’s session look at <a href="http://www.agileyorkshire.org/">www.agileyorkshire.org</a> for more details and please register on the website if you intend to go along. I am sure you won’t be disappointed.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD Community Events News</title>
      <link>https://blog.richardfennell.net/posts/ddd-community-events-news/</link>
      <pubDate>Tue, 02 Mar 2010 11:33:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-community-events-news/</guid>
      <description>&lt;p&gt;Some DDD event news&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.developerdeveloperdeveloper.com/scotland2010&#34;&gt;Registration has opened for DDD Scotland&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;There is a &lt;a href=&#34;http://dddsouthwest.com/Home/tabid/36/Default.aspx&#34;&gt;call for speakers at DDD South West&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;There is a &lt;a href=&#34;http://www.sqlbits.com/information/SessionSubmission.aspx&#34;&gt;call for speakers at SQLBits&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you have never been to a DDD style event before why not give it a try; they are free and you are sure to learn something. Unlike the DDD events in Reading these ones don’t fill up so fast that you cannot register because you stupidly went to make a cup of tea just before they opened registration and it was full before you got back.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Some DDD event news</p>
<ul>
<li><a href="http://www.developerdeveloperdeveloper.com/scotland2010">Registration has opened for DDD Scotland</a></li>
<li>There is a <a href="http://dddsouthwest.com/Home/tabid/36/Default.aspx">call for speakers at DDD South West</a></li>
<li>There is a <a href="http://www.sqlbits.com/information/SessionSubmission.aspx">call for speakers at SQLBits</a></li>
</ul>
<p>If you have never been to a DDD style event before why not give it a try; they are free and you are sure to learn something. Unlike the DDD events in Reading these ones don’t fill up so fast that you cannot register because you stupidly went to make a cup of tea just before they opened registration and it was full before you got back.</p>
<p>If you have attended one but never spoken, why not give that a go? I am sure you have something to say based on your work experience with .NET that would be interesting to the community. All the DDD events are always looking for new speakers. You can do a 10 minute grok talk over lunchtime (there is a form for this at the Scottish site) or submit a full session for Bristol or SQLBits. Go on, it might be fun!</p>
]]></content:encoded>
    </item>
    <item>
      <title>The importance of using parameters in vs2010 build workflows</title>
      <link>https://blog.richardfennell.net/posts/the-importance-of-using-parameters-in-vs2010-build-workflows/</link>
      <pubDate>Tue, 02 Mar 2010 11:13:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-importance-of-using-parameters-in-vs2010-build-workflows/</guid>
      <description>&lt;p&gt;I have been doing some more work &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/22/running-typemock-isolator-based-tests-in-tfs-2010-team-build.aspx&#34;&gt;integrating Typemock and VS 2010 Team Build&lt;/a&gt;. I have just wasted a good few hours wondering why my test results are not being published.&lt;/p&gt;
&lt;p&gt;If I looked at the build log I saw my tests ran (and passed or failed as expected) and then were published without error.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_71013D7B.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_5079FDC9.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;But when I checked the build summary it said there were no tests associated with the build, reporting “No Test Results”&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been doing some more work <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/22/running-typemock-isolator-based-tests-in-tfs-2010-team-build.aspx">integrating Typemock and VS 2010 Team Build</a>. I have just wasted a good few hours wondering why my test results are not being published.</p>
<p>If I looked at the build log I saw my tests ran (and passed or failed as expected) and then were published without error.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_71013D7B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5079FDC9.png" title="image"></a></p>
<p>But when I checked the build summary it said there were no tests associated with the build, reporting “No Test Results”</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_1043E44F.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_500DCAD4.png" title="image"></a></p>
<p>This was strange as it had been working in the past. After much fiddling around I found the problem; it was twofold:</p>
<ul>
<li>The main problem was that in my InvokeMethod call to run Typemock/MSTest I had hard coded the Platform: and Flavor: values. This meant that, irrespective of the build I asked for, I published my test results to the <strong>Any CPU|Debug</strong> configuration. MSTest lets you do this, even if no build of that configuration exists at the time.</li>
</ul>
<blockquote>
<p>My InvokeMethod argument parameter should have been something like:</p>
<p><code>"""C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe"" /nologo /testcontainer:""" + String.Format("{0}\Binaries\BusinessLogic.Tests.dll", BuildDirectory) + """ /publish:""http://typhoon:8080/tfs/DefaultCollection"" /publishbuild:""" + BuildDetail.Uri.ToString() + """ /teamproject:""" + BuildDetail.TeamProject + """ /platform:""" + platformConfiguration.Platform + """ /flavor:""" + platformConfiguration.Configuration + """ /resultsfile:""" + String.Format("{0}\Binaries\Test.Trx", BuildDirectory) + """ "</code></p>
</blockquote>
<ul>
<li>The second issue was that I had failed, on at least one of my test build definitions, to set the <strong>Configurations to Build</strong> setting. This meant the build defaulted to <strong>Mixed Platforms|Debug</strong> (hence not matching my hard coded <strong>Any CPU|Debug</strong> configuration). Interesting to note here is that the parameters used above (platformConfiguration.Configuration and platformConfiguration.Platform) are both empty if the <strong>Configurations to Build</strong> setting is not set; MSBuild is the activity that chooses the defaults, not the workflow. So in effect you must always set these values for your build, or you will need to handle these empty strings in the workflow if you don’t want MSTest to fail saying the Platform and Flavour parameters are empty. It seems to me that explicitly setting them is good practice anyway.</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_6FBCA49C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3AB0156C.png" title="image"></a></p>
<p>So the technical tip here is to make sure that you correctly use all the parameters associated with a workflow in activities. You cannot trust an activity to give an error or warning if you pass it strange values.</p>
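<p>To make that concrete, here is a small illustrative C# sketch (not the actual build activity code; the variable handling is mine) of guarding the workflow parameters before building the MSTest command line, so an unset <strong>Configurations to Build</strong> fails fast rather than silently publishing to the wrong configuration:</p>
<pre><code>// Hedged sketch: validate the platform/flavour values coming from the
// standard Team Build 'platformConfiguration' workflow variable before use.
string platform = platformConfiguration.Platform;
string flavour = platformConfiguration.Configuration;

if (String.IsNullOrEmpty(platform) || String.IsNullOrEmpty(flavour))
{
    // Fail the build with a clear message instead of letting MSTest
    // publish results against a configuration that does not exist.
    throw new ArgumentException(
        "Configurations to Build is not set on this build definition");
}

string arguments = String.Format(
    "/nologo /platform:\"{0}\" /flavor:\"{1}\"",
    platform,
    flavour);
</code></pre>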
]]></content:encoded>
    </item>
    <item>
      <title>Do you use a Symbol Server?</title>
      <link>https://blog.richardfennell.net/posts/do-you-use-a-symbol-server/</link>
      <pubDate>Mon, 01 Mar 2010 11:56:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/do-you-use-a-symbol-server/</guid>
      <description>&lt;p&gt;I find one of the most often overlooked new features of 2010 is the Symbol Server. This is a file share where the .PDB symbol files are stored for any given build (generated by a build server, &lt;a href=&#34;http://blogs.msdn.com/jimlamb/archive/2009/06/15/enabling-symbol-and-source-server-support-in-tfs-build-2010-beta-1.aspx&#34;&gt;see Jim Lamb’s post on the setup&lt;/a&gt;). If you look on the symbol server share you will see directories for each built assembly with a GUID named subdirectory containing the PDB files for each unique build.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I find one of the most often overlooked new features of 2010 is the Symbol Server. This is a file share where the .PDB symbol files are stored for any given build (generated by a build server, <a href="http://blogs.msdn.com/jimlamb/archive/2009/06/15/enabling-symbol-and-source-server-support-in-tfs-build-2010-beta-1.aspx">see Jim Lamb’s post on the setup</a>). If you look on the symbol server share you will see directories for each built assembly with a GUID named subdirectory containing the PDB files for each unique build.</p>
<p>So what is this Symbol Server used for? Well you can use the Symbol Server to enable debug features such as <a href="http://msdn.microsoft.com/en-us/library/dd264915%28VS.100%29.aspx">Intellitrace</a>, vital if you are using Lab Manager. In effect this means that when viewing an Intellitrace log Visual Studio is able to go to the Symbol Server to get the correct .PDB file for the assemblies being run, even if the source is not available, thus allowing you to step through the code. It can also be used for remote debugging of ASP.NET servers.</p>
<p>A bonus is that you can debug release code, as long as you produced .PDB symbols and placed them on the <a href="http://msdn.microsoft.com/en-us/magazine/cc301459.aspx">Symbol Server when you built the release</a> (by altering the advanced build properties shown below).</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2A5CD5C2.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_03228C8D.png" title="image"></a></p>
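<p>For reference, the same setting can also be made directly in the project file; a minimal sketch of the relevant .csproj fragment (assuming the usual C# project defaults) looks like this:</p>
<pre><code>&lt;!-- Generate symbols for Release builds so they can be pushed to the
     Symbol Server; 'pdbonly' gives PDBs without full Debug behaviour. --&gt;
&lt;PropertyGroup Condition="'$(Configuration)' == 'Release'"&gt;
  &lt;DebugSymbols&gt;true&lt;/DebugSymbols&gt;
  &lt;DebugType&gt;pdbonly&lt;/DebugType&gt;
&lt;/PropertyGroup&gt;
</code></pre>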
<p>Key to remember here is that the client PC that generates the Intellitrace file does not need access to the PDB files; only the PC handling the debugging process needs to be able to access the symbols. Perfect for release code scenarios.</p>
<p>This ability to debug into code that you don’t have the source for extends to <a href="http://msdn.microsoft.com/en-us/library/cc667410.aspx">debugging into Microsoft .NET framework code</a>. Microsoft have made public a Symbol Server for just this purpose. To use it you have to enable it using the Tools &gt; Options &gt; Debugging &gt; Symbols dialog.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4A0BAF8A.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_29846FD8.png" title="image"></a></p>
<p>All this should make debugging that hard to track problem just that bit easier.</p>
]]></content:encoded>
    </item>
    <item>
      <title>This year’s DDD South West announced</title>
      <link>https://blog.richardfennell.net/posts/this-years-ddd-south-west-announced/</link>
      <pubDate>Mon, 01 Mar 2010 09:34:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/this-years-ddd-south-west-announced/</guid>
      <description>&lt;p&gt;This year’s DDD South West has been announced for the 5th of June in Bristol, for details see the &lt;a href=&#34;http://dddsouthwest.com/Home/tabid/36/Default.aspx&#34;&gt;dddsouthwest.com site&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWest2BadgeSmall1_790B523F.png&#34;&gt;&lt;img alt=&#34;DDDSouthWest2BadgeSmall[1]&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWest2BadgeSmall1_thumb_79EFC9F7.png&#34; title=&#34;DDDSouthWest2BadgeSmall[1]&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This year’s DDD South West has been announced for the 5th of June in Bristol, for details see the <a href="http://dddsouthwest.com/Home/tabid/36/Default.aspx">dddsouthwest.com site</a></p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWest2BadgeSmall1_790B523F.png"><img alt="DDDSouthWest2BadgeSmall[1]" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWest2BadgeSmall1_thumb_79EFC9F7.png" title="DDDSouthWest2BadgeSmall[1]"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at QCon on TFS and Java Integration</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-qcon-on-tfs-and-java-integration/</link>
      <pubDate>Fri, 26 Feb 2010 17:11:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-qcon-on-tfs-and-java-integration/</guid>
      <description>&lt;p&gt;Week after next I will be speaking at &lt;a href=&#34;http://qconlondon.com/&#34;&gt;QCon London&lt;/a&gt; with &lt;a href=&#34;http://qconlondon.com/london-2010/presentation/The&amp;#43;Interoperable&amp;#43;Platform&#34;&gt;Simon Thurman of Microsoft on “The Interoperable Platform”&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So what does that title mean? Well for me, for this session, it will be about how you can use the ALM features of TFS even when using Eclipse for Java development. So it will be a demo led session on the &lt;a href=&#34;http://www.teamprise.com/products/plugin/&#34;&gt;Teamprise tools for Eclipse&lt;/a&gt; and how they can allow you to build a unified development team that works in both .NET and Java.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Week after next I will be speaking at <a href="http://qconlondon.com/">QCon London</a> with <a href="http://qconlondon.com/london-2010/presentation/The&#43;Interoperable&#43;Platform">Simon Thurman of Microsoft on “The Interoperable Platform”</a>.</p>
<p>So what does that title mean? Well for me, for this session, it will be about how you can use the ALM features of TFS even when using Eclipse for Java development. So it will be a demo led session on the <a href="http://www.teamprise.com/products/plugin/">Teamprise tools for Eclipse</a> and how they can allow you to build a unified development team that works in both .NET and Java.</p>
<p>Should be an interesting event, the list of speakers looks great. Shame I will only be there for a day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Logging results from InvokeProcess in a VS2010 Team Build</title>
      <link>https://blog.richardfennell.net/posts/logging-results-from-invokeprocess-in-a-vs2010-team-build/</link>
      <pubDate>Tue, 23 Feb 2010 09:48:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/logging-results-from-invokeprocess-in-a-vs2010-team-build/</guid>
      <description>&lt;p&gt;When you use the InvokeProcess activity, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/22/running-typemock-isolator-based-tests-in-tfs-2010-team-build.aspx&#34;&gt;as I did in my Typemock post&lt;/a&gt;, you really need to set up the logging. This is because by default nothing will be logged other than the command line invoked, not usually the best option. There are a couple of gotchas here that initially caused me a problem and I suspect could cause a new user of the 2010 build process a problem too.&lt;/p&gt;
&lt;p&gt;The first is that you need to declare the variable names for the InvokeProcess to drop the output and errors into. This is done in the workflow designer putting the variable names in the relevant textboxes (there is no need to declare the variable names anywhere else) as shown below. Use any name you fancy, I used &lt;em&gt;stdOutput&lt;/em&gt; and &lt;em&gt;stdError&lt;/em&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you use the InvokeProcess activity, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/22/running-typemock-isolator-based-tests-in-tfs-2010-team-build.aspx">as I did in my Typemock post</a>, you really need to set up the logging. This is because by default nothing will be logged other than the command line invoked, not usually the best option. There are a couple of gotchas here that initially caused me a problem and I suspect could cause a new user of the 2010 build process a problem too.</p>
<p>The first is that you need to declare the variable names for the InvokeProcess to drop the output and errors into. This is done in the workflow designer putting the variable names in the relevant textboxes (there is no need to declare the variable names anywhere else) as shown below. Use any name you fancy, I used <em>stdOutput</em> and <em>stdError</em>.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2C547234.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_72D1623C.png" title="image"></a></p>
<p>You then need to add the WriteBuildMessage and WriteBuildError activities by dragging them from the toolbox into the handler areas of the InvokeProcess activity.</p>
<p>The second gotcha is that the WriteBuildMessage takes a logging level parameter. This defaults to Normal, which means the message will not be displayed in the standard build view (unless the build’s detail level is altered). To get around this, as I would normally want to see the output of the process being invoked, I would set the Importance of the message to High. Remember you also need to set the Message parameter to the previously declared variable name, in my case <em>stdOutput</em>. This is done in the properties window as shown below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4B971907.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0B60FF8D.png" title="image"></a></p>
<p>Note that you don’t need to set an importance on the WriteBuildError activity as this is automatically always displayed, you just need to set the Message parameter to <em>stdError.</em></p>
<p>Once you make these changes and run the build, you see the output of the command line (green) in the build log as well as the command line (red). This should help with debugging your InvokeProcess activities in your build process.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_6E37299D.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6717ED25.png" title="image"></a></p>
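<p>For anyone composing the build workflow in code rather than the designer, the same wiring looks roughly like the sketch below. Treat this as illustrative only: I have only verified the designer route, and the property names are as I recall them from the TFS 2010 workflow activity assemblies.</p>
<pre><code>// Hedged sketch: attach stdOutput/stdError handlers to InvokeProcess in code.
var stdOutput = new DelegateInArgument&lt;string&gt;("stdOutput");
var stdError = new DelegateInArgument&lt;string&gt;("stdError");

var invoke = new InvokeProcess
{
    FileName = "MSTest.exe", // illustrative command only
    OutputDataReceived = new ActivityAction&lt;string&gt;
    {
        Argument = stdOutput,
        // Importance High so the output shows at the default build detail level
        Handler = new WriteBuildMessage
        {
            Importance = BuildMessageImportance.High,
            Message = new InArgument&lt;string&gt;(stdOutput)
        }
    },
    ErrorDataReceived = new ActivityAction&lt;string&gt;
    {
        Argument = stdError,
        // Errors are always displayed, so no Importance is needed
        Handler = new WriteBuildError
        {
            Message = new InArgument&lt;string&gt;(stdError)
        }
    }
};
</code></pre>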
]]></content:encoded>
    </item>
    <item>
      <title>MTLM becomes MTM</title>
      <link>https://blog.richardfennell.net/posts/mtlm-becomes-mtm/</link>
      <pubDate>Mon, 22 Feb 2010 15:00:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mtlm-becomes-mtm/</guid>
      <description>&lt;p&gt;You may have noticed that Microsoft have had another burst of renaming. The tester’s tool in VS2010 started with the codename of Camaro during the CTP phase; this became Microsoft Test &amp;amp; Lab Manager (MTLM) in Beta 1 and 2, and now in the RC it is called Microsoft Test Manager (MTM).&lt;/p&gt;
&lt;p&gt;Other than me constantly referring to things by the wrong name, the main effect of this is to make searching on the Internet a bit awkward, you have to try all three names to get good coverage. In my small corner of the Internet, I will try to help by updating my existing MTLM tag to MTM and update the description appropriately.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You may have noticed that Microsoft have had another burst of renaming. The tester’s tool in VS2010 started with the codename of Camaro during the CTP phase; this became Microsoft Test &amp; Lab Manager (MTLM) in Beta 1 and 2, and now in the RC it is called Microsoft Test Manager (MTM).</p>
<p>Other than me constantly referring to things by the wrong name, the main effect of this is to make searching on the Internet a bit awkward, you have to try all three names to get good coverage. In my small corner of the Internet, I will try to help by updating my existing MTLM tag to MTM and update the description appropriately.</p>
]]></content:encoded>
    </item>
    <item>
      <title>So where have I been all week?</title>
      <link>https://blog.richardfennell.net/posts/so-where-have-i-been-all-week/</link>
      <pubDate>Mon, 22 Feb 2010 10:28:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/so-where-have-i-been-all-week/</guid>
      <description>&lt;p&gt;A bit of a double question here; physically I have been at the MVP Summit in Redmond, having a great time with my fellow “Team System” MVPs and the Microsoft product group members.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.flickr.com/photos/bgervin/4368620611/&#34;&gt;&lt;img alt=&#34;4368620611_d1ce34e06a&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/4368620611_d1ce34e06a_06C7D0C1.jpg&#34; title=&#34;4368620611_d1ce34e06a&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;But my blog has also been on and off all week, so I guess you could say my online presence has been away. This is because &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=News&amp;amp;title=Black%20Marble%20has%20relocated%20to%20Woodland%20Park&#34;&gt;Black Marble has moved office&lt;/a&gt; and our blog server has had intermittent connectivity, which hopefully should be resolved soon.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A bit of a double question here; physically I have been at the MVP Summit in Redmond, having a great time with my fellow “Team System” MVPs and the Microsoft product group members.</p>
<p><a href="http://www.flickr.com/photos/bgervin/4368620611/"><img alt="4368620611_d1ce34e06a" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/4368620611_d1ce34e06a_06C7D0C1.jpg" title="4368620611_d1ce34e06a"></a></p>
<p>But my blog has also been on and off all week, so I guess you could say my online presence has been away. This is because <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=News&amp;title=Black%20Marble%20has%20relocated%20to%20Woodland%20Park">Black Marble has moved office</a> and our blog server has had intermittent connectivity, which hopefully should be resolved soon.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking in Edinburgh on VS2010 and ALM</title>
      <link>https://blog.richardfennell.net/posts/speaking-in-edinburgh-on-vs2010-and-alm/</link>
      <pubDate>Mon, 22 Feb 2010 10:27:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-in-edinburgh-on-vs2010-and-alm/</guid>
      <description>&lt;p&gt;I will be joining &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/iangus/default.aspx&#34;&gt;Iain Angus (Black Marble)&lt;/a&gt; and &lt;a href=&#34;http://blogs.msdn.com/ukvsts/&#34;&gt;Giles Davies (Microsoft)&lt;/a&gt; this week at &lt;a href=&#34;http://www.bing.com/maps/?v=2&amp;amp;cid=164AD0BC4754740A!148&#34;&gt;Microsoft’s Offices in Edinburgh&lt;/a&gt; to &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Microsoft%20and%20Black%20Marble%20present%20Visual%20Studio%202010%20and%20Managing%20the%20Application%20Lifecycle%20with%20Team%20Foundation%20Server&#34;&gt;present on Application Life Cycle Management&lt;/a&gt;. We will be looking at how VS2010 can help project managers, architects, developers and testers to build better solutions.&lt;/p&gt;
&lt;p&gt;There are &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Microsoft%20and%20Black%20Marble%20present%20Visual%20Studio%202010%20and%20Managing%20the%20Application%20Lifecycle%20with%20Team%20Foundation%20Server&#34;&gt;still a few places left&lt;/a&gt;, I hope to see you there if you are in the area.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be joining <a href="http://blogs.blackmarble.co.uk/blogs/iangus/default.aspx">Iain Angus (Black Marble)</a> and <a href="http://blogs.msdn.com/ukvsts/">Giles Davies (Microsoft)</a> this week at <a href="http://www.bing.com/maps/?v=2&amp;cid=164AD0BC4754740A!148">Microsoft’s Offices in Edinburgh</a> to <a href="http://www.blackmarble.co.uk/events.aspx?event=Microsoft%20and%20Black%20Marble%20present%20Visual%20Studio%202010%20and%20Managing%20the%20Application%20Lifecycle%20with%20Team%20Foundation%20Server">present on Application Life Cycle Management</a>. We will be looking at how VS2010 can help project managers, architects, developers and testers to build better solutions.</p>
<p>There are <a href="http://www.blackmarble.co.uk/events.aspx?event=Microsoft%20and%20Black%20Marble%20present%20Visual%20Studio%202010%20and%20Managing%20the%20Application%20Lifecycle%20with%20Team%20Foundation%20Server">still a few places left</a>, so I hope to see you there if you are in the area.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New ‘ALTdotNet Beers North’ group starting</title>
      <link>https://blog.richardfennell.net/posts/new-altdotnet-beers-north-group-starting/</link>
      <pubDate>Thu, 11 Feb 2010 16:24:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-altdotnet-beers-north-group-starting/</guid>
      <description>&lt;p&gt;If you look on &lt;a href=&#34;http://groups.google.com/group/altnet-beers-north?hl=en&#34;&gt;Google Groups&lt;/a&gt; you will find the start of a thread trying to organise an &lt;a href=&#34;http://www.altnetpedia.com/Default.aspx?Page=AltNetBeers&amp;amp;AspxAutoDetectCookieSupport=1&#34;&gt;AltNet Beers session&lt;/a&gt; in Leeds, along the same lines as the London, Bristol etc. events. There appears to be no date set as yet, but keep an eye open; they are a great way to meet like-minded people.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you look on <a href="http://groups.google.com/group/altnet-beers-north?hl=en">Google Groups</a> you will find the start of a thread trying to organise an <a href="http://www.altnetpedia.com/Default.aspx?Page=AltNetBeers&amp;AspxAutoDetectCookieSupport=1">AltNet Beers session</a> in Leeds, along the same lines as the London, Bristol etc. events. There appears to be no date set as yet, but keep an eye open; they are a great way to meet like-minded people.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Tonight is Agile Yorkshire</title>
      <link>https://blog.richardfennell.net/posts/tonight-is-agile-yorkshire/</link>
      <pubDate>Wed, 10 Feb 2010 16:39:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tonight-is-agile-yorkshire/</guid>
      <description>&lt;p&gt;A late reminder, but tonight is the &lt;a href=&#34;http://www.agileyorkshire.org/event-announcements/10Feb2010&#34;&gt;monthly Agile Yorkshire meeting&lt;/a&gt;. This month is an open floor meeting with short presentations from members. Currently the planned subjects are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;REST and OpenRasta&lt;/li&gt;
&lt;li&gt;Silverlight&lt;/li&gt;
&lt;li&gt;F#&lt;/li&gt;
&lt;li&gt;Thoughts on Test Driven Development practices&lt;/li&gt;
&lt;li&gt;Behaviour Driven Development&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hope to see you there, usual place usual time (Victoria Hotel, 28 Great George St, Leeds. See &lt;a href=&#34;http://maps.google.co.uk/maps?q=28%20Great%20George%20St%2C%20Leeds&#34;&gt;here&lt;/a&gt; for directions, 7pm)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A late reminder, but tonight is the <a href="http://www.agileyorkshire.org/event-announcements/10Feb2010">monthly Agile Yorkshire meeting</a>. This month is an open floor meeting with short presentations from members. Currently the planned subjects are:</p>
<ul>
<li>REST and OpenRasta</li>
<li>Silverlight</li>
<li>F#</li>
<li>Thoughts on Test Driven Development practices</li>
<li>Behaviour Driven Development</li>
</ul>
<p>Hope to see you there, usual place usual time (Victoria Hotel, 28 Great George St, Leeds. See <a href="http://maps.google.co.uk/maps?q=28%20Great%20George%20St%2C%20Leeds">here</a> for directions, 7pm)</p>
]]></content:encoded>
    </item>
    <item>
      <title>Is Pex and Moles the answer to SharePoint testing?</title>
      <link>https://blog.richardfennell.net/posts/is-pex-and-moles-the-answer-to-sharepoint-testing/</link>
      <pubDate>Tue, 09 Feb 2010 23:08:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/is-pex-and-moles-the-answer-to-sharepoint-testing/</guid>
      <description>&lt;p&gt;I have got round to watching &lt;a href=&#34;http://channel9.msdn.com/tags/Peli-de-Halleux/&#34;&gt;Peli de Halleux&lt;/a&gt;’s presentation &lt;a href=&#34;http://channel9.msdn.com/posts/matthijs/Pex-Unit-Testing-of-SharePoint-Services-that-Rocks/&#34;&gt;on testing SharePoint with Moles&lt;/a&gt; from the &lt;a href=&#34;http://www.devconnections.com/shows/NED2010SP/default.asp?s=149&#34;&gt;SharePoint Connections 2010 event in Amsterdam&lt;/a&gt;; it is very interesting. It brings a whole new set of tools to the testing of SharePoint. I think it is best to view the subject of this presentation in two parts, &lt;a href=&#34;http://research.microsoft.com/en-us/projects/pex/default.aspx&#34;&gt;Pex and Moles&lt;/a&gt;, even though they are from the same stable; Moles was produced to enable Pex. But rather than me explaining how it all works, just watch the video.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have got round to watching <a href="http://channel9.msdn.com/tags/Peli-de-Halleux/">Peli de Halleux</a>’s presentation <a href="http://channel9.msdn.com/posts/matthijs/Pex-Unit-Testing-of-SharePoint-Services-that-Rocks/">on testing SharePoint with Moles</a> from the <a href="http://www.devconnections.com/shows/NED2010SP/default.asp?s=149">SharePoint Connections 2010 event in Amsterdam</a>; it is very interesting. It brings a whole new set of tools to the testing of SharePoint. I think it is best to view the subject of this presentation in two parts, <a href="http://research.microsoft.com/en-us/projects/pex/default.aspx">Pex and Moles</a>, even though they are from the same stable; Moles was produced to enable Pex. But rather than me explaining how it all works, just watch the video.</p>
<p>So to my thoughts. The easier bit to consider is Pex. If you can express your unit tests in a parameterised manner then this is a great tool for you. The example that Peli gives of an event handler that parses a string is a good one; we all have loads of places where this type of testing is needed, especially in SharePoint. The problem here, as he points out, is that you need to use some form of mocking framework to allow the easy execution of these tests for both developers and automated build servers. <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/03/testing-sharepoint-workflows-using-typemock-isolator.aspx">I would usually use Typemock Isolator to provide this mocking</a>; the problem is that Pex and Isolator cannot currently run together. The Pex Explorer does not start the Typemock mocking interceptor, and thus far I cannot find a way to get round the problem.</p>
<p>So enter Moles, Microsoft Research’s stubbing framework that can ‘<em>detour any .NET method to user-defined delegates, e.g., replace any call to the SharePoint Object Model by a user-defined delegate</em>’. Now I find the Moles syntax a bit strange. I suspect it is down to my past experience, but I still find the Typemock Isolator AAA syntax easier to read than Moles’. However, there are some nice wrapper classes provided to make it easier to use the Moles framework with SharePoint.</p>
<p>So where does this leave us? At this time if you want to use Pex (and I certainly would like to) you have to use Moles (if you need stubbing). But you also have to remember that Pex &amp; Moles are research projects. They are available for commercial evaluation, but at this time there seem to be no plans to productise them or roll them into Visual Studio, which means in effect no support. I don’t see either of these points as being a major barrier, as long as you make the choice to accept them knowingly.</p>
<p>However, for ultimate flexibility it would be really nice to see Typemock Isolator become interoperable with Pex, thus allowing me to use the new techniques of Pex against legacy tests already written using Isolator.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Errors Faking Sharepoint with Typemock due to assembly versions</title>
      <link>https://blog.richardfennell.net/posts/errors-faking-sharepoint-with-typemock-due-to-assembly-versions/</link>
      <pubDate>Tue, 09 Feb 2010 11:43:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/errors-faking-sharepoint-with-typemock-due-to-assembly-versions/</guid>
      <description>&lt;p&gt;I was doing some work today where I needed to fake out an SPWeb object. No problem you think; I am using Typemock Isolator, so I just use the line&lt;/p&gt;
&lt;p&gt;&lt;em&gt;var fakeWeb = Isolate.Fake.Instance&lt;SPWeb&gt;();&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;But I got the error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Microsoft.SharePoint.SPWeb.SPWebConstructor(SPSite site, String url, Boolean bExactWebUrl, SPRequest request)&lt;br&gt;
Microsoft.SharePoint.SPWeb..ctor(SPSite, site, String url)&lt;br&gt;
eo.CreateFakeInstance[T](Members behavior, Constructor constructorFlag, Constructor baseConstructorFlag, Type baseType, Object[] ctorArgs)&lt;br&gt;
eo.Instance[T](Members behavior)&lt;br&gt;
(Points to the SPWeb web line as source of error)&lt;br&gt;
TypeMock.MockManager.a(String A_0, String A_1, Object A_2, Object A_3, Boolean A_4, Object[] A_5)&lt;br&gt;
TypeMock.InternalMockManager.getReturn(Object that, String typeName, String methodNAme, Object methodParameters, Boolean isInjected)&lt;br&gt;
(Points to line 0 of my test class)&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was doing some work today where I needed to fake out an SPWeb object. No problem you think; I am using Typemock Isolator, so I just use the line</p>
<p><em>var fakeWeb = Isolate.Fake.Instance&lt;SPWeb&gt;();</em></p>
<p>But I got the error</p>
<p><em>Microsoft.SharePoint.SPWeb.SPWebConstructor(SPSite site, String url, Boolean bExactWebUrl, SPRequest request)<br>
Microsoft.SharePoint.SPWeb..ctor(SPSite, site, String url)<br>
eo.CreateFakeInstance[T](Members behavior, Constructor constructorFlag, Constructor baseConstructorFlag, Type baseType, Object[] ctorArgs)<br>
eo.Instance[T](Members behavior)<br>
(Points to the SPWeb web line as source of error)<br>
TypeMock.MockManager.a(String A_0, String A_1, Object A_2, Object A_3, Boolean A_4, Object[] A_5)<br>
TypeMock.InternalMockManager.getReturn(Object that, String typeName, String methodNAme, Object methodParameters, Boolean isInjected)<br>
(Points to line 0 of my test class)</em></p>
<p>This seemed strange; I was doing nothing clever, and it was something I have done many times before. It turned out the issue was the version of the Typemock assemblies I was referencing. I had referenced the 5.4 versions; once I repointed to the new 6.0 ones (or, I suspect, the older 5.3 ones would also work) all was OK.</p>
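<p>For anyone hitting the same stack trace, the quickest thing to check is exactly which Typemock assembly your test project references. A sketch of the relevant reference in the test project file is below; the version number and public key token shown are illustrative and must match the Isolator version actually installed on the machine:</p>

```
<!-- Reference to Typemock Isolator in the test project (.csproj).
     The Version and PublicKeyToken values are illustrative; if they do not
     match the installed Isolator version, faking SharePoint types can fail
     with constructor errors like the one above. -->
<Reference Include="TypeMock, Version=6.0.0.0, Culture=neutral, PublicKeyToken=3dae460033b8d8e2">
  <SpecificVersion>True</SpecificVersion>
</Reference>
```

<p>Setting <em>SpecificVersion</em> to True makes the build fail fast with a clear missing-assembly error, rather than silently binding to whatever version happens to resolve.</p>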
]]></content:encoded>
    </item>
    <item>
      <title>At last, my creature it lives……..</title>
      <link>https://blog.richardfennell.net/posts/at-last-my-creature-it-lives/</link>
      <pubDate>Fri, 05 Feb 2010 11:32:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/at-last-my-creature-it-lives/</guid>
      <description>&lt;p&gt;I have at last worked all the way through setting up my portable end to end demo of &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/27/so-you-want-to-demo-vs2010-lab-manager.aspx&#34;&gt;testing using Windows Test and Lab Manager&lt;/a&gt;. The last error I had to resolve was the tests not running in the lab environment (though working locally on the development PC). My Lab Workflow build was recorded as a partial success, i.e. it built and deployed, but all the tests failed.&lt;/p&gt;
&lt;p&gt;I have not found a way to see the detail of why the tests failed in VS2010 Build Explorer. However, if you:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have at last worked all the way through setting up my portable end to end demo of <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/27/so-you-want-to-demo-vs2010-lab-manager.aspx">testing using Windows Test and Lab Manager</a>. The last error I had to resolve was the tests not running in the lab environment (though working locally on the development PC). My Lab Workflow build was recorded as a partial success, i.e. it built and deployed, but all the tests failed.</p>
<p>I have not found a way to see the detail of why the tests failed in VS2010 Build Explorer. However, if you:</p>
<ol>
<li>Go into MTLM,</li>
<li>Pick Testing Center</li>
<li>Select the Test Tab</li>
<li>Pick the Analyze Test Results link</li>
<li>Pick the test run you want view</li>
<li>The last item in the summary is the error message; as you can see, in my case it was that the whole run failed, not any of the individual tests themselves</li>
</ol>
<p><a href="/wp-content/uploads/sites/2/historic/image_2FDAD505.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_7B0A0762.png" title="image"></a></p>
<p>So my error was “Build directory of the test run is not specified or does not exist”. <a href="http://blogs.msdn.com/aseemb/archive/2009/11/25/error-starting-the-test-run-build-directory-of-the-test-run-is-not-specified-or-does-not-exist.aspx">This was caused because the Test Controller (for me running as <strong>Network Service</strong>) could not see the contents of the drop directory.</a> The drop directory is where the test automation assemblies are published as part of the build. Once I gave <strong>Network Service</strong> read rights to access the <strong>\TFS2010Drops</strong> share my tests, and hence my build, ran to completion.</p>
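<p>For anyone hitting the same “Build directory of the test run is not specified or does not exist” error, the fix amounts to granting the account running the Test Controller read access at both the NTFS and the share level. Commands along these lines should do it on the file server hosting the drop (the local path and share name here are illustrative, and note <em>net share</em> only sets share permissions when the share is first created):</p>

```
REM Grant NETWORK SERVICE read/execute access to the drop folder and,
REM via the (OI)(CI) inheritance flags, to everything below it
icacls "C:\TFS2010Drops" /grant "NETWORK SERVICE:(OI)(CI)RX"

REM When creating the share, grant read access at the share level too
net share TFS2010Drops=C:\TFS2010Drops /grant:"NETWORK SERVICE",READ
```

<p>Remember the effective permission is the more restrictive of the NTFS and share permissions, which is why fixing only one of the two may still leave the test run failing.</p>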
<p>It has been an interesting journey to get this system up and running. MTLM when you initially look at it is very daunting; you have to get a lot of <a href="http://www.wisegeek.com/where-did-the-term-get-your-ducks-in-a-row-come-from.htm">ducks in a row</a> and there are many pitfalls on the way. If any part fails then nothing works; it feels like a bit of a house of cards. However, if you work through it step by step I think you will come to see that the underlying architecture of how it hangs together is not as hard to understand as it initially seems. It is complex and has to be done right, but you can at least see why things need to be done. Much of this perceived complexity, for me as a developer, is that I had to set up a number of ITPro products I am just not that familiar with, such as SCOM and Hyper-V Manager. Maybe the answer is to make your evaluation of this product a joint Dev/ITPro project so you both learn.</p>
<p>I would say that getting the first build going (and hence the underlying infrastructure) seems to be the worst part. I feel that now I have a platform I understand reasonably well, producing different builds will not be too bad. I suspect the next raft of complexity will appear when I need a radically different test VM (or worse still a network of VMs) to deploy and test against.</p>
<p>So my recommendation to anyone who is interested in this product is to get your hands dirty; you are not going to understand it by reading or watching videos, you need to build one. So find some hardware, lots of hardware!</p>
]]></content:encoded>
    </item>
    <item>
      <title>ASPNETCOMPILER: error 1003 with TFS2010 Team build</title>
      <link>https://blog.richardfennell.net/posts/aspnetcompiler-error-1003-with-tfs2010-team-build/</link>
      <pubDate>Thu, 04 Feb 2010 14:56:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/aspnetcompiler-error-1003-with-tfs2010-team-build/</guid>
      <description>&lt;p&gt;I have been looking at &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/27/so-you-want-to-demo-vs2010-lab-manager.aspx&#34;&gt;TFS 2010 Lab Manager recently&lt;/a&gt;. One problem I had was that, using the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/27/so-you-want-to-demo-vs2010-lab-manager.aspx&#34;&gt;sample code from the Lab Manager Blog Walkthru&lt;/a&gt;, the build of the CALC ASP.NET web site failed on the build server with the error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;ASPNETCOMPILER: error 1003 The directory ‘c:\build\1\LabWalkthru\Calculator –Build\Calc’ does not exist.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;and the build service was right, it didn’t exist; it should have been ‘c:\build\1\LabWalkthru\Calculator –Build\Source\Calc’.&lt;/p&gt;
&lt;p&gt;This was due to a &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/ms400810%28VS.80%29.aspx&#34;&gt;problem detailed here&lt;/a&gt;. The Solution file had the wrong path in the Debug.AspNetCompiler.PhysicalPath property. It was set to “..\Calc” when it should have been “.\Calc”. Once this was altered the build could find the files.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been looking at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/27/so-you-want-to-demo-vs2010-lab-manager.aspx">TFS 2010 Lab Manager recently</a>. One problem I had was that, using the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/01/27/so-you-want-to-demo-vs2010-lab-manager.aspx">sample code from the Lab Manager Blog Walkthru</a>, the build of the CALC ASP.NET web site failed on the build server with the error</p>
<p><em>ASPNETCOMPILER: error 1003 The directory ‘c:\build\1\LabWalkthru\Calculator –Build\Calc’ does not exist.</em></p>
<p>and the build service was right, it didn’t exist; it should have been ‘c:\build\1\LabWalkthru\Calculator –Build\Source\Calc’.</p>
<p>This was due to a <a href="http://msdn.microsoft.com/en-us/library/ms400810%28VS.80%29.aspx">problem detailed here</a>. The Solution file had the wrong path in the Debug.AspNetCompiler.PhysicalPath property. It was set to “..\Calc” when it should have been “.\Calc”. Once this was altered the build could find the files.</p>
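<p>For context, this property lives in the WebsiteProperties section of the solution (.sln) file, since web sites of this kind have no project file of their own. A sketch of what the corrected section might look like for the walkthrough’s Calc site (the project GUID and target path here are illustrative; only the PhysicalPath line needed to change):</p>

```
Project("{E24C65DC-7377-472B-9ABA-BC803B73C61A}") = "Calc", "Calc\", "{00000000-0000-0000-0000-000000000000}"
	ProjectSection(WebsiteProperties) = preProject
		Debug.AspNetCompiler.VirtualPath = "/Calc"
		Debug.AspNetCompiler.PhysicalPath = ".\Calc\"
		Debug.AspNetCompiler.TargetPath = "PrecompiledWeb\Calc\"
	EndProjectSection
EndProject
```

<p>Because the path is relative to wherever the solution is placed, a “..\” that happens to work on a developer’s PC can easily point outside the workspace once Team Build lays the sources out under its own directory structure.</p>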
]]></content:encoded>
    </item>
    <item>
      <title>Problem creating workitems on TFS2010 in the morning</title>
      <link>https://blog.richardfennell.net/posts/problem-creating-workitems-on-tfs2010-in-the-morning/</link>
      <pubDate>Tue, 02 Feb 2010 21:15:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problem-creating-workitems-on-tfs2010-in-the-morning/</guid>
      <description>&lt;p&gt;I have recently been working with a client who has been seeing strange problems when they try to create new workitems via a SharePoint portal site on a TFS2010 Beta2 installation. They appeared to have a fully working TFS2010 installation, but when they came in on a morning they found that even though they could log in to the TFS created SharePoint team site they could not create a new workitem; they got an “Error 403 Access Forbidden”.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently been working with a client who has been seeing strange problems when they try to create new workitems via a SharePoint portal site on a TFS2010 Beta2 installation. They appeared to have a fully working TFS2010 installation, but when they came in on a morning they found that even though they could log in to the TFS created SharePoint team site they could not create a new workitem; they got an “Error 403 Access Forbidden”.</p>
<p>If they logged into SharePoint as a user with system administration rights it all worked fine. Now here is the strange bit: if they then logged in as the user who got the 403 error, it all worked fine too, but when they came in the next morning it all happened again.</p>
<p>It turned out the issue was due to underlying file access rights; once these were fixed all was OK. Basically, only the admin user had enough rights to populate a cache. Why this had occurred was still a bit of a mystery, but it is something you might see on any SharePoint installation. If you see an issue similar to this, the best option is to use <a href="http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx">Process Monitor</a> to see if there are any file IO problems. This should point you in the right direction.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The Barry Dorrans’ farewell DDD performance</title>
      <link>https://blog.richardfennell.net/posts/the-barry-dorrans-farewell-ddd-performance/</link>
      <pubDate>Sun, 31 Jan 2010 09:06:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-barry-dorrans-farewell-ddd-performance/</guid>
      <description>&lt;p&gt;Seems I missed an interesting session at DDD8, &lt;a href=&#34;http://idunno.org/archive/2010/01/30/a-developers-guide-to-encryption.aspx&#34;&gt;Barry Dorrans’ final DDD performance, with the assistance of other speakers&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Barry you will be missed&lt;/p&gt;
&lt;p&gt;Update 5 Feb, links to the interruptions&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://vimeo.com/9205053&#34;&gt;Plip&amp;rsquo;s Book Advert&lt;/a&gt;.&lt;br&gt;
&lt;a href=&#34;http://vimeo.com/9205726&#34;&gt;Liam&amp;rsquo;s Eulogy&lt;/a&gt;.&lt;br&gt;
&lt;a href=&#34;http://vimeo.com/9205478&#34;&gt;Colin Mackay&amp;rsquo;s new source of presentations&lt;/a&gt;.&lt;br&gt;
&lt;a href=&#34;http://vimeo.com/9205839&#34;&gt;Craig Murphy insulting not one but two ex-UK community folks&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Seems I missed an interesting session at DDD8, <a href="http://idunno.org/archive/2010/01/30/a-developers-guide-to-encryption.aspx">Barry Dorrans’ final DDD performance, with the assistance of other speakers</a>.</p>
<p>Barry, you will be missed.</p>
<p>Update 5 Feb, links to the interruptions</p>
<p><a href="http://vimeo.com/9205053">Plip&rsquo;s Book Advert</a>.<br>
<a href="http://vimeo.com/9205726">Liam&rsquo;s Eulogy</a>.<br>
<a href="http://vimeo.com/9205478">Colin Mackay&rsquo;s new source of presentations</a>.<br>
<a href="http://vimeo.com/9205839">Craig Murphy insulting not one but two ex-UK community folks</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A call for speakers at next month’s Agile Yorkshire meeting</title>
      <link>https://blog.richardfennell.net/posts/a-call-for-speaker-at-next-month-agile-yorkshire-meeting/</link>
      <pubDate>Fri, 29 Jan 2010 10:11:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-call-for-speaker-at-next-month-agile-yorkshire-meeting/</guid>
      <description>&lt;p&gt;Next month’s Agile Yorkshire meeting (10th Feb) is an open floor meeting - any subject, any format, 10 minutes maximum.&lt;/p&gt;
&lt;p&gt;If you fancy doing something in one of the 10 minute slots - whether it is a presentation, a demonstration, a discussion around a problem area - then visit&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.agileyorkshire.org/event-announcements/10Feb2010&#34;&gt;http://www.agileyorkshire.org/event-announcements/10Feb2010&lt;/a&gt; to register your idea.&lt;/p&gt;
&lt;p&gt;Presentations can be marked as provisional if you like the idea but are unsure until later whether you will be ready, available, etc.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Next month’s Agile Yorkshire meeting (10th Feb) is an open floor meeting - any subject, any format, 10 minutes maximum.</p>
<p>If you fancy doing something in one of the 10 minute slots - whether it is a presentation, a demonstration, a discussion around a problem area - then visit</p>
<p><a href="http://www.agileyorkshire.org/event-announcements/10Feb2010">http://www.agileyorkshire.org/event-announcements/10Feb2010</a> to register your idea.</p>
<p>Presentations can be marked as provisional if you like the idea but are unsure until later whether you will be ready, available, etc.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Empty page being shown for Silverlight application running out of browser</title>
      <link>https://blog.richardfennell.net/posts/empty-page-being-show-for-silverlight-application-running-out-of-browser/</link>
      <pubDate>Fri, 29 Jan 2010 10:10:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/empty-page-being-show-for-silverlight-application-running-out-of-browser/</guid>
      <description>&lt;p&gt;Whilst preparing demos for &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Understanding%20the%20Microsoft%20Design%20and%20Digital%20Agency%20Proposition&#34;&gt;our design event next week&lt;/a&gt; I hit a problem with the out of browser experience in Silverlight 3. I had decided to add the out of browser settings to our &lt;a href=&#34;http://www.blackmarble.co.uk/Xmas09/default.aspx&#34;&gt;2009 Christmas Card&lt;/a&gt; Silverlight application. To do this all you need to do is check a box on the project settings&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_797C9F08.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_3946858E.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I ran the application in the browser and all was OK. I right-clicked to install it to the desktop and it ran, but I got an empty white screen&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst preparing demos for <a href="http://www.blackmarble.co.uk/events.aspx?event=Understanding%20the%20Microsoft%20Design%20and%20Digital%20Agency%20Proposition">our design event next week</a> I hit a problem with the out of browser experience in Silverlight 3. I had decided to add the out of browser settings to our <a href="http://www.blackmarble.co.uk/Xmas09/default.aspx">2009 Christmas Card</a> Silverlight application. To do this all you need to do is check a box on the project settings</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_797C9F08.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3946858E.png" title="image"></a></p>
<p>I ran the application in the browser and all was OK. I right-clicked to install it to the desktop and it ran, but I got an empty white screen</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_6B3E2618.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_4718CB89.png" title="image"></a></p>
<p>Turns out the problem was due to a trick we had pulled in the page load. The comic page is a bit long, and we had found that if we set the Silverlight control in the browser page to the comic height the loading spinner could often appear off the bottom of the browser window (especially if the window was not maximised). To address this we set the height of the Silverlight control small in the containing web page, then at the end of the loading reset it to the required height. The problem was that we were trying to access an HTML control; when you are out of the browser there is no HTML page to access.</p>
<p>The solution was simple: wrap the call in a conditional test</p>
<pre tabindex="0"><code>if (Application.Current.IsRunningOutOfBrowser == false)
{
    // resize to the correct size, we keep the height small during the load so the loading spinner is easy to see
    // remember this only works if in a browser
    HtmlPage.Document.GetElementById(&#34;silverlightControlHost&#34;).SetStyleAttribute(&#34;height&#34;, &#34;930px&#34;);
}
</code></pre><p>Once this change was made the application worked fine both inside and outside the browser</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_60149BCE.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_06E2B20F.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Giving Twitter Notify a try</title>
      <link>https://blog.richardfennell.net/posts/giving-twitter-notify-a-try/</link>
      <pubDate>Thu, 28 Jan 2010 15:08:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/giving-twitter-notify-a-try/</guid>
      <description>&lt;p&gt;After attending &lt;a href=&#34;http://eileenbrown.wordpress.com/&#34;&gt;Eileen Brown&lt;/a&gt;’s session on &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Using%20Digital%20Marketing%20and%20Social%20Media%20to%20Build%20and%20Maintain%20your%20Online%20Brand&amp;amp;Code=&#34;&gt;Social networking last night&lt;/a&gt;, I thought I should make more of an effort. So I am giving &lt;a href=&#34;http://gallery.live.com/liveItemDetail.aspx?li=6b2b5ffe-936a-4cb3-869c-c01de29de176&amp;amp;bt=9&amp;amp;pl=8&#34;&gt;Twitter Notify&lt;/a&gt; a try, so at least there is some activity on &lt;a href=&#34;http://twitter.com/richardfennell&#34;&gt;my Twitter account&lt;/a&gt; when I blog.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After attending <a href="http://eileenbrown.wordpress.com/">Eileen Brown</a>’s session on <a href="http://www.blackmarble.co.uk/events.aspx?event=Using%20Digital%20Marketing%20and%20Social%20Media%20to%20Build%20and%20Maintain%20your%20Online%20Brand&amp;Code=">Social networking last night</a>, I thought I should make more of an effort. So I am giving <a href="http://gallery.live.com/liveItemDetail.aspx?li=6b2b5ffe-936a-4cb3-869c-c01de29de176&amp;bt=9&amp;pl=8">Twitter Notify</a> a try, so at least there is some activity on <a href="http://twitter.com/richardfennell">my Twitter account</a> when I blog.</p>
]]></content:encoded>
    </item>
    <item>
      <title>So you want to demo VS2010 Lab Manager…….</title>
      <link>https://blog.richardfennell.net/posts/so-you-want-to-demo-vs2010-lab-manager/</link>
      <pubDate>Wed, 27 Jan 2010 14:50:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/so-you-want-to-demo-vs2010-lab-manager/</guid>
      <description>&lt;p&gt;I recently decided to build a demo system for VS2010 Lab Manager. This was for a number of reasons, not least I just wanted to have a proper play with it, but also that I was hoping to do a session on Microsoft Test and Lab Manager at &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/ddd8/Schedule.aspx&#34;&gt;DDD8&lt;/a&gt; (as it turns out my session did not get voted for, maybe better luck for &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/scotland2010/Users/VoteForSessions.aspx&#34;&gt;DDS&lt;/a&gt;, you can still vote for that conference’s sessions).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently decided to build a demo system for VS2010 Lab Manager. This was for a number of reasons, not least I just wanted to have a proper play with it, but also that I was hoping to do a session on Microsoft Test and Lab Manager at <a href="http://www.developerdeveloperdeveloper.com/ddd8/Schedule.aspx">DDD8</a> (as it turns out my session did not get voted for, maybe better luck for <a href="http://www.developerdeveloperdeveloper.com/scotland2010/Users/VoteForSessions.aspx">DDS</a>, you can still vote for that conference’s sessions).</p>
<p>Anyway, if any of you have looked at the Lab Manager side of MTLM you will know that getting it going is no quick task. Firstly, I cannot recommend highly enough the Lab Management Team’s blog posts ‘Getting started with Lab Management’ Parts <a href="http://blogs.msdn.com/lab_management/archive/2009/11/18/Getting-started-with-Lab-Management-_2800_Part-1_2900_.aspx">1</a>, <a href="http://blogs.msdn.com/lab_management/archive/2009/11/18/getting-started-with-lab-management-part-2.aspx">2</a>, <a href="http://blogs.msdn.com/lab_management/archive/2009/11/20/getting-started-with-lab-management-part-3.aspx">3</a> and <a href="http://blogs.msdn.com/lab_management/archive/2009/11/23/getting-started-with-lab-management-part-4.aspx">4</a>. This type of walkthrough post is a great way to move into a new complex product such as this. It provides the framework to get you going; it doesn’t fix all your problems but gives you a map to follow into the main documentation or other blog posts.</p>
<p>The architecture I was trying to build was as below. My hardware was a <a href="http://www.shuttle.com/">Shuttle PC</a> as this was all I could find in the office that could take 8Gb of memory, the bare minimum for this setup. Not as convenient as a laptop for demos, but I was not going to bankrupt myself getting an 8Gb laptop!</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5AC64872.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_09CFC072.png" title="image"></a></p>
<p>As I wanted my system to be mobile, it needed to be its own domain (demo.com). This was the source of my main problem during the install. MTLM assumes the host server and all the VMs are in the same domain, but that the domain controller (DC) is on some other device on the domain. I installed the DC on the host server; this meant I had to do the following to get it all to work (I should say I did all of these to get my system running; they may not all be essential, but they are all sensible practice, so probably worth doing):</p>
<ul>
<li>Run the VMM Host as a user other than the default of <strong>Local System</strong> (this is an option set during the installation). The default <strong>Local System</strong> user has reduced rights on a domain controller, and so is not able to do all that it needs to. I created a new domain account (<strong>demoVMMserver</strong>) and used this as the service account for the VMM.</li>
<li>The <a href="http://blogs.msdn.com/lab_management/archive/2009/11/18/getting-started-with-lab-management-part-2.aspx">‘Getting started’ blog posts</a> suggest a basic install of TFS, which just installs source control, work item tracking and build services using a SQL Express instance. This is fine, but this mode defaults to using the <strong>Network Service</strong> account to run the TFS web services. This has the same potential issues as the <strong>Local System</strong> account on the DC, so I swapped this to use a domain account (<strong>demoTFSservice</strong>) using the TFS Administration console.</li>
<li><em>AND THIS IS THE WEIRD ONE, AND I SUSPECT THE MOST IMPORTANT.</em> As I was using the host system as a DNS and DHCP server, the VMs needed to be connected to the physical LAN of the host machine to make use of these services. However, as I did not want them to pick up my office’s DHCP service, I left the physical server’s Ethernet port unplugged. This meant that when I tried to create a new lab environment I got a TF259115 error. Plugging in a standalone Ethernet hub (connected to nothing else) fixed this problem. I am told this is because part of the LAN stack on the physical host is disabled due to the lack of a physical Ethernet link, even though the DNS and DHCP services were unaffected. The other option would have been to run the DNS, DHCP etc. on Hyper-V VM(s).</li>
<li>When configuring the virtual lab in TFS Administration console the ‘Network Location’ was blank. If you ignore this missing Network location or manually enter it you get a TF259210 error when you verify the settings in TFS Administration. This is a known problem in SCVMM and was fixed by <a href="http://blogs.technet.com/chengw/archive/2009/05/08/vmm-network-location-and-network-tag.aspx">overriding the discovered network and entering demo.com</a>.</li>
</ul>
<p>So I now had a working configuration, but when I tried to import my prepared test VM into Lab Center, I got an “Import failed, the specified owner is not a valid Active Directory Domain Services account. Specify a valid Active Directory Domain Services account and try again” error. When I checked the SCVMM job logs (in the SCVMM Admin console) I saw this was an Error 813 in the ‘create hardware setup’ step. However, the account the job was running as was a domain user, as was the service account the host was running under (after I had made the changes detailed above), so I was confused.</p>
<p>This turns out to be a user too stupid error; I was logged in as the TFS server’s local administrator (<strong>tfs2010administrator</strong>), not the domain one (<strong>demoadministrator</strong>), or indeed any domain account with VMM administrator rights. Once I logged in on the TFS server (where I was running MTLM) as a domain account, all was OK. Actually I suspect moving to the VMMService and TFSService accounts was not vital, but it did no harm.</p>
<p>I could now create my virtual test environment and actually start to create Team Builds that make use of my test lab environment. Also, I think that having worked through these problems I have a better understanding of how the various parts underpinning MTLM hang together, a vital piece of knowledge if you intend to make real use of these tools.</p>
<p>Oh, and thanks to everyone who helped me when I got stuck.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The uptake of Agile and Alt.Net practices in places a bit away from the major development hotspots</title>
      <link>https://blog.richardfennell.net/posts/the-uptake-of-agile-and-alt-net-practices-in-places-a-bit-away-from-the-major-development-hotspots/</link>
      <pubDate>Mon, 25 Jan 2010 14:03:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-uptake-of-agile-and-alt-net-practices-in-places-a-bit-away-from-the-major-development-hotspots/</guid>
      <description>&lt;p&gt;Last week I got into an interesting discussion via email with Nieve, a developer from Lille, France. The chat was on the uptake of Agile and Alt.Net practices in places a bit away from the major development hotspots. We both thought it could make an interesting post, so here goes, starting with Nieve’s first post…&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Hello there,I&amp;rsquo;ve stumbled upon your blog while googling for the terms alt.net yorkshire.I&amp;rsquo;m a .NET developer working in Paris and living in the north of France (Lille area). Now, the reason I&amp;rsquo;m writing is that we&amp;rsquo;re having an alt.net lunch next month, and I would like to talk a bit about the differences between the (alt).net communities in france and england. Now since I did my studies in Leeds, the fact that yorkshire and la région du nord are (surprise surprise) in the north (plus a shared history of mines) brought me to google for alt.net and yorkshire.Over here in Lille/the north of France the situation is rather grim. job offers that entail agile practices and or tools in .NET environment are as rare as an eclipse, managers and developers alike are literally afraid of any framework/tool that isn&amp;rsquo;t microsoft yet somehow miraculously written in a .net language. I suppose you get the picture. I was wondering if you would mind sharing with me (and/or others, on your blog) your thoughts on the situation in yorkshire.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Last week I got into an interesting discussion via email with Nieve, a developer from Lille, France. The chat was on the uptake of Agile and Alt.Net practices in places a bit away from the major development hotspots. We both thought it could make an interesting post, so here goes, starting with Nieve’s first post…</p>
<p><em>Hello there,I&rsquo;ve stumbled upon your blog while googling for the terms alt.net yorkshire.I&rsquo;m a .NET developer working in Paris and living in the north of France (Lille area). Now, the reason I&rsquo;m writing is that we&rsquo;re having an alt.net lunch next month, and I would like to talk a bit about the differences between the (alt).net communities in france and england. Now since I did my studies in Leeds, the fact that yorkshire and la région du nord are (surprise surprise) in the north (plus a shared history of mines) brought me to google for alt.net and yorkshire.Over here in Lille/the north of France the situation is rather grim. job offers that entail agile practices and or tools in .NET environment are as rare as an eclipse, managers and developers alike are literally afraid of any framework/tool that isn&rsquo;t microsoft yet somehow miraculously written in a .net language. I suppose you get the picture. I was wondering if you would mind sharing with me (and/or others, on your blog) your thoughts on the situation in yorkshire.</em></p>
<p>My reply</p>
<p><em>I don&rsquo;t know if you have heard of Ian Cooper, he was one of the organisers of the ALT.NET events in the UK. Well he just posted on his blog on a subject very close to your question</em> <a href="http://codebetter.com/blogs/ian_cooper/archive/2010/01/19/whither-alt-net.aspx"><em>http://codebetter.com/blogs/ian_cooper/archive/2010/01/19/whither-alt-net.aspx</em></a></p>
<p><em>In my opinion there has not been a drop off in interest in the tools and practices of ALT.NET, but it has lost its label a bit. Ian is right that the main people pushing it have moved more towards Twitter etc. which has reduced visibility if you don&rsquo;t follow them.</em></p>
<p><em>Local groups are still on the go. I myself attend Agile Yorkshire</em> <a href="http://www.agileyorkshire.org/"><em>http://www.agileyorkshire.org/</em></a> <em>which is a group driven by development process (being JAVA and .NET) but did help organise the ALT.NET in the North event last year. We hope to run something this year, but we doubt it will be under the ALT.NET banner as it was felt this alienated JAVA members</em></p>
<p><em>As to who is using the tools, not as many as I would hope. But you find them in surprising places. I found out that dev teams in the NHS (usually known for very bureaucratic and fixed management processes) are using Kanban, nHibernate etc. and finding them useful. Getting adoption is all down to someone showing there is an advantage; the problem is so few people in our industry care about improving their skills. It all comes back, as Ian said, to the software craftsmanship movement</em></p>
<p>Nieve again</p>
<p><em>First of all let me begin by saying I only wish I could tell you how much I am thankful. Reading Ian&rsquo;s post was a something of an epiphany moment :) At some points he brought it so close to home that I had to stop and think &lsquo;hold on, is he just talking about software development or is there a hidden message about the state of France..?&rsquo; Over here it&rsquo;s not only the IT industry that breeds this sort of position holders that are fine where they are and just won&rsquo;t bother changing anything. I always think of it as &lsquo;with all that revolution going on, you don&rsquo;t get any evolution&rsquo;; the idea is that everyone here are jumping to their feet and straight to the street to cry against whatever change that is offered, that nothing ever gets to change hence no evolution&hellip;</em></p>
<p><em>To get back to the issue in question, I think one of the things Ian, and for that matter many of the</em> <a href="http://ALT.NET"><em>ALT.NET</em></a> <em>people, tend to forget or simply overlook is the fact that while at some parts of the world people may think the battle was won, or that it&rsquo;s about time to wake up from our comfortable twitter hibernation, in some other parts the battle hasn&rsquo;t even began, which brings me back to my original question. See, you guys up the in England and esp. in the north can be very proud of your community, and not only the</em> <a href="http://alt.net/agile/software"><em>alt.net/agile/software</em></a> <em>development/IT one, but also the local-geographical community. I had to go and look for a job in Paris, which entails a couple of hours on the train each and every day and which is bound to end by leaving Lille (and no wonder I&rsquo;m considering moving back to yorkshire); Not only developers and managers are afraid of anything that is not microsoft, the actual idea of software craftsmanship is an abnormality in our region. There is a Nord-agile group that works here and have meetings every couple of months and consists of 5 to 7 people, none of them a .net person. And we&rsquo;re talking about a huge region and one of france&rsquo;s 5 biggest cities.</em></p>
<p><em>With that in mind, there&rsquo;s also the fact that roughly each and every year a new generation of developers is arriving to the market which makes it even more difficult to those (esp the beginners to senior-juniors) who wants to learn and work on their coding craftsmanship. (I remember I discovered the</em> <a href="http://alt.net"><em>alt.net</em></a> <em>manifest only a couple of years ago or so, and soon after I remember reading a post of Ayende saying he&rsquo;s going to give Twitter a shot. Thank god, he&rsquo;s one of those who never stopped blogging.)</em></p>
<p><em>As for Paris, things seem to be closer to what Ian said; there are a lot more job offers that ask for a working experience in NH, MVC, NUnit etc&rsquo;, however this feels like the new orthodoxy.</em></p>
<p>… and me again</p>
<p>So to me this shows that the problems we both see are not just down to our particular company/technology/region/country. Craftsman developers everywhere tend to sit in small isolated pockets, even in large conurbations, and there is nothing for it but to organise locally where you can (go on, go for a beer, you know you want to) and to join in the virtual communities to get a bigger world view.</p>
<p>Wow, that sounds like a call to revolution; better go into hiding in case the thought police come round, I know I will just have to think I am not in!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running Typemock Isolator based tests in TFS 2010 Team Build</title>
      <link>https://blog.richardfennell.net/posts/running-typemock-isolator-based-tests-in-tfs-2010-team-build/</link>
      <pubDate>Fri, 22 Jan 2010 14:29:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-typemock-isolator-based-tests-in-tfs-2010-team-build/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 2nd March 2010:&lt;/strong&gt; Altered the sample arguments see this &lt;a href=&#34;https://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/02/the-importance-of-using-parameters-in-vs2010-build-workflows.aspx&#34;&gt;post for more details&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Updated 27th Jan 2010:&lt;/strong&gt; &lt;a href=&#34;http://www.nablasoft.com/alkampfer&#34;&gt;Gian Maria Ricci, another Team System MVP&lt;/a&gt; has done some more work on this problem and posted on how to create a custom &lt;a href=&#34;http://www.codewrecks.com/blog/index.php/2010/01/27/run-test-with-typemockisolator-during-a-tfs2010-build/&#34;&gt;activity to address the problem&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I have been looking at getting automated builds running on TFS2010 that make use of &lt;a href=&#34;http://www.typemock.com/&#34;&gt;Typemock Isolator&lt;/a&gt;. This is not as straight forward as you would expect.&lt;/p&gt;
&lt;p&gt;The issue is that you have start Isolator’s mocking interceptor before you run any tests that use Typemock (and stop it afterwards). If you are running in the VS IDE this is all done automatically, but is not done as part of an MSBuild Team Build Process by default.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 2nd March 2010:</strong> Altered the sample arguments see this <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/02/the-importance-of-using-parameters-in-vs2010-build-workflows.aspx">post for more details</a></p>
<p><strong>Updated 27th Jan 2010:</strong> <a href="http://www.nablasoft.com/alkampfer">Gian Maria Ricci, another Team System MVP</a> has done some more work on this problem and posted on how to create a custom <a href="http://www.codewrecks.com/blog/index.php/2010/01/27/run-test-with-typemockisolator-during-a-tfs2010-build/">activity to address the problem</a>.</p>
<p>I have been looking at getting automated builds running on TFS2010 that make use of <a href="http://www.typemock.com/">Typemock Isolator</a>. This is not as straight forward as you would expect.</p>
<p>The issue is that you have start Isolator’s mocking interceptor before you run any tests that use Typemock (and stop it afterwards). If you are running in the VS IDE this is all done automatically, but is not done as part of an MSBuild Team Build Process by default.</p>
<p>In prior versions of TFS a solution was <a href="http://www.typemock.com/Docs/UserGuide/MSBuild.html">provided by Typemock in the form of a pair of MSBUILD tasks</a> to start and stop the mocking process, which you wired into your team build definition file, something like the example below:</p>
<pre><code>&lt;Import Project="C:\Program Files\Typemock\Isolator\4.3\TypeMock.MSBuild.Tasks" /&gt;
&lt;Target Name="BeforeTest"&gt;
  &lt;TypeMockStart LogPath="C:\TypeMockLogs" LogLevel="9" Target="3.5" /&gt;
&lt;/Target&gt;
&lt;Target Name="AfterTest"&gt;
  &lt;TypeMockStop /&gt;
&lt;/Target&gt;
</code></pre>
<p>The problem with 2010 is that you no longer get this single MSBUILD file to manage the Team Build; it is now a XAML workflow. I know I could use the legacy support mode for the older build process, but where is the fun in that?</p>
<p><strong>Attempt 1 – Adding more MSBuild projects</strong></p>
<p><a href="https://blogs.blackmarble.co.uk/blogs/rfennell/clip_image001_67F75B52.jpg"><img alt="clip_image001" loading="lazy" src="https://blogs.blackmarble.co.uk/blogs/rfennell/clip_image001_thumb_60D81EDA.jpg" title="clip_image001"></a></p>
<p>My first idea was to add an MSBUILD activity just before the MSTEST activity block (and another after it, to stop the mocking interceptor) and point it at a Typemock.proj file (placed in a suitable location), calling the correct target. The proj file is shown below</p>
<pre><code>&lt;Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"&gt;
  &lt;PropertyGroup&gt;
    &lt;TypeMockLocation&gt;C:\Program Files (x86)\Typemock\Isolator\6.0&lt;/TypeMockLocation&gt;
  &lt;/PropertyGroup&gt;

  &lt;Import Project="$(TypeMockLocation)\TypeMock.MSBuild.Tasks"/&gt;
  &lt;Target Name="StartTypeMock"&gt;
    &lt;Message Text="StartTypeMock"/&gt;
    &lt;TypeMockStart/&gt;
  &lt;/Target&gt;
  &lt;Target Name="StopTypeMock"&gt;
    &lt;Message Text="StopTypeMock"/&gt;
    &lt;TypeMockStop/&gt;
  &lt;/Target&gt;
&lt;/Project&gt;
</code></pre>
<p>In the build logs I could see the activities ran, and a typemock.log file was created for the MSBUILDs (note a problem here: as I use the same proj file for both the start and stop build targets, the log file gets overwritten, so it is a good idea to pass in explicit log file names for each call to MSBUILD). However, even though everything seemed to run OK, the tests that need Typemock failed with the error</p>
<p>Test method CallTracker.BusinessLogic.Tests.PrintTests.PrintAllCalls_TypeMockedPrinter_2CallsPrinted threw exception:<br>
TypeMock.TypeMockException:<br>
*** Typemock Isolator is not currently enabled.<br>
To enable do one of the following:<br>
* To run Typemock Isolator as part of an automated process you can:<br>
   - run tests via TMockRunner.exe command line tool<br>
   - use &lsquo;TypeMockStart&rsquo; tasks for MSBuild or NAnt<br>
* To work with Typemock Isolator inside Visual Studio.NET:<br>
        set Tools-&gt;Enable Typemock Isolator from within Visual Studio<br>
For more information consult the documentation (see &lsquo;Running&rsquo; topic)</p>
<p>This is exactly the error you expect if you have not started the mocking interceptor.</p>
<p>After a chat with Typemock support I understood the problem. In the TFS 2005/8 model we had a single MSBUILD process running. It compiled the code, ran the tests, it did everything. So if we started mocking interception in this process everything was good and it all worked. Under the 2010 model we have the new XAML based process; this starts an MSBUILD process to do the compile, in our case another to start mocking, then MSTest to run the tests and finally another MSBUILD to stop mocking. So we have at least four processes within the build, and the reality is we have only switched on mocking for the duration of the MSBUILD task calling the StartTypeMock target.</p>
<p><strong>Attempt 2 – Other task types</strong></p>
<p>I did also look at starting mocking using the INVOKEMETHOD task and by calling the Typemock provided batch files. These all resulted in the same basic problem, mocking was not switched on when MSTest ran.</p>
<p><strong>A solution</strong></p>
<p>The solution is to use TMockRunner.exe, which is shipped with Typemock Isolator. This is a wrapper EXE that runs a command line tool, passing in the required set of parameters. All TMockRunner does is, as the name suggests, start and stop the mocking as needed around the tool it runs.</p>
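<p>To make the wrapper pattern concrete, here is the general shape of a TMockRunner invocation; the paths are the usual install locations and the test assembly name is just an illustration, not taken from my build:</p>
<pre><code>"C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe" "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /testcontainer:MyTests.dll
</code></pre>
<p>TMockRunner enables the mocking interceptor, launches the EXE given as its first argument with the remaining arguments, and disables the interceptor when that process exits, so mocking is on for the whole lifetime of the MSTest run.</p>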
<p>To get this to work I removed the MSTest task in the build workflow and replaced it with an InvokeProcess one with the following parameters</p>
<p><strong>FileName:</strong> "C:\Program Files (x86)\Typemock\Isolator\6.0\TMockRunner.exe"<br>
<strong>Arguments:</strong></p>
<pre><code>"""C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe"" /nologo /testcontainer:""" + String.Format("{0}\Binaries\Testproject.dll", BuildDirectory) + """ /publish:""http://typhoon:8080/tfs/DefaultCollection"" /publishbuild:""" + BuildDetail.Uri.ToString() + """ /teamproject:""" + BuildDetail.TeamProject + """ /platform:""Any CPU"" /flavor:""Debug"" /resultsfile:""" + String.Format("{0}\Binaries\Test.Trx", BuildDirectory) + """ "
</code></pre>
<pre><code>"""C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe"" /nologo /testcontainer:""" + String.Format("{0}\Binaries\BusinessLogic.Tests.dll", BuildDirectory) + """ /publish:""http://typhoon:8080/tfs/DefaultCollection"" /publishbuild:""" + BuildDetail.Uri.ToString() + """ /teamproject:""" + BuildDetail.TeamProject + """ /platform:""" + platformConfiguration.Platform + """ /flavor:""" + platformConfiguration.Configuration + """ /resultsfile:""" + String.Format("{0}\Binaries\Test.Trx", BuildDirectory) + """ "
</code></pre>
<p>A few things to note on the arguments and settings:</p>
<ol>
<li>The arguments have to appear as a single string, so this is why you have all the double quotes escaping</li>
<li>The results file is dumped into the Binaries directory as this is cleaned out on each build. It has to go in an existing directory (we have no TestResults directory by default; of course we could create one, but why go to the effort?). This is a simple solution, as <a href="https://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/03/11/cruisecontrol-amp-mstest-from-visual-studio-2008.aspx">MSTest still seems to fail if the Test.Trx file already exists</a>. Note I did try to use the /resultsfileroot parameter (which is used by default by the 2010 MSTest task) but this is not supported via the command line.</li>
<li>I would like to pick up the publish URL, platform and flavor automatically; I need to dig deeper into the TFS API to find these values at build time.</li>
<li>I have not wired the MSTest output back into the main log, so it is hard to see what has happened without just looking for the creation of the TRX file.</li>
<li>Watch out for DCOM activation errors. There is a good chance the user you are running the build process as does not have the right to run anything else. If your InvokeProcess seems to do nothing check the build PCs error log for DCOM activation errors and either give the build user the correct rights or swap to a user that already has the rights.</li>
</ol>
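<p>To make point 1 above a little clearer: the doubled quotes are VB.NET string escaping, so at run time the first Arguments expression collapses into a single command line along these lines (the build directory, build URI and team project values here are made-up examples, not from my server):</p>
<pre><code>"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /nologo /testcontainer:"C:\Builds\1\Binaries\Testproject.dll" /publish:"http://typhoon:8080/tfs/DefaultCollection" /publishbuild:"vstfs:///Build/Build/42" /teamproject:"CallTracker" /platform:"Any CPU" /flavor:"Debug" /resultsfile:"C:\Builds\1\Binaries\Test.Trx"
</code></pre>
<p>This whole string is what InvokeProcess hands to TMockRunner, which in turn treats the quoted MSTest path as the program to run and passes the rest of the switches straight through.</p>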
<p>Once this was all done I could do my build and the test ran and passed.</p>
<p>The one remaining issue is to set the test status for the build based on these results. I have no solution to that as yet, but I am still looking.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Time to exercise your prerogative again</title>
      <link>https://blog.richardfennell.net/posts/time-to-exercise-your-prerogative-again/</link>
      <pubDate>Thu, 21 Jan 2010 12:55:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/time-to-exercise-your-prerogative-again/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/scotland2010/Users/VoteForSessions.aspx&#34;&gt;voting has opened for DDS&lt;/a&gt;; time to make your voice heard as to what sessions you would like to see.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.developerdeveloperdeveloper.com/scotland2010/Users/VoteForSessions.aspx">voting has opened for DDS</a>; time to make your voice heard as to what sessions you would like to see.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problems installing TFS Proxy</title>
      <link>https://blog.richardfennell.net/posts/problems-installing-tfs-proxy/</link>
      <pubDate>Mon, 18 Jan 2010 21:25:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-installing-tfs-proxy/</guid>
      <description>&lt;p&gt;I recently saw an interesting problem installing a TFS proxy (in my case it was on an existing TFS 2005 system using Windows Server 2003 for the proxy host, but the problem could be seen on any version of TFS). The installation appeared to go fine, and when a Visual Studio client requested files via the proxy, files appeared in the cache directory; however, the client reported it was not using the proxy.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently saw an interesting problem installing a TFS proxy (in my case it was on an existing TFS 2005 system using Windows Server 2003 for the proxy host, but the problem could be seen on any version of TFS). The installation appeared to go fine, and when a Visual Studio client requested files via the proxy, files appeared in the cache directory; however, the client reported it was not using the proxy.</p>
<p>When I tried to look at the proxy statistics using <a href="http://localhost:8080/VersionControl/v1.0/proxystatistics.asmx">http://localhost:8080/VersionControl/v1.0/proxystatistics.asmx</a> I was shown a standard IE login dialog, not what I expected as I was logged in as the administrator. No credentials were found that could get past this dialog.</p>
<p>I started looking at firewall settings, anti-virus port blocking and local loopback settings, but it turned out the problem was far more fundamental: the IIS6 installation was corrupted. When I dropped test files into the proxy server virtual directory I found the server could render an HTML page but not an .ASP or ASP.NET one.</p>
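<p>For anyone wanting to run the same check, probe files as simple as these are enough to tell static content failures from dynamic ones (the file names are mine and not significant):</p>
<pre><code>&lt;!-- test.htm : served by IIS directly, should always render --&gt;
&lt;html&gt;&lt;body&gt;Static content OK&lt;/body&gt;&lt;/html&gt;

&lt;!-- test.asp : exercises the classic ASP handler --&gt;
&lt;% Response.Write("ASP OK") %&gt;
</code></pre>
<p>If the first renders but the second (or an equivalent .aspx page) errors, the problem lies in the script mappings or the .NET/ASP registration in IIS, not in TFS itself.</p>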
<p>So I removed IIS from the server, then put it back, and the problems went away. All I can assume is that it was some IIS/.NET installation/patching order issue. I bet it would not have happened if I had started with Server 2003 R2 with .NET already on it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Typemock Isolator 2010 released</title>
      <link>https://blog.richardfennell.net/posts/typemock-isolator-2010-released/</link>
      <pubDate>Mon, 18 Jan 2010 21:08:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/typemock-isolator-2010-released/</guid>
      <description>&lt;p&gt;Today Typemock released their Isolator 6.0 version with full support for VS2010. &lt;a href=&#34;http://blog.typemock.com/2010/01/typemock-isolator-2010-released.html?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A&amp;#43;Typemock&amp;#43;%28The&amp;#43;Typemock&amp;#43;Insider%29&#34;&gt;Check out the full announcement at their site&lt;/a&gt;; there are some nice new features there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today Typemock released their Isolator 6.0 version with full support for VS2010. <a href="http://blog.typemock.com/2010/01/typemock-isolator-2010-released.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A&#43;Typemock&#43;%28The&#43;Typemock&#43;Insider%29">Check out the full announcement at their site</a>; there are some nice new features there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD8 is full</title>
      <link>https://blog.richardfennell.net/posts/ddd8-is-full/</link>
      <pubDate>Fri, 15 Jan 2010 21:54:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd8-is-full/</guid>
      <description>&lt;p&gt;I have been on a customer site today, locked in a machine room, and so have managed to miss the opening and closing of registration for DDD8. So, as neither of my sessions got enough votes (too test-based, I guess, for the developer mainstream) to be on &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/ddd8/Schedule.aspx&#34;&gt;the schedule&lt;/a&gt;, I guess I will not be attending.&lt;/p&gt;
&lt;p&gt;Maybe I will have more luck on both the vote and registration fronts and see some of you at &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/scotland2010/Default.aspx&#34;&gt;DDS&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been on a customer site today, locked in a machine room, and so have managed to miss the opening and closing of registration for DDD8. So, as neither of my sessions got enough votes (too test-based, I guess, for the developer mainstream) to be on <a href="http://www.developerdeveloperdeveloper.com/ddd8/Schedule.aspx">the schedule</a>, I guess I will not be attending.</p>
<p>Maybe I will have more luck on both the vote and registration fronts and see some of you at <a href="http://www.developerdeveloperdeveloper.com/scotland2010/Default.aspx">DDS</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2010 Build Service Configuration Wizard fails with TF255425 error</title>
      <link>https://blog.richardfennell.net/posts/tfs-2010-build-service-configuration-wizard-fails-with-tf255425-error/</link>
      <pubDate>Thu, 14 Jan 2010 13:25:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2010-build-service-configuration-wizard-fails-with-tf255425-error/</guid>
      <description>&lt;p&gt;I have at last got round to setting up a full installation of &lt;a href=&#34;http://blogs.msdn.com/lab_management/archive/2009/05/18/vsts-2010-lab-management-basic-concepts.aspx&#34;&gt;VS 2010 Test and Lab Manager&lt;/a&gt; &lt;a href=&#34;http://blogs.msdn.com/lab_management/archive/2009/11/18/Getting-started-with-Lab-Management-_2800_Part-1_2900_.aspx&#34;&gt;using the excellent notes from the Lab Management Team&lt;/a&gt;. Whilst installing the build server portion I got a strange set of errors.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;TF255425: An error occurred while installing the following Windows service: TFSBuildServiceHost.exe. For more information, open Event Viewer and review the application log.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Error&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;TF255070: Configuring services for Team Foundation Build failed with the following exception: TF255425: An error occurred while installing the following Windows service: TFSBuildServiceHost.exe. For more information, open Event Viewer and review the application log..&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have at last got round to setting up a full installation of <a href="http://blogs.msdn.com/lab_management/archive/2009/05/18/vsts-2010-lab-management-basic-concepts.aspx">VS 2010 Test and Lab Manager</a> <a href="http://blogs.msdn.com/lab_management/archive/2009/11/18/Getting-started-with-Lab-Management-_2800_Part-1_2900_.aspx">using the excellent notes from the Lab Management Team</a>. Whilst installing the build server portion I got a strange set of errors.</p>
<p><em>TF255425: An error occurred while installing the following Windows service: TFSBuildServiceHost.exe. For more information, open Event Viewer and review the application log.</em></p>
<p><em>Error</em></p>
<p><em>TF255070: Configuring services for Team Foundation Build failed with the following exception: TF255425: An error occurred while installing the following Windows service: TFSBuildServiceHost.exe. For more information, open Event Viewer and review the application log..</em></p>
<p>Further investigation found the installer was claiming it could not find required files and so could not complete the install.</p>
<p>After a good deal of ineffective fiddling I came to the conclusion that the issue must be user access. I was using the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyId=9040a4be-c3cf-44a5-9052-a70314452305&amp;displaylang=en">trial Windows Server 2008 R2 VHDs</a> as the basis of my TFS server and test VMs; these default to the US region and keyboard. This had got on my nerves and I thought I had changed it to UK settings. However, I must have done it wrong, as I had a UK keyboard in WinForms applications but not in a Command Prompt. Once I made sure that my region, keyboard (and associated defaults) were all set to UK (and were working as expected in all locations) I tried the wizard again and it worked.</p>
<p>So it seems the issue was an incorrect password being passed to the installer. Somehow the @ in Pass@word1 was, I guess, being translated behind the scenes to “, causing the wizard to fail, even though it always passed the verify stage of the wizard.</p>
<p>So the technical tip is to make sure the keyboard and region are right before you start; a bit of a newbie error there!</p>
]]></content:encoded>
    </item>
    <item>
      <title>New UK ALM User Group Formed</title>
      <link>https://blog.richardfennell.net/posts/new-uk-alm-user-group-formed/</link>
      <pubDate>Thu, 14 Jan 2010 12:14:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-uk-alm-user-group-formed/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://stuartpreston.net/blog/&#34;&gt;Stuart Preston&lt;/a&gt; has just started a new &lt;a href=&#34;http://ukalmug.ning.com/&#34;&gt;UK ALM User Group,&lt;/a&gt; to quote its blurb…&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The UK ALM User Group is for practitioners of Application Lifecycle Management (ALM) and Software Development Lifecycle (SDLC) in the UK to get together and discuss and share ideas tools and techniques, as well as to socialise somewhere other than Agile and Software Development conferences!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Practitioners and enthusiasts from all disciplines are welcome. Membership is free. Please feel free to invite your UK based ALM network.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://stuartpreston.net/blog/">Stuart Preston</a> has just started a new <a href="http://ukalmug.ning.com/">UK ALM User Group,</a> to quote its blurb…</p>
<p><em>The UK ALM User Group is for practitioners of Application Lifecycle Management (ALM) and Software Development Lifecycle (SDLC) in the UK to get together and discuss and share ideas tools and techniques, as well as to socialise somewhere other than Agile and Software Development conferences!</em></p>
<p><em>Practitioners and enthusiasts from all disciplines are welcome. Membership is free. Please feel free to invite your UK based ALM network.</em></p>
<p>Looks very interesting; I hope there is a big enough critical mass of attendees to make it thrive. Why not join up and have your say?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Voting has opened for DDD8</title>
      <link>https://blog.richardfennell.net/posts/voting-has-opened-for-ddd8/</link>
      <pubDate>Tue, 12 Jan 2010 11:21:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/voting-has-opened-for-ddd8/</guid>
      <description>&lt;p&gt;You can now &lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd8/Users/VoteForSessions.aspx&#34;&gt;vote for the proposed sessions at the upcoming DDD8&lt;/a&gt; conference, and what a good selection there is to choose from this time.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_401849FF.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_0DB47680.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can now <a href="http://developerdeveloperdeveloper.com/ddd8/Users/VoteForSessions.aspx">vote for the proposed sessions at the upcoming DDD8</a> conference, and what a good selection there is to choose from this time.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_401849FF.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0DB47680.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Why don’t I love my phone?</title>
      <link>https://blog.richardfennell.net/posts/why-dont-i-love-my-phone/</link>
      <pubDate>Mon, 11 Jan 2010 21:26:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-dont-i-love-my-phone/</guid>
      <description>&lt;p&gt;There seems to be loads of coverage of mobile platforms at present. Maybe I am just noticing it due to coverage of the &lt;a href=&#34;http://www.cesweb.org/&#34;&gt;CES show&lt;/a&gt; and the launch of the &lt;a href=&#34;http://www.google.com/phone&#34;&gt;Nexus One&lt;/a&gt;, but the more mainstream media does seem to be taking a good deal of interest in the future of smartphones (or superphone as Google are calling their new one).&lt;/p&gt;
&lt;p&gt;All the articles seem to be Apple vs. Android (and are moving rapidly towards Apple vs. Google). There is also usually a passing mention of Blackberry, then a ‘wonder where Nokia are?’, but usually very little on Microsoft. The article in this month’s &lt;a href=&#34;http://www.wired.co.uk/wired-magazine/archive/2010/02/features/the-app-explosion.aspx&#34;&gt;UK edition of Wired is a classic example&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There seems to be loads of coverage of mobile platforms at present. Maybe I am just noticing it due to coverage of the <a href="http://www.cesweb.org/">CES show</a> and the launch of the <a href="http://www.google.com/phone">Nexus One</a>, but the more mainstream media does seem to be taking a good deal of interest in the future of smartphones (or superphone as Google are calling their new one).</p>
<p>All the articles seem to be Apple vs. Android (and are moving rapidly towards Apple vs. Google). There is also usually a passing mention of Blackberry, then a ‘wonder where Nokia are?’, but usually very little on Microsoft. The article in this month’s <a href="http://www.wired.co.uk/wired-magazine/archive/2010/02/features/the-app-explosion.aspx">UK edition of Wired is a classic example</a>.</p>
<p>As a reasonably happy Windows Mobile 6.5 user (I have an <a href="http://www.htc.com/www/product/touchdiamond2/overview.html">HTC Diamond 2</a>) I find this all very interesting. My phone works most of the time, does most of what I need and certainly does not need to be rebooted as much as previous smartphones I have had. However, I have to say, it does not engender in me the missionary zeal that iPhone (and, I suspect, future Nexus One) users have. They all seem to take a pure pleasure in the ownership and use of their device. My phone is a bit of kit that does the job most of the time; I don’t love it or hate it, it is what it is.</p>
<p>I do wonder if, were I to move to an iPhone, I would be the convert so many others seem to be; or is it just my nature not to be such a devotee of any phone/car/coffee machine etc., or in fact of objects and brands in general?</p>
<p>This all said, it is very noticeable that the Microsoft mobile platform (and its supporting eco-system, compared with, say, the iPhone App Store) is lagging behind; the silence over Windows Mobile 7 just seems to drag on and on. Whatever comes out is going to have to make a big leap to catch up with (let alone overtake) other vendors’ offerings.</p>
<p>Anyway, whilst I was writing this post I saw that <a href="http://scobleizer.com/2010/01/11/is-the-mobile-tech-press-wrong-in-positioning-apple-vs-google/">Robert Scoble has posted a probably more considered review of the current state of the mobile space</a>. Great minds think alike?</p>
]]></content:encoded>
    </item>
    <item>
      <title>The Agile Yorkshire AGM is this week</title>
      <link>https://blog.richardfennell.net/posts/the-agile-yorkshire-agm-is-this-week/</link>
      <pubDate>Mon, 11 Jan 2010 12:23:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-agile-yorkshire-agm-is-this-week/</guid>
      <description>&lt;p&gt;The AGM is on the 14th Jan at the Victoria Hotel, 28 Great George St, Leeds. See &lt;a href=&#34;http://maps.google.co.uk/maps?q=28%20Great%20George%20St%2C%20Leeds&#34;&gt;here&lt;/a&gt; for directions. Come along and let us know the direction you would like the user group to follow.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.agileyorkshire.org/&#34;&gt;&lt;img alt=&#34;Logo&#34; loading=&#34;lazy&#34; src=&#34;http://www.agileyorkshire.org/_/rsrc/1256391502292/config/app/images/customLogo/customLogo.gif?revision=11&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The AGM is on the 14th Jan at the Victoria Hotel, 28 Great George St, Leeds. See <a href="http://maps.google.co.uk/maps?q=28%20Great%20George%20St%2C%20Leeds">here</a> for directions. Come along and let us know the direction you would like the user group to follow.</p>
<p><a href="http://www.agileyorkshire.org/"><img alt="Logo" loading="lazy" src="http://www.agileyorkshire.org/_/rsrc/1256391502292/config/app/images/customLogo/customLogo.gif?revision=11"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>A new user group - North East Bytes</title>
      <link>https://blog.richardfennell.net/posts/a-new-user-group-north-east-bytes/</link>
      <pubDate>Fri, 08 Jan 2010 20:51:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-new-user-group-north-east-bytes/</guid>
      <description>&lt;p&gt;There is a new user group starting up in the North East. It is taking an interesting route of having meetings with two one-hour sessions, one targeted at developers and the other at IT pros. The user group is free and there will be food and giveaways.&lt;/p&gt;
&lt;p&gt;I like this idea; the danger with too many user groups is that they focus too much on their own little area. Getting in some cross-fertilisation between people with different views on problems is a great way to learn more.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a new user group starting up in the North East. It is taking an interesting route of having meetings with two one-hour sessions, one targeted at developers and the other at IT pros. The user group is free and there will be food and giveaways.</p>
<p>I like this idea; the danger with too many user groups is that they focus too much on their own little area. Getting in some cross-fertilisation between people with different views on problems is a great way to learn more.</p>
<p>The NE Bytes launch event is on the 20th of January in Room G11 of the Percy Building at Newcastle University. It will have sessions on Silverlight and SharePoint 2010.</p>
<p>For more details check out <a href="http://www.nebytes.net/" title="http://www.nebytes.net/">http://www.nebytes.net/</a></p>
<p><a href="http://www.nebytes.net/"><img loading="lazy" src="http://profile.ak.fbcdn.net/object2/377/17/s235475982945_7704.jpg"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>My Christmas Message to the world</title>
      <link>https://blog.richardfennell.net/posts/my-christmas-message-to-the-world/</link>
      <pubDate>Wed, 23 Dec 2009 10:35:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-christmas-message-to-the-world/</guid>
      <description>&lt;p&gt;Like the &lt;a href=&#34;http://en.wikipedia.org/wiki/Royal_Christmas_Message&#34;&gt;Queen, I have recorded a Christmas message&lt;/a&gt; this year. Now I have no prior knowledge of what her Majesty will speak about this year, but I will lay good odds it is not about using Typemock Isolator.&lt;/p&gt;
&lt;p&gt;On the &lt;a href=&#34;http://site.typemock.com/black-marble-using-typemock-is/2009/12/22/black-marble-using-typemock-isolator.html&#34;&gt;Typemock site you will find a short video&lt;/a&gt; on how we at Black Marble make use of Isolator to tackle testing problems that do not lend themselves to traditional mocking patterns.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Like the <a href="http://en.wikipedia.org/wiki/Royal_Christmas_Message">Queen, I have recorded a Christmas message</a> this year. Now I have no prior knowledge of what her Majesty will speak about this year, but I will lay good odds it is not about using Typemock Isolator.</p>
<p>On the <a href="http://site.typemock.com/black-marble-using-typemock-is/2009/12/22/black-marble-using-typemock-isolator.html">Typemock site you will find a short video</a> on how we at Black Marble make use of Isolator to tackle testing problems that do not lend themselves to traditional mocking patterns.</p>
<p>So if you are at a loose end over the holidays why not curl up with your loved ones and partake in this festive IT video.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Making a TFS2010 Beta2 server use SSL Ports</title>
      <link>https://blog.richardfennell.net/posts/making-a-tfs2010-beta2-server-use-ssl-ports/</link>
      <pubDate>Fri, 18 Dec 2009 16:34:18 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/making-a-tfs2010-beta2-server-use-ssl-ports/</guid>
      <description>&lt;p&gt;There are many good documents on how to migrate a TFS server from its default ports of 8080 (TFS) and 80 (SharePoint/Reports) to 8443 and 443, usually to allow Internet access. A good place to start &lt;a href=&#34;http://blogs.msdn.com/ablock/archive/2009/08/24/exposing-tfs-2010-beta-2-to-the-internet.aspx&#34;&gt;is Aaron Block’s post on the subject&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I did find a problem whilst sorting this out on a system today: although we had modified all the services to operate on the SSL-secured ports, the various TFS team project WSS sites were still trying to access reports on &lt;a href=&#34;http://servername/reports&#34;&gt;http://servername/reports&lt;/a&gt; and not the new &lt;a href=&#34;https://tfs.mydoman.com/reports&#34;&gt;https://tfs.mydoman.com/reports&lt;/a&gt; URL. The reason for this was that the tfsredirect.aspx cache needed to be cleared; WSS did not know we had updated the server.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There are many good documents on how to migrate a TFS server from its default ports of 8080 (TFS) and 80 (SharePoint/Reports) to 8443 and 443, usually to allow Internet access. A good place to start <a href="http://blogs.msdn.com/ablock/archive/2009/08/24/exposing-tfs-2010-beta-2-to-the-internet.aspx">is Aaron Block’s post on the subject</a>.</p>
<p>I did find a problem whilst sorting this out on a system today: although we had modified all the services to operate on the SSL-secured ports, the various TFS team project WSS sites were still trying to access reports on <a href="http://servername/reports">http://servername/reports</a> and not the new <a href="https://tfs.mydoman.com/reports">https://tfs.mydoman.com/reports</a> URL. The reason for this was that the tfsredirect.aspx cache needed to be cleared; WSS did not know we had updated the server.</p>
<p>Again I found a few posts on this, but they all date from the Beta1 era and all had the same problem, i.e. an error was returned. It turns out that the URL to clear the cache is</p>
<p><a href="http://servrname/sites/MyCollection/Project1/_layouts/TfsRedirect.aspx?tf:Type=ReportList&amp;tf:ClearCache=1&amp;tf:Test=1">http://servrname/sites/MyCollection/Project1/_layouts/TfsRedirect.aspx?tf:Type=ReportList&amp;tf:ClearCache=1&amp;tf:Test=1</a></p>
<p>Note that the tf:Type parameter is ReportList and not the ReportLists that most blog posts state. Once this clear-cache URL was run the WSS reports leapt into life.</p>
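<p>As a small sketch (the server, collection and project names below are placeholders, not from any real system), the clear-cache URL can be built and fired from a script rather than pasted into a browser:</p>

```shell
# Placeholder names -- substitute your own TFS server, collection and project.
SERVER="http://servername"
PROJECT_SITE="sites/MyCollection/Project1"
# Note: tf:Type must be ReportList (singular), not ReportLists.
URL="${SERVER}/${PROJECT_SITE}/_layouts/TfsRedirect.aspx?tf:Type=ReportList&tf:ClearCache=1&tf:Test=1"
echo "$URL"
# To actually clear the cache, request the URL with your Windows credentials,
# e.g.: curl --ntlm -u DOMAIN\\username "$URL"
```

<p>The echo just shows the URL that will be requested; the commented curl line is one way to make the authenticated request from the command line.</p>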
]]></content:encoded>
    </item>
    <item>
      <title>Upcoming Community Conferences</title>
      <link>https://blog.richardfennell.net/posts/upcoming-community-conferences/</link>
      <pubDate>Fri, 11 Dec 2009 20:16:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upcoming-community-conferences/</guid>
      <description>&lt;p&gt;I am a bit behind the curve here, but if you have not noticed, &lt;a href=&#34;http://developerdeveloperdeveloper.com/ddd8/&#34;&gt;DDD8&lt;/a&gt; is planned for January in Reading and &lt;a href=&#34;http://www.developerdeveloperdeveloper.com/scotland2010/Default.aspx&#34;&gt;DDD Scotland&lt;/a&gt; in Glasgow for May.&lt;/p&gt;
&lt;p&gt;Both conferences have open calls for speakers, so get your sessions in quick.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am a bit behind the curve here, but if you have not noticed, <a href="http://developerdeveloperdeveloper.com/ddd8/">DDD8</a> is planned for January in Reading and <a href="http://www.developerdeveloperdeveloper.com/scotland2010/Default.aspx">DDD Scotland</a> in Glasgow for May.</p>
<p>Both conferences have open calls for speakers, so get your sessions in quick.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Agile Yorkshire meeting - Kanban For Software Engineering</title>
      <link>https://blog.richardfennell.net/posts/agile-yorkshire-meeting-kaban-for-software-engineering/</link>
      <pubDate>Tue, 08 Dec 2009 22:33:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agile-yorkshire-meeting-kaban-for-software-engineering/</guid>
      <description>&lt;p&gt;It is time again for Agile Yorkshire. This month the meeting is, as usual, on the second Wednesday of the month, but at a different venue - Old Broadcasting House (&lt;a href=&#34;http://www.ntileeds.co.uk/old-broadcasting-house/&#34;&gt;http://www.ntileeds.co.uk/old-broadcasting-house/&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;The session is on &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/09Dec2009&#34;&gt;Kanban For Software Engineering by David Joyce and Peter Camfield from BBC Worldwide&lt;/a&gt;; as lean seems to be all the rage at present, this should be very interesting. Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is time again for Agile Yorkshire. This month the meeting is, as usual, on the second Wednesday of the month, but at a different venue - Old Broadcasting House (<a href="http://www.ntileeds.co.uk/old-broadcasting-house/">http://www.ntileeds.co.uk/old-broadcasting-house/</a>)</p>
<p>The session is on <a href="http://www.agileyorkshire.org/2009-event-announcements/09Dec2009">Kanban For Software Engineering by David Joyce and Peter Camfield from BBC Worldwide</a>; as lean seems to be all the rage at present, this should be very interesting. Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A busy week of presenting</title>
      <link>https://blog.richardfennell.net/posts/a-busy-week-of-presenting/</link>
      <pubDate>Tue, 01 Dec 2009 12:02:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-busy-week-of-presenting/</guid>
      <description>&lt;p&gt;The interest in Visual Studio 2010 is growing; I am presenting at two events this week and have another day of less formal meetings on the subject.&lt;/p&gt;
&lt;p&gt;The event on Thursday is the Architecture Forum in the North, which we are hosting with Microsoft. There are still a few spaces available, so if you are interested in learning more about new techniques and tools why not come along. You even get to hear me talking about using &lt;a href=&#34;http://www.teamprise.com/products/plugin/&#34;&gt;TFS as a Java developer via Teamprise&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The interest in Visual Studio 2010 is growing; I am presenting at two events this week and have another day of less formal meetings on the subject.</p>
<p>The event on Thursday is the Architecture Forum in the North, which we are hosting with Microsoft. There are still a few spaces available, so if you are interested in learning more about new techniques and tools why not come along. You even get to hear me talking about using <a href="http://www.teamprise.com/products/plugin/">TFS as a Java developer via Teamprise</a>.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Post PDC Thoughts</title>
      <link>https://blog.richardfennell.net/posts/post-pdc-thoughts/</link>
      <pubDate>Tue, 01 Dec 2009 10:43:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-pdc-thoughts/</guid>
      <description>&lt;p&gt;I realised I never did another post after my first at the PDC; now what does that tell you?&lt;/p&gt;
&lt;p&gt;One thing it tells me is that blogs are not the primary news form for events now; it has moved on to Twitter. Though as yet I am still lagging behind on this one: I have an &lt;a href=&#34;http://twitter.com/richardfennell&#34;&gt;account but no tweets&lt;/a&gt; as yet. I find there is too much noise most of the time on Twitter; it is useful when at an event like PDC to get the buzz, but for me not day to day (though I know I am missing stuff because of this view).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I realised I never did another post after my first at the PDC; now what does that tell you?</p>
<p>One thing it tells me is that blogs are not the primary news form for events now; it has moved on to Twitter. Though as yet I am still lagging behind on this one: I have an <a href="http://twitter.com/richardfennell">account but no tweets</a> as yet. I find there is too much noise most of the time on Twitter; it is useful when at an event like PDC to get the buzz, but for me not day to day (though I know I am missing stuff because of this view).</p>
<p>For me the key story at the PDC as a whole was that the <a href="http://www.microsoft.com/windowsazure/dotnetservices/">Azure fabric can extend into your IT systems using AppFabric</a>. This means I can easily see a day when you write an application for an internal IT system that can dynamically grow into an Azure data centre when needed for load or disaster recovery, all without any special coding model, because the Azure AppFabric is ubiquitous.</p>
<p>So a light PDC on the blogging front, but one full of future architectural promise.</p>
<p>Oh, one last thought: on past trips to the USA I have been to the baseball, which I like; it is not that dissimilar a night out to one at the <a href="http://en.wikipedia.org/wiki/Twenty20">Twenty20 cricket</a>. This time we tried basketball, which was less to my taste. When watched live it seems the game play just gets in the way of the adverts and the various other audience participation entertainment. I have never seen a sport with so many ways to stop the clock (and for so long!). Looks like I need to stay with bat-and-ball games.</p>
]]></content:encoded>
    </item>
    <item>
      <title>PDC Keynote Day 1 thoughts</title>
      <link>https://blog.richardfennell.net/posts/pdc-keynote-day-1-thoughts/</link>
      <pubDate>Tue, 17 Nov 2009 19:39:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pdc-keynote-day-1-thoughts/</guid>
      <description>&lt;p&gt;So the PDC2009 day 1 keynote is over, and what was the story? Well, it was more of a vision thing, but then again this is a PDC not a TechEd, so what do you expect? For me the two major themes were&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Dallas – a centralised data service that allows unified access to both public and private data via subscriptions, thus allowing core data to be used for any purpose the user requires within the EULA of the data in question. It will be interesting to see what is published in this manner; is there a market for a centralised data clearing house? Only time will tell.&lt;/li&gt;
&lt;li&gt;AppFabric – basically taking the operating model of the Azure services and allowing a company to have a similar model in their own IT system, thus allowing code to be written that can work on the corporate system or the Azure cloud without alteration. This I see as being big.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So what was not mentioned? Well, it was mobile. The only comment was a ‘come to Mix in the spring for stuff about the next mobile offering’. Whatever is shown there is going to have to be very good to address the momentum of the iPhone. I think a good bet is that leveraging the Azure fabric might be important for the mobile offering.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>So the PDC2009 day 1 keynote is over, and what was the story? Well, it was more of a vision thing, but then again this is a PDC not a TechEd, so what do you expect? For me the two major themes were</p>
<ul>
<li>Dallas – a centralised data service that allows unified access to both public and private data via subscriptions, thus allowing core data to be used for any purpose the user requires within the EULA of the data in question. It will be interesting to see what is published in this manner; is there a market for a centralised data clearing house? Only time will tell.</li>
<li>AppFabric – basically taking the operating model of the Azure services and allowing a company to have a similar model in their own IT system, thus allowing code to be written that can work on the corporate system or the Azure cloud without alteration. This I see as being big.</li>
</ul>
<p>So what was not mentioned? Well, it was mobile. The only comment was a ‘come to Mix in the spring for stuff about the next mobile offering’. Whatever is shown there is going to have to be very good to address the momentum of the iPhone. I think a good bet is that leveraging the Azure fabric might be important for the mobile offering.</p>
]]></content:encoded>
    </item>
    <item>
      <title>November Agile Yorkshire Meeting</title>
      <link>https://blog.richardfennell.net/posts/november-agile-yorkshire-meeting/</link>
      <pubDate>Mon, 09 Nov 2009 15:00:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/november-agile-yorkshire-meeting/</guid>
      <description>&lt;p&gt;Time for the usual reminder for the next Agile Yorkshire meeting. This month it is by &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/%E2%80%8E11thnov-techniquesfordealingwithdifficultconversationsandnegotiationsinsoftwaredevelopment&#34;&gt;Mark Stringer on ‘Techniques for dealing with difficult conversations &amp;amp; negotiations in software development&lt;/a&gt;’. Usual time, usual place, usual free beer.&lt;/p&gt;
&lt;p&gt;….and just a heads up for the December meeting, as this is at a different venue, Old Broadcasting House (&lt;a href=&#34;http://www.ntileeds.co.uk/old-broadcasting-house/&#34;&gt;http://www.ntileeds.co.uk/old-broadcasting-house/&lt;/a&gt;), it is going to be by &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/09Dec2009&#34;&gt;David Joyce of the BBC on ‘Kanban for Software Engineering’&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Time for the usual reminder for the next Agile Yorkshire meeting. This month it is by <a href="http://www.agileyorkshire.org/2009-event-announcements/%E2%80%8E11thnov-techniquesfordealingwithdifficultconversationsandnegotiationsinsoftwaredevelopment">Mark Stringer on ‘Techniques for dealing with difficult conversations &amp; negotiations in software development</a>’. Usual time, usual place, usual free beer.</p>
<p>….and just a heads up for the December meeting, as this is at a different venue, Old Broadcasting House (<a href="http://www.ntileeds.co.uk/old-broadcasting-house/">http://www.ntileeds.co.uk/old-broadcasting-house/</a>), it is going to be by <a href="http://www.agileyorkshire.org/2009-event-announcements/09Dec2009">David Joyce of the BBC on ‘Kanban for Software Engineering’</a></p>
<p><a href="http://www.agileyorkshire.org/"><img loading="lazy" src="http://groups.google.com/group/agilist-administration/web/agileYorksLogo_dt.png?hl=en-GB&display=thumb&width=200&height=200"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Notes on TFS2010 Beta1 to Beta 2 Upgrade</title>
      <link>https://blog.richardfennell.net/posts/notes-on-tfs2010-beta1-to-beta-2-upgrade/</link>
      <pubDate>Tue, 03 Nov 2009 21:54:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/notes-on-tfs2010-beta1-to-beta-2-upgrade/</guid>
      <description>&lt;p&gt;I have recently upgraded my dual-tier TFS 2010 Beta1 instance to Beta2. This is not an officially supported migration, but it is certainly possible. The basic process of the update is straightforward:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Remove Beta1 from the AT&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Install Beta2&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the configuration wizard in upgrade mode (this took a few hours for DB upgrades)&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Once this was complete I had what I thought was a working Beta2 server (I saw no errors), but there were problems.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have recently upgraded my dual-tier TFS 2010 Beta1 instance to Beta2. This is not an officially supported migration, but it is certainly possible. The basic process of the update is straightforward:</p>
<ul>
<li>
<p>Remove Beta1 from the AT</p>
</li>
<li>
<p>Install Beta2</p>
</li>
<li>
<p>Run the configuration wizard in upgrade mode (this took a few hours for DB upgrades)</p>
</li>
</ul>
<p>Once this was complete I had what I thought was a working Beta2 server (I saw no errors), but there were problems.</p>
<p>Firstly, for some reason the upgrade had lost the binding of port 8443 on the TFS web site (we use this to talk to the TFS server over the Internet using SSL). This was fixed by rebinding the port and certificate in IIS Manager. Now I could access version control and work items both inside and outside our firewall.</p>
<p>The more major problem was when I tried to load the SharePoint sites associated with the Team Projects. I always got the error</p>
<blockquote>
<p><em>Parser Error Message: Could not load type &lsquo;Microsoft.TeamFoundation.SharePoint.Dashboards.ApplicationMasterModule&rsquo; from assembly &lsquo;Microsoft.TeamFoundation.SharePoint.Dashboards, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a&rsquo;. (C:\inetpub\wwwroot\wss\VirtualDirectories\80\web.config line 94)</em></p></blockquote>
<p>I had to comment out this HTTP module at line 94 in the web.config. The reason for the error was that the module was a Beta1 assembly and is not present at all in the Beta2 release. This edit meant I could now load the SharePoint sites for the TFS collections, e.g. <a href="http://tfs2010/sites/DefaultCollection">http://tfs2010/sites/DefaultCollection</a>, but when I tried to open a project site, e.g. <a href="http://tfs2010/sites/DefaultCollection/Project1">http://tfs2010/sites/DefaultCollection/Project1</a>, I still got an error. The basic problem was that the SharePoint templates (created from the TFS Process Template) were of the old Beta1 type, which pointed to all the wrong reports and work item types.</p>
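<p>For illustration, commenting out the module in web.config looks something like the fragment below. The type is the one from the error message above, but the name attribute is hypothetical; keep whatever name actually appears on line 94 of your web.config.</p>

```xml
<!-- Beta1-only dashboard module: the assembly no longer exists in Beta2,
     so the entry is disabled. The name attribute below is illustrative. -->
<!--
<add name="TfsDashboardApplicationMaster"
     type="Microsoft.TeamFoundation.SharePoint.Dashboards.ApplicationMasterModule, Microsoft.TeamFoundation.SharePoint.Dashboards, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
-->
```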
<p><a href="http://blogs.msdn.com/granth/archive/2009/08/31/tfs2010-how-to-make-beta1-sharepoint-sites-work-after-upgrading-to-beta2.aspx">Grant Holliday has written a great post</a> on what needs to be done to start to address this problem. Following his process to change the template got rid of the main error (due to the wrong master page), but not all of the problems. The page loaded, but many of the webparts failed to render. As most of the remaining errors related to reports, I had to fix Reporting Services before I could finish fixing SharePoint.</p>
<p>The reports did not work because the TFS Warehouse schema is different between Beta1 and Beta2. I got the new Beta2 reports out of the zip file <strong>C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Deploy\ProcessTemplateManagerFiles\MsfAgileTemplate.zip.</strong> I then replaced the existing reports on the AT’s Reporting Services instance for each team project. When doing this, watch out that some of the reports have changed their names, so it is best to delete any old reports you are not replacing to avoid confusion. Also remember that you have to reconnect all the data sources in the report properties before they can be run.</p>
<p>Now that the reports worked in Reporting Services (and Team Explorer), I could return to the SharePoint site. The main problem was that I was getting TF262600 errors, stating that the SharePoint site was not linked to a TFS project. This was easily fixed: in Team Explorer, right-click on a Team Project and select the Portal Site menu option. You see the dialog</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_700433E3.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_06965C5E.png" title="image"></a></p>
<p>It looks and reports that it is correctly configured, but it is not. The answer is to uncheck the ‘Reports and dashboards refer to data for this team project’ option, press OK, then open the dialog again and recheck it. This re-establishes the link between the portal site and TFS. If the site is now reloaded, the webparts that show work items work and the TF262600 errors at the top of the page have gone.</p>
<p>The final step is to fix the reports in the dashboard web pages. The problem here was purely that the names of the reports were wrong. A simple edit of the report viewer webparts fixed this and all the dashboards worked.</p>
<p>So now the server was working and clients could access all the sub-parts of the TFS instance.</p>
<p>There remained a related job: to fix the build box. Again I uninstalled Beta1 and reinstalled Beta2. When I ran the configuration tool it picked up the old configuration for the agent and controller without a problem. However, when I tried to queue a build it errored. Again, the problem was that the template had changed. For each project I replaced the XAML template file with the equivalent one from <strong>C:\Program Files\Microsoft Team Foundation Server 2010\Tools\Deploy\ProcessTemplateManagerFiles\MsfAgileTemplate.zip\BuildTemplates.</strong> I then had to edit each build to set the revised properties. I could then run a build. Again it did initially fail, as TFS claimed there were workspace mapping conflicts. To get around this I just deleted all workspaces for the TFSbuild user and let them be recreated, and then all worked.</p>
<p>So now I had an error-free Beta2 system. There was a good deal of manual work to do to sort out SharePoint, but it is all fairly easy once you work through it step by step.</p>
<p>Next I need to look at our main TFS 2008 instance.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF53010 error and no TFS Warehouse updates after a SQL migrate</title>
      <link>https://blog.richardfennell.net/posts/tf53010-error-and-no-tfs-warehouse-updates-after-a-sql-migrate/</link>
      <pubDate>Wed, 21 Oct 2009 10:53:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf53010-error-and-no-tfs-warehouse-updates-after-a-sql-migrate/</guid>
      <description>&lt;p&gt;We recently moved our central SQL server to new SAN hardware and at the same time upgraded from SQL2005 to SQL2008. Once this was done we noticed that our TFS Reports were running against old Warehouse data.&lt;/p&gt;
&lt;p&gt;Checking the TFS Application Tier event log we saw:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;TF53010: The following error has occurred in a Team Foundation component or extension:&lt;br&gt;
Date (UTC): 21/10/2009 10:27:25&lt;br&gt;
Machine: TFSAT&lt;br&gt;
Application Domain: /LM/W3SVC/287244640/Root/Warehouse-2-129005451884971104&lt;br&gt;
Assembly: Microsoft.TeamFoundation.Warehouse, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v2.0.50727&lt;br&gt;
Process Details:&lt;br&gt;
Process Name: w3wp&lt;br&gt;
Process Id: 2716&lt;br&gt;
Thread Id: 2848&lt;br&gt;
Account name: MYDOMAIN\TFSSERVICE&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We recently moved our central SQL server to new SAN hardware and at the same time upgraded from SQL2005 to SQL2008. Once this was done we noticed that our TFS Reports were running against old Warehouse data.</p>
<p>Checking the TFS Application Tier event log we saw:</p>
<blockquote>
<p>TF53010: The following error has occurred in a Team Foundation component or extension:<br>
Date (UTC): 21/10/2009 10:27:25<br>
Machine: TFSAT<br>
Application Domain: /LM/W3SVC/287244640/Root/Warehouse-2-129005451884971104<br>
Assembly: Microsoft.TeamFoundation.Warehouse, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a; v2.0.50727<br>
Process Details:<br>
Process Name: w3wp<br>
Process Id: 2716<br>
Thread Id: 2848<br>
Account name: MYDOMAIN\TFSSERVICE</p>
<p>Detailed Message: Cube processing runtime error:<br>
Microsoft.TeamFoundation.Warehouse.WarehouseException: The following database is not accessible in the Analysis Server: TfsWarehouse<br>
   at Microsoft.TeamFoundation.Warehouse.OlapCreator.ProcessOlapNoTransaction(Boolean schemaUpdated, UpdateStatusStore updateStatus, Server server, SqlTransaction transaction)<br>
   at Microsoft.TeamFoundation.Warehouse.OlapCreator.ProcessOlap(Boolean schemaUpdated, UpdateStatusStore updateStatus)<br>
   at Microsoft.TeamFoundation.Warehouse.AdapterScheduler.RunCubeProcess()</p></blockquote>
<p>The problem was missing rights on the new 2008 Analysis Services instance. The quick fix was to give the MYDOMAIN\TFSSERVICE account administrator rights on the instance (in SQL Management Studio, connect to the Analysis Services instance, right-click on the instance, Properties, Security, add the user). Once this was done I could <a href="http://ozgrant.com/2006/05/15/forcing-data-warehouse-update-for-tfs/">force a reprocess</a> and all was OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A video on Access 2010</title>
      <link>https://blog.richardfennell.net/posts/a-video-on-access-2010/</link>
      <pubDate>Wed, 21 Oct 2009 07:04:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-video-on-access-2010/</guid>
      <description>&lt;p&gt;I recently posted on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/10/19/access-services-in-sharepoint-2010-or-how-i-learned-to-stop-worrying-and-love-access-2010.aspx&#34;&gt;my experiences of Access 2010&lt;/a&gt;, well if you want to know more have a look at the &lt;a href=&#34;http://channel9.msdn.com/shows/Access/Microsoft-Access-2010-Demo/&#34;&gt;video by Clint Covington and Ryan McMinn on Channel 9&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently posted on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/10/19/access-services-in-sharepoint-2010-or-how-i-learned-to-stop-worrying-and-love-access-2010.aspx">my experiences of Access 2010</a>; if you want to know more, have a look at the <a href="http://channel9.msdn.com/shows/Access/Microsoft-Access-2010-Demo/">video by Clint Covington and Ryan McMinn on Channel 9</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2010 Beta 2 release and a new licensing model</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2010-beta-2-release-and-a-new-licensing-model/</link>
      <pubDate>Mon, 19 Oct 2009 19:34:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2010-beta-2-release-and-a-new-licensing-model/</guid>
      <description>&lt;p&gt;About 2 hours ago &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2009/10/19/vs-2010-beta-2-is-now-available-for-msdn-subscriber-download.aspx&#34;&gt;Visual Studio 2010 Beta 2 was released to MSDN subscribers&lt;/a&gt;; the usual wait for the download now starts!&lt;/p&gt;
&lt;p&gt;Also the &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2009/10/19/vs-2010-licensing-changes.aspx&#34;&gt;revised SKU and licensing model for 2010&lt;/a&gt; was made public by Brian Harry. This is meant to be a simplification, but it still remains fairly complex, with the usual questions over ‘is that in this SKU or that?’&lt;/p&gt;
&lt;p&gt;The key point for me is that the cost of entry for TFS is going to drop significantly, as the TFS server and a client CAL are included in the MSDN subscriptions with the VS Pro, Premium and Ultimate SKUs. This should break down a few barriers.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>About 2 hours ago <a href="http://blogs.msdn.com/bharry/archive/2009/10/19/vs-2010-beta-2-is-now-available-for-msdn-subscriber-download.aspx">Visual Studio 2010 Beta 2 was released to MSDN subscribers</a>; the usual wait for the download now starts!</p>
<p>Also the <a href="http://blogs.msdn.com/bharry/archive/2009/10/19/vs-2010-licensing-changes.aspx">revised SKU and licensing model for 2010</a> was made public by Brian Harry. This is meant to be a simplification, but it still remains fairly complex, with the usual questions over ‘is that in this SKU or that?’</p>
<p>The key point for me is that the cost of entry for TFS is going to drop significantly, as the TFS server and a client CAL are included in the MSDN subscriptions with the VS Pro, Premium and Ultimate SKUs. This should break down a few barriers.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Access Services in SharePoint 2010 or: How I Learned to Stop Worrying and Love Access 2010</title>
      <link>https://blog.richardfennell.net/posts/access-services-in-sharepoint-2010-or-how-i-learned-to-stop-worrying-and-love-access-2010/</link>
      <pubDate>Mon, 19 Oct 2009 18:52:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/access-services-in-sharepoint-2010-or-how-i-learned-to-stop-worrying-and-love-access-2010/</guid>
      <description>&lt;p&gt;So what have I been doing of late? The blog has been a bit quiet. Well, I have been having a good look at Access Services in SharePoint 2010. This has been an interesting experience, as I am not historically what you might call an avid Access developer.&lt;/p&gt;
&lt;p&gt;Like most .NET developers I had looked at Access as more of a file format than an application, something from the past. Something that I might use for a small data store, maybe in a web application hosted on a cheaper ISP that does not provide ‘a real’ SQL DB, or where XML data files don’t seem right, often because I just can’t be bothered to work out the XPath. When using Access as a data format it seems easier to get at my data using basic hand-crafted SQL commands, or maybe at most via an OLEDB DataAdapter/DataSet. All very old school. Thinking about Access in this way just seems an easy way out, playing it safe with the knowledge I have. I don’t for a second propose that this is a good idea; you should not be looking at using any technology just because it is there and you already know it. There are obvious downsides; using Access in this manner meant that from the ADO.NET developer side I could not:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>So what have I been doing of late? The blog has been a bit quiet. Well, I have been having a good look at Access Services in SharePoint 2010. This has been an interesting experience, as I am not historically what you might call an avid Access developer.</p>
<p>Like most .NET developers I had looked at Access as more of a file format than an application, something from the past. Something that I might use for a small data store, maybe in a web application hosted on a cheaper ISP that does not provide ‘a real’ SQL DB, or where XML data files don’t seem right, often because I just can’t be bothered to work out the XPath. When using Access as a data format it seems easier to get at my data using basic hand-crafted SQL commands, or maybe at most via an OLEDB DataAdapter/DataSet. All very old school. Thinking about Access in this way just seems an easy way out, playing it safe with the knowledge I have. I don’t for a second propose that this is a good idea; you should not be looking at using any technology just because it is there and you already know it. There are obvious downsides; using Access in this manner meant that from the ADO.NET developer side I could not:</p>
<ul>
<li>make use of the newer <a href="http://msdn.microsoft.com/en-us/library/bb397687.aspx">lambda expression</a> based syntax of <a href="http://msdn.microsoft.com/en-us/netframework/aa904594.aspx">LINQ</a></li>
<li>make use of newer <a href="http://en.wikipedia.org/wiki/Object-relational_mapping">ORM solutions</a> such as <a href="http://msdn.microsoft.com/en-us/library/aa697427%28VS.80%29.aspx">Entity Framework</a> or <a href="http://nhforge.org/Default.aspx">nHibernate</a>.</li>
</ul>
<p>But equally, by treating Access as just a data format I was not able to make use of it as the rapid development tool it is. I was too hung up on the unpleasant idea of an MDB sitting on a server, being poor at locking and saturating the network with unwanted traffic. I was not even considering Access as a front end to an MS-SQL solution, and it is not as if that is new technology; it has been around for ages. I was just sitting happily with my prejudices.</p>
<p>I don’t think this position is that rare for .NET developers these days. Access just seems to be looked down upon as something old in the Office pack that is best ignored; no good would come of using it in a business environment.</p>
<p>So enter <a href="http://www.mssharepointconference.com">Office 2010 and SharePoint 2010 Access Services</a>; for me this changes the game. For those who don’t know this technology: you can create an Access database locally on your PC then publish it to SharePoint. Tables become SharePoint lists, macros become workflows and forms, well, become forms. Access becomes a RAD tool for creating data-driven SharePoint sites.</p>
<p>So how has this new technology been working for me? Well, I can’t say I have grown to love the Access client, but I think that is mostly down to the fact that I am still not thinking right for it. Access is all about data binding: you don’t have to think about which form fields need to be copied to which DB columns, and the wizards make a really good attempt to design forms for you based on the relationships of the tables in your DB, and this all seems unnatural to me. I think this is because I usually work with design patterns that reduce the linkage between forms and data to a minimum, e.g. the MVC pattern, and so consider this good practice; automated data binding seems wrong. So in Access I keep wanting to build things from first principles, but this is just not sensible. Better to let the tool get you close and then add the polish, putting away any thoughts of implementing design patterns as you would in a language such as C# or VB.NET.</p>
<p>I think this is the key to the degree of irritation I feel with the product: if you have got used to architecting from the ground up, especially in a Test Driven Development style, you have to turn everything on its head. It feels like you are cheating, not doing the job properly.</p>
<p>But wait! Look at the benefits. A while ago I was involved in a project to provide a resource management data-driven web site hosted within SharePoint. It contained the usual things: data entry forms, links to SQL and reports. It took a couple of weeks to build. I think I could write the same system in Access with SharePoint 2010 in an afternoon, and would be happy to have a client’s business analyst sit next to me while I did it, in a pair programming style, to design the forms, report layouts and columns as I went along. For the smaller-scale data-driven site Access Services is a great tool, but obviously it is not perfect. I do keep hitting points where I think ‘if I were in C# I could just do that’, but then I remember ‘yes, but it would have taken me three days to get here, not an hour’. Most projects don’t need that last 10-20% you can only reach with custom .NET code; the client will be far happier with 80% done quickly and flexibly rather than 95% done a lot later. Also we have to factor in my relative lack of experience with Access as a RAD tool, reducing the productivity that could potentially be achieved by a more experienced Access developer.</p>
<p>Actually, the bulk of the time I have spent has been on looking at how you can extend Access Services to reach that last 20% of functionality, and it is not that hard. The key thing to remember is that Access Services is just built on standard SharePoint objects. OK, there is a new service running to render the pages, but underneath there are just SharePoint lists and workflows, and where these exist there are events that you can handle programmatically. I have found that by trapping events such as ItemAdded() for the Access-created SharePoint lists there is no real limit to what you can achieve via the SharePoint Object Model. And this development process is made even easier by the new Visual Studio 2010 templates for SharePoint features. If nothing else, the fact that all the templates create a WSP for deployment as standard makes for far more robust feature development.</p>
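<p>As a sketch of the kind of extension I mean, assuming the WSS 3.0 server object model (the class name, list and field names here are hypothetical, not from a real project):</p>

```csharp
using Microsoft.SharePoint;

// Hypothetical event receiver attached to a list that Access Services created.
// ItemAdded fires after a row is inserted via the published Access forms,
// so custom .NET logic can run without touching the Access application itself.
public class AccessListReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        SPListItem item = properties.ListItem;

        // Example custom step: stamp a column the Access forms do not manage.
        item["ProcessedBy"] = "CustomFeature";
        item.Update();
    }
}
```

<p>A receiver like this would be bound to the list and deployed as a feature inside a WSP, which is what the Visual Studio 2010 SharePoint templates produce by default.</p>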
<p>There is one major difference between a standard SharePoint site and one created by Access: SharePoint Designer cannot open the Access site. I thought this would be an issue when I first heard about the limitation, but it turned out not to be. Anything you might have wanted to do in SharePoint Designer you can do quicker and easier in Access for this type of data-driven site. OK, the range of things you can do is more limited, but again you get the 80% you need with much less pain.</p>
<p>So how has my experience with Access 2010 been? Exasperating, frustrating, but undeniably productive. I am not sure it is the right product for an ISV-style company who want to roll out a single solution to many client sites (though it could be used for this if needed, via the SharePoint site template gallery); but for a smaller data-driven site (with or without custom extensions) written within an IT department it is a very strong contender. Taking Access, in many ways, back to its roots.</p>
<p>So if you need small data driven sites I would suggest you put aside your prejudices and have a look at the beta program for Office/SharePoint 2010, I think you will be surprised.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I&#39;m off to Ireland this week</title>
      <link>https://blog.richardfennell.net/posts/im-off-to-ireland-this-week/</link>
      <pubDate>Sun, 18 Oct 2009 17:22:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/im-off-to-ireland-this-week/</guid>
      <description>&lt;p&gt;I am off to present in Ireland this week in a double header with another of Black Marble&amp;rsquo;s test team, Robert Hancock. We will be appearing at the Microsoft Ireland Visual Studio Academy. Our subject is &lt;a href=&#34;http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032428280&amp;amp;Culture=en-IE&#34;&gt;Improved efficiency throughout the test cycle&lt;/a&gt;. As registration is still open I guess there are still spaces, so if it is of interest to you why not sign up (and who is not interested in testing?)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am off to present in Ireland this week in a double header with another of Black Marble&rsquo;s test team, Robert Hancock. We will be appearing at the Microsoft Ireland Visual Studio Academy. Our subject is <a href="http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032428280&amp;Culture=en-IE">Improved efficiency throughout the test cycle</a>. As registration is still open I guess there are still spaces, so if it is of interest to you why not sign up (and who is not interested in testing?)</p>
<p>So while many of our <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=News&amp;title=Black%20Marble%20hits%20Vegas%20for%20the%20SharePoint%202009%20Conference">staff are off living it up in Las Vegas</a> for the SharePoint conference, Robert and myself have been busy building a demo-rich session for the Dublin event that touches on a whole host of different testing tools and techniques. So we hope to see you there for what should be a very interesting session.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Local events reminder</title>
      <link>https://blog.richardfennell.net/posts/local-events-reminder/</link>
      <pubDate>Mon, 12 Oct 2009 21:27:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/local-events-reminder/</guid>
      <description>&lt;p&gt;Wednesday this week is the next Agile Yorkshire meeting, there are 2 presentations and a discussion planned:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Agile War Stories: A Project Managers Perspective. Ian Carroll&lt;/li&gt;
&lt;li&gt;User Story Estimation: Alan Williams&lt;/li&gt;
&lt;li&gt;An open discussion about Lean and Kanban in software development.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As usual the meeting is at The Victoria Hotel, Leeds from 6:30 onwards for a 7:00 start. See &lt;a href=&#34;http://www.agileyorkshire.org&#34;&gt;www.agileyorkshire.org&lt;/a&gt; for more details.&lt;/p&gt;
&lt;p&gt;On Tuesday next week there is the &lt;a href=&#34;http://www.westyorkshire.bcs.org/2009/08/14/the-world-of-ubuntu-and-open-source/&#34;&gt;West Yorkshire BCS meeting&lt;/a&gt; on ‘The World of Ubuntu and Open Source’ by Matthew Barker of Canonical Ltd. As usual the venue is The Met, King Street, Leeds, LS1 2HQ.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Wednesday this week is the next Agile Yorkshire meeting, there are 2 presentations and a discussion planned:</p>
<ul>
<li>Agile War Stories: A Project Managers Perspective. Ian Carroll</li>
<li>User Story Estimation: Alan Williams</li>
<li>An open discussion about Lean and Kanban in software development.</li>
</ul>
<p>As usual the meeting is at The Victoria Hotel, Leeds from 6:30 onwards for a 7:00 start. See <a href="http://www.agileyorkshire.org">www.agileyorkshire.org</a> for more details.</p>
<p>On Tuesday next week there is the <a href="http://www.westyorkshire.bcs.org/2009/08/14/the-world-of-ubuntu-and-open-source/">West Yorkshire BCS meeting</a> on ‘The World of Ubuntu and Open Source’ by Matthew Barker of Canonical Ltd. As usual the venue is The Met, King Street, Leeds, LS1 2HQ.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF252005 Error when creating new collections on TFS 2010 Beta1</title>
      <link>https://blog.richardfennell.net/posts/tf252005-error-when-creating-new-collections-on-tfs-2010-beta1/</link>
      <pubDate>Wed, 07 Oct 2009 19:26:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf252005-error-when-creating-new-collections-on-tfs-2010-beta1/</guid>
      <description>&lt;p&gt;This week I tried to add a new project collection to our TFS 2010 Beta1 test system. All seemed to be going OK and the settings verified without issue, but I got the error TF252005 when it tried to create the associated SharePoint site (on the WSS 3.0 instance on the TFS application tier).&lt;/p&gt;
&lt;p&gt;Now, unlike TFS 2008, this was not a blocking problem. On 2008, if any team project step failed, such as the team WSS site creation, then the team project was rolled back. It is a really nice feature of 2010 that if there is an error it does try to do as much as it can. So in my case I ended up with a TFS collection with no associated SharePoint site collections; not what I wanted, but usable.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This week I tried to add a new project collection to our TFS 2010 Beta1 test system. All seemed to be going OK and the settings verified without issue, but I got the error TF252005 when it tried to create the associated SharePoint site (on the WSS 3.0 instance on the TFS application tier).</p>
<p>Now, unlike TFS 2008, this was not a blocking problem. On 2008, if any team project step failed, such as the team WSS site creation, then the team project was rolled back. It is a really nice feature of 2010 that if there is an error it does try to do as much as it can. So in my case I ended up with a TFS collection with no associated SharePoint site collections; not what I wanted, but usable.</p>
<p>The details of the error I got are shown below:</p>
<p><em>[10/6/2009 9:35:57 PM][Warning] TF252005: Configuration of SharePoint Products and Technologies failed with the following error: Server was unable to process request. &mdash;&gt; User cannot be found.. Failed to create a path to the SharePoint site. Your account might not have the required permissions to create a sub-site on this server.<br>
[10/6/2009 9:35:57 PM] Servicing step Configure SharePoint Settings passed, with warnings. (ServicingOperation: Install; Step group: Install.TfsSharePoint)<br>
[10/6/2009 9:35:57 PM] Executing servicing step Create Test Management Database. (ServicingOperation: Install; Step group: Install.TfsTestManagement)</em></p>
<p>So the question was ‘what user could it not find?’</p>
<p>Upon further investigation I found that I could not give users rights on any of the previously created SharePoint sites on this WSS instance. The SharePoint People Picker said it could not find users or groups for the domain user accounts I entered. I also noticed that if I entered Central Administration it said that all my site administrators’ accounts were invalid (all underlined in red in the configuration form). Basically, it seemed my WSS instance could not find the domain controller.</p>
<p>I next checked the Reporting Services instance on the same box; this had no problems resolving accounts, so I knew it was specifically a SharePoint issue.</p>
<p>So an investigation of SharePoint settings started. In the end it was <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx">Rik</a> who found the root cause, and it was obvious with hindsight: the authentication provider was unset. Once this was set to Windows for all the zones, everything leapt back into life.</p>
<p>The question remains how this setting could have become unset. We did have a domain controller hardware failure last week, but I cannot see why that should cause this issue. Anyway, at least it is fixed now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS for the small team</title>
      <link>https://blog.richardfennell.net/posts/tfs-for-the-small-team/</link>
      <pubDate>Sun, 04 Oct 2009 17:56:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-for-the-small-team/</guid>
      <description>&lt;p&gt;Over the weekend &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2009/10/01/tfs-2010-for-sourcesafe-users.aspx&#34;&gt;Brian Harry posted on his blog about a new ‘Basic’ edition of TFS.&lt;/a&gt; This is aimed squarely at small teams currently using Visual SourceSafe. It will provide version control with work item tracking and can be run on SQL Express. The key difference from the ‘Standard’ edition is that it will not have SharePoint or Reporting Services integration. For far fuller details check the &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2009/10/01/tfs-2010-for-sourcesafe-users.aspx&#34;&gt;blog post&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I think this is going to be a really interesting addition to TFS. This announcement answers a question I was repeatedly asked at our &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Managing%20the%20Application%20Lifecycle&amp;amp;Code=&#34;&gt;TFS2010 event last week&lt;/a&gt;: ‘we are a small team with VSS, I want something simple to move us forward, what should I use? TFS seems a bit complex and expensive’. So for me it is well timed and well placed within the marketplace.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the weekend <a href="http://blogs.msdn.com/bharry/archive/2009/10/01/tfs-2010-for-sourcesafe-users.aspx">Brian Harry posted on his blog about a new ‘Basic’ edition of TFS.</a> This is aimed squarely at small teams currently using Visual SourceSafe. It will provide version control with work item tracking and can be run on SQL Express. The key difference from the ‘Standard’ edition is that it will not have SharePoint or Reporting Services integration. For far fuller details check the <a href="http://blogs.msdn.com/bharry/archive/2009/10/01/tfs-2010-for-sourcesafe-users.aspx">blog post</a>.</p>
<p>I think this is going to be a really interesting addition to TFS. This announcement answers a question I was repeatedly asked at our <a href="http://www.blackmarble.co.uk/events.aspx?event=Managing%20the%20Application%20Lifecycle&amp;Code=">TFS2010 event last week</a>: ‘we are a small team with VSS, I want something simple to move us forward, what should I use? TFS seems a bit complex and expensive’. So for me it is well timed and well placed within the marketplace.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Some good news for Bletchley Park</title>
      <link>https://blog.richardfennell.net/posts/some-good-new-for-bletchley-park/</link>
      <pubDate>Tue, 29 Sep 2009 10:36:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/some-good-new-for-bletchley-park/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://news.bbc.co.uk/1/hi/england/beds/bucks/herts/8279926.stm&#34;&gt;Bletchley Park have announced&lt;/a&gt; that they have got some Lottery Grant funding, which is great news. Again I urge you to &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/07/your-support-can-keep-our-industries-history-alive.aspx&#34;&gt;visit if you are interested in the history of our trade&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://news.bbc.co.uk/1/hi/england/beds/bucks/herts/8279926.stm">Bletchley Park have announced</a> that they have got some Lottery Grant funding, which is great news. Again I urge you to <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/09/07/your-support-can-keep-our-industries-history-alive.aspx">visit if you are interested in the history of our trade</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Agile Yorkshire Twitter feeds</title>
      <link>https://blog.richardfennell.net/posts/agile-yorkshire-twitter-feeds/</link>
      <pubDate>Sat, 26 Sep 2009 18:02:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agile-yorkshire-twitter-feeds/</guid>
      <description>&lt;p&gt;You can now get updates on Agile Yorkshire events via twitter &lt;a href=&#34;http://twitter.com/search#search?q=%40agileyorkshire%20&#34; title=&#34;http://twitter.com/search#search?q=%40agileyorkshire%20&#34;&gt;@agileyorkshire&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;twitter-icons&#34; loading=&#34;lazy&#34; src=&#34;http://ts1.mm.bing.net/images/thumbnail.aspx?q=1244135163772&amp;id=dad372b6d3083d1c792cdb9ee41f5211&amp;url=http%3a%2f%2fwp.clicrbs.com.br%2fprofissaoestagiario%2ffiles%2f2009%2f08%2ftwitter-icons.jpg&#34; title=&#34;twitter-icons&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can now get updates on Agile Yorkshire events via Twitter: <a href="http://twitter.com/search#search?q=%40agileyorkshire%20" title="http://twitter.com/search#search?q=%40agileyorkshire%20">@agileyorkshire</a></p>
<p><img alt="twitter-icons" loading="lazy" src="http://ts1.mm.bing.net/images/thumbnail.aspx?q=1244135163772&id=dad372b6d3083d1c792cdb9ee41f5211&url=http%3a%2f%2fwp.clicrbs.com.br%2fprofissaoestagiario%2ffiles%2f2009%2f08%2ftwitter-icons.jpg" title="twitter-icons"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Licensing kicks off again</title>
      <link>https://blog.richardfennell.net/posts/licensing-kicks-off-again/</link>
      <pubDate>Tue, 22 Sep 2009 12:55:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/licensing-kicks-off-again/</guid>
      <description>&lt;p&gt;Must be that time of year; I see we are looking at an &lt;a href=&#34;http://www.scrumalliance.org/resources/1059&#34;&gt;exam for Scrum Masters&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I don’t see any point in my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/pages/232.aspx&#34;&gt;repeating myself&lt;/a&gt;, especially when &lt;a href=&#34;http://gojko.net/2009/09/22/joe-the-developer-doesnt-need-a-certificate/&#34;&gt;Gojko has covered the essence of the subject so well&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Must be that time of year; I see we are looking at an <a href="http://www.scrumalliance.org/resources/1059">exam for Scrum Masters</a>.</p>
<p>I don’t see any point in my <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/pages/232.aspx">repeating myself</a>, especially when <a href="http://gojko.net/2009/09/22/joe-the-developer-doesnt-need-a-certificate/">Gojko has covered the essence of the subject so well</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF21508 Error, cannot see build server after moving a Hyper-V image</title>
      <link>https://blog.richardfennell.net/posts/tf21508-error-cannot-see-build-server-after-moving-a-hyper-v-image/</link>
      <pubDate>Thu, 17 Sep 2009 12:18:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf21508-error-cannot-see-build-server-after-moving-a-hyper-v-image/</guid>
      <description>&lt;p&gt;We have been consolidating our Hyper-V system of late, moving various older systems onto a new SAN based cluster. This has meant we have just moved our virtual TFS2008 build machines. After the move I started seeing the following error on all the builds using that server; re-enabling the build machine had no effect.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_5991C24B.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_5FD898D9.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I restarted the build VM but this did not fix the problem; I also had to restart the TFS2008 AT. Once this was done, all was fine. I guess that there was a cached IP/ARP table somewhere routing packets the wrong way.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We have been consolidating our Hyper-V system of late, moving various older systems onto a new SAN based cluster. This has meant we have just moved our virtual TFS2008 build machines. After the move I started seeing the following error on all the builds using that server; re-enabling the build machine had no effect.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5991C24B.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_5FD898D9.png" title="image"></a></p>
<p>I restarted the build VM but this did not fix the problem; I also had to restart the TFS2008 AT. Once this was done, all was fine. I guess that there was a cached IP/ARP table somewhere routing packets the wrong way.</p>
<p><strong>Technical Tip</strong>: if you can’t see the build server, don’t just assume it is always the build server at fault; it might be the other calling machine, i.e. the TFS AT.</p>
<p>Interestingly our TFS2010 build machines did not suffer this problem.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows 7 Boot from VHD</title>
      <link>https://blog.richardfennell.net/posts/windows-7-boot-from-vhd/</link>
      <pubDate>Thu, 17 Sep 2009 10:47:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-7-boot-from-vhd/</guid>
      <description>&lt;p&gt;I have been having a play with the boot from VHD function in Windows 7. It seems like a really useful feature when you need the raw power of your PC but would like the ease of management of Virtual PCs (i.e. you can copy them around and archive them). There are many posts on the steps required to add a boot from VHD partition to an existing standard install (remember the VHD must contain a Windows 7 or Windows Server 2008 R2 operating system); I followed the notes on &lt;a href=&#34;http://blogs.msdn.com/knom/archive/2009/04/07/windows-7-vhd-boot-setup-guideline.aspx&#34;&gt;knom’s developer corner&lt;/a&gt;. Just a couple of things that got me:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been having a play with the boot from VHD function in Windows 7. It seems like a really useful feature when you need the raw power of your PC but would like the ease of management of Virtual PCs (i.e. you can copy them around and archive them). There are many posts on the steps required to add a boot from VHD partition to an existing standard install (remember the VHD must contain a Windows 7 or Windows Server 2008 R2 operating system); I followed the notes on <a href="http://blogs.msdn.com/knom/archive/2009/04/07/windows-7-vhd-boot-setup-guideline.aspx">knom’s developer corner</a>. Just a couple of things that got me:</p>
<ul>
<li>The notes say to press Shift+F10 to open the console and enter the DISKPART commands to create and mount the new VHD. This is fine, but I then closed the window to continue; this is wrong. In step 6 the notes <strong>do say</strong> to Alt+Tab back to the installer, and this <strong>is vital</strong>. If you close the command window, as I did, the new VDISK is dismounted so you cannot install to it.</li>
<li>After the install I could dual boot: I had a ‘real’ Windows 7 install and my ‘boot from VHD’ install. The boot manager showed both in the menu, but they both had the same name, ‘Windows 7’; only trial and error showed me which was which. Also, my new VHD boot was the default. All a bit confusing and not what I was after. As I find the BCDEDIT command line not the friendliest for editing boot settings, I tried to use <a href="http://neosmart.net/dl.php?id=1">EasyBCD</a> to edit one of the names to ‘Windows 7 VHD’ and alter the default to my original installation. This caused me to end up with two boot options that both pointed to the ‘real’ installation. My guess is that EasyBCD does not understand VHD boot on Windows 7. I therefore had to use the manual commands as listed on <a href="http://technet.microsoft.com/en-us/library/dd799299%28WS.10%29.aspx">TechNet</a>. Once this was done all was OK.</li>
</ul>
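<p>For anyone repeating this, the whole dance boils down to roughly the following commands. This is a sketch from my notes, not the guide verbatim: the VHD file name, size and the GUID placeholders are all illustrative.</p>

```shell
:: During Windows 7 setup, press Shift+F10 to open a console, then run:
diskpart

:: -- inside DISKPART (file name and size are illustrative) --
create vdisk file=C:\win7.vhd maximum=30000 type=expandable
select vdisk file=C:\win7.vhd
attach vdisk
exit

:: Now Alt+Tab back to the installer and pick the new disk.
:: Do NOT close this window, or the VDISK is dismounted.

:: After installation, tidy the boot menu from an elevated prompt.
:: List the entries and their GUIDs first:
bcdedit /v

:: Rename the VHD entry and make the original install the default
:: ({vhd-guid} and {original-guid} come from the /v listing):
bcdedit /set {vhd-guid} description "Windows 7 VHD"
bcdedit /default {original-guid}
```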
<p>The next step is to try a VHD boot from an external USB or eSATA disk.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New free tools to help manage your TFS based projects from Telerik</title>
      <link>https://blog.richardfennell.net/posts/new-free-tools-to-help-manager-or-tfs-based-projects-from-telerik/</link>
      <pubDate>Wed, 09 Sep 2009 15:51:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-free-tools-to-help-manager-or-tfs-based-projects-from-telerik/</guid>
      <description>&lt;p&gt;Telerik have just released a new work item manager and dashboard application for TFS; first impressions are very good. &lt;a href=&#34;http://www.telerik.com/products/tfsmanager-and-tfsdashboard.aspx&#34;&gt;Why not download your copy and have a look?&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Telerik have just released a new work item manager and dashboard application for TFS; first impressions are very good. <a href="http://www.telerik.com/products/tfsmanager-and-tfsdashboard.aspx">Why not download your copy and have a look?</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Your support can keep our industry&#39;s history alive</title>
      <link>https://blog.richardfennell.net/posts/your-support-can-keep-our-industrys-history-alive/</link>
      <pubDate>Mon, 07 Sep 2009 10:38:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/your-support-can-keep-our-industrys-history-alive/</guid>
      <description>&lt;p&gt;We had a company outing at the weekend to &lt;a href=&#34;http://www.bletchleypark.org.uk/&#34;&gt;Bletchley Park&lt;/a&gt; for their Annual Enigma Reunion event. A great chance to see the place where &lt;a href=&#34;http://www.bletchleypark.org.uk/content/hist/wartime.rhtm&#34;&gt;Enigma was cracked&lt;/a&gt; and some of the equipment they used to do it, such as the working  &lt;a href=&#34;http://www.jharper.demon.co.uk/bombe1.htm&#34;&gt;rebuild of a Turing Bombe&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/Bombe_Front1_600DE4FD.jpg&#34;&gt;&lt;img alt=&#34;Turing Bombe rebuild&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/Bombe_Front1_thumb_31B45F50.jpg&#34; title=&#34;Turing Bombe rebuild&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Whilst down there we also took the chance to have a good look around the &lt;a href=&#34;http://www.tnmoc.org/home.aspx&#34;&gt;National Museum of Computing&lt;/a&gt;, which shares the site; you know you are getting old when a &lt;a href=&#34;http://www.tnmoc.org/hands-on.aspx&#34;&gt;third of a museum is devoted to equipment&lt;/a&gt; you have worked on!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We had a company outing at the weekend to <a href="http://www.bletchleypark.org.uk/">Bletchley Park</a> for their Annual Enigma Reunion event. A great chance to see the place where <a href="http://www.bletchleypark.org.uk/content/hist/wartime.rhtm">Enigma was cracked</a> and some of the equipment they used to do it, such as the working  <a href="http://www.jharper.demon.co.uk/bombe1.htm">rebuild of a Turing Bombe</a></p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/Bombe_Front1_600DE4FD.jpg"><img alt="Turing Bombe rebuild" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/Bombe_Front1_thumb_31B45F50.jpg" title="Turing Bombe rebuild"></a></p>
<p>Whilst down there we also took the chance to have a good look around the <a href="http://www.tnmoc.org/home.aspx">National Museum of Computing</a>, which shares the site; you know you are getting old when a <a href="http://www.tnmoc.org/hands-on.aspx">third of a museum is devoted to equipment</a> you have worked on!</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/IMG_01271_386768D3.jpg"><img alt="Black Marble at the Museum of Computing by Colossus" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/IMG_01271_thumb_3F1A7256.jpg" title="Black Marble at the Museum of Computing by Colossus"></a></p>
<p>I would urge anyone interested in the history of our industry to take the time to drop by Bletchley Park to have a look at both the museums on the site. And if you can, donate to aid their upkeep, as neither <a href="http://www.bletchleypark.org.uk/content/contact/donation.rhtm">Bletchley</a> nor the <a href="http://www.tnmoc.org/supp.aspx">Computing Museum</a> gets any governmental support. Think of them as live steam preservation societies: full of keen volunteers, with loads of ideas and partially working equipment, that just need a bit of money to save a history the UK led the world in.</p>
]]></content:encoded>
    </item>
    <item>
      <title>September’s Agile Yorkshire meeting</title>
      <link>https://blog.richardfennell.net/posts/septembers-agile-yorkshire-meeting/</link>
      <pubDate>Mon, 07 Sep 2009 10:09:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/septembers-agile-yorkshire-meeting/</guid>
      <description>&lt;p&gt;This Wednesday is the night for the regular &lt;a href=&#34;http://www.agileyorkshire.org/&#34;&gt;Agile Yorkshire&lt;/a&gt; meeting. It was meant to be on &lt;a href=&#34;http://research.microsoft.com/en-us/projects/Pex/&#34;&gt;Pex&lt;/a&gt;, but the speaker has had to cancel (&lt;a href=&#34;http://blog.benhall.me.uk/2009/02/ddd7-session-video-microsoft-pex-future.html&#34;&gt;but you can see a video he did on the subject at DDD online&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;So in Ben’s place we are doing some &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/09Sept09&#34;&gt;Grok Talks, short ad hoc open mike talks.&lt;/a&gt; It should make for lively discussions, and is that not the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/08/25/welcome-to-the-past-of-software-development.aspx&#34;&gt;key purpose of any user group&lt;/a&gt;?&lt;/p&gt;
&lt;p&gt;So, free beer, usual place, Wednesday night; hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This Wednesday is the night for the regular <a href="http://www.agileyorkshire.org/">Agile Yorkshire</a> meeting. It was meant to be on <a href="http://research.microsoft.com/en-us/projects/Pex/">Pex</a>, but the speaker has had to cancel (<a href="http://blog.benhall.me.uk/2009/02/ddd7-session-video-microsoft-pex-future.html">but you can see a video he did on the subject at DDD online</a>).</p>
<p>So in Ben’s place we are doing some <a href="http://www.agileyorkshire.org/2009-event-announcements/09Sept09">Grok Talks, short ad hoc open mike talks.</a> It should make for lively discussions, and is that not the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/08/25/welcome-to-the-past-of-software-development.aspx">key purpose of any user group</a>?</p>
<p>So, free beer, usual place, Wednesday night; hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF250020 error creating new Team Projects</title>
      <link>https://blog.richardfennell.net/posts/tf250020-error-creating-new-team-projects/</link>
      <pubDate>Wed, 02 Sep 2009 20:00:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf250020-error-creating-new-team-projects/</guid>
      <description>&lt;p&gt;Whilst working on our TFS2010 Beta 1 test server today I got the following error when I tried to create a new team project&lt;/p&gt;
&lt;p&gt;&lt;em&gt;TF250020: The following SharePoint Web application is not valid:&lt;/em&gt; &lt;a href=&#34;http://vs2010.mydomain.com&#34;&gt;&lt;em&gt;http://vs2010.mydomain.com&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. Verify that you have the correct URL.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I thought this strange, as I was doing nothing I had not done before, creating an MSF Agile team project with default settings. What I did notice was that in the new project wizard I had the server URL as &lt;a href=&#34;http://vs2010&#34;&gt;http://vs2010&lt;/a&gt; but the error message said &lt;a href=&#34;http://vs2010.mydomain.com&#34;&gt;&lt;em&gt;http://vs2010.mydomain.com&lt;/em&gt;&lt;/a&gt;. So I checked the alternate access mappings in the WSS central admin; only vs2010 was listed, so I added vs2010.mydomain.com as the Internet alias and it all leapt into life when I tried the wizard again.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst working on our TFS2010 Beta 1 test server today I got the following error when I tried to create a new team project</p>
<p><em>TF250020: The following SharePoint Web application is not valid:</em> <a href="http://vs2010.mydomain.com"><em>http://vs2010.mydomain.com</em></a><em>. Verify that you have the correct URL.</em></p>
<p>I thought this strange, as I was doing nothing I had not done before, creating an MSF Agile team project with default settings. What I did notice was that in the new project wizard I had the server URL as <a href="http://vs2010">http://vs2010</a> but the error message said <a href="http://vs2010.mydomain.com"><em>http://vs2010.mydomain.com</em></a>. So I checked the alternate access mappings in the WSS central admin; only vs2010 was listed, so I added vs2010.mydomain.com as the Internet alias and it all leapt into life when I tried the wizard again.</p>
<p>The question remains: how did this work before? Operating system service packs, maybe?</p>
]]></content:encoded>
    </item>
    <item>
      <title>And another nice feature for Windows 7</title>
      <link>https://blog.richardfennell.net/posts/and-another-nice-feature-for-windows-7/</link>
      <pubDate>Mon, 31 Aug 2009 11:17:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-another-nice-feature-for-windows-7/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://windows.microsoft.com/en-US/windows7/products/features/homegroup&#34;&gt;Homegroup&lt;/a&gt; certainly makes home networking easier, especially when the PC is part of a domain as well; it just works. No more fiddling with rights between home accounts and domain users.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://windows.microsoft.com/en-US/windows7/products/features/homegroup">Homegroup</a> certainly makes home networking easier, especially when the PC is part of a domain as well; it just works. No more fiddling with rights between home accounts and domain users.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Update on Media Center on Windows 7</title>
      <link>https://blog.richardfennell.net/posts/update-on-media-center-on-windows-7/</link>
      <pubDate>Mon, 31 Aug 2009 10:36:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-on-media-center-on-windows-7/</guid>
      <description>&lt;p&gt;Since the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/08/14/upgrading-my-media-center-to-windows-7.aspx&#34;&gt;upgrade of my Media Center PC&lt;/a&gt; to Windows 7 I have had a few problems with fast forward on recorded TV and DVDs (which I had not seen on Vista). It was as if the fast forward button on my remote got jammed on and I could not go back to standard playback easily; it took a few seconds for the message to get through, and then I ended up back where I started.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/08/14/upgrading-my-media-center-to-windows-7.aspx">upgrade of my Media Center PC</a> to Windows 7 I have had a few problems with fast forward on recorded TV and DVDs (which I had not seen on Vista). It was as if the fast forward button on my remote got jammed on and I could not go back to standard playback easily; it took a few seconds for the message to get through, and then I ended up back where I started.</p>
<p>I decided it was probably a CPU processing speed issue, so I upgraded my 3-year-old AMD/ASUS single core motherboard to a current entry level dual core system, an MSI motherboard and Intel E5300 Dual Core (the brand choice was just down to what was cheap and in stock at my local supplier). This fixed the issue completely, but did require a reinstall of Windows 7, as the Intel dual core needed a different <a href="http://en.wikipedia.org/wiki/Hardware_abstraction_layer">HAL</a> to the AMD single core. However, the reinstall was not a major issue, as I run a dedicated PC as a Media Center and it is practically a default installation.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Recording of my SQLBits Session on Visual Studio 2008</title>
      <link>https://blog.richardfennell.net/posts/recording-of-my-sqlbits-session-on-visual-studio-2008/</link>
      <pubDate>Mon, 31 Aug 2009 10:16:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/recording-of-my-sqlbits-session-on-visual-studio-2008/</guid>
      <description>&lt;p&gt;A webcast recording of my &lt;a href=&#34;http://sqlbits.com/Agenda/event4/Making_the_SQL_developer_one_of_the_family_with_Visual_Studio_Team_System/default.aspx&#34;&gt;SQLBits IV session ‘Making the SQL developer one of the family with Visual Studio Team System’&lt;/a&gt; is now available on the SQLBits site. This discusses the features of the VS2008 Database GDR Edition.&lt;/p&gt;
&lt;p&gt;Unfortunately I will not be proposing a session for this year’s &lt;a href=&#34;http://sqlbits.com/&#34;&gt;SQLBits community event on the 21st of November 2009 at Celtic Manor in Newport&lt;/a&gt;, as I will be travelling back from the &lt;a href=&#34;http://microsoftpdc.com/&#34;&gt;Microsoft PDC in LA&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A webcast recording of my <a href="http://sqlbits.com/Agenda/event4/Making_the_SQL_developer_one_of_the_family_with_Visual_Studio_Team_System/default.aspx">SQLBits IV session ‘Making the SQL developer one of the family with Visual Studio Team System’</a> is now available on the SQLBits site. This discusses the features of the VS2008 Database GDR Edition.</p>
<p>Unfortunately I will not be proposing a session for this year’s <a href="http://sqlbits.com/">SQLBits community event on the 21st of November 2009 at Celtic Manor in Newport</a>, as I will be travelling back from the <a href="http://microsoftpdc.com/">Microsoft PDC in LA</a>.</p>
<h4 id="sqlbitslogo"><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/SQLBitsLogo_628BB08D.png"><img alt="SQLBitsLogo" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/SQLBitsLogo_thumb_3D8DF014.png" title="SQLBitsLogo"></a></h4>
]]></content:encoded>
    </item>
    <item>
      <title>Epicenter follow up</title>
      <link>https://blog.richardfennell.net/posts/epicenter-follow-up/</link>
      <pubDate>Fri, 28 Aug 2009 09:45:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/epicenter-follow-up/</guid>
      <description>&lt;p&gt;Thanks to those people who attended my sessions at &lt;a href=&#34;http://epicenter.ie/&#34;&gt;Epicenter&lt;/a&gt; yesterday in Dublin. For those who asked, you can find &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;amp;subsection=Conference&#34;&gt;copies of the presentation at the Black Marble web site&lt;/a&gt; (Conference Papers).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to those people who attended my sessions at <a href="http://epicenter.ie/">Epicenter</a> yesterday in Dublin. For those who asked, you can find <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;subsection=Conference">copies of the presentation at the Black Marble web site</a> (Conference Papers).</p>
]]></content:encoded>
    </item>
    <item>
      <title>Welcome to the past of software development</title>
      <link>https://blog.richardfennell.net/posts/welcome-to-the-past-of-software-development/</link>
      <pubDate>Tue, 25 Aug 2009 21:56:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/welcome-to-the-past-of-software-development/</guid>
      <description>&lt;p&gt;I was at an interesting meeting at my local &lt;a href=&#34;http://www.westyorkshire.bcs.org/&#34;&gt;BCS branch&lt;/a&gt; tonight, &lt;a href=&#34;http://www.westyorkshire.bcs.org/2009/08/opening-the-black-box-an-introduction-to-quality-driven-development/&#34;&gt;‘Opening The Black Box: An Introduction to Quality Driven Development’&lt;/a&gt; by &lt;a href=&#34;http://www.bcs.org/server.php?show=ConBlog.21&#34;&gt;Tim Hunter&lt;/a&gt;. I had heard of &lt;a href=&#34;http://en.wikipedia.org/wiki/Test-driven_development&#34;&gt;TDD&lt;/a&gt; and &lt;a href=&#34;http://en.wikipedia.org/wiki/Domain-driven_design&#34;&gt;DDD&lt;/a&gt; et al., but QDD was new to me.&lt;/p&gt;
&lt;p&gt;What we got was an hour framed by the basic premise that ‘&lt;a href=&#34;http://en.wikipedia.org/wiki/Waterfall_model&#34;&gt;Waterfall&lt;/a&gt; is good - &lt;a href=&#34;http://en.wikipedia.org/wiki/Agile_software_development&#34;&gt;Agile&lt;/a&gt; is bad’ (or ‘progressive methods’, as the speaker called anything that was not waterfall). As another attendee pointed out in the Q&amp;amp;A, this tone in the presentation tended to cloud the more balanced points, managing to get the backs up of a good few attendees by the speaker’s seeming lack of understanding of good agile practices. He seemed to see agile as developers messing around, with no documentation, testing or general engineering discipline. He argued that without waterfall, and specifically quality gates, we could not write quality systems. This is not the Agile I know.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was at an interesting meeting at my local <a href="http://www.westyorkshire.bcs.org/">BCS branch</a> tonight, <a href="http://www.westyorkshire.bcs.org/2009/08/opening-the-black-box-an-introduction-to-quality-driven-development/">‘Opening The Black Box: An Introduction to Quality Driven Development’</a> by <a href="http://www.bcs.org/server.php?show=ConBlog.21">Tim Hunter</a>. I had heard of <a href="http://en.wikipedia.org/wiki/Test-driven_development">TDD</a> and <a href="http://en.wikipedia.org/wiki/Domain-driven_design">DDD</a> et al., but QDD was new to me.</p>
<p>What we got was an hour framed by the basic premise that ‘<a href="http://en.wikipedia.org/wiki/Waterfall_model">Waterfall</a> is good - <a href="http://en.wikipedia.org/wiki/Agile_software_development">Agile</a> is bad’ (or ‘progressive methods’, as the speaker called anything that was not waterfall). As another attendee pointed out in the Q&amp;A, this tone in the presentation tended to cloud the more balanced points, managing to get the backs up of a good few attendees by the speaker’s seeming lack of understanding of good agile practices. He seemed to see agile as developers messing around, with no documentation, testing or general engineering discipline. He argued that without waterfall, and specifically quality gates, we could not write quality systems. This is not the Agile I know.</p>
<p>Agile, if adopted properly, is very constraining from an engineering point of view. We have detailed specification by example, open reporting practices, regular re-estimation of remaining work, test driven development, pair programming, automated builds, and regular potentially shippable products, with quality gates to move products between states of publication so we don’t just release everything we build. The list goes on and on; OK, no team is going to use it all, but the tools are there in the toolbox. A team can choose where on the agility spectrum they sit.</p>
<p>I agree with the session’s premise that quality gates are important, but not that waterfall is the only way to enforce them. You can put the whole methodology choice aside and frame the discussion as: how do we get staff who take pride in their work and are empowered to produce quality products via their working environment? I would argue there is more hope for this in an agile framework, where the whole team buys into the ethos of <a href="http://en.wikipedia.org/wiki/Software_Craftsmanship">software craftsmanship</a>, than in any methodology where an onerous procedure is imposed; a <a href="http://alistair.cockburn.us/Crystal&#43;light&#43;methods">system must be habitable</a>, as Alistair Cockburn puts it.</p>
<p>I felt the session was too pessimistic about the quality of people in our industry, the speaker wanting to make rules because he perceived people were of low quality and had to be forced to do a halfway decent job. OK, I am a bit pessimistic, not too bad a trait for a developer or tester, but we have to hope for more, to strive for more. This is something I think the agile community does do; they are trying to write better software and become better craftsmen every day. They care.</p>
<p>For me the key question is how we can bring more people along with us, especially the people who have given up and just turn up to do their IT related job and avoid as much hassle as possible. They are the ones who don’t turn up to the BCS, community conferences or any user groups, or even read a book or blog on the subject. What can we do for them?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Webcast on ASP.NET testing</title>
      <link>https://blog.richardfennell.net/posts/webcast-on-asp-net-testing/</link>
      <pubDate>Tue, 25 Aug 2009 10:02:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/webcast-on-asp-net-testing/</guid>
      <description>&lt;p&gt;I have just attended an excellent free webinar session on &lt;a href=&#34;http://blog.typemock.com/2009/08/unit-testing-aspnet-live-webinar.html&#34;&gt;ASP.NET testing with Ivonna and Typemock&lt;/a&gt; by &lt;a href=&#34;http://gil-zilberfeld.blogspot.com/&#34;&gt;Gil Zilberfeld&lt;/a&gt; of Typemock and &lt;a href=&#34;http://sm-art.biz/Ivonna/Blog.aspx&#34;&gt;Artem Smirnov&lt;/a&gt; creator of &lt;a href=&#34;http://sm-art.biz/Ivonna.aspx&#34;&gt;Ivonna&lt;/a&gt; a Typemock add-on for &lt;a href=&#34;http://www.typemock.com/ASP.NET_unit_testing_page.php&#34;&gt;unit testing ASP.NET&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The session is being repeated today at 2pm GMT and I understand a recording will appear on the Typemock site in due course.&lt;/p&gt;
&lt;p&gt;So if you get a chance this afternoon, have a look; it is well worth your time if you work in the ASP.NET space. Personally I have also found &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2009/Developer%20Testing%20SharePoint%20using%20Typemock%20Isolator.pdf&#34;&gt;Ivonna useful for SharePoint testing&lt;/a&gt; too; watch the session, it might give you some ideas.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just attended an excellent free webinar session on <a href="http://blog.typemock.com/2009/08/unit-testing-aspnet-live-webinar.html">ASP.NET testing with Ivonna and Typemock</a> by <a href="http://gil-zilberfeld.blogspot.com/">Gil Zilberfeld</a> of Typemock and <a href="http://sm-art.biz/Ivonna/Blog.aspx">Artem Smirnov</a> creator of <a href="http://sm-art.biz/Ivonna.aspx">Ivonna</a> a Typemock add-on for <a href="http://www.typemock.com/ASP.NET_unit_testing_page.php">unit testing ASP.NET</a></p>
<p>The session is being repeated today at 2pm GMT and I understand a recording will appear on the Typemock site in due course.</p>
<p>So if you get a chance this afternoon, have a look; it is well worth your time if you work in the ASP.NET space. Personally I have also found <a href="http://www.blackmarble.co.uk/ConferencePapers/2009/Developer%20Testing%20SharePoint%20using%20Typemock%20Isolator.pdf">Ivonna useful for SharePoint testing</a> too; watch the session, it might give you some ideas.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Nice explanation of using Kanban for support</title>
      <link>https://blog.richardfennell.net/posts/nice-explanation-of-using-kanban-for-support/</link>
      <pubDate>Fri, 21 Aug 2009 08:45:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/nice-explanation-of-using-kanban-for-support/</guid>
      <description>&lt;p&gt;Doron at Typemock has posted a nice description of how they use &lt;a href=&#34;http://blog.typemock.com/2009/08/utilizing-kanban-to-manage-support-at.html&#34;&gt;Kanban for managing support&lt;/a&gt;. A good introduction for those unfamiliar with lean.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Doron at Typemock has posted a nice description of how they use <a href="http://blog.typemock.com/2009/08/utilizing-kanban-to-manage-support-at.html">Kanban for managing support</a>. A good introduction for those unfamiliar with lean.</p>
]]></content:encoded>
    </item>
    <item>
      <title>This past week’s Nxtgen events</title>
      <link>https://blog.richardfennell.net/posts/this-past-weeks-nxtgen-events/</link>
      <pubDate>Thu, 20 Aug 2009 12:21:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/this-past-weeks-nxtgen-events/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my two &lt;a href=&#34;http://www.nxtgenug.net&#34;&gt;Nxtgen&lt;/a&gt; sessions on SharePoint and Typemock in Birmingham and Manchester. You can find &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2009/Developer%20Testing%20SharePoint%20using%20Typemock%20Isolator.pdf&#34;&gt;copies of the slides on the Black Marble site.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;There was a good deal of chat about how Typemock could be used for more general ASP.NET testing; if this is of interest to you I would strongly recommend &lt;a href=&#34;http://blog.typemock.com/2009/08/unit-testing-aspnet-live-webinar.html&#34;&gt;Typemock’s next webinar on the 25th of August on Unit testing ASP.NET with Isolator and Ivonna&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my two <a href="http://www.nxtgenug.net">Nxtgen</a> sessions on SharePoint and Typemock in Birmingham and Manchester. You can find <a href="http://www.blackmarble.co.uk/ConferencePapers/2009/Developer%20Testing%20SharePoint%20using%20Typemock%20Isolator.pdf">copies of the slides on the Black Marble site.</a></p>
<p>There was a good deal of chat about how Typemock could be used for more general ASP.NET testing; if this is of interest to you I would strongly recommend <a href="http://blog.typemock.com/2009/08/unit-testing-aspnet-live-webinar.html">Typemock’s next webinar on the 25th of August on Unit testing ASP.NET with Isolator and Ivonna</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Red button works in the BBC Interactive</title>
      <link>https://blog.richardfennell.net/posts/red-button-works-in-the-bbc-interactive/</link>
      <pubDate>Sat, 15 Aug 2009 15:28:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/red-button-works-in-the-bbc-interactive/</guid>
      <description>&lt;p&gt;Cool, I just noticed that on Windows 7 Media Center with a Hauppauge Nova 500 T Tuner card the &lt;a href=&#34;http://www.bbc.co.uk/digital/tv/tv_interactive.shtml&#34;&gt;red button&lt;/a&gt; works; so at last I can get Digital Teletext and interactive channels on the BBC without having to know their actual channel numbers (and as I remember they were actually ignored by previous versions of Media Center without a registry hack anyway).&lt;/p&gt;
&lt;p&gt;This makes using MCE just like a standard Digital TV – should help general acceptance. This has certainly improved since the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/06/25/5266.aspx&#34;&gt;older versions&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Cool, I just noticed that on Windows 7 Media Center with a Hauppauge Nova 500 T Tuner card the <a href="http://www.bbc.co.uk/digital/tv/tv_interactive.shtml">red button</a> works; so at last I can get Digital Teletext and interactive channels on the BBC without having to know their actual channel numbers (and as I remember they were actually ignored by previous versions of Media Center without a registry hack anyway).</p>
<p>This makes using MCE just like a standard Digital TV – should help general acceptance. This has certainly improved since the <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/06/25/5266.aspx">older versions</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Nxtgen UK tour reminder</title>
      <link>https://blog.richardfennell.net/posts/nxtgen-uk-tour-reminder/</link>
      <pubDate>Sat, 15 Aug 2009 09:21:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/nxtgen-uk-tour-reminder/</guid>
      <description>&lt;p&gt;A reminder that next week I will be speaking on ‘Developer testing of SharePoint projects using Typemock’ at two Nxtgen user groups:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.nxtgenug.net/ViewEvent.aspx?EventID=217&#34;&gt;Birmingham on the 18th Aug 2009&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.nxtgenug.net/ViewEvent.aspx?EventID=216&#34;&gt;Manchester on the 19th Aug 2009&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A reminder that next week I will be speaking on ‘Developer testing of SharePoint projects using Typemock’ at two Nxtgen user groups:</p>
<ul>
<li><a href="http://www.nxtgenug.net/ViewEvent.aspx?EventID=217">Birmingham on the 18th Aug 2009</a></li>
<li><a href="http://www.nxtgenug.net/ViewEvent.aspx?EventID=216">Manchester on the 19th Aug 2009</a></li>
</ul>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Epicenter 2009, the Irish Software Show</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-epicenter-2009-the-irish-software-show/</link>
      <pubDate>Sat, 15 Aug 2009 09:17:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-epicenter-2009-the-irish-software-show/</guid>
      <description>&lt;p&gt;I am speaking at &lt;a href=&#34;http://epicenter.ie/&#34;&gt;Epicenter 2009, the Irish Software Show&lt;/a&gt; at the end of the month. The conference runs from Wednesday the 26th to Friday the 28th. My sessions on TFS 2010 are both on Thursday the 27th, as part of the ‘Microsoft zone’:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Application Lifecycle Management - Moving beyond source control&lt;/li&gt;
&lt;li&gt;Making Testers Part of the Development Team&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is an interesting conference as it aims to address a wide variety of technologies i.e. not just Microsoft or Open Source. I find this type of conference a great way to catch up on technology I am not usually that exposed to; just the same &lt;a href=&#34;http://www.agileyorkshire.org/&#34;&gt;ethos as Agile Yorkshire&lt;/a&gt; where Java and .Net developers can constructively compare their worlds.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at <a href="http://epicenter.ie/">Epicenter 2009, the Irish Software Show</a> at the end of the month. The conference runs from Wednesday the 26th to Friday the 28th. My sessions on TFS 2010 are both on Thursday the 27th, as part of the ‘Microsoft zone’:</p>
<ul>
<li>Application Lifecycle Management - Moving beyond source control</li>
<li>Making Testers Part of the Development Team</li>
</ul>
<p>This is an interesting conference as it aims to address a wide variety of technologies i.e. not just Microsoft or Open Source. I find this type of conference a great way to catch up on technology I am not usually that exposed to; just the same <a href="http://www.agileyorkshire.org/">ethos as Agile Yorkshire</a> where Java and .Net developers can constructively compare their worlds.</p>
<p><a href="http://epicenter.ie/tickets.html">Tickets for Epicenter are now available</a></p>
<p><a href="/wp-content/uploads/sites/2/historic/image_5994EE9D.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_46DFF4E6.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading my Media Center to Windows 7</title>
      <link>https://blog.richardfennell.net/posts/upgrading-my-media-center-to-windows-7/</link>
      <pubDate>Fri, 14 Aug 2009 22:29:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-my-media-center-to-windows-7/</guid>
      <description>&lt;p&gt;Over the past couple of days I have upgraded my Vista based Media Center to Windows 7. After my previous experiences upgrading from XP &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/12/09/Feel-my-Vista-pain_2E002E002E002E002E002E002E00_.aspx&#34;&gt;here&lt;/a&gt;, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/12/19/Vista-media-center-update.aspx&#34;&gt;here&lt;/a&gt; and &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2007/01/13/Coming-out-of-sleep-mode-in-Vista.aspx&#34;&gt;here&lt;/a&gt; I decided to do a new install onto a new 1Tb disk as opposed to an in place upgrade. This all went OK; there was nothing major to note. Windows 7 shipped with a driver for everything in my 3 year old AMD/ASUS based PC bar the sound card built into my motherboard, but that was easily downloaded. It is worth commenting that my Hauppauge Nova 500 T digital TV tuner was found OK, but I had to get it to scan for channels three times before it got a signal. Why it worked the third time I don’t know, as I did not change anything.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the past couple of days I have upgraded my Vista based Media Center to Windows 7. After my previous experiences upgrading from XP <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/12/09/Feel-my-Vista-pain_2E002E002E002E002E002E002E00_.aspx">here</a>, <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/12/19/Vista-media-center-update.aspx">here</a> and <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2007/01/13/Coming-out-of-sleep-mode-in-Vista.aspx">here</a> I decided to do a new install onto a new 1Tb disk as opposed to an in place upgrade. This all went OK; there was nothing major to note. Windows 7 shipped with a driver for everything in my 3 year old AMD/ASUS based PC bar the sound card built into my motherboard, but that was easily downloaded. It is worth commenting that my Hauppauge Nova 500 T digital TV tuner was found OK, but I had to get it to scan for channels three times before it got a signal. Why it worked the third time I don’t know, as I did not change anything.</p>
<p>The problems I had were when I wanted to copy my media (photos, music and TV recordings) over from my old hard disks. The first problem was that when I connected them (using external USB cases, one SATA the other PATA) to a Windows 7 laptop they were not seen. The physical disks were detected but the partitions were not present. When I tried them on a Vista box at work the partitions were seen but marked as foreign when I looked in the administrator disk management tool. Once I selected the ‘import foreign disks’ option they both appeared OK as drives on the Vista box (at this point I copied the files to a network location as a backup, which took a while!). I then tried the disks again on another Windows 7 box and another Vista box; in both cases it now said the drives were dynamic and corrupt. However back on the working Vista box they were both still OK. I was now confused. However, as I now had a backup of the data, on the second Vista box I tried to convert this corrupt dynamic disk to a basic disk. This ‘worked’, but the ‘corrupt’ partition disappeared. When I tried a quick format that failed too, but a full format worked, though it seemed slow (a couple of hours for 300Gb). Once this disk was formatted it could be read and used on all the Vista and Windows 7 boxes. I copied the data back from my network share backup and used the fixed external USB drive to move the files back onto my new Media Center at home.</p>
<p>Sorry I don’t have a better solution to the disk issues than a format. I have no idea what was going on there, but I now have a fully working Windows 7 Media Center with all my old media on it. Probably not as quick as an in place upgrade, but I know it is a nice clean install. First impression of the RTM version of Windows 7 Media Center is that it is fast and the interface is clean. We shall see how it is to live with.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Slides from yesterdays presentation at VBUG</title>
      <link>https://blog.richardfennell.net/posts/slides-from-yesterdays-presentation-at-vbug/</link>
      <pubDate>Wed, 12 Aug 2009 22:28:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/slides-from-yesterdays-presentation-at-vbug/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my VBUG session yesterday on ‘Enabling agile development with Visual Studio Team System 2010’. I have just posted the &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2009/Using%20Visual%20Studio%20Team%20System%202010%20to%20enable%20Agile%20Development.pdf&#34;&gt;slides on Black Marble’s web site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my VBUG session yesterday on ‘Enabling agile development with Visual Studio Team System 2010’. I have just posted the <a href="http://www.blackmarble.co.uk/ConferencePapers/2009/Using%20Visual%20Studio%20Team%20System%202010%20to%20enable%20Agile%20Development.pdf">slides on Black Marble’s web site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Expression Suite Version 3 and Source Control</title>
      <link>https://blog.richardfennell.net/posts/expression-suite-version-3-and-source-control/</link>
      <pubDate>Tue, 04 Aug 2009 14:08:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/expression-suite-version-3-and-source-control/</guid>
      <description>&lt;p&gt;It is good to see that at last &lt;a href=&#34;http://www.microsoft.com/expression/&#34;&gt;Expression Web and Blend&lt;/a&gt; have got source control integrated into their IDEs. This is a vital feature if developers and designers are to work together effectively on the same code. It is a shame that the integration is just for TFS, it would be nice to support other source repositories to get an even wider reach, but TFS is enough for me now.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is good to see that at last <a href="http://www.microsoft.com/expression/">Expression Web and Blend</a> have got source control integrated into their IDEs. This is a vital feature if developers and designers are to work together effectively on the same code. It is a shame that the integration is just for TFS, it would be nice to support other source repositories to get an even wider reach, but TFS is enough for me now.</p>
<p>Just remember that to get this integration to work you need to do a bit more than install the Expression Suite:</p>
<ul>
<li>Install TFS 2008 Team Client</li>
<li>Apply VS2008 SP1</li>
<li><a href="http://blogs.msdn.com/bharry/archive/2007/11/23/tfs-licensing-change-for-tfs-2008.aspx">And the magic CodePlex-hosted, not fully supported, hotfix</a></li>
</ul>
<p>All seems to work though.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Isn’t the new Paste preview in Office 2010 cool</title>
      <link>https://blog.richardfennell.net/posts/isnt-the-new-paste-preview-in-office-2010-cools/</link>
      <pubDate>Sun, 02 Aug 2009 20:54:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/isnt-the-new-paste-preview-in-office-2010-cools/</guid>
      <description>&lt;p&gt;I love the new paste preview selection in Word 2010&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_4A1B5ECD.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_174B5859.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This really is cutting down the number of ‘paste, undo, paste special, damn the undo failed, paste into Paint or Notepad, reselect, copy and try again’ sequences I go through.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I love the new paste preview selection in Word 2010</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_4A1B5ECD.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_174B5859.png" title="image"></a></p>
<p>This really is cutting down the number of ‘paste, undo, paste special, damn the undo failed, paste into Paint or Notepad, reselect, copy and try again’ sequences I go through.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Outlook 2010 email searching</title>
      <link>https://blog.richardfennell.net/posts/outlook-2010-email-searching/</link>
      <pubDate>Sat, 25 Jul 2009 07:28:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/outlook-2010-email-searching/</guid>
      <description>&lt;p&gt;Since I installed Outlook 2010 technical preview I have not been able to search emails using the ‘search mail’ boxes in Outlook (which links into Windows desktop search). I had not realised how much I used the feature until it did not work. When I tried a search I was shown a dialog saying there was 40000+ items waiting to be indexed, and this number was not changing. It seemed that the indexing of Outlook contents had stopped.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since I installed Outlook 2010 technical preview I have not been able to search emails using the ‘search mail’ boxes in Outlook (which links into Windows desktop search). I had not realised how much I used the feature until it did not work. When I tried a search I was shown a dialog saying there was 40000+ items waiting to be indexed, and this number was not changing. It seemed that the indexing of Outlook contents had stopped.</p>
<p>The solution I found was to reset the indexing process.</p>
<ol>
<li>In Outlook select the Backstage view (the brown button)</li>
<li>Select Outlook options</li>
<li>Select Search</li>
<li>Select Indexing Options button</li>
<li>Use the modify button to remove all options from the locations to be indexed</li>
<li>Use the advanced button to select a re-index (with nothing selected)</li>
<li>Use the modify button again to re-select all the items you want to index (including Outlook)</li>
<li>Finally use the advanced button again to select the re-index option to restart the process</li>
<li>I then had to wait 24 hours, but at least I could see the indexed items count going up at the top of the Indexing Options dialog until I got the message ‘Indexing complete’.</li>
</ol>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2008 connecting to TFS 2010</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2008-connecting-to-tfs-2010/</link>
      <pubDate>Tue, 21 Jul 2009 15:54:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2008-connecting-to-tfs-2010/</guid>
      <description>&lt;p&gt;You can connect a Visual Studio 2008 IDE instance to a TFS 2010 server, as &lt;a href=&#34;http://geekswithblogs.net/hinshelm/archive/2009/05/26/connecting-vs2008-to-any-tfs2010-project-collection.aspx&#34;&gt;detailed in Martin Hinshelwood’s blog&lt;/a&gt;, but I cannot stress enough you have to have &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?familyid=FBEE1648-7106-44A7-9649-6D9F6D58056E&amp;amp;displaylang=en&#34;&gt;VS2008 SP1 installed&lt;/a&gt;. If you don’t, you get an error dialog when you enter the URL for the TFS server saying the URL contains invalid characters.&lt;/p&gt;
&lt;p&gt;This seems a simple rule, but as I found today it is easy not to realise your PC is not patched. You look in the About dialog for Visual Studio and it says SP1 is installed, but the Team Foundation Client has not been patched. This was because the following installation order was followed:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can connect a Visual Studio 2008 IDE instance to a TFS 2010 server, as <a href="http://geekswithblogs.net/hinshelm/archive/2009/05/26/connecting-vs2008-to-any-tfs2010-project-collection.aspx">detailed in Martin Hinshelwood’s blog</a>, but I cannot stress enough you have to have <a href="http://www.microsoft.com/downloads/details.aspx?familyid=FBEE1648-7106-44A7-9649-6D9F6D58056E&amp;displaylang=en">VS2008 SP1 installed</a>. If you don’t, you get an error dialog when you enter the URL for the TFS server saying the URL contains invalid characters.</p>
<p>This seems a simple rule, but as I found today it is easy not to realise your PC is not patched. You look in the About dialog for Visual Studio and it says SP1 is installed, but the Team Foundation Client has not been patched. This was because the following installation order was followed:</p>
<ol>
<li>Install VS2008</li>
<li>Patch with SP1</li>
<li>Install Team Foundation Client</li>
</ol>
<p>SP1 needed to be run again to patch the client; then all was OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A major day in history</title>
      <link>https://blog.richardfennell.net/posts/a-major-day-in-history/</link>
      <pubDate>Tue, 21 Jul 2009 13:27:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-major-day-in-history/</guid>
      <description>&lt;p&gt;With all this coverage of the moon landings I am surprised that the other major news item of 40 years ago today has not been covered – the Fennell family moved house. I ask you just what are the priorities of the news media?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With all this coverage of the moon landings I am surprised that the other major news item of 40 years ago today has not been covered – the Fennell family moved house. I ask you just what are the priorities of the news media?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Outlook 2010 crashing after in place upgrade</title>
      <link>https://blog.richardfennell.net/posts/outlook-2010-crashing-after-in-place-upgrade/</link>
      <pubDate>Fri, 17 Jul 2009 14:51:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/outlook-2010-crashing-after-in-place-upgrade/</guid>
      <description>&lt;p&gt;I downloaded and installed the Office 2010 technical preview today. All seemed to go OK (after I realised I could only do an in place upgrade from 32bit Office 2007 –&amp;gt; 32bit Office 2010). Once the upgrade was done I could load Word and use our VSTO based document templates; all seemed to work fine. However when I tried Outlook I got a problem.&lt;/p&gt;
&lt;p&gt;On the first loading it took a while, telling me it was doing something to 40K+ items, presumably checking the local OST file, but then it seemed to load OK. However after about 30 seconds it crashed, and continued to do this every time I restarted Outlook, even when I tried safe mode, switching add-ins off and/or working offline. None of it had any effect. Checking the event log I saw I had the error:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I downloaded and installed the Office 2010 technical preview today. All seemed to go OK (after I realised I could only do an in place upgrade from 32bit Office 2007 –&gt; 32bit Office 2010). Once the upgrade was done I could load Word and use our VSTO based document templates; all seemed to work fine. However when I tried Outlook I got a problem.</p>
<p>On the first loading it took a while, telling me it was doing something to 40K+ items, presumably checking the local OST file, but then it seemed to load OK. However after about 30 seconds it crashed, and continued to do this every time I restarted Outlook, even when I tried safe mode, switching add-ins off and/or working offline. None of it had any effect. Checking the event log I saw I had the error:</p>
<p><em>Faulting application name: OUTLOOK.EXE, version: 14.0.4006.1110, time stamp: 0x4a468538<br>
Faulting module name: mlang.dll, version: 6.1.7100.0, time stamp: 0x49eea59f<br>
Exception code: 0xc0000005<br>
Fault offset: 0x00016aef<br>
Faulting process id: 0x1060<br>
Faulting application start time: 0x01ca06e3140fb30d<br>
Faulting application path: C:\Program Files (x86)\Microsoft Office\Office14\OUTLOOK.EXE<br>
Faulting module path: C:\Windows\system32\mlang.dll<br>
Report Id: 5af5a171-72d6-11de-97df-001636a51764</em></p>
<p>I noticed that the in place upgrade had not removed Office 2007, so I removed it via the control panel. After this Outlook loaded and was stable other than the fact the view panel would not load (it was greyed out on the menu), so stable but useless.</p>
<p>I then tried removing Office 2010 (there is no repair option) and reinstalling it using the same 32bit media. This time it knew it was not doing an upgrade but a new install. Once this completed I loaded Outlook and it worked fine, including picking up my existing OST file (from a non default location; I keep it on a BitLocker partition), and it also still had my CRM 4.0 add-in configured and working.</p>
<p>So it seems the upgrade can get a bit confused.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My Glasgow presentation last night.</title>
      <link>https://blog.richardfennell.net/posts/my-glasgow-presentation-last-night/</link>
      <pubDate>Thu, 16 Jul 2009 16:04:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-glasgow-presentation-last-night/</guid>
      <description>&lt;p&gt;Thanks to everyone who turned up for my presentation on Typemock and SharePoint in Glasgow last night. I have just uploaded &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2009/Developer%20Testing%20SharePoint%20using%20Typemock%20Isolator.pdf&#34;&gt;my slides onto the Black Marble site&lt;/a&gt;. As the session was quite demo driven the slides don’t offer the best code samples. If you want to experiment yourselves I would suggest you look at my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/tags/Typemock/default.aspx&#34;&gt;related posts on this blog&lt;/a&gt;, and remember if you don’t have Typemock you can download trial versions of all the products I used from &lt;a href=&#34;http://www.typemock.com&#34;&gt;www.typemock.com&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who turned up for my presentation on Typemock and SharePoint in Glasgow last night. I have just uploaded <a href="http://www.blackmarble.co.uk/ConferencePapers/2009/Developer%20Testing%20SharePoint%20using%20Typemock%20Isolator.pdf">my slides onto the Black Marble site</a>. As the session was quite demo driven the slides don’t offer the best code samples. If you want to experiment yourselves I would suggest you look at my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/tags/Typemock/default.aspx">related posts on this blog</a>, and remember if you don’t have Typemock you can download trial versions of all the products I used from <a href="http://www.typemock.com">www.typemock.com</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Scottish Developers on Typemock</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-scottish-developers-on-typemock/</link>
      <pubDate>Mon, 13 Jul 2009 12:19:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-scottish-developers-on-typemock/</guid>
      <description>&lt;p&gt;On Wednesday the 15th this week I will be speaking at &lt;a href=&#34;http://www.eventbrite.com/event/359662761/newsletter&#34;&gt;Scottish Developers on developer testing of SharePoint&lt;/a&gt;. This is a free event so please come along if you are interested and in the area.&lt;/p&gt;
&lt;p&gt;If you are not able to make it to Glasgow but are interested in the subject then why not check out &lt;a href=&#34;http://blog.typemock.com/2009/07/typemock-webinar-on-jul-15-unit-testing.html&#34;&gt;Typemock’s free live webinars also on the 15th&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.typemock.com/proxy/redirect.php?id=0423&#34;&gt;&lt;img alt=&#34;Proud to be a Typemock MVP&#34; loading=&#34;lazy&#34; src=&#34;http://www.typemock.com/images/mvp_button.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On Wednesday the 15th this week I will be speaking at <a href="http://www.eventbrite.com/event/359662761/newsletter">Scottish Developers on developer testing of SharePoint</a>. This is a free event so please come along if you are interested and in the area.</p>
<p>If you are not able to make it to Glasgow but are interested in the subject then why not check out <a href="http://blog.typemock.com/2009/07/typemock-webinar-on-jul-15-unit-testing.html">Typemock’s free live webinars also on the 15th</a>.</p>
<p><a href="http://www.typemock.com/proxy/redirect.php?id=0423"><img alt="Proud to be a Typemock MVP" loading="lazy" src="http://www.typemock.com/images/mvp_button.jpg"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Video recording gotcha for VS2010 Test Runner</title>
      <link>https://blog.richardfennell.net/posts/video-recording-gotta-for-vs2010-test-runner/</link>
      <pubDate>Sun, 12 Jul 2009 11:00:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/video-recording-gotta-for-vs2010-test-runner/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://channel9.msdn.com/shows/10-4/10-4-Episode-23-An-Introduction-to-Manual-Testing/&#34;&gt;VS2010 has excellent new features to assist the tester in general, and specifically in the area of manual testing&lt;/a&gt;. One of these is the ability to video a manual test run; to get this to work you have to have &lt;a href=&#34;http://www.microsoft.com/windows/windowsmedia/forpros/encoder/default.mspx&#34;&gt;Windows Media Encoder 9&lt;/a&gt; installed as well as the VS2010 Test Runner. If you don’t have it installed and try to use the video feature you get a dialog warning that this component is missing and asking whether you would like to download it or disable the video recording feature for this test run.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://channel9.msdn.com/shows/10-4/10-4-Episode-23-An-Introduction-to-Manual-Testing/">VS2010 has excellent new features to assist the tester in general, and specifically in the area of manual testing</a>. One of these is the ability to video a manual test run; to get this to work you have to have <a href="http://www.microsoft.com/windows/windowsmedia/forpros/encoder/default.mspx">Windows Media Encoder 9</a> installed as well as the VS2010 Test Runner. If you don’t have it installed and try to use the video feature you get a dialog warning that this component is missing and asking whether you would like to download it or disable the video recording feature for this test run.</p>
<p>The problem is the warning dialog has a link that takes you to the <a href="http://www.microsoft.com/windows/windowsmedia/forpros/encoder/default.mspx">Windows Media Encoder 9</a> homepage. This gives you the option to install either the 32bit or 64bit version. I mistakenly assumed that I needed the version for the operating system I have, i.e. 64bit, as I run 64bit Windows 7. THIS IS WRONG.</p>
<p>The VS2010 Test Runner only seems to work if you install the 32bit version and the hotfix the Test Runner dialog mentions. Once I installed the 32bit version, video recording of the test worked perfectly.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Grow your own testing framework</title>
      <link>https://blog.richardfennell.net/posts/grow-your-own-testing-framework/</link>
      <pubDate>Thu, 09 Jul 2009 10:48:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/grow-your-own-testing-framework/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blog.benhall.me.uk/2009/07/mefunit-prototype-of-mef-unit-testing.html&#34;&gt;Ben Hall has written an interesting post&lt;/a&gt; on creating a testing framework with &lt;a href=&#34;http://www.codeplex.com/MEF&#34;&gt;MEF&lt;/a&gt;. As Ben said it is more a learning exercise for MEF than a useful product, but it certainly shows the ease with which you can add metadata and reflection-like functionality to your codebase. Wish it had been about when I wrote &lt;a href=&#34;http://guitester.codeplex.com/&#34;&gt;GUITester&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blog.benhall.me.uk/2009/07/mefunit-prototype-of-mef-unit-testing.html">Ben Hall has written an interesting post</a> on creating a testing framework with <a href="http://www.codeplex.com/MEF">MEF</a>. As Ben said, it is more a learning exercise for MEF than a useful product, but it certainly shows the ease with which you can add metadata and reflection-like functionality to your codebase. Wish it had been around when I wrote <a href="http://guitester.codeplex.com/">GUITester</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>And eScrum is no more</title>
      <link>https://blog.richardfennell.net/posts/and-escrum-is-no-more/</link>
      <pubDate>Tue, 07 Jul 2009 09:05:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-escrum-is-no-more/</guid>
      <description>&lt;p&gt;I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/tags/eScrum/default.aspx&#34;&gt;posted in the past about the TFS 2005 and 2008 Process Template eScrum&lt;/a&gt; from Microsoft; a template we use internally for a number of Agile projects. Well today it has been removed from the Microsoft download sites.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2009/06/10/the-future-of-escrum.aspx&#34;&gt;It was decided a while ago&lt;/a&gt; that it would not be updated to support TFS2010, and it has been removed to avoid any confusion over whether or not it is supported by Microsoft (FYI it was never officially supported anyway, as it did not originate inside the TFS team).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/tags/eScrum/default.aspx">posted in the past about the TFS 2005 and 2008 Process Template eScrum</a> from Microsoft; a template we use internally for a number of Agile projects. Well today it has been removed from the Microsoft download sites.</p>
<p><a href="http://blogs.msdn.com/bharry/archive/2009/06/10/the-future-of-escrum.aspx">It was decided a while ago</a> that it would not be updated to support TFS2010, and it has been removed to avoid any confusion over whether or not it is supported by Microsoft (FYI it was never officially supported anyway, as it did not originate inside the TFS team).</p>
<p>So are we at Black Marble going to miss it? Well, the main reason we had used it in the past was the web interface that gave a single point for updating the status of work items without the need to enter Visual Studio; it was our project wallboard. With the much enhanced Office integration in TFS2010, I think we are not going to miss eScrum. We can now provide an easy way, with Excel or Project, for any team member (developer or not) to update their work status, and use Excel Services to provide an <a href="http://alistair.cockburn.us/Information&#43;radiator">information radiator</a> showing the overall project status.</p>
<p>If you do need a strict Scrum implementation template for TFS then have a look at <a href="http://www.scrumforteamsystem.com/en/default.aspx">Conchango’s Scrum for Team System</a>, which is going to be updated to make use of all the new features of TFS2010.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fun with a SQLExpress 2005 upgrade 2008</title>
      <link>https://blog.richardfennell.net/posts/fun-with-a-sqlexpress-2005-upgrade-2008/</link>
      <pubDate>Mon, 06 Jul 2009 12:33:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fun-with-a-sqlexpress-2005-upgrade-2008/</guid>
      <description>&lt;p&gt;On my development PC I had a 2005 instance of SQLExpress that was installed as part of the VS2008 setup. I thought I had upgraded it when I put on the SQL 2008 Management tools and/or VS2010 beta, but it seems I didn’t. I thought I would try the new &lt;a href=&#34;http://www.microsoft.com/web/downloads/platform.aspx&#34;&gt;Microsoft Web Platform Installer&lt;/a&gt;, but this also thought I had done the upgrade to 2008, I suspect due to the fact I had the 2008 management tools.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On my development PC I had a 2005 instance of SQLExpress that was installed as part of the VS2008 setup. I thought I had upgraded it when I put on the SQL 2008 Management tools and/or VS2010 beta, but it seems I didn’t. I thought I would try the new <a href="http://www.microsoft.com/web/downloads/platform.aspx">Microsoft Web Platform Installer</a>, but this also thought I had done the upgrade to 2008, I suspect due to the fact I had the 2008 management tools.</p>
<p><strong>Note:</strong> If you are using the <a href="http://www.microsoft.com/web/downloads/platform.aspx">Microsoft Web Platform Installer 2.0 RC</a>, remember you can’t just click on it to run from the web if you are running as a non-administrator user on your PC (as you should be, running least privilege). You need to download it and ‘run as administrator’, or open it in a browser running as administrator, to get it to even load.</p>
<p>So I needed to download the SQLExpress 2008 media to do a manual upgrade, as I remembered I could not use the developer edition media I had to hand to upgrade an Express instance. This download in itself proved problematic. I did a <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&amp;displaylang=en">download from MSDN</a>, but the file I got gave a ‘not a valid win32’ error when I tried to run it. Also I noticed that each time I tried to download it in IE8 it was a different size – not a good sign! Once I swapped to Firefox it downloaded without issue.</p>
<p>Anyway, in the end I got the right media and access rights, and the upgrade went smoothly. However, when I tried to attach a 2008 DB (the reason I needed the upgrade in the first place) I got the error:</p>
<p>Parameter name: nColIndex<br>
Actual value was -1. (Microsoft.SqlServer.GridControl)</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_502DC814.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_0FF7AE9A.png" title="image"></a></p>
<p>I had yet again forgotten to run SQL Management Studio as an administrative user. This error dialog is SQL Management Studio’s way of saying you don’t have the rights!</p>
]]></content:encoded>
    </item>
    <item>
      <title>July Agile Yorkshire meeting</title>
      <link>https://blog.richardfennell.net/posts/july-agile-yorkshire-meeting/</link>
      <pubDate>Mon, 06 Jul 2009 10:12:42 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/july-agile-yorkshire-meeting/</guid>
      <description>&lt;p&gt;Don’t forget this month’s Agile Yorkshire meeting is this Wednesday at the Victoria Hotel in Leeds. It is the usual mix of free beer, great conversation and an excellent speaker (note: you can assign your own ranking order to these factors, using a suitable Agile planning model).&lt;/p&gt;
&lt;p&gt;This month the speaker is &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/8thjulynancyvanschooenderwoert-pleasevoteforyourchoiceofsubject&#34;&gt;Nancy Van Schooenderwoert on ‘Seven Paradoxes of Agile Software Development&lt;/a&gt;’.&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Don’t forget this month’s Agile Yorkshire meeting is this Wednesday at the Victoria Hotel in Leeds. It is the usual mix of free beer, great conversation and an excellent speaker (note: you can assign your own ranking order to these factors, using a suitable Agile planning model).</p>
<p>This month the speaker is <a href="http://www.agileyorkshire.org/2009-event-announcements/8thjulynancyvanschooenderwoert-pleasevoteforyourchoiceofsubject">Nancy Van Schooenderwoert on ‘Seven Paradoxes of Agile Software Development</a>’.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Remember the new URL format for the TFS2010 web services has changed</title>
      <link>https://blog.richardfennell.net/posts/remember-the-new-url-format-for-the-tfs2010-web-services-have-changed/</link>
      <pubDate>Fri, 03 Jul 2009 14:25:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/remember-the-new-url-format-for-the-tfs2010-web-services-have-changed/</guid>
      <description>&lt;p&gt;To have a good look at TFS2010 I have migrated some existing VS2008 projects to VS2010. This has meant they are now being built using a new TFS 2010 build server. Now I wanted to make sure everyone still knew what was building and what was not, so I updated the configuration on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/12/22/update-in-using-stylecop-in-tfs-team-build.aspx&#34;&gt;our build wallboard&lt;/a&gt; to get the status from both the older 2008 and the new 2010 server – and it did not work. I fiddled around and upgraded the build wallboard to use the TFS2010 assemblies, all to no avail; the application just exited when I tried to get a reference to the build service.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>To have a good look at TFS2010 I have migrated some existing VS2008 projects to VS2010. This has meant they are now being built using a new TFS 2010 build server. Now I wanted to make sure everyone still knew what was building and what was not, so I updated the configuration on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/12/22/update-in-using-stylecop-in-tfs-team-build.aspx">our build wallboard</a> to get the status from both the older 2008 and the new 2010 server – and it did not work. I fiddled around and upgraded the build wallboard to use the TFS2010 assemblies, all to no avail; the application just exited when I tried to get a reference to the build service.</p>
<p>Then I had another think: the URL of the TFS server has changed format in 2010. It used to be <a href="http://my2008server:8080">http://my2008server:8080</a>; it is now <a href="http://my2010server:8080/tfs">http://my2010server:8080/tfs</a>. I had been leaving off the trailing /tfs, an easy mistake to make. Once this was corrected my old build wallboard worked without a problem; there was no need to use the TFS2010 assemblies in the project.</p>
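<p>For illustration, here is a minimal sketch of the kind of connection code such a wallboard uses. This is my reconstruction, not the wallboard’s actual source: the server names are the placeholder names from above, and it assumes the TFS 2008 client assemblies (Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Build.Client) are referenced:</p>
<pre tabindex="0"><code>using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Build.Client;

// Note the trailing /tfs segment required for a TFS 2010 server;
// a 2008 server is addressed as just http://my2008server:8080
TeamFoundationServer tfs =
    TeamFoundationServerFactory.GetServer("http://my2010server:8080/tfs");

// The same service lookup works against either server version,
// so only the URL needs to change in older client code
IBuildServer buildServer = (IBuildServer)tfs.GetService(typeof(IBuildServer));
</code></pre>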
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2010 project upgrade bug; test assemblies being copied to release folders in error</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2010-project-upgrade-bug-test-assemblies-being-copied-to-release-folders-in-error/</link>
      <pubDate>Fri, 03 Jul 2009 13:08:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2010-project-upgrade-bug-test-assemblies-being-copied-to-release-folders-in-error/</guid>
      <description>&lt;p&gt;After I upgraded an ASP.NET Web Application VS2008 solution to VS2010 I found a strange problem. When I build either the whole solution or the test project in the solution, the test assembly gets copied to the Web Application’s bin directory. However, if I build the solution from the command line with MSBUILD it is not copied (so MSBUILD behaves as VS2008 used to).&lt;/p&gt;
&lt;p&gt;Turns out it is easy to repeat, the process is as follows:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After I upgraded an ASP.NET Web Application VS2008 solution to VS2010 I found a strange problem. When I build either the whole solution or the test project in the solution, the test assembly gets copied to the Web Application’s bin directory. However, if I build the solution from the command line with MSBUILD it is not copied (so MSBUILD behaves as VS2008 used to).</p>
<p>Turns out it is easy to repeat, the process is as follows:</p>
<ol>
<li>Open Vs2008 SP1</li>
<li>Create an empty solution</li>
<li>Add a C# Class library project (targeted on .NET 3.5)</li>
<li>Add a C# ASP.NET Web application (targeted on .NET 3.5)</li>
<li>Add reference from the web application to the class library</li>
<li>Build the solution; note that we see the class library and web application assemblies in the web application bin directory</li>
<li>Add a C# Test project (targeted on .NET 3.5)</li>
<li>Add reference to the Web application project</li>
<li>Build the solution; note that we still just see the class library and web application assemblies in the web application bin directory – there is no test assembly</li>
<li>Exit VS2008 and load VS2010</li>
<li>Load the solution and allow it to be upgraded. On the dialog about .NET versions say to leave the projects on .NET 3.5.</li>
<li>Rebuild the solution and note that now in the Web Application bin directory we have the class library, web application and test assemblies</li>
<li>Do a clean of the solution, note that in the Web Application bin directory the test assembly is not removed.</li>
</ol>
<p>So why is this a problem? .NET 4.0 is a replacement for previous .NET versions, not an extension as 3.0 and 3.5 were for 2.0. This basically means that in any given deployment you need to have all 4.0 assemblies or all 2.0, 3.0 and 3.5 ones. However, under VS2010 a test project must target .NET 4.0 (to get all the new cool testing features), so the fact that this 4.0 test assembly ends up in a 3.5 based Web Application directory is a problem.</p>
<p>I have no answer to the problem as yet, though I have reported it. At present the workaround is to either</p>
<ol>
<li>Delete the test assemblies from the Web Application bin directory. Once this is done everything behaves as you would expect.</li>
<li>Or make sure you only target .NET 4.0, though I suspect this might be an issue for some people developing web applications as it will be a while before we see ISPs deploying .NET 4.0</li>
</ol>
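<p>As a quick way to spot a stray 4.0 assembly in a 3.5 bin directory, you can check which runtime an assembly was built against via Assembly.ImageRuntimeVersion. A hedged sketch follows; the file path is purely illustrative:</p>
<pre tabindex="0"><code>using System;
using System.Reflection;

// ImageRuntimeVersion reports the CLR version an assembly was compiled against,
// e.g. "v2.0.50727" for .NET 2.0/3.0/3.5 assemblies or a v4.0 string for .NET 4.0.
// ReflectionOnlyLoadFrom inspects the file without executing any of its code.
Assembly asm = Assembly.ReflectionOnlyLoadFrom(@"bin\TestProject.dll");
Console.WriteLine(asm.ImageRuntimeVersion);
</code></pre>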
]]></content:encoded>
    </item>
    <item>
      <title>Re-awarded MVP</title>
      <link>https://blog.richardfennell.net/posts/re-awarded-mvp/</link>
      <pubDate>Thu, 02 Jul 2009 12:32:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/re-awarded-mvp/</guid>
      <description>&lt;p&gt;I am really happy to say that I have had my &lt;a href=&#34;http://mvp.support.microsoft.com/&#34;&gt;MVP for Team System Re-awarded&lt;/a&gt;; it is a privilege to get to work with such a great group of people as I have met via the MVP programme.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am really happy to say that I have had my <a href="http://mvp.support.microsoft.com/">MVP for Team System Re-awarded</a>; it is a privilege to get to work with such a great group of people as I have met via the MVP programme.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problems with uplink on an 8 port Netgear Gigabit switch</title>
      <link>https://blog.richardfennell.net/posts/problems-with-uplink-on-an-8-port-netgear-gigabit-switch/</link>
      <pubDate>Tue, 30 Jun 2009 09:39:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problems-with-uplink-on-an-8-port-netgear-gigabit-switch/</guid>
      <description>&lt;p&gt;All the ports on the &lt;a href=&#34;http://www.netgear.com/Products/Switches/DesktopSwitches/GS608.aspx&#34;&gt;Netgear GS608&lt;/a&gt; are, I think, meant to be auto speed and uplink sensing, but I have found this not to be true. I had the 1Gb uplink to our central switches in Port 8, and a 100Mb Ethernet workstation in port 2 could not get an IP address via DHCP. When I moved the uplink to Port 1 it all leapt into life. Interestingly, other 1Gb PCs in other ports had no problem with the uplink in either port 1 or 8.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>All the ports on the <a href="http://www.netgear.com/Products/Switches/DesktopSwitches/GS608.aspx">Netgear GS608</a> are, I think, meant to be auto speed and uplink sensing, but I have found this not to be true. I had the 1Gb uplink to our central switches in Port 8, and a 100Mb Ethernet workstation in port 2 could not get an IP address via DHCP. When I moved the uplink to Port 1 it all leapt into life. Interestingly, other 1Gb PCs in other ports had no problem with the uplink in either port 1 or 8.</p>
<p>So my tip is: put your uplink in port 1 on a Netgear switch to avoid problems with auto sensing.</p>
]]></content:encoded>
    </item>
    <item>
      <title>.NET Framework 3.5 SP1 issue on Windows SharePoint Services v2.0</title>
      <link>https://blog.richardfennell.net/posts/net-framework-3-5-sp1-issue-on-windows-sharepoint-services-v2-0/</link>
      <pubDate>Mon, 29 Jun 2009 10:02:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/net-framework-3-5-sp1-issue-on-windows-sharepoint-services-v2-0/</guid>
      <description>&lt;p&gt;If you apply the TFS2008 SP1 to a system that has been upgraded from TFS 2005, but the WSS was not upgraded from 2.0 to 3.0 you can get a problem that you cannot access the SharePoint portal sites due to WebPart load errors (you get an Event ID: 1000 error in the Windows event logs). This is because the TFS 2008 SP1 installs .NET framework 3.5 SP1 which causes some &lt;a href=&#34;http://blogs.msdn.com/sharepoint/archive/2008/08/27/net-framework-3-5-sp1-issue-on-windows-sharepoint-services-v2-0.aspx&#34;&gt;problems for WSS 2.0&lt;/a&gt;. Note this is not usually a problem for new installs of TFS 2008 as these use WSS 3.0 by default, but the upgrade of TFS from 2005 to 2008 does not force the upgrade of WSS 2.0 to 3.0 so sites that upgraded are susceptible.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you apply the TFS2008 SP1 to a system that has been upgraded from TFS 2005, but the WSS was not upgraded from 2.0 to 3.0 you can get a problem that you cannot access the SharePoint portal sites due to WebPart load errors (you get an Event ID: 1000 error in the Windows event logs). This is because the TFS 2008 SP1 installs .NET framework 3.5 SP1 which causes some <a href="http://blogs.msdn.com/sharepoint/archive/2008/08/27/net-framework-3-5-sp1-issue-on-windows-sharepoint-services-v2-0.aspx">problems for WSS 2.0</a>. Note this is not usually a problem for new installs of TFS 2008 as these use WSS 3.0 by default, but the upgrade of TFS from 2005 to 2008 does not force the upgrade of WSS 2.0 to 3.0 so sites that upgraded are susceptible.</p>
<p>Most of the blog posts suggest a removal of .NET 3.5 and a reinstall of 2.0 with service packs; this is not an option for a TFS 2008 installation. Luckily there is a solution, the <a href="http://support.microsoft.com/kb/959209">.NET framework family update</a>. Once these patches are installed for the historic versions of .NET, all seems OK.</p>
<p>Thanks to <a href="http://wesmacdonald.spaces.live.com/">Wes MacDonald</a> for pointing me at this fix; it saved me no end of headaches.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Logging everything that is going on when an assembly loads using CThru</title>
      <link>https://blog.richardfennell.net/posts/logging-everything-that-is-going-on-when-an-assembly-loads-using-cthru/</link>
      <pubDate>Mon, 22 Jun 2009 11:33:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/logging-everything-that-is-going-on-when-an-assembly-loads-using-cthru/</guid>
      <description>&lt;p&gt;Whilst trying to work out if there is any way to get around the problem I am &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/07/testing-sharepoint-workflows-using-typemock-isolator-part-3.aspx&#34;&gt;suffering with Sharepoint workflows idling inside a Typemock Isolator&lt;/a&gt; test harness, I have been having a good look at &lt;a href=&#34;http://cthru.codeplex.com/&#34;&gt;CThru&lt;/a&gt;; a set of libraries for Typemock that, and I quote Codeplex here, ‘… allows creating interception and isolation frameworks for logging, testing and many other things very simply’. This is the framework used to create the Silverlight mocking framework on the same Codeplex site.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst trying to work out if there is any way to get around the problem I am <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/07/testing-sharepoint-workflows-using-typemock-isolator-part-3.aspx">suffering with Sharepoint workflows idling inside a Typemock Isolator</a> test harness, I have been having a good look at <a href="http://cthru.codeplex.com/">CThru</a>; a set of libraries for Typemock that, and I quote Codeplex here, ‘… allows creating interception and isolation frameworks for logging, testing and many other things very simply’. This is the framework used to create the Silverlight mocking framework on the same Codeplex site.</p>
<p>To aid my analysis I wrote a basic Logger using the Aspect concepts of CThru, which I call as follows:</p>
<pre tabindex="0"><code>// set the name of the types I want to monitor
TestProject.LoggingAspect.TypeNamesToMatch.Add(&quot;SharePoint&quot;);

// tell it where to look for aspects
CThru.CThruEngine.AddAspectsInAssembly(System.Reflection.Assembly.GetExecutingAssembly());

// and start it up
CThru.CThruEngine.StartListening();
</code></pre>
<p>The source below is just included in my assembly; it allows me to choose whether I want to log in text or CSV format. I am sure it will need editing for your logging needs, but it gives you the basic idea….</p>
<pre tabindex="0"><code>using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using CThru;

namespace TestProject
{
    /// &lt;summary&gt;
    /// A sample Aspect logger for CThru
    /// &lt;/summary&gt;
    class LoggingAspect : Aspect
    {
        /// &lt;summary&gt;
        /// The current logger in use
        /// &lt;/summary&gt;
        private static IAspectLogger logger = new DebugTextLogger();

        /// &lt;summary&gt;
        /// A list of the available logging formats
        /// &lt;/summary&gt;
        public enum LoggingMethod
        {
            TextToDebug,
            CommaSeparatedToDebug
        }

        /// &lt;summary&gt;
        /// The list of strings to do partial matches against when logging.
        /// If any string in this list is in the namespace or typename it gets logged.
        /// If this list is empty then all types are logged.
        /// &lt;/summary&gt;
        public static List&lt;string&gt; TypeNamesToMatch = new List&lt;string&gt;();

        /// &lt;summary&gt;
        /// Sets the current logging format
        /// &lt;/summary&gt;
        public static LoggingMethod CurrentLoggingMethod
        {
            set
            {
                switch (value)
                {
                    default:
                    case LoggingMethod.TextToDebug:
                        logger = new DebugTextLogger();
                        break;
                    case LoggingMethod.CommaSeparatedToDebug:
                        logger = new DebugCVSLogger();
                        break;
                }
            }
        }

        public static bool LogToCSV = false;

        public override void StaticConstructorBehavior(DuringCallbackEventArgs e)
        {
            LogEvent(&quot;LoggingAspect.StaticConstructorBehavior&quot;, e);
        }

        public override void ConstructorBehavior(DuringCallbackEventArgs e)
        {
            LogEvent(&quot;LoggingAspect.ConstructorBehavior&quot;, e);
        }

        public override void MethodBehavior(DuringCallbackEventArgs e)
        {
            LogEvent(&quot;LoggingAspect.MethodBehavior&quot;, e);

            if (e.MethodName == &quot;StsCompareStrings&quot;)
            {
                e.MethodBehavior = MethodBehaviors.ReturnsCustomValue;
                e.ReturnValueOrException = true;
            }
        }

        public override void MissingMethodBehavior(DuringCallbackEventArgs e)
        {
            LogEvent(&quot;LoggingAspect.MissingMethodBehavior&quot;, e);
        }

        private static void LogEvent(string description, DuringCallbackEventArgs e)
        {
            logger.LogEvent(description, e);
        }

        /// &lt;summary&gt;
        /// The control to see which events are intercepted
        /// &lt;/summary&gt;
        /// &lt;param name=&quot;info&quot;&gt;The info on the currently handled assembly&lt;/param&gt;
        /// &lt;returns&gt;True if we should monitor this event&lt;/returns&gt;
        public override bool ShouldIntercept(InterceptInfo info)
        {
            if (TypeNamesToMatch.Count &gt; 0)
            {
                foreach (string name in TypeNamesToMatch)
                {
                    // find the first match of this string in a namespace typename
                    if (info.TypeName.Contains(name) == true)
                    {
                        return true;
                    }
                }
            }
            else
            {
                // none in the list, so match all
                return true;
            }
            return false;
        }

        /// &lt;summary&gt;
        /// Helper method to format the parameters as a list in a string
        /// &lt;/summary&gt;
        /// &lt;param name=&quot;e&quot;&gt;The handled event&lt;/param&gt;
        /// &lt;returns&gt;A string listing the params and their values&lt;/returns&gt;
        public static string ParametersListToString(DuringCallbackEventArgs e)
        {
            var sb = new StringBuilder();
            if (e.ParameterValues != null)
            {
                for (int i = 0; i &lt; e.ParameterValues.Length; i++)
                {
                    if (e.ParameterValues[i] != null)
                    {
                        sb.Append(String.Format(&quot;{0} [{1}]&quot;, e.ParameterValues[i].GetType(), e.ParameterValues[i]));
                    }
                    else
                    {
                        sb.Append(&quot;null&quot;);
                    }
                    if (i &lt; e.ParameterValues.Length - 1)
                    {
                        sb.Append(&quot;,&quot;);
                    }
                }
            }
            return sb.ToString();
        }
    }

    /// &lt;summary&gt;
    /// Logger interface
    /// &lt;/summary&gt;
    public interface IAspectLogger
    {
        void LogEvent(string description, DuringCallbackEventArgs e);
    }

    /// &lt;summary&gt;
    /// Logs an item as plain text
    /// &lt;/summary&gt;
    public class DebugTextLogger : IAspectLogger
    {
        public void LogEvent(string description, DuringCallbackEventArgs e)
        {
            Debug.WriteLine(string.Format(&quot;{0}: {1}{2}.{3}({4})&quot;,
                description,
                e.TargetInstance == null ? &quot;[Static] &quot; : string.Empty,
                e.TypeName,
                e.MethodName,
                LoggingAspect.ParametersListToString(e)));
        }
    }

    /// &lt;summary&gt;
    /// Logs an item as comma separated to ease analysis
    /// &lt;/summary&gt;
    public class DebugCVSLogger : IAspectLogger
    {
        public DebugCVSLogger()
        {
            // write out a header so we know the columns
            Debug.WriteLine(string.Format(&quot;{0},{1},{2},{3},{4}&quot;,
                &quot;Event logged&quot;,
                &quot;Is Static&quot;,
                &quot;Type name&quot;,
                &quot;Method name&quot;,
                &quot;Parameter List....&quot;));
        }

        public void LogEvent(string description, DuringCallbackEventArgs e)
        {
            Debug.WriteLine(string.Format(&quot;{0},{1},{2},{3},{4}&quot;,
                description,
                e.TargetInstance == null ? &quot;True&quot; : &quot;False&quot;,
                e.TypeName,
                e.MethodName,
                LoggingAspect.ParametersListToString(e)));
        }
    }
}
</code></pre>
]]></content:encoded>
    </item>
    <item>
      <title>0x800106ba Windows Defender error in Windows Vista</title>
      <link>https://blog.richardfennell.net/posts/0x800106ba-windows-defender-error-in-windows-vista/</link>
      <pubDate>Sun, 21 Jun 2009 12:24:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/0x800106ba-windows-defender-error-in-windows-vista/</guid>
      <description>&lt;p&gt;I was installing a new PC for a friend yesterday; after using the &lt;a href=&#34;http://www.microsoft.com/windows/windows-vista/get/easy-transfer.aspx&#34;&gt;Easy Transfer Wizard&lt;/a&gt; (the first time I had used this, and I can heartily recommend it) to move their settings from their old XP system to their new Vista one, I got the 0x800106ba Windows Defender error on start-up. Now there are a lot of frankly useless comments on this error in various forums, which is strange as the solution is simple. I suspect this is because the issue predominantly hits home users who are not as familiar with the internal workings of services etc. in Windows.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was installing a new PC for a friend yesterday; after using the <a href="http://www.microsoft.com/windows/windows-vista/get/easy-transfer.aspx">Easy Transfer Wizard</a> (the first time I had used this, and I can heartily recommend it) to move their settings from their old XP system to their new Vista one, I got the 0x800106ba Windows Defender error on start-up. Now there are a lot of frankly useless comments on this error in various forums, which is strange as the solution is simple. I suspect this is because the issue predominantly hits home users who are not as familiar with the internal workings of services etc. in Windows.</p>
<p>Anyway, the solution is to make sure the Defender service is set to auto-start. You get the error if it has not been started when Windows checks to see if it is running, I assume as part of the security centre checks.</p>
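<p>For reference, the same fix can be applied from an elevated command prompt; a quick sketch, assuming the Vista service name is WinDefend:</p>
<pre tabindex="0"><code>REM set the Windows Defender service to start automatically, then start it now
sc config WinDefend start= auto
sc start WinDefend
</code></pre>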
<p>It is not that the forums are really wrong; they usually suggest a reinstall of Defender (which will reset the service start-up), but this is not easy to achieve on Vista, where Defender is baked into the operating system rather than being a separate install as it was on XP.</p>
]]></content:encoded>
    </item>
    <item>
      <title>29109 error when installing the quiescence GDR patch for TFS 2005</title>
      <link>https://blog.richardfennell.net/posts/29109-error-when-installing-the-quiescence-gdr-patch-for-tfs-2005/</link>
      <pubDate>Fri, 12 Jun 2009 11:03:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/29109-error-when-installing-the-quiescence-gdr-patch-for-tfs-2005/</guid>
      <description>&lt;p&gt;Whilst upgrading a single server TFS 2005 to a dual server 2008 install I hit a problem. I had installed the new 2005 Data Tier (DT) and patched it without issue. However, when I tried to apply the patch VS80-KB19156-v2-x86, the Quiescence GDR patch, on the Application Tier (AT), I got the 29109 error: SQL Reporting Services configuration encountered an unknown problem. A search on the web found this is a &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/dd266793.aspx&#34;&gt;common issue&lt;/a&gt; usually fixed by repeated retries! This did not work for me.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst upgrading a single server TFS 2005 to a dual server 2008 install I hit a problem. I had installed the new 2005 Data Tier (DT) and patched it without issue. However, when I tried to apply the patch VS80-KB19156-v2-x86, the Quiescence GDR patch, on the Application Tier (AT), I got the 29109 error: SQL Reporting Services configuration encountered an unknown problem. A search on the web found this is a <a href="http://msdn.microsoft.com/en-us/library/dd266793.aspx">common issue</a> usually fixed by repeated retries! This did not work for me.</p>
<p>After much fiddling, I started again and cleaned down both the DT and AT. This time I made one change from the process detailed in the TFS dual server installation walkthrough – I <strong>DID NOT</strong> patch the SQL 2005 instance of Reporting Services on the AT prior to installing TFS. This time the TFS patches applied OK, I then patched SQL at the end of the installation process to bring it in line with the DT SQL patch level.</p>
<p>This would suggest the problem is that the TFS 2005 patches are checking for something that was set in a default SQL 2005 install but not present in one that is patched to SP3.</p>
<p>Anyway hope my experience saves you some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Sharepoint Extensions on a Load Balanced Sharepoint farm</title>
      <link>https://blog.richardfennell.net/posts/tfs-sharepoint-extensions-on-a-load-balanced-sharepoint-farm/</link>
      <pubDate>Wed, 10 Jun 2009 12:51:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-sharepoint-extensions-on-a-load-balanced-sharepoint-farm/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/06/09/tf30227-error-when-creating-team-projects-on-tfs-2008.aspx&#34;&gt;posted yesterday&lt;/a&gt; about a problem creating a new team project if you have missing templates on your Sharepoint server. This problem could of course be avoided if you had installed the TFS Sharepoint Extensions onto your Sharepoint server, as you are meant to. However, as I have discovered, it is not that easy to do if your chosen Sharepoint system has network load balanced front ends.&lt;/p&gt;
&lt;p&gt;The problem is that Sharepoint will replicate your MSFAGILE30.STP template between the various servers, but it will not move other TFS artefacts such as the TFSREDIRECT.ASPX page in the 12 hive or the registry settings that point to the reporting services instance. To add these other items you need to install the extensions and then run the TFSConfigwss.exe tool to edit the registry. The problem is the Extensions MSI will not complete if it detects the STP already in place (which, as I said, will have been replicated by Sharepoint). The only solution I found was to cheat a bit:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/06/09/tf30227-error-when-creating-team-projects-on-tfs-2008.aspx">posted yesterday</a> about a problem creating a new team project if you have missing templates on your Sharepoint server. This problem could of course be avoided if you had installed the TFS Sharepoint Extensions onto your Sharepoint server, as you are meant to. However, as I have discovered, it is not that easy to do if your chosen Sharepoint system has network load balanced front ends.</p>
<p>The problem is that Sharepoint will replicate your MSFAGILE30.STP template between the various servers, but it will not move other TFS artefacts such as the TFSREDIRECT.ASPX page in the 12 hive or the registry settings that point to the reporting services instance. To add these other items you need to install the extensions and then run the TFSConfigwss.exe tool to edit the registry. The problem is the Extensions MSI will not complete if it detects the STP already in place (which, as I said, will have been replicated by Sharepoint). The only solution I found was to cheat a bit:</p>
<ol>
<li>Run the Extensions MSI until you get the warning dialog it cannot complete</li>
<li>In C:\Program Files, copy the &lsquo;Team Foundation Server 2008 Sharepoint Extensions x64 Power Tool&rsquo; directory</li>
<li>Let the MSI finish; it will remove the directory, but you have a copy containing the TFSConfigwss.exe tool, which is probably the only thing you need.</li>
<li>Run TFSConfigwss.exe to setup the registry.</li>
</ol>
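<p>Step 2 is just a recursive directory copy, which can be done from a command prompt while the MSI warning dialog is still on screen; a sketch, with the destination path purely illustrative:</p>
<pre tabindex="0"><code>REM take a copy of the Power Tool directory before the MSI rolls it back
xcopy "C:\Program Files\Team Foundation Server 2008 Sharepoint Extensions x64 Power Tool" C:\TFSExtensionsCopy\ /E /I
</code></pre>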
<p>Or you could just copy the files you need from your original Sharepoint server where you managed to install the Extensions correctly.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF30227 error when creating team projects on TFS 2008</title>
      <link>https://blog.richardfennell.net/posts/tf30227-error-when-creating-team-projects-on-tfs-2008/</link>
      <pubDate>Tue, 09 Jun 2009 21:25:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf30227-error-when-creating-team-projects-on-tfs-2008/</guid>
      <description>&lt;p&gt;Historically we have used the eScrum process template for our TFS team projects. However, with a view to the TFS 2010 future we have decided to move back to the MSF Agile template. We used eScrum to provide an easy to use web based project dashboard; we now think that we can achieve the same or better in TFS 2010 using Excel’s enhanced links to TFS and Excel Services in Sharepoint.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Historically we have used the eScrum process template for our TFS team projects. However, with a view to the TFS 2010 future we have decided to move back to the MSF Agile template. We used eScrum to provide an easy to use web based project dashboard; we now think that we can achieve the same or better in TFS 2010 using Excel’s enhanced links to TFS and Excel Services in Sharepoint.</p>
<p>So when I had to create a new team project today I decided to use the TFS 2008 &ldquo;MSF for Agile Software Development - v4.2&rdquo; template, to hopefully ease any upgrade issues. The problem was when I tried to create the team project I got the error TF30227, if I looked in the detailed log I saw:</p>
<p><em>Event Description: TF30162: Task &ldquo;SharePointPortal&rdquo; from Group &ldquo;Portal&rdquo; failed<br>
Exception Type: Microsoft.TeamFoundation.Client.PcwException<br>
Exception Message: TF30272: Template not found on the server</em></p>
<p>(Note that as is common with TFS the main error reported hides another error code.)</p>
<p>The problem was exactly as the error says: the template was missing on our central Sharepoint farm. We have been through a number of <a href="http://msmvps.com/blogs/rfennell/archive/2008/08/07/moving-the-document-store-in-tfs.aspx">Sharepoint upgrades and relocations of our portal site</a>s. This meant that the TFS templates had to be reinstalled manually; this was done correctly for the eScrum template, but a mistake was made for the MSF Agile one. We had installed the template with the command</p>
<p><em>Stsadm -o addtemplate -filename MSFAgile30.stp -title VSTS_MSFAgile30</em></p>
<p>when it should have been</p>
<p><em>Stsadm -o addtemplate -filename MSFAgile30.stp -title VSTS_MSFAgile</em></p>
<p>as TFS looks for a template called VSTS_MSFAgile, not one called VSTS_MSFAgile30. Once this was corrected the new project could be created.</p>
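<p>For completeness, the fix itself was just a matter of removing the wrongly titled template and re-adding it under the name TFS expects; a sketch of the commands, run on the Sharepoint server:</p>
<pre tabindex="0"><code>REM remove the template registered under the wrong title
stsadm -o deletetemplate -title VSTS_MSFAgile30
REM re-add it with the title TFS actually looks for
stsadm -o addtemplate -filename MSFAgile30.stp -title VSTS_MSFAgile
</code></pre>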
]]></content:encoded>
    </item>
    <item>
      <title>Addressing binding issues with Ivonna 2.0.0 using &amp;lt;dependentAssembly&amp;gt; in web.config</title>
      <link>https://blog.richardfennell.net/posts/addressing-binding-issues-with-with-ivonna-2-0-0-using-dependentassembly-in-web-config/</link>
      <pubDate>Fri, 29 May 2009 15:28:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/addressing-binding-issues-with-with-ivonna-2-0-0-using-dependentassembly-in-web-config/</guid>
      <description>&lt;p&gt;I have been having some binding problems when trying to use &lt;a href=&#34;http://www.sm-art.biz/Ivonna/Download.aspx&#34;&gt;Ivonna 2.0.0&lt;/a&gt; against a version of &lt;a href=&#34;http://www.typemock.com/Buy.php&#34;&gt;Typemock Isolator&lt;/a&gt; other than the 5.3.0 build it was built to run against. This is a known issue: if your versions of Ivonna and Typemock don’t match then you have to use &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/eftw1fys.aspx&#34;&gt;.Net Binding redirection&lt;/a&gt; to get around the problem.&lt;/p&gt;
&lt;p&gt;So to track down the exact problem I used the &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/e74a18c4%28vs.71%29.aspx&#34;&gt;Fusion logger shipped with the .NET SDK (via fuslogvw.exe).&lt;/a&gt; This in itself has been an interesting experience. A few points are worth noting:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been having some binding problems when trying to use <a href="http://www.sm-art.biz/Ivonna/Download.aspx">Ivonna 2.0.0</a> against a version of <a href="http://www.typemock.com/Buy.php">Typemock Isolator</a> other than the 5.3.0 build it was built to run against. This is a known issue: if your versions of Ivonna and Typemock don’t match then you have to use <a href="http://msdn.microsoft.com/en-us/library/eftw1fys.aspx">.Net Binding redirection</a> to get around the problem.</p>
<p>So to track down the exact problem I used the <a href="http://msdn.microsoft.com/en-us/library/e74a18c4%28vs.71%29.aspx">Fusion logger shipped with the .NET SDK (via fuslogvw.exe).</a> This in itself has been an interesting experience. A few points are worth noting:</p>
<ul>
<li>You cannot alter the settings (as to what it logs) from <strong>fuslogvw.exe</strong> unless you are running as administrator (because these are really just registry edits under the <strong>HKLM\SOFTWARE\Microsoft\Fusion</strong> node). However, you can use the viewer to view logs even if not an administrator, as long as the registry entries are correct.</li>
<li>I could only get the Fusion log to work if I was running my ASP.NET application in Visual Studio 2008 running as administrator, if I was a standard user nothing was logged.</li>
<li>You have to remember to press refresh on <strong>fuslogvw.exe</strong> a lot. If you don’t you keep thinking that it is not working when it really is.</li>
</ul>
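<p>As an aside, since the settings <strong>fuslogvw.exe</strong> maintains are just values under that Fusion registry key, an administrator can set them directly; a sketch from an elevated command prompt (value names as I understand them, so treat this as an assumption to verify):</p>
<pre tabindex="0"><code>REM enable Fusion assembly bind logging, including failures
reg add HKLM\SOFTWARE\Microsoft\Fusion /v EnableLog /t REG_DWORD /d 1 /f
reg add HKLM\SOFTWARE\Microsoft\Fusion /v ForceLog /t REG_DWORD /d 1 /f
reg add HKLM\SOFTWARE\Microsoft\Fusion /v LogFailures /t REG_DWORD /d 1 /f
</code></pre>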
<p>Anyway, using the Fusion logger I found I had two problem assemblies, not just the one I had expected. I had guessed I needed to intercept the loading of the main Typemock assembly, but what the Fusion logger showed me was that I also needed to intercept the TypeMock.Integration assembly. I also needed to reference the TypeMock.Integration assembly in my test project and make sure it was copied locally (something I had not needed to do explicitly when using Typemock 5.3.0, where I assume it had been found via the GAC).</p>
<p>Now it is important to remember that if using MSTEST and Ivonna you need to point the build directory for the Test Project to the Web Application under test’s bin directory. This means that the .NET loader will check the <strong>web.config</strong> in the web application for any binding information, not just the <strong>app.config</strong> in the test project as I had first assumed.</p>
<p>So all this means that I needed to add the following to my Web Application’s <strong>web.config</strong> and <strong>app.config</strong>:</p>
<pre tabindex="0"><code>&lt;runtime\&gt;  
   &lt;assemblyBinding xmlns\=&#34;urn:schemas-microsoft-com:asm.v1&#34;\&gt;  
     &lt;dependentAssembly\&gt;  
       &lt;assemblyIdentity name\=&#34;TypeMock.Integration&#34;  
                         publicKeyToken\=&#34;3dae460033b8d8e2&#34;  
                         culture\=&#34;neutral&#34; /&gt;  
       &lt;bindingRedirect oldVersion\=&#34;5.3.0.0&#34;  
                        newVersion\=&#34;5.3.1.0&#34;/&gt;  
     &lt;/dependentAssembly\&gt;  
     &lt;dependentAssembly\&gt;  
       &lt;assemblyIdentity name\=&#34;TypeMock&#34;  
                         publicKeyToken\=&#34;3dae460033b8d8e2&#34;  
                         culture\=&#34;neutral&#34; /&gt;  
       &lt;bindingRedirect oldVersion\=&#34;5.3.0.0&#34;  
                        newVersion\=&#34;5.3.1.0&#34;/&gt;  
     &lt;/dependentAssembly\&gt;  
  
   &lt;/assemblyBinding\&gt;  
 &lt;/runtime\&gt;  
 
</code></pre><p>Once this was done all my tests loaded as expected.</p>
<p><strong>Updated 4th June 2009</strong> – There is now a 2.0.1 release of Ivonna that does support Isolator 5.3.1, so this binding is not required, but the details are worth keeping as in the future there is bound to be another version mismatch.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The setup story for TFS 2010</title>
      <link>https://blog.richardfennell.net/posts/the-setup-story-for-tfs-2010/</link>
      <pubDate>Wed, 27 May 2009 20:09:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-setup-story-for-tfs-2010/</guid>
      <description>&lt;p&gt;I have been looking at the various install and upgrade stories for the TFS 2010 Beta. I have to say they are very nice compared to the older TFS versions. You now have a SharePoint-like model where you install the product then use a separate configuration tool to upgrade or set up the features required. There are plenty of places to verify your settings as you go along, greatly reducing the potential for mistakes.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been looking at the various install and upgrade stories for the TFS 2010 Beta. I have to say they are very nice compared to the older TFS versions. You now have a SharePoint-like model where you install the product then use a separate configuration tool to upgrade or set up the features required. There are plenty of places to verify your settings as you go along, greatly reducing the potential for mistakes.</p>
<p>One side effect of this model is that it is vital to get all your prerequisites in place. The lack of these was the cause of the only upgrade scenario I have tried that failed. This was on a VPC I used for TFS 2008 demos. This VPC used a differencing VHD in the older 2004 format, which has a 16Gb limit, and the disk was virtually full. To upgrade to TFS 2010 I needed to upgrade SQL to 2008, and this in turn needed Visual Studio 2008 patched to SP1, which needed over 6Gb of free space; that was never going to happen on that VHD. So my upgrade failed, but that said this is not a realistic scenario, who has servers with just 16Gb these days!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Every time I have to use Typemock I need to ask: does my code stink?</title>
      <link>https://blog.richardfennell.net/posts/everytime-i-have-to-use-typemock-i-need-to-ask-does-my-code-stinks/</link>
      <pubDate>Sun, 24 May 2009 09:06:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/everytime-i-have-to-use-typemock-i-need-to-ask-does-my-code-stinks/</guid>
      <description>&lt;p&gt;Ok a bit sweeping, but I think there is truth in this: if you have to resort to a mocking framework (such as Typemock, the one I use) I think it is vital to ask ‘why am I using this tool?’ I think there are three possible answers:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I have to mock some black box that is huge and messy; if I don’t mock it, any isolated testing is impossible, e.g. SharePoint&lt;/li&gt;
&lt;li&gt;I have to mock a complex object; I could write it all by hand, but it is quicker to use an auto-mocking framework. Why do loads of typing when a tool can generate it for me? (the same argument as to why using refactoring tools is good: they are faster than me typing and make fewer mistakes)&lt;/li&gt;
&lt;li&gt;My own code is badly designed and the only way to test it is to use a mocking framework to swap out functional units via ‘magic’ at runtime.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If the bit of code I am testing falls into either of the first two categories it is OK, but if it is in the third I know I must seriously consider some refactoring. OK, this is not always possible for technical or budgetary reasons, but I should at least consider it. Actually you could consider category 1 as a special case of category 3: a better testable design may be possible, but it is out of your control.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Ok a bit sweeping, but I think there is truth in this: if you have to resort to a mocking framework (such as Typemock, the one I use) I think it is vital to ask ‘why am I using this tool?’ I think there are three possible answers:</p>
<ol>
<li>I have to mock some black box that is huge and messy; if I don’t mock it, any isolated testing is impossible, e.g. SharePoint</li>
<li>I have to mock a complex object; I could write it all by hand, but it is quicker to use an auto-mocking framework. Why do loads of typing when a tool can generate it for me? (the same argument as to why using refactoring tools is good: they are faster than me typing and make fewer mistakes)</li>
<li>My own code is badly designed and the only way to test it is to use a mocking framework to swap out functional units via ‘magic’ at runtime.</li>
</ol>
<p>If the bit of code I am testing falls into either of the first two categories it is OK, but if it is in the third I know I must seriously consider some refactoring. OK, this is not always possible for technical or budgetary reasons, but I should at least consider it. Actually you could consider category 1 as a special case of category 3: a better testable design may be possible, but it is out of your control.</p>
<p>So given this I looked at the new <a href="http://blog.typemock.com/2009/05/mockingfaking-datetimenow-in-unit-tests.html">Typemock feature with interest, the ability to fake out DateTime.Now.</a> This is something you have not been able to do in the past due to the DateTime class’s deep location in the .NET framework. OK, it is a really cool feature, but that is certainly not a good enough reason to use it. I have to ask: if I need to mock this call out, does my code stink?</p>
<p>Historically I would have defined an interface for a date services and used it to pass in a test or production implementation using dependency injection e.g.</p>
<pre tabindex="0"><code>public class MyApplication   
{      
    public MyApplication(IDateProvider dateProvider)  
    {  
        // so we use  
        DateTime date1 = dateProvider.GetCurrentDate();  
        // as opposed to  
        DateTime date2 = DateTime.Now      
    }  
}
</code></pre><p>So in the new world with the new Typemock feature I have three options:</p>
<ol>
<li>Just call <strong>DateTime.Now</strong> in my code, because now I know I can use Typemock to intercept the call and return the value I want for test purposes</li>
<li>Write my own date provider and use dependency injection to swap in different versions (or if I want to be really flexible use a <a href="http://www.castleproject.org/container/index.html">IoC framework like Castle Windsor</a>)</li>
<li>Write my own date provider class with a static GetDate method, but not use dependency injection; just call the method directly wherever I would have called <strong>DateTime.Now</strong> and use Typemock to intercept calls to this static method in tests (the old way to <a href="http://www.typemock.com/community/viewtopic.php?p=4734">get round the limitation that Typemock cannot mock classes from MSCORELIB</a>)</li>
</ol>
<p>I think this brings me back full circle to my first question: does the fact I use the new feature of Typemock to mock out <strong>DateTime.Now</strong> mean my code stinks? Well, after a bit of thought I think it does. I would always favour putting in some design patterns to aid testing, so in this case some dependency injection would appear the best option. Like all services, it would allow me to centralise all date functions in one place, so a good SOA pattern. With all my date services in one place I can make a sensible choice of how I want to mock them out, manually or via an auto-mocking framework.</p>
<p>So in summary: in mocking, like in so many things in life, just because you can do something is no reason why you should. If you can, it is better to address a code smell with good design as opposed to a clever tool.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Post Developer Day South West thoughts</title>
      <link>https://blog.richardfennell.net/posts/post-developer-day-south-west-thoughts/</link>
      <pubDate>Sun, 24 May 2009 08:47:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-developer-day-south-west-thoughts/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.dddsouthwest.com/&#34;&gt;DDD-SW in Taunton&lt;/a&gt; seemed to go well; a big thank you to &lt;a href=&#34;http://www.dotnetdevnet.com/&#34;&gt;Guy and the rest of the Bristol .NET user group&lt;/a&gt; for all their work getting this event up and running. Also it was nice to see new faces; it is certainly a good idea to get the DDD family of events out to places beyond Reading, spreading the good work of the community across the country.&lt;/p&gt;
&lt;p&gt;Thank you to those who attended my session on Scrum, I hope you all found it useful. You can find a virtually &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2007/DDD6%20Presentation%20-%20An%20Introduction%20to%20Scrum.ppt&#34;&gt;identical set of the slides on the Black Marble web site&lt;/a&gt; and the actual slide deck I used will be up on the DDD-SW site soon.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.dddsouthwest.com/">DDD-SW in Taunton</a> seemed to go well; a big thank you to <a href="http://www.dotnetdevnet.com/">Guy and the rest of the Bristol .NET user group</a> for all their work getting this event up and running. Also it was nice to see new faces; it is certainly a good idea to get the DDD family of events out to places beyond Reading, spreading the good work of the community across the country.</p>
<p>Thank you to those who attended my session on Scrum, I hope you all found it useful. You can find a virtually <a href="http://www.blackmarble.co.uk/ConferencePapers/2007/DDD6%20Presentation%20-%20An%20Introduction%20to%20Scrum.ppt">identical set of the slides on the Black Marble web site</a> and the actual slide deck I used will be up on the DDD-SW site soon.</p>
<p>I actually managed to attend some sessions this time; as usual this just means more work, as I invariably realise I have to spend some time learning some new technologies. This time it was <a href="http://www.blackmarble.co.uk/ConferencePapers/2007/DDD6%20Presentation%20-%20An%20Introduction%20to%20Scrum.ppt">MEF</a> and <a href="http://jquery.com/">jQuery</a>, the latter a technology I have ignored too long. It was also great to see a truly mind-bending session by <a href="http://marcgravell.blogspot.com">Marc Gravell</a> on expression trees; we need to see more of these deep dive sessions at community events. I have never checked to see whether it is that they are not proposed or that they are not voted for. Can it be true the community just wants level 200 general overviews?</p>
<p>Anyway another great day – a pointer to everyone that if you haven’t been to a DDD event you really should.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Developer Day South West is this weekend</title>
      <link>https://blog.richardfennell.net/posts/developer-day-south-west-is-this-weekend/</link>
      <pubDate>Thu, 21 May 2009 09:58:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/developer-day-south-west-is-this-weekend/</guid>
      <description>&lt;p&gt;It is &lt;a href=&#34;http://www.dddsouthwest.com/Home/tabid/36/Default.aspx&#34;&gt;Developer Day South West&lt;/a&gt; this weekend where I will be &lt;a href=&#34;http://www.dddsouthwest.com/Agenda/tabid/55/Default.aspx&#34;&gt;speaking on Scrum&lt;/a&gt;. I may also do a lunch time grok talk on SharePoint and Typemock Isolator as I did at &lt;a href=&#34;http://www.developerdayscotland.com/&#34;&gt;Developer Day Scotland&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I think there are still spaces at this event, so if you can make your way down to Taunton on Saturday I think it will be well worth the trip.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDD South West&#34; loading=&#34;lazy&#34; src=&#34;http://www.dddsouthwest.com/images/DDDSouthWestBadgeSmall.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is <a href="http://www.dddsouthwest.com/Home/tabid/36/Default.aspx">Developer Day South West</a> this weekend where I will be <a href="http://www.dddsouthwest.com/Agenda/tabid/55/Default.aspx">speaking on Scrum</a>. I may also do a lunch time grok talk on SharePoint and Typemock Isolator as I did at <a href="http://www.developerdayscotland.com/">Developer Day Scotland</a>.</p>
<p>I think there are still spaces at this event, so if you can make your way down to Taunton on Saturday I think it will be well worth the trip.</p>
<p><img alt="DDD South West" loading="lazy" src="http://www.dddsouthwest.com/images/DDDSouthWestBadgeSmall.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Timeouts on Bitlocker to go</title>
      <link>https://blog.richardfennell.net/posts/timeouts-on-bitlocker-to-go/</link>
      <pubDate>Thu, 21 May 2009 09:51:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/timeouts-on-bitlocker-to-go/</guid>
      <description>&lt;p&gt;Since moving to Windows 7 I have encrypted all my USB pen drives and external USB disk drives with &lt;a href=&#34;http://www.microsoft.com/windows/enterprise/products/windows-7-bitlocker.aspx&#34;&gt;Bitlocker to go&lt;/a&gt;. This has been working great for me: I have noticed no performance problems and it gives nice peace of mind. The only irritation is that when you plug the encrypted drive into an XP or Vista PC you can’t read it, but this is &lt;a href=&#34;http://news.softpedia.com/news/Windows-7-BitLocker-To-Go-Backwards-Compatible-with-XP-and-Vista-110080.shtml&#34;&gt;meant to be being addressed&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;However, I have seen one issue: there seems to be a timeout; if the Bitlocker to go device is not accessed for a while (well over an hour is my guess, I have not timed it) it relocks itself and the password has to be re-entered. I can’t find any setting anywhere to control this timeout. I suppose the workaround is to set the bitlockered device to always automatically unlock on my PC, but I do like the security of having to enter the key manually.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since moving to Windows 7 I have encrypted all my USB pen drives and external USB disk drives with <a href="http://www.microsoft.com/windows/enterprise/products/windows-7-bitlocker.aspx">Bitlocker to go</a>. This has been working great for me: I have noticed no performance problems and it gives nice peace of mind. The only irritation is that when you plug the encrypted drive into an XP or Vista PC you can’t read it, but this is <a href="http://news.softpedia.com/news/Windows-7-BitLocker-To-Go-Backwards-Compatible-with-XP-and-Vista-110080.shtml">meant to be being addressed</a>.</p>
<p>However, I have seen one issue, this is that there seems to be a timeout if the bitlockered to go device is not accessed for a while (well over an hour is my guess, I have not timed it) it relocks itself and the password has to be re-entered. I can’t find any setting anywhere to control this timeout. I suppose the workaround is to set the bitlockered device to always automatically unlock on my PC, but I do like the security of having to enter the key manually.</p>
<p>The other possibility is that it is not a Bitlocker thing and it that my USB ports are resetting, and in effect reattaching the device. I suppose the effect is the same.</p>
<p>As my external USB pod contains mostly Virtual PC images in effect removing the underlying disk when they are running is a bit of a problem; but as long as you know it might happen you can live with it.</p>
<p><strong>Update 28 May 09</strong> – I now think it is my USB ports power saving mode, same net effect though</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF255048 Error when installing TFS 2010 beta 1</title>
      <link>https://blog.richardfennell.net/posts/tf255048-error-when-installing-tfs-2010-beta-1/</link>
      <pubDate>Thu, 21 May 2009 08:38:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf255048-error-when-installing-tfs-2010-beta-1/</guid>
      <description>&lt;p&gt;When you set up a dual or multi-server TFS installation you need to specify the location of the OLAP Analysis Services instance that will be used for the reporting warehouse. As with much of the TFS installation and configuration process there is a test button to confirm your settings will work; these are always worth pressing. If there is a problem you could get a TF255048 error. As the message text says, this hints that the server cannot be found or that you have no rights to access it, which may well be the case.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you set up a dual or multi-server TFS installation you need to specify the location of the OLAP Analysis Services instance that will be used for the reporting warehouse. As with much of the TFS installation and configuration process there is a test button to confirm your settings will work; these are always worth pressing. If there is a problem you could get a TF255048 error. As the message text says, this hints that the server cannot be found or that you have no rights to access it, which may well be the case.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_79411140.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_680D339D.png" title="image"></a></p>
<p>Well, there is another thing to consider: the firewall on the SQL server. On my default SQL Server 2008 install the firewall was not opened to allow incoming connections to the OLAP service. Once <a href="http://msdn.microsoft.com/en-us/library/ms174937.aspx">TCP port 2383 was opened to incoming traffic</a> the test passed and I could move on to the next stage of the configuration.</p>
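<p>For reference, on a default Windows Server 2008 install the port can be opened from an elevated command prompt on the SQL server. The rule below is only a sketch: it assumes a default (non-named) Analysis Services instance, which listens on TCP 2383, and the rule name is just a label of my choosing:</p>
<pre><code>netsh advfirewall firewall add rule name="SQL Analysis Services" dir=in action=allow protocol=TCP localport=2383</code></pre>
<p>Named instances listen on a dynamically assigned port (brokered by the SQL Server Browser service), so they need a different rule.</p>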
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft.AnalysisServices Assemblies missing when configuring Reporting Services on multiple server TFS 2010 Beta1</title>
      <link>https://blog.richardfennell.net/posts/microsoft-analysisservices-assemblies-missing-when-configuring-reporting-services-on-multiple-server-tfs-2010-beta1/</link>
      <pubDate>Wed, 20 May 2009 15:23:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-analysisservices-assemblies-missing-when-configuring-reporting-services-on-multiple-server-tfs-2010-beta1/</guid>
      <description>&lt;p&gt;TFS 2010 provides far more options for the configuration of your server than the previous versions. You can now easily make use of any existing server resources you have, such as SharePoint farms or Enterprise SQL installations. Today I was looking at one of these ‘less standard’ setups using some of our test lab equipment (hence the somewhat strange mix of 32bit and 64bit hardware) and hit a problem with the TFS beta 1 release configuration tool.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>TFS 2010 provides far more options for the configuration of your server than the previous versions. You can now easily make use of any existing server resources you have, such as SharePoint farms or Enterprise SQL installations. Today I was looking at one of these ‘less standard’ setups using some of our test lab equipment (hence the somewhat strange mix of 32bit and 64bit hardware) and hit a problem with the TFS beta 1 release configuration tool.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_444DD6EE.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3BEA0197.png" title="image"></a></p>
<p>The key point to note here is that in previous versions of TFS, Reporting Services needed to be installed on the AT (though it could still use a reports DB stored on the DT). With 2010 this is no longer the case; now the Reporting Services instance on the DT can be used directly. That said, it must be remembered that the Reporting Services instance must be dedicated to the TFS install, so in most cases it is more sensible to put it on the TFS AT. You probably don’t want to be dedicating the Reporting Services instance on your enterprise SQL server to TFS alone. Also, you probably don’t want to expose your SQL server to web requests by having it host the Reporting Services instance.</p>
<p>But back to the actual problem: when I ran the TFS configuration tool and tried to configure the OLAP source for the Reporting Services I got the error that the <strong>Microsoft.AnalysisServices</strong> assemblies could not be found.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2F7BDE6E.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_744802A2.png" title="image"></a></p>
<p><strong>Note</strong>: if I skipped the setup of Reporting Services, the configuration tool completed without any issue; it is certainly a huge step forward in the ease of installation for TFS. However, beware: if you skip the Reporting Services setup in the initial run of the configuration tool, then in beta 1 you have no way to configure it later.</p>
<p>The answer is to install a single SQL Server component on the AT – the “Client Tools Connectivity” feature. Once this is done the right assemblies are in the GAC and you can proceed.</p>
<p>Remember that in general you will not see this issue, as the feature is installed when Reporting Services is installed on the AT.</p>
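<p>If you have the .NET SDK tools on the AT you can check for the assemblies yourself; this is just an illustrative check, and the exact versions listed will depend on your SQL Server install:</p>
<pre><code>gacutil /l Microsoft.AnalysisServices</code></pre>
<p>If the command reports zero items, the Client Tools Connectivity feature has not yet put the assemblies in the GAC.</p>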
<p><strong>Update 21 May 09:</strong> I have been told that this is definitely a bug; the test button should validate that the correct assemblies are in place. Interestingly, if you use the main Verify function of the configuration tool (which checks all the settings in one go), it does perform the correct check and warns you appropriately. Also there is a mention in the <strong>How to: Install SQL Server 2008</strong> section of the installation documentation that on multi-server installations the Client Tools Connectivity pack is required.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting &amp;quot;failed to start&amp;quot; with a SharePoint workflow</title>
      <link>https://blog.richardfennell.net/posts/getting-failed-to-start-with-a-sharepoint-workflow/</link>
      <pubDate>Wed, 20 May 2009 10:00:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-failed-to-start-with-a-sharepoint-workflow/</guid>
      <description>&lt;p&gt;I have been playing around with some workflow code for a demo I am doing. This has meant creating and deleting a workflow project as I refine the demo. Whilst going through the process of a delete and recreate of the workflow (using the same name for the project/workflow but creating a new project from the VS file menu) I hit the problem that when the new version of the workflow was run I got a “failed to start” error.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been playing around with some workflow code for a demo I am doing. This has meant creating and deleting a workflow project as I refine the demo. Whilst going through the process of a delete and recreate of the workflow (using the same name for the project/workflow but creating a new project from the VS file menu) I hit the problem that when the new version of the workflow was run I got a “failed to start” error.</p>
<p>After checking the SharePoint log in the 12 hive I found that the workflow runtime was trying to load the code-beside assembly using the old assembly name/public key, which obviously it could not find as only the new version was in the GAC. Once I corrected and redeployed the workflow.xml file (which still had the old PublicKeyToken for the CodeBesideAssembly), all was OK.</p>
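<p>For anyone hitting the same problem, the attribute to check is on the Workflow element in the feature’s workflow.xml. All the names and the token below are made up for illustration; the point is that CodeBesideAssembly holds a full assembly name that must match what is actually deployed to the GAC:</p>
<pre><code>&lt;Workflow
    Name="DemoWorkflow"
    CodeBesideClass="DemoNamespace.DemoWorkflow"
    CodeBesideAssembly="DemoAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=9f4da00116c38ec5"&gt;
  ...
&lt;/Workflow&gt;</code></pre>
<p>If recreating the project generated a new signing key, the token in workflow.xml goes stale, which is exactly the mismatch I saw.</p>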
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio Team System 2010 Beta 1 has shipped</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-team-system-2010-beta-1-has-shipped/</link>
      <pubDate>Mon, 18 May 2009 20:52:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-team-system-2010-beta-1-has-shipped/</guid>
      <description>&lt;p&gt;Today the &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/products/2010/default.mspx&#34;&gt;Visual Studio 2010 Team Suite Beta 1&lt;/a&gt; and &lt;a href=&#34;http://www.microsoft.com/visualstudio/en-gb/products/2010/default.mspx&#34;&gt;Visual Studio 2010 Team Foundation Server Beta 1&lt;/a&gt; became available to download for MSDN subscribers and will be available to the general public on Wednesday.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today the <a href="http://www.microsoft.com/visualstudio/en-gb/products/2010/default.mspx">Visual Studio 2010 Team Suite Beta 1</a> and <a href="http://www.microsoft.com/visualstudio/en-gb/products/2010/default.mspx">Visual Studio 2010 Team Foundation Server Beta 1</a> became available to download for MSDN subscribers and will be available to the general public on Wednesday.</p>
]]></content:encoded>
    </item>
    <item>
      <title>London Alt.Net Conference is now full</title>
      <link>https://blog.richardfennell.net/posts/london-alt-net-conference-is-now-full/</link>
      <pubDate>Mon, 18 May 2009 17:12:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/london-alt-net-conference-is-now-full/</guid>
      <description>&lt;p&gt;Wow, the &lt;a href=&#34;http://www.altnetuk.com/2009.en.html&#34;&gt;Alt.Net conference in London&lt;/a&gt; in August certainly filled up fast. On Friday I went into a meeting before registration opened, and by the time I came out the first wave was full. I now see that the second wave of registration is also full.&lt;/p&gt;
&lt;p&gt;Unfortunately I can’t make this one, but I am sure it will be a success. I really like the format that is being tried, a bit for everyone.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Wow, the <a href="http://www.altnetuk.com/2009.en.html">Alt.Net conference in London</a> in August certainly filled up fast. On Friday I went into a meeting before registration opened, and by the time I came out the first wave was full. I now see that the second wave of registration is also full.</p>
<p>Unfortunately I can’t make this one, but I am sure it will be a success. I really like the format that is being tried, a bit for everyone.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New bundle from Typemock</title>
      <link>https://blog.richardfennell.net/posts/new-bundle-from-typemock/</link>
      <pubDate>Mon, 18 May 2009 10:22:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-bundle-from-typemock/</guid>
      <description>&lt;p&gt;I have been blogging for a while now on using Typemock &amp;amp; Ivonna for various forms of testing; well, you can now buy both packages as a bundle, and here is the blog/press viral marketing release…&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.typemock.com/&#34;&gt;&lt;em&gt;Unit Testing&lt;/em&gt;&lt;/a&gt; &lt;em&gt;ASP.NET?&lt;/em&gt; &lt;a href=&#34;http://www.typemock.com/ASP.NET_unit_testing_page.php&#34;&gt;&lt;em&gt;ASP.NET unit testing&lt;/em&gt;&lt;/a&gt; &lt;em&gt;has never been this easy.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Typemock is launching a new product for ASP.NET developers – the &lt;strong&gt;ASP.NET Bundle&lt;/strong&gt; - and for the launch will be giving out &lt;strong&gt;FREE licenses&lt;/strong&gt; to bloggers and their readers.&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been blogging for a while now on using Typemock &amp; Ivonna for various forms of testing; well, you can now buy both packages as a bundle, and here is the blog/press viral marketing release…</p>
<p><a href="http://www.typemock.com/"><em>Unit Testing</em></a> <em>ASP.NET?</em> <a href="http://www.typemock.com/ASP.NET_unit_testing_page.php"><em>ASP.NET unit testing</em></a> <em>has never been this easy.</em></p>
<p><em>Typemock is launching a new product for ASP.NET developers – the <strong>ASP.NET Bundle</strong> - and for the launch will be giving out <strong>FREE licenses</strong> to bloggers and their readers.</em></p>
<p><em>The ASP.NET Bundle is the ultimate ASP.NET unit testing solution, and offers both</em> <a href="http://www.typemock.com/"><em>Typemock Isolator</em></a><em>, a</em> <a href="http://www.typemock.com/"><em>unit test</em></a> <em>tool and</em> <a href="http://sm-art.biz/Ivonna.aspx"><em>Ivonna</em></a><em>, the Isolator add-on for</em> <a href="http://sm-art.biz/Ivonna.aspx"><em>ASP.NET unit testing</em></a><em>, for a bargain price.</em></p>
<p><em>Typemock Isolator is a leading</em> <a href="http://www.typemock.com/"><em>.NET unit testing</em></a> <em>tool (C# and VB.NET) for many ‘hard to test’ technologies such as</em> <a href="http://typemock.com/sharepointpage.php"><em>SharePoint</em></a><em>,</em> <a href="http://www.typemock.com/ASP.NET_unit_testing_page.php"><em>ASP.NET</em></a><em>,</em> <a href="http://www.typemock.com/ASP.NET_unit_testing_page.php"><em>MVC</em></a><em>,</em> <a href="http://www.typemock.com/wcfpage.php"><em>WCF</em></a><em>, WPF,</em> <a href="http://www.typemock.com/Silverlight_unit_testing_page.php"><em>Silverlight</em></a> <em>and more. Note that for</em> <a href="http://www.typemock.com/Silverlight_unit_testing_page.php"><em>unit testing Silverlight</em></a> <em>there is an open source Isolator add-on called</em> <a href="http://www.typemock.com/Silverlight_unit_testing_page.php"><em>SilverUnit</em></a><em>.</em></p>
<p><em>The first 60 bloggers who will blog this text in their blog and</em> <a href="mailto:asp@typemock.com"><em>tell us about it</em></a><em>, will get a Free Isolator ASP.NET Bundle license (Typemock Isolator + Ivonna). If you post this in an ASP.NET <strong>dedicated</strong> blog, you&rsquo;ll get a license automatically (even if more than 60 submit) during the first week of this announcement.<br>
Also 8 bloggers will get an <strong>additional 2 licenses</strong> (each) to give away to their readers / friends.</em></p>
<p><em>Go ahead, click the following link for</em> <a href="http://blog.typemock.com/2009/01/get-free-isolator-licnese-for-helping.html?utm_source=vb_blog&amp;utm_medium=typeblog&amp;utm_campaign=isolatorvbblog"><em>more information</em></a> <em>on how to get your free license.</em></p>
<p>Pass the word and get a chance at your free licenses…</p>
]]></content:encoded>
    </item>
    <item>
      <title>Last nights Agile Yorkshire meeting on Exploratory Testing</title>
      <link>https://blog.richardfennell.net/posts/last-nights-agile-yorkshire-meeting-on-exploratory-testing/</link>
      <pubDate>Thu, 14 May 2009 22:18:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/last-nights-agile-yorkshire-meeting-on-exploratory-testing/</guid>
      <description>&lt;p&gt;I was unable to attend the last Agile Yorkshire meeting; a shame, as it sounds like it was a good one. The slides are up and give a really nice overview of the theory behind exploratory testing, worth a look at &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting/ExploratoryTesting.ppt&#34;&gt;http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting/ExploratoryTesting.ppt&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was unable to attend the last Agile Yorkshire meeting; a shame, as it sounds like it was a good one. The slides are up and give a really nice overview of the theory behind exploratory testing, worth a look at <a href="http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting/ExploratoryTesting.ppt">http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting/ExploratoryTesting.ppt</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Misleading error message adding a user on Windows 2008</title>
      <link>https://blog.richardfennell.net/posts/misleading-error-message-adding-a-user-on-windows-2008/</link>
      <pubDate>Wed, 13 May 2009 11:11:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/misleading-error-message-adding-a-user-on-windows-2008/</guid>
      <description>&lt;p&gt;Today I was setting up a new development VPC for SharePoint. When I tried to add a new user via the AD tools wizard I got the error&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&amp;ldquo;Windows cannot set the password for [new user], network path was not found&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Forums suggest the issue is DNS resolution related, but for me it turned out to be the simple fact that the password I was using did not meet the domain’s minimum requirements.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today I was setting up a new development VPC for SharePoint. When I tried to add a new user via the AD tools wizard I got the error</p>
<p><em>&ldquo;Windows cannot set the password for [new user], network path was not found&rdquo;</em></p>
<p>Forums suggest the issue is DNS resolution related, but for me it turned out to be the simple fact that the password I was using did not meet the domain’s minimum requirements.</p>
<p>Not the most helpful message in the world.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Alt.Net UK Conference 2009</title>
      <link>https://blog.richardfennell.net/posts/alt-net-uk-conference-2009/</link>
      <pubDate>Tue, 12 May 2009 21:15:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alt-net-uk-conference-2009/</guid>
      <description>&lt;p&gt;Good news, there will be an Alt.Net UK Conference in London over the first weekend of August. Bit of a different format this time:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.altnetuk.com/2009-07-31&#34;&gt;Alt.Net Beers&lt;/a&gt; – A social opportunity to discuss Alt.Net over a few beers.&lt;br&gt;
Friday, July 31, 2009 from 6:00 PM until 9:00 PM&lt;br&gt;
&lt;a href=&#34;http://maps.google.co.uk/maps?q=82&amp;#43;Dean&amp;#43;Street%2c&amp;#43;London%2c&amp;#43;W1D&amp;#43;3HA%2c&amp;#43;United&amp;#43;Kingdom&#34;&gt;82 Dean Street, London, W1D 3HA, United Kingdom&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.altnetuk.com/2009-08-01&#34;&gt;Open Space Coding Day&lt;/a&gt; – A day of hands-on coding where the attendees choose the subjects.&lt;br&gt;
Saturday, August 01, 2009 from 9:00 AM until 5:00 PM&lt;br&gt;
&lt;a href=&#34;http://maps.google.co.uk/maps?q=36&amp;#43;Southwark&amp;#43;Bridge&amp;#43;Road%2c&amp;#43;London%2c&amp;#43;SE1&amp;#43;9EU%2c&amp;#43;United&amp;#43;Kingdom&#34;&gt;36 Southwark Bridge Road, London, SE1 9EU, United Kingdom&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Good news, there will be an Alt.Net UK Conference in London over the first weekend of August. Bit of a different format this time:</p>
<p><a href="http://www.altnetuk.com/2009-07-31">Alt.Net Beers</a> – A social opportunity to discuss Alt.Net over a few beers.<br>
Friday, July 31, 2009 from 6:00 PM until 9:00 PM<br>
<a href="http://maps.google.co.uk/maps?q=82&#43;Dean&#43;Street%2c&#43;London%2c&#43;W1D&#43;3HA%2c&#43;United&#43;Kingdom">82 Dean Street, London, W1D 3HA, United Kingdom</a></p>
<p><a href="http://www.altnetuk.com/2009-08-01">Open Space Coding Day</a> – A day of hands-on coding where the attendees choose the subjects.<br>
Saturday, August 01, 2009 from 9:00 AM until 5:00 PM<br>
<a href="http://maps.google.co.uk/maps?q=36&#43;Southwark&#43;Bridge&#43;Road%2c&#43;London%2c&#43;SE1&#43;9EU%2c&#43;United&#43;Kingdom">36 Southwark Bridge Road, London, SE1 9EU, United Kingdom</a></p>
<p><a href="http://www.altnetuk.com/2009-08-02">Alt.Net UK Conference</a> – The climax of the conference weekend! Share and learn in an Open Space environment.<br>
Sunday, August 02, 2009 from 9:00 AM until 5:00 PM<br>
<a href="http://maps.google.co.uk/maps?q=36&#43;Southwark&#43;Bridge&#43;Road%2c&#43;London%2c&#43;SE1&#43;9EU%2c&#43;United&#43;Kingdom">36 Southwark Bridge Road, London, SE1 9EU, United Kingdom</a></p>
<p>For more details and registration (opening on Thursday the 14th of May) see <a href="http://www.altnetuk.com" title="http://www.altnetuk.com">http://www.altnetuk.com</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>A day at the Architect Insight Conference</title>
      <link>https://blog.richardfennell.net/posts/a-day-at-the-architect-insight-conference/</link>
      <pubDate>Sat, 09 May 2009 16:00:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-day-at-the-architect-insight-conference/</guid>
      <description>&lt;p&gt;I was at the &lt;a href=&#34;http://msdn.microsoft.com/en-gb/architecture/dd135209.aspx&#34;&gt;Architect Insight Conference&lt;/a&gt; yesterday, so the big question is: do I now better understand the role of the architect in the development process? I have to say no. Don’t get me wrong, the event was interesting; I especially enjoyed the interactive group discussion sessions, one of which I chaired, if that is the right term. As I have said about other conferences, I tend to find I get more from the discussions with other delegates than from the more traditional presentation sessions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was at the <a href="http://msdn.microsoft.com/en-gb/architecture/dd135209.aspx">Architect Insight Conference</a> yesterday, so the big question is: do I now better understand the role of the architect in the development process? I have to say no. Don’t get me wrong, the event was interesting; I especially enjoyed the interactive group discussion sessions, one of which I chaired, if that is the right term. As I have said about other conferences, I tend to find I get more from the discussions with other delegates than from the more traditional presentation sessions.</p>
<p>For me the role of the architect is very fluid. There are many different ways to run a project and a company. Some, usually those with more formal structures, define a role for the architect; for others it is actually an emergent virtual role that the team as a whole performs, usually as part of an agile planning process. There is no single silver-bullet solution for all project types; recognising this is probably the big insight of the conference.</p>
<p>Given this, why does it seem that people aspire to being an architect? What do they think the role entails that makes it appeal so much?</p>
<p>A very noticeable comment in our interactive session was that recent computer science graduates did not seem to have done much programming as part of their courses. They all seemed to be focused on the business/analysis aspects of the industry. Is this driving people to the perceived glamour/rock star role of architect? More than one delegate went as far as to say they were now looking at A Level students to fill junior developer roles. Graduates were either not interested or lacked the skills companies would expect after completing a degree course. It was easier to train up suitable 18 year olds. In our industry a keen enquiring mind is more important than a degree, something that seems to be beaten out of many people at university.</p>
<p>This harks back to my formative years. I was a thin-sandwich course student, mixing 6 months at university followed by 6 months in industry (which I hasten to add I would not recommend; better a couple of years of study and then a year out in industry). Many people I worked with were not student/graduate engineers but <a href="http://en.wikipedia.org/wiki/Higher_National_Diploma">HND students</a>, in my opinion an educational route sadly underused given the current government target of 50% of people going to university. HNDs aimed to turn out good technicians, people who knew the job of making and testing the product, but without the theoretical grounding a graduate would have. Very much a <a href="http://www.mcbreen.ab.ca/SoftwareCraftsmanship/">craftsmanship</a> point of view, where staff are trained up within the team, not arriving from university as the finished article.</p>
<p>So the key takeaway from the conference for me? Software development is a people/communication process. It is key to get everyone involved in all stages of the process. Whatever else an architect is, they should not be a person in an ivory tower lobbing out huge specification tomes to the minions below.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Another speaking engagement – Vbug Newcastle on SharePoint Testing</title>
      <link>https://blog.richardfennell.net/posts/another-speaking-engagement-vbug-newcastle-on-sharepoint-testing/</link>
      <pubDate>Sat, 09 May 2009 15:13:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/another-speaking-engagement-vbug-newcastle-on-sharepoint-testing/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Update 15 May&lt;/strong&gt; – Date changed due to unexpected overseas trip&lt;/p&gt;
&lt;p&gt;I will be doing a session on Testing SharePoint using Typemock Isolator on the &lt;del&gt;3rd June&lt;/del&gt; 17th June in Newcastle. &lt;a href=&#34;http://www.vbug.co.uk/events/default.aspx&#34;&gt;Check the Vbug site&lt;/a&gt; for more details.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Update 15 May</strong> – Date changed due to unexpected overseas trip</p>
<p>I will be doing a session on Testing SharePoint using Typemock Isolator on the <del>3rd June</del> 17th June in Newcastle. <a href="http://www.vbug.co.uk/events/default.aspx">Check the Vbug site</a> for more details.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Virtual PC on Windows 7</title>
      <link>https://blog.richardfennell.net/posts/virtual-pc-on-windows-7/</link>
      <pubDate>Sat, 09 May 2009 14:58:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/virtual-pc-on-windows-7/</guid>
      <description>&lt;p&gt;You can &lt;a href=&#34;http://blogs.msdn.com/virtual_pc_guy/archive/2009/01/13/windows-7-on-virtual-pc-on-windows-7.aspx&#34;&gt;run Virtual PC 2007 on Windows 7&lt;/a&gt;, but Windows 7 does include a new &lt;a href=&#34;http://blogs.msdn.com/virtual_pc_guy/archive/2009/05/06/windows-7-rc-windows-virtual-pc-beta-now-available.aspx&#34;&gt;version of Virtual PC as part of the operating system&lt;/a&gt;, which is good.&lt;/p&gt;
&lt;p&gt;The problem I have, as will many others, is that though my 64-bit Acer 8210’s Intel processor has hardware virtualization support, Acer for some bizarre reason chose to disable it in the BIOS; though it was enabled in the 32-bit 8200 series, and is enabled in the later Travelmate equivalents. Acer are not alone in this choice. This means that many people with fairly recent PCs will not be able to run the newer version of Virtual PC.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can <a href="http://blogs.msdn.com/virtual_pc_guy/archive/2009/01/13/windows-7-on-virtual-pc-on-windows-7.aspx">run Virtual PC 2007 on Windows 7</a>, but Windows 7 does include a new <a href="http://blogs.msdn.com/virtual_pc_guy/archive/2009/05/06/windows-7-rc-windows-virtual-pc-beta-now-available.aspx">version of Virtual PC as part of the operating system</a>, which is good.</p>
<p>The problem I have, as will many others, is that though my 64-bit Acer 8210’s Intel processor has hardware virtualization support, Acer for some bizarre reason chose to disable it in the BIOS; though it was enabled in the 32-bit 8200 series, and is enabled in the later Travelmate equivalents. Acer are not alone in this choice. This means that many people with fairly recent PCs will not be able to run the newer version of Virtual PC.</p>
<p>If at all possible I think Microsoft needs to provide support for host PCs with no hardware virtualization support, or that lack the option to enable it in the BIOS, as does my Acer. However I wonder: as hardware virtualisation is a pre-requisite for Hyper-V, does this new version of Virtual PC share technology with Hyper-V, thus giving it the same hardware requirements?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Next weeks Agile Yorkshire meeting</title>
      <link>https://blog.richardfennell.net/posts/next-weeks-agile-yorkshire-meeting/</link>
      <pubDate>Wed, 06 May 2009 21:23:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-weeks-agile-yorkshire-meeting/</guid>
      <description>&lt;p&gt;Don’t forget the next Agile Yorkshire meeting on the 13th of May about &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting&#34;&gt;Exploratory testing&lt;/a&gt;. Usual time, usual place, usual free pint for everyone thanks to our sponsors.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Don’t forget the next Agile Yorkshire meeting on the 13th of May about <a href="http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting">Exploratory testing</a>. Usual time, usual place, usual free pint for everyone thanks to our sponsors.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updating Windows 7 Beta to RC</title>
      <link>https://blog.richardfennell.net/posts/updating-windows-7-beta-to-rc/</link>
      <pubDate>Wed, 06 May 2009 15:24:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updating-windows-7-beta-to-rc/</guid>
      <description>&lt;p&gt;The upgrade took about 2 hours following the ‘&lt;a href=&#34;http://windows7news.com/2009/04/09/windows-7-beta-to-rc-upgrade-instructions/comment-page-59/&#34;&gt;enable in-place upgrade&lt;/a&gt;’ notes. There were long stretches with no progress bar movement &amp;amp; percentages not changing whilst the disk light flashed, but we got there in the end.&lt;/p&gt;
&lt;p&gt;Thus far all seems OK, so in summary: a bit slow but smooth.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The upgrade took about 2 hours following the ‘<a href="http://windows7news.com/2009/04/09/windows-7-beta-to-rc-upgrade-instructions/comment-page-59/">enable in-place upgrade</a>’ notes. There were long stretches with no progress bar movement &amp; percentages not changing whilst the disk light flashed, but we got there in the end.</p>
<p>Thus far all seems OK, so in summary: a bit slow but smooth.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My grok talk on Sharepoint testing with Typemock Isolator and Ivonna at DD Scotland</title>
      <link>https://blog.richardfennell.net/posts/my-grok-talk-on-sharepoint-testing-with-typemock-isolator-and-ivonna-at-dd-scotland/</link>
      <pubDate>Sun, 03 May 2009 20:50:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-grok-talk-on-sharepoint-testing-with-typemock-isolator-and-ivonna-at-dd-scotland/</guid>
      <description>&lt;p&gt;I had an enjoyable day at Developer Day Scotland in Glasgow yesterday; a big thank you to the organisers and speakers.&lt;/p&gt;
&lt;p&gt;I did a short grok talk on ‘Testing Sharepoint using Typemock Isolator and Ivonna’, and a few people asked me for more details. Well, the session was based on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/12/04/developer-testing-of-sharepoint-webparts-using-typemock-isolator-and-ivonna.aspx&#34;&gt;a post&lt;/a&gt; I did a while ago. I have updated that post to tidy up a couple of issues I found whilst preparing the session. If you need more details of the potential pitfalls in using these tools, I suggest you also look at the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/05/01/testing-access-attributes-on-the-microsoft-mvc-framework.aspx&#34;&gt;MVC post&lt;/a&gt; I did a few days ago, as this details the setup you need to get it going.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had an enjoyable day at Developer Day Scotland in Glasgow yesterday; a big thank you to the organisers and speakers.</p>
<p>I did a short grok talk on ‘Testing Sharepoint using Typemock Isolator and Ivonna’, and a few people asked me for more details. Well, the session was based on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/12/04/developer-testing-of-sharepoint-webparts-using-typemock-isolator-and-ivonna.aspx">a post</a> I did a while ago. I have updated that post to tidy up a couple of issues I found whilst preparing the session. If you need more details of the potential pitfalls in using these tools, I suggest you also look at the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/05/01/testing-access-attributes-on-the-microsoft-mvc-framework.aspx">MVC post</a> I did a few days ago, as this details the setup you need to get it going.</p>
<p>I will also be doing a longer session on the same subject at some user groups later in the summer.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Testing access attributes on the Microsoft MVC framework</title>
      <link>https://blog.richardfennell.net/posts/testing-access-attributes-on-the-microsoft-mvc-framework/</link>
      <pubDate>Fri, 01 May 2009 08:56:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/testing-access-attributes-on-the-microsoft-mvc-framework/</guid>
      <description>&lt;p&gt;The MVC framework provides an excellent way to create a testable web site. In fact when you create a new MVC project you are given the option to create an associated test project that contains MSTEST unit tests for all the sample methods in the MVC Controller class; which you can add to as you go along.&lt;/p&gt;
&lt;p&gt;Recently, whilst working on an MVC project, I noticed that the one area this model does not let you test is access control. MVC uses attributes on Controller methods to limit who can access what, e.g.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The MVC framework provides an excellent way to create a testable web site. In fact when you create a new MVC project you are given the option to create an associated test project that contains MSTEST unit tests for all the sample methods in the MVC Controller class; which you can add to as you go along.</p>
<p>Recently, whilst working on an MVC project, I noticed that the one area this model does not let you test is access control. MVC uses attributes on Controller methods to limit who can access what, e.g.:</p>
<pre tabindex="0"><code>[Authorize]  
public ActionResult ChangePassword()  
{  
    ViewData[&#34;PasswordLength&#34;] = MembershipService.MinPasswordLength;  
  
    return View();  
}
</code></pre><p>These attributes are used by the MVC routing engine to decide if a method can be called or not. The problem is that these attributes are not honoured by the unit tests, as they call the controller methods directly, not via the routing engine.</p>
<p>This raised a concern for me: if I am setting access control via attributes, how can I make sure I have set the right attribute on the right methods? This is important for regression testing. This issue has also been <a href="http://www.sm-art.biz/ForumThreadView.aspx?thread=3&amp;mid=5&amp;pageid=7&amp;ItemID=1">discussed on StackOverflow</a>. An alternative is to not use this attribute security model, but to make the security checks within the controller methods programmatically. For some this might be the correct solution, but it did not seem right to me. If you are going to use a framework, try to use it as it was intended.</p>
<p>I therefore needed a way to honour the attributes whilst testing. One option would be to write code on the unit tests to check the attributes, but this was more reflection than I wanted to do at this time. So I thought of using <a href="http://www.sm-art.biz/Ivonna.aspx">Ivonna</a> the Typemock add-in. This allows you to load a web page and process it via a mocked web delivery framework.</p>
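<p>To give a feel for that reflection option, here is a minimal, self-contained sketch. Note the attribute and controller below are cut-down stand-ins I have defined inline for illustration, not the real MVC framework types:</p>
<pre tabindex="0"><code>using System;
using System.Reflection;

// Stand-in for the MVC Authorize attribute (illustration only)
[AttributeUsage(AttributeTargets.Method)]
class AuthorizeAttribute : Attribute { }

// Cut-down controller carrying the attribute we want to assert on
class AccountController
{
    [Authorize]
    public void ChangePassword() { }
}

static class AttributeCheck
{
    static void Main()
    {
        // Reflect over the method and warn if the security attribute
        // has been forgotten or refactored away
        MethodInfo method = typeof(AccountController).GetMethod(&#34;ChangePassword&#34;);
        bool secured = method.GetCustomAttributes(typeof(AuthorizeAttribute), true).Length &gt; 0;
        Console.WriteLine(secured
            ? &#34;ChangePassword is [Authorize] protected&#34;
            : &#34;WARNING: ChangePassword is not secured&#34;);
    }
}
</code></pre><p>Workable, but as you can see even this simple check means a fair amount of reflection plumbing per method, which is why I looked elsewhere.</p>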
<p>Now this plan seemed simple, but as soon as I started I found it was more complex than expected. I hit some problems, which I will detail as I go along. I must say thanks to Artem Smirnov, who wrote Ivonna, for all his help in sorting out the issues I was having, both in my usage/understanding and in fixing bugs. I could not have written this post without his help.</p>
<p><strong>Preparation- My Assumptions</strong></p>
<p>So for this post let’s assume the following:</p>
<ul>
<li>You create a new MVC project so you have the basic sample MVC web site.</li>
<li>You allow Visual Studio to also create a new Test Project for the MVC project</li>
<li>You have Typemock Isolator 5.3.0</li>
<li>You have Ivonna 1.2.7, which includes Artem’s “experimental support for MVC”. (1.2.6 is OK for all bar the form submission example, see comments below)</li>
</ul>
<p><strong>GOTT’A 1:</strong> If you are writing tests using <a href="http://www.sm-art.biz/ForumThreadView.aspx?thread=3&amp;mid=5&amp;pageid=7&amp;ItemID=1">MSTEST with Ivonna</a> you have to set the Test Project output directory to the bin directory of the MVC project (via the Test Project’s properties, Build tab), e.g. <em>..\MvcApplication1\bin</em>. This is needed so that Ivonna can find the page classes to load.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2297B04C.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_027CA38F.png" title="image"></a></p>
<p><strong>Submitting a Form</strong></p>
<p>The basic way MVC works is that forms get POSTed to the controller, so this was where I started. I think the comments in the code explain what is being done:</p>
<pre tabindex="0"><code>[TestMethod, RunOnWeb(true)]  
public void LogOn_FormContainingValidLoggedDetails_RedirectedToChangePasswordView()  
{  
  
    // Arrange  
    // Fake out the membership service using Typemock, allow call through to the original  
    // class so that we don&#39;t need to fake out all the methods that will be called  
    var fakeMembershipService = Isolate.Fake.Instance&lt;AccountMembershipService&gt;(Members.CallOriginal);  
    // set that the Validate method will return true with the correct UID and password  
    Isolate.WhenCalled(() =&gt; fakeMembershipService.ValidateUser(&#34;testid&#34;, &#34;goodpass&#34;)).WillReturn(true);  
  
    // Intercept the next call to the AccountController and slip in the faked service  
    Isolate.Swap.NextInstance&lt;AccountMembershipService&gt;().With(fakeMembershipService);  
  
    // Create the Ivonna test session  
    var session = new TestSession();  
  
    // Create the POST request, setting the AutoRedirect flag so that it does not  
    // actually do the redirect at the end of processing. We just check where it would   
    // redirect if we let it, this avoids an extra round trip  
    WebRequest request = new WebRequest(@&#34;/Account/LogOn&#34;, &#34;POST&#34;, null, null) { AutoRedirect = false };  
      
    // Fill in the form values with all the fields that would be sent on the Logon view submission  
    request.FormValues.Add(&#34;username&#34;, &#34;testid&#34;);  
    request.FormValues.Add(&#34;password&#34;, &#34;goodPass&#34;);   
    request.FormValues.Add(&#34;rememberMe&#34;, false.ToString());  
    request.FormValues.Add(&#34;returnUrl&#34;, @&#34;/Account/ChangePassword&#34;);  
  
    // Act  
    // Process the request  
    WebResponse response = session.ProcessRequest(request);  
  
    // Assert  
    // Check that we have been redirected to the correct page  
    Assert.AreEqual(@&#34;/Account/ChangePassword&#34;, response.RedirectLocation);  
}
</code></pre><p><strong>GOTT’A 2:</strong> If you are using the current shipping version of Ivonna (1.2.6 at the time of writing) this test will fail; there is a problem with the form handling code, so you need Artem’s “experimental support for MVC” release, which addresses it (Updated 23 May 2009 - now shipped in 1.2.7). However, as you will see from my comments below, this problem might not be as critical as I first thought.</p>
<p>This test is all well and good, but is it useful? All it proves is that Microsoft wrote their logon code correctly. They provide unit tests for this in the MVC source if you are interested. In my opinion, if you are using a framework like this you need to take it on trust and assume its core functions are tested prior to publication (OK, there will be bugs, but as a general point I think this holds true).</p>
<p>So I would say it is good that you can do this test, but in practice I don’t think I would bother. If I want to test the functionality of methods in my controller class I should just use standard unit tests, as in the MVC samples, and test the functionality separately from the security.</p>
<p><strong>Checking for page differences between an anonymous user and an authenticated one</strong></p>
<p>What I do want to test is that a page renders correctly depending on whether I am logged in or not. The following two tests show how to do this with the home page of the default MVC sample.</p>
<p>I would draw your attention to how ‘clean’ the test is. Ivonna (and hence Typemock) is doing all the heavy lifting behind the scenes.</p>
<pre tabindex="0"><code>[TestMethod, RunOnWeb]  
   public void Home_IsNotLoggedOn_SeeLogonButton()  
   {  
       // Arrange  
       // create the Ivonna test session  
       var session = new TestSession();  
       // create the request for the page we want  
       WebRequest request = new WebRequest(@&#34;/&#34;);  
       // set no user   
  
       // Act  
       WebResponse response = session.ProcessRequest(request);  
  
       // Assert  
       Assert.AreEqual(@&#34;/&#34;, response.Url);  
       // we can check some html items on the form  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;&lt;h2&gt;Welcome to ASP.NET MVC!&lt;/h2&gt;&#34;));  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;[ &lt;a href=&#34;https://blog.richardfennell.net/Account/LogOn&#34;&gt;Log On&lt;/a&gt; ]&#34;));  
   }
</code></pre><pre tabindex="0"><code>  
        [TestMethod, RunOnWeb]  
        public void Home_IsLoggedOn_SeeLogOffButton()  
        {  
            // Arrange  
            // create the Ivonna test session  
            var session = new TestSession();  
            // create the request for the page we want  
            WebRequest request = new WebRequest(@&#34;/&#34;);  
            // Pass in a user and the frame does the rest  
            request.User = new System.Security.Principal.GenericPrincipal(new System.Security.Principal.GenericIdentity(&#34;testid&#34;), null);  
  
            // Act  
            WebResponse response = session.ProcessRequest(request);  
  
            // Assert  
            Assert.AreEqual(@&#34;/&#34;, response.Url);  
            // we can check some html items on the form  
            Assert.IsTrue(response.BodyAsString.Contains(&#34;&lt;h2&gt;Welcome to ASP.NET MVC!&lt;/h2&gt;&#34;));  
            Assert.IsTrue(response.BodyAsString.Contains(&#34;Welcome &lt;b&gt;testid&lt;/b&gt;!&#34;));  
            Assert.IsTrue(response.BodyAsString.Contains(&#34;[ &lt;a href=&#34;https://blog.richardfennell.net/Account/LogOff&#34;&gt;Log Off&lt;/a&gt; ]&#34;));  
        }
</code></pre><p><strong>GOTT’A 3</strong>: When using classic ASP.NET, Ivonna treats the returned page as an object and you have a collection of extension methods to help navigate the page, checking control values etc. As MVC just returns HTML to the browser, at this time you have to check for values by string matching on <strong>response.BodyAsString</strong> (or you could use XPath or a regular expression).</p>
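<p>As a small sketch of the regular expression option (the HTML fragment here is illustrative, not a captured response):</p>
<pre tabindex="0"><code>using System;
using System.Text.RegularExpressions;

static class BodyMatching
{
    static void Main()
    {
        // Illustrative fragment of what the sample site renders when logged on
        string bodyAsString = &#34;Welcome &lt;b&gt;testid&lt;/b&gt;! [ Log Off ]&#34;;

        // A regular expression tolerates whitespace differences that
        // would break an exact Contains() match
        bool welcomed = Regex.IsMatch(bodyAsString, @&#34;Welcome\s*&lt;b&gt;\s*testid\s*&lt;/b&gt;!&#34;);
        Console.WriteLine(welcomed ? &#34;Welcome banner found&#34; : &#34;Welcome banner missing&#34;);
    }
}
</code></pre><p>The <code>\s*</code> patterns make the check a little less brittle than an exact string match if the view’s whitespace changes.</p>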
<p><strong>TIP</strong>: As I am sure you will be looking at the HTML from the <strong>response.BodyAsString</strong> property in the debugger at some point, the HTML visualizer in the Visual Studio Auto/Local window is a great help. Select the HTML visualizer (as opposed to the default string one) using the drop-down selector in the window or the tooltip debug prompt, and you can see the page as if in a browser for a quick manual visual check.</p>
<p><strong>Checking who can see a page</strong></p>
<p>Where I started with this project was wanting to test page access, i.e. can user A get to page B? We are now in a position to achieve this. The following two tests check that an authenticated user can reach a secured page, and that a non-authenticated one is redirected to the logon page. Again, note the clean, easy to read syntax.</p>
<pre tabindex="0"><code>[TestMethod, RunOnWeb]  
       public void ShowChangePassword_NotAuthenticated_RedirectToLogonView()  
       {  
           // Arrange  
           var session = new TestSession();  
           var request = new WebRequest(@&#34;/Account/ChangePassword&#34;);  
           // we set no value for the request.User property  
  
           // Act  
           var response = session.ProcessRequest(request);  
  
           // Assert  
           // redirected to the logon page  
           Assert.AreEqual(@&#34;Account/LogOn?ReturnUrl=%2fAccount%2fChangePassword&#34;, response.Url);  
       }  
</code></pre><pre tabindex="0"><code>[TestMethod, RunOnWeb]  
   public void ShowChangePassword_Authenticated_ShowChangePasswordView()  
   {  
       // Arrange  
       var session = new TestSession();  
       var request = new WebRequest(@&#34;/Account/ChangePassword&#34;);  
       // just pass in a user, using the fact that Ivonna has a built-in authentication faking  
       request.User = new System.Security.Principal.GenericPrincipal(new System.Security.Principal.GenericIdentity(&#34;testid&#34;), null);  
  
       // Act  
       var response = session.ProcessRequest(request);  
  
       // Assert  
       Assert.AreEqual(@&#34;/Account/ChangePassword&#34;, response.Url);  
       // we can also check the page content, but probably don&#39;t need to in this case  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;Use the form below to change your password.&#34;));  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;Welcome &lt;b&gt;testid&lt;/b&gt;!&#34;));  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;[ &lt;a href=&#34;https://blog.richardfennell.net/Account/LogOff&#34;&gt;Log Off&lt;/a&gt; ]&#34;));  
  
   }
</code></pre><p>The above tests assume that the ChangePassword page is just protected with a simple <strong>[Authorize]</strong> attribute, so a user is either authenticated or not. However, it is easy to modify the test so that it handles roles. If the same page were protected with the attribute <strong>[Authorize(Roles=&quot;Staff&quot;)]</strong>, the test simply becomes:</p>
<pre tabindex="0"><code>[TestMethod, RunOnWeb]  
   public void ShowStaffOnlyChangePassword_Authenticated_ShowChangePasswordView()  
   {  
       // Arrange  
       var session = new TestSession();  
       var request = new WebRequest(@&#34;/Account/ChangePassword&#34;);  
       // just pass in a user, using the fact that Ivonna has a built-in authentication faking  
       request.User = new System.Security.Principal.GenericPrincipal(new System.Security.Principal.GenericIdentity(&#34;testid&#34;), new string[] {&#34;Staff&#34;});  
         
       // Act  
       var response = session.ProcessRequest(request);  
  
       // Assert  
       Assert.AreEqual(@&#34;/Account/ChangePassword&#34;, response.Url);  
       // we can also check the page content, but probably don&#39;t need to in this case  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;Use the form below to change your password.&#34;));  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;Welcome &lt;b&gt;testid&lt;/b&gt;!&#34;));  
       Assert.IsTrue(response.BodyAsString.Contains(&#34;[ &lt;a href=&#34;https://blog.richardfennell.net/Account/LogOff&#34;&gt;Log Off&lt;/a&gt; ]&#34;));  
  
   }  
</code></pre><p><strong>Should I use this way to test MVC?</strong></p>
<p>If you are worried that developers are not applying (or are refactoring away) the security attributes on your MVC controllers, this technique for testing is well worth a look. It provides a way of writing simple, readable tests for the MVC security model.</p>
<p>It is fair to say that these tests are not fast; the basic setup of a single test run takes about 30 seconds (though subsequent tests in a batch are far faster), so you are not going to run them all the time in a TDD style. I think you should consider them integration tests and run them as part of your continuous integration process. I think it is a more robust means of testing security than recorder-based Web Tests. All thanks to Artem for producing such a useful Typemock add-in.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My NxtGen tour in August</title>
      <link>https://blog.richardfennell.net/posts/my-nxtgen-tour-in-august/</link>
      <pubDate>Wed, 29 Apr 2009 15:06:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-nxtgen-tour-in-august/</guid>
      <description>&lt;p&gt;I will be speaking on developer testing of SharePoint projects using Typemock at both the Birmingham and Manchester NXtGen user groups in August.&lt;/p&gt;
&lt;p&gt;For more details check the &lt;a href=&#34;http://www.nxtgenug.net/events.aspx&#34;&gt;NxtGen site&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking on developer testing of SharePoint projects using Typemock at both the Birmingham and Manchester NXtGen user groups in August.</p>
<p>For more details check the <a href="http://www.nxtgenug.net/events.aspx">NxtGen site</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Licensing exception with Ivonna the Typemock add-in (and any other add-ins I suspect)</title>
      <link>https://blog.richardfennell.net/posts/licensing-exception-with-ivonna-the-typemock-add-in-and-any-other-add-ins-i-suspect/</link>
      <pubDate>Sat, 25 Apr 2009 18:51:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/licensing-exception-with-ivonna-the-typemock-add-in-and-any-other-add-ins-i-suspect/</guid>
      <description>&lt;p&gt;Like a good developer I have been trying to run Visual Studio with least privilege; with Windows 7 this seems to work well. My main account is not an administrator, but Windows prompts me for elevated rights when needed. I have been developing happily with Visual Studio and &lt;a href=&#34;http://typemock.com/Typemock_software_development_tools.php&#34;&gt;Typemock&lt;/a&gt; without any need for extra rights.&lt;/p&gt;
&lt;p&gt;However, when I have been doing some testing using &lt;a href=&#34;http://www.sm-art.biz/Ivonna.aspx&#34;&gt;Ivonna&lt;/a&gt;, the Typemock add-in, I hit a problem. When I tried to create an &lt;em&gt;Ivonna.Framework.TestSession()&lt;/em&gt; I got a &lt;em&gt;Licensing.LicenseException: This copy has expired.&lt;/em&gt; Which it hadn’t as I have a fully licensed product.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Like a good developer I have been trying to run Visual Studio with least privilege; with Windows 7 this seems to work well. My main account is not an administrator, but Windows prompts me for elevated rights when needed. I have been developing happily with Visual Studio and <a href="http://typemock.com/Typemock_software_development_tools.php">Typemock</a> without any need for extra rights.</p>
<p>However, when I have been doing some testing using <a href="http://www.sm-art.biz/Ivonna.aspx">Ivonna</a>, the Typemock add-in, I hit a problem. When I tried to create an <em>Ivonna.Framework.TestSession()</em> I got a <em>Licensing.LicenseException: This copy has expired.</em> Which it hadn’t as I have a fully licensed product.</p>
<p>I had got so used to not needing elevated privileges that I did not consider this to be the problem; so I contacted Sm-Art and Typemock support. The answer was simply to run Visual Studio with administrator privileges (right click on the shortcut). Once this is done the licensing exception goes away, as Typemock has enough rights to look in the right bit of the registry to access the add-in license. I have suggested that, if possible, this requirement should be addressed.</p>
<p>The other alternative is to grant your non-administrator account more rights in the registry. On a 64bit development box it seems you need to grant Read-Write access to <em>HKEY_LOCAL_MACHINE\SOFTWARE\TypeMock</em> and <em>HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\TypeMock</em>.</p>
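<p>If you want to sanity check those rights, a quick sketch like the following (illustrative only; it just tries to open the two keys read-write) will tell you where you stand:</p>
<pre tabindex="0"><code>using System;
using Microsoft.Win32;

static class RegistryRightsCheck
{
    static void Main()
    {
        // The two keys mentioned above on a 64bit box
        string[] paths = { @&#34;SOFTWARE\TypeMock&#34;, @&#34;SOFTWARE\Wow6432Node\TypeMock&#34; };
        foreach (string path in paths)
        {
            try
            {
                // Asking for a writable handle fails if the account lacks rights
                using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path, true))
                {
                    Console.WriteLine(&#34;{0}: {1}&#34;, path, key == null ? &#34;not found&#34; : &#34;read-write OK&#34;);
                }
            }
            catch (System.Security.SecurityException)
            {
                Console.WriteLine(&#34;{0}: access denied - grant rights or run elevated&#34;, path);
            }
        }
    }
}
</code></pre>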
]]></content:encoded>
    </item>
    <item>
      <title>System.Web.Abstractions missing on a ASP.NET web site</title>
      <link>https://blog.richardfennell.net/posts/system-web-abstractions-missing-on-a-asp-net-web-site/</link>
      <pubDate>Fri, 24 Apr 2009 08:55:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/system-web-abstractions-missing-on-a-asp-net-web-site/</guid>
      <description>&lt;p&gt;I recently re-enabled a feature on an ASP.NET site. This feature (a set of pages) had been running OK about six months ago for an initial pilot project, but was then disabled until a decision was made to develop the feature fully. In the intervening time the web site had had other modifications and had been rebuilt (it was targeted at .NET 3.5) without issue.&lt;/p&gt;
&lt;p&gt;When I re-enabled the feature (renamed a .ASPX file) I got the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I recently re-enabled a feature on an ASP.NET site. This feature (a set of pages) had been running OK about six months ago for an initial pilot project, but was then disabled until a decision was made to develop the feature fully. In the intervening time the web site had had other modifications and had been rebuilt (it was targeted at .NET 3.5) without issue.</p>
<p>When I re-enabled the feature (renamed a .ASPX file) I got the error</p>
<p><em>Could not load file or assembly &lsquo;System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35&rsquo; or one of its dependencies. The system cannot find the file specified.</em></p>
<p>Now this surprised me, as I had not changed the .NET version targeting or the development environment. In the end I fixed it by copying the missing DLL up to the bin directory. I had tried adding the following to the web.config, to no effect:</p>
<pre tabindex="0"><code>&lt;runtime&gt;  
  &lt;assemblyBinding xmlns=&#34;urn:schemas-microsoft-com:asm.v1&#34;&gt;  
    &lt;dependentAssembly&gt;  
      &lt;assemblyIdentity name=&#34;System.Web.Abstractions&#34; publicKeyToken=&#34;31BF3856AD364E35&#34;/&gt;  
      &lt;bindingRedirect oldVersion=&#34;0.0.0.0-3.5.0.0&#34; newVersion=&#34;0.0.0.0&#34;/&gt;  
    &lt;/dependentAssembly&gt;  
  &lt;/assemblyBinding&gt;  
&lt;/runtime&gt;
</code></pre>
<p>On thinking about it a bit more, I think the issue must have been caused by one of the following:</p>
<ul>
<li>I may have upgraded to .NET 3.5 SP1 in the time frame (but I thought I did it earlier)</li>
<li>I have installed MVC on my development PC, and most other posts that mention this issue also mention MVC, usually beta versions.</li>
</ul>
<p>Either way, it is fixed now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Follow up to my session yesterday at VBug Newcastle on DataDude GDR</title>
      <link>https://blog.richardfennell.net/posts/follow-up-to-my-session-yesterday-at-vbug-newcatsle-on-datadude-gdr/</link>
      <pubDate>Thu, 23 Apr 2009 09:30:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/follow-up-to-my-session-yesterday-at-vbug-newcatsle-on-datadude-gdr/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session at VBug Newcastle last night, hope you enjoyed it.&lt;/p&gt;
&lt;p&gt;As I mentioned in my session, to celebrate my talking at VBug, Microsoft chose to release the &lt;a href=&#34;https://www.microsoft.com/downloads/details.aspx?FamilyID=bb3ad767-5f69-4db9-b1c9-8f55759846ed&amp;amp;displaylang=en&#34;&gt;Visual Studio Team System 2008 Database Edition GDR R2&lt;/a&gt; yesterday. If you are using DataDude you do need to get this installed, as it addresses many known issues.&lt;/p&gt;
&lt;p&gt;Slides, virtually identical to yesterday’s and as used at SQLBits, are on the &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2009/SQLBits%20IV%20-%20%27Making%20the%20SQL%20developer%20one%20of%20the%20family%20with%20Visual%20Studio%20Team%20System%27.ppt&#34;&gt;Black Marble site&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session at VBug Newcastle last night, hope you enjoyed it.</p>
<p>As I mentioned in my session, to celebrate my talking at VBug, Microsoft chose to release the <a href="https://www.microsoft.com/downloads/details.aspx?FamilyID=bb3ad767-5f69-4db9-b1c9-8f55759846ed&amp;displaylang=en">Visual Studio Team System 2008 Database Edition GDR R2</a> yesterday. If you are using DataDude you do need to get this installed, as it addresses many known issues.</p>
<p>Slides, virtually identical to yesterday’s and as used at SQLBits, are on the <a href="http://www.blackmarble.co.uk/ConferencePapers/2009/SQLBits%20IV%20-%20%27Making%20the%20SQL%20developer%20one%20of%20the%20family%20with%20Visual%20Studio%20Team%20System%27.ppt">Black Marble site</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking in Newcastle tomorrow</title>
      <link>https://blog.richardfennell.net/posts/speaking-in-newcastle-tomorrow/</link>
      <pubDate>Tue, 21 Apr 2009 13:30:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-in-newcastle-tomorrow/</guid>
      <description>&lt;p&gt;If you are in Newcastle on the evening of the 22nd I will be speaking at &lt;a href=&#34;http://www.vbug.co.uk/Events/April-2009/VBUG-Newcastle-Making-the-SQL-developer-one-of-the-family-with-VSTS.aspx&#34;&gt;Vbug on Visual Studio 2008 Database Edition&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you are in Newcastle on the evening of the 22nd I will be speaking at <a href="http://www.vbug.co.uk/Events/April-2009/VBUG-Newcastle-Making-the-SQL-developer-one-of-the-family-with-VSTS.aspx">Vbug on Visual Studio 2008 Database Edition</a>.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Next Agile Yorkshire Meeting – Exploratory Testing</title>
      <link>https://blog.richardfennell.net/posts/next-agile-yorkshire-meeting-exploratory-testing/</link>
      <pubDate>Sat, 18 Apr 2009 21:34:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-agile-yorkshire-meeting-exploratory-testing/</guid>
      <description>&lt;p&gt;Next month’s Agile Yorkshire meeting is on the &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting&#34;&gt;13th May, where Ralph Williams will be talking about Exploratory Testing&lt;/a&gt;; the session outline sounds interesting.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Next month’s Agile Yorkshire meeting is on the <a href="http://www.agileyorkshire.org/2009-event-announcements/may13th-exploratorytesting">13th May, where Ralph Williams will be talking about Exploratory Testing</a>; the session outline sounds interesting.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Alt.Net ‘In the north’ – wrapping it all up</title>
      <link>https://blog.richardfennell.net/posts/alt-net-in-the-north-wrapping-it-all-up/</link>
      <pubDate>Sat, 18 Apr 2009 21:10:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alt-net-in-the-north-wrapping-it-all-up/</guid>
      <description>&lt;p&gt;Thanks to everyone who was able to attend the &lt;a href=&#34;http://www.altdotnetuknorth.info/&#34;&gt;Alt.net ‘In the North’&lt;/a&gt; conference today; and also to our sponsors. The event could not have happened without you all.&lt;/p&gt;
&lt;p&gt;As promised here is the list of blogs, books etc. we had on the whiteboard.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Books&lt;/strong&gt;&lt;br&gt;
&lt;a href=&#34;%20Specification%20by%20Example%20and%20Agile%20Acceptance%20Testing&#34;&gt;Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing&lt;/a&gt; by Gojko Adzic&lt;br&gt;
&lt;a href=&#34;http://www.amazon.co.uk/User-Stories-Applied-Development-Signature/dp/0321205685/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1240086737&amp;amp;sr=8-1&#34;&gt;User Stories Applied: For Agile Software Development (Addison Wesley Signature Series)&lt;/a&gt; by Mike Cohn&lt;br&gt;
&lt;a href=&#34;http://www.amazon.co.uk/Agile-Estimating-Planning-Robert-Martin/dp/0131479415/ref=sr_1_2?ie=UTF8&amp;amp;s=books&amp;amp;qid=1240086737&amp;amp;sr=8-2&#34;&gt;Agile Estimating and Planning&lt;/a&gt; by Mike Cohn&lt;br&gt;
&lt;a href=&#34;http://www.amazon.co.uk/Crystal-Clear-Human-Powered-Methodology-Small/dp/0201699478/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1240086945&amp;amp;sr=8-1&#34;&gt;Crystal Clear: A Human-Powered Methodology for Small Teams&lt;/a&gt; by Alistair Cockburn&lt;br&gt;
&lt;a href=&#34;http://www.amazon.co.uk/Domain-driven-Design-Tackling-Complexity-Software/dp/0321125215/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1240086975&amp;amp;sr=8-1&#34;&gt;Domain-driven Design: Tackling Complexity in the Heart of Software&lt;/a&gt; by Eric Evans&lt;br&gt;
&lt;a href=&#34;http://www.amazon.co.uk/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1240087047&amp;amp;sr=1-1&#34;&gt;Sketching User Experiences: Getting the Design Right and the Right Design&lt;/a&gt; by Bill Buxton&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who was able to attend the <a href="http://www.altdotnetuknorth.info/">Alt.net ‘In the North’</a> conference today; and also to our sponsors. The event could not have happened without you all.</p>
<p>As promised here is the list of blogs, books etc. we had on the whiteboard.</p>
<p><strong>Books</strong><br>
<a href="%20Specification%20by%20Example%20and%20Agile%20Acceptance%20Testing">Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing</a> by Gojko Adzic<br>
<a href="http://www.amazon.co.uk/User-Stories-Applied-Development-Signature/dp/0321205685/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1240086737&amp;sr=8-1">User Stories Applied: For Agile Software Development (Addison Wesley Signature Series)</a> by Mike Cohn<br>
<a href="http://www.amazon.co.uk/Agile-Estimating-Planning-Robert-Martin/dp/0131479415/ref=sr_1_2?ie=UTF8&amp;s=books&amp;qid=1240086737&amp;sr=8-2">Agile Estimating and Planning</a> by Mike Cohn<br>
<a href="http://www.amazon.co.uk/Crystal-Clear-Human-Powered-Methodology-Small/dp/0201699478/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1240086945&amp;sr=8-1">Crystal Clear: A Human-Powered Methodology for Small Teams</a> by Alistair Cockburn<br>
<a href="http://www.amazon.co.uk/Domain-driven-Design-Tackling-Complexity-Software/dp/0321125215/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1240086975&amp;sr=8-1">Domain-driven Design: Tackling Complexity in the Heart of Software</a> by Eric Evans<br>
<a href="http://www.amazon.co.uk/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1240087047&amp;sr=1-1">Sketching User Experiences: Getting the Design Right and the Right Design</a> by Bill Buxton</p>
<p><strong>Blogs</strong><br>
<a href="http://weblogs.asp.net/scottgu/">http://weblogs.asp.net/scottgu/</a> (Scott Guthrie)<br>
<a href="http://www.hanselman.com/blog/">http://www.hanselman.com/blog/</a> (Scott Hanselman)<br>
<a href="http://ayende.com/">http://ayende.com/</a> (Oren Eini)<br>
<a href="http://mattberseth.com/">http://mattberseth.com/</a> (Matt Berseth)<br>
<a href="http://oakleafblog.blogspot.com/">http://oakleafblog.blogspot.com/</a> (Roger Jennings)</p>
<p><strong>Events</strong> <br>
<a href="http://openspacecode.com/contact">http://openspacecode.com/contact</a> – Coding events in London, next one 30th May<br>
<a href="http://www.developerdayscotland.com/">http://www.developerdayscotland.com/</a> - Free community conference in Glasgow 2nd May<br>
<a href="http://www.agileyorkshire.org/">http://www.agileyorkshire.org/</a> - Agile Yorkshire user group meets in Leeds every 2nd Wednesday in month<br>
<a href="http://www.blackmarble.co.uk/events">http://www.blackmarble.co.uk/events</a> - Various free events in Leeds/Bradford area</p>
<p><strong>Tools</strong><br>
<a href="http://www.hanselman.com/blog/ScottHanselmans2007UltimateDeveloperAndPowerUsersToolListForWindows.aspx">http://www.hanselman.com/blog/ScottHanselmans2007UltimateDeveloperAndPowerUsersToolListForWindows.aspx</a> - Tool list<br>
<a href="http://www.balsamiq.com/products/mockups/desktop">http://www.balsamiq.com/products/mockups/desktop</a> - UI Sketch tool<br>
<a href="http://www.openquarters.org">www.openquarters.org</a> (coming soon) – the MVC CMS that Anthony demonstrated at the conference<br>
and a digital camera to capture your whiteboard drawings</p>
<p>I think that is all. Thanks again to you all for making it such a rewarding event.</p>
<p>Technorati Tags: <a href="http://technorati.com/tags/altnetuk">altnetuk</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Alt.net ‘In the North’ starts tonight</title>
      <link>https://blog.richardfennell.net/posts/alt-net-in-the-north-starts-tonight/</link>
      <pubDate>Fri, 17 Apr 2009 11:49:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alt-net-in-the-north-starts-tonight/</guid>
      <description>&lt;p&gt;Looking forward to seeing everyone at &lt;a href=&#34;http://www.altdotnetuknorth.info/&#34;&gt;Alt.Net ‘in the North’&lt;/a&gt; over the next two days.&lt;/p&gt;
&lt;p&gt;If you cannot make the planning session tonight, but are in Bradford later, we will be in the &lt;a href=&#34;http://www.jdwetherspoon.co.uk/pubs/pub-details.php?PubNumber=1389&#34;&gt;Titus Salt Pub&lt;/a&gt; for a few drinks sponsored by &lt;a href=&#34;http://www.seedsoftware.co.uk/&#34;&gt;SEED software&lt;/a&gt;. We will probably be upstairs from about 8:30pm.&lt;/p&gt;
&lt;p&gt;Technorati Tags: &lt;a href=&#34;http://technorati.com/tags/altnetuk&#34;&gt;altnetuk&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Looking forward to seeing everyone at <a href="http://www.altdotnetuknorth.info/">Alt.Net ‘in the North’</a> over the next two days.</p>
<p>If you cannot make the planning session tonight, but are in Bradford later, we will be in the <a href="http://www.jdwetherspoon.co.uk/pubs/pub-details.php?PubNumber=1389">Titus Salt Pub</a> for a few drinks sponsored by <a href="http://www.seedsoftware.co.uk/">SEED software</a>. We will probably be upstairs from about 8:30pm.</p>
<p>Technorati Tags: <a href="http://technorati.com/tags/altnetuk">altnetuk</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Team Build, Code Coverage and MVC</title>
      <link>https://blog.richardfennell.net/posts/team-build-code-coverage-and-mvc/</link>
      <pubDate>Wed, 15 Apr 2009 17:12:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/team-build-code-coverage-and-mvc/</guid>
      <description>&lt;p&gt;I have been working on some automated build and testing for a project based on the Microsoft MVC framework. The build was working fine, and tests were being run, but I was not seeing any code coverage data in the build summary in Visual Studio for the builds done by the Team Build box. However, if I ran the test suite locally on a development PC, the coverage data was there. Looking on the Team Build drop location I could find the &lt;strong&gt;data.coverage&lt;/strong&gt; file in the &lt;strong&gt;TestResults\&lt;guid&gt;\In\&lt;build user&gt;&lt;/strong&gt; folder, but it was 84Kb in size, which I learnt means ‘contains no data’.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working on some automated build and testing for a project based on the Microsoft MVC framework. The build was working fine, and tests were being run, but I was not seeing any code coverage data in the build summary in Visual Studio for the builds done by the Team Build box. However, if I ran the test suite locally on a development PC, the coverage data was there. Looking on the Team Build drop location I could find the <strong>data.coverage</strong> file in the <strong>TestResults\&lt;guid&gt;\In\&lt;build user&gt;</strong> folder, but it was 84Kb in size, which I learnt means ‘contains no data’.</p>
<p>After a good deal of hunting I found a pointer to the answer on the <a href="http://www.mail-archive.com/listserver@oztfs.com/msg00045.html">OZTFS forum</a>. The problem is that, as the MVC project is a web project, the build creates a <strong>_PublishedWebsites</strong> folder and puts the assemblies into it. In effect the code coverage is just looking in the wrong place.</p>
<p>The fix is as follows:</p>
<ul>
<li>Make sure you have a suitable <strong>TestRunConfig1.testrunconfig</strong> file in your solution</li>
<li>Open this file in the VS IDE and make sure code coverage is enabled for the assemblies you want (and, if your assemblies are signed, that the re-sign key is set)</li>
<li>Open your <strong>tfsbuild.proj</strong> file for the automated team build and make sure you have a testing block similar to the one below, changing the path to the RunConfigFile as required.</li>
</ul>
<pre tabindex="0"><code>&lt;!--   
 TESTING   Set this flag to enable/disable running tests as a post-compilation build step.   
\--&gt;    
&lt;RunTest\&gt;true&lt;/RunTest\&gt;    
&lt;!--   
 CODE ANALYSIS   Set this property to enable/disable running code analysis. Valid values for this property are  
 Default, Always and Never.  
 Default - Perform code analysis as per the individual project settings  
 Always  - Always perform code analysis irrespective of project settings   
 Never   - Never perform code analysis irrespective of project settings   
\--&gt;    
&lt;RunCodeAnalysis\&gt;Default&lt;/RunCodeAnalysis\&gt;  
  
&lt;!--   
 CODE COVERAGE Set the test run configuration   
\--&gt;    
&lt;RunConfigFile\&gt;$(SolutionRoot)MyWebSolutionTestRunConfig1.testrunconfig&lt;/RunConfigFile\&gt;
</code></pre><ul>
<li>If you test this locally you should get code coverage results, but if you run the build on a Team Build box the code coverage section in the test report will show &ldquo;<em>No coverage result</em>&rdquo;</li>
<li><strong>Now the important bit</strong> – open the <strong>TestRunConfig1.testrunconfig</strong> file in Notepad and add an extra block to the &lt;Regular&gt; code coverage section to additionally point to the assembly(s) in the <strong>_PublishedWebsites</strong> structure (you could also use the VS IDE on the build box to add the file if you wanted, but this will warn about an assembly being added twice). When complete, the XML file should look similar to the one below</li>
</ul>
<pre tabindex="0"><code>&lt;?xml version\=&#34;1.0&#34; encoding\=&#34;UTF-8&#34;?\&gt;  
&lt;TestRunConfiguration name\=&#34;TestRunConfig1&#34; id\=&#34;b6360bec-8278-4773-a931-f22bfab2c57f&#34; xmlns\=&#34;http://microsoft.com/schemas/VisualStudio/TeamTest/2006&#34;\&gt;  
  &lt;Description\&gt;This is a default test run configuration for a local test run.&lt;/Description\&gt;  
  &lt;CodeCoverage enabled\=&#34;true&#34; keyFile\=&#34;MyWebsiteProjectKey.snk&#34;\&gt;  
    &lt;AspNet\&gt;  
      &lt;AspNetCodeCoverageItem id\=&#34;88655819-3261-43ac-b2d8-2d3aa1aabaef&#34; name\=&#34;MyWebsite&#34; applicationRoot\=&#34;/&#34; url\=&#34;http://localhost:0/&#34; /&gt;  
    &lt;/AspNet\&gt;  
    &lt;Regular\&gt;  
      &lt;CodeCoverageItem binaryFile\=&#34;C:buildsMyWebsiteCIBuildBinariesRelease\_PublishedWebsitesMyWebsitebinMyWebsite.dll&#34; pdbFile\=&#34;C:buildsMyWebsiteCIBuildBinariesRelease\_PublishedWebsitesMyWebsitebinMyWebsite.pdb&#34; instrumentInPlace\=&#34;true&#34; /&gt;  
    &lt;/Regular\&gt;  
  &lt;/CodeCoverage\&gt;  
  &lt;TestTypeSpecific\&gt;  
    &lt;WebTestRunConfiguration testTypeId\=&#34;4e7599fa-5ecb-43e9-a887-cd63cf72d207&#34;\&gt;  
      &lt;Browser name\=&#34;Internet Explorer 7.0&#34;\&gt;  
        &lt;Headers\&gt;  
          &lt;Header name\=&#34;User-Agent&#34; value\=&#34;Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)&#34; /&gt;  
          &lt;Header name\=&#34;Accept&#34; value\=&#34;\*/\*&#34; /&gt;  
          &lt;Header name\=&#34;Accept-Language&#34; value\=&#34;{{$IEAcceptLanguage}}&#34; /&gt;  
          &lt;Header name\=&#34;Accept-Encoding&#34; value\=&#34;GZIP&#34; /&gt;  
        &lt;/Headers\&gt;  
      &lt;/Browser\&gt;  
      &lt;Network Name\=&#34;LAN&#34; BandwidthInKbps\=&#34;0&#34; /&gt;  
    &lt;/WebTestRunConfiguration\&gt;  
  &lt;/TestTypeSpecific\&gt;  
&lt;/TestRunConfiguration\&gt;
</code></pre><ul>
<li>Once this is done you can run the tests locally or on the build machine; in both cases MSTest manages to find the assembly to measure code coverage on and reports the results.</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Holiday is when you catch up…..</title>
      <link>https://blog.richardfennell.net/posts/holiday-is-when-you-catch-up/</link>
      <pubDate>Fri, 10 Apr 2009 14:22:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/holiday-is-when-you-catch-up/</guid>
      <description>&lt;p&gt;I got round to listening to the latest Radio TFS podcast today whilst out for a run, &lt;a href=&#34;http://www.radiotfs.com/2009/03/25/AdoptingTeamSystemWithStevenBorg.aspx&#34;&gt;Adopting Team System with Steve Borg&lt;/a&gt;. If you are looking at adopting TFS, or even just critically looking at your development life cycle with a view to improving it (irrespective of the tools you use), then this podcast is well worth the time to listen to. It actually covers a lot of the points I was discussing at the Agile Yorkshire user group this week in &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/8thapril-richardfennellcrystalclearmethodology&#34;&gt;my session on Crystal Clear&lt;/a&gt;. By now I would usually have put my slide stack up for all to download, but in this case, as my session was in essence a book review, I would like you to read the original &lt;a href=&#34;http://www.amazon.co.uk/Crystal-Clear-Human-Powered-Methodology-Small/dp/0201699478/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1239372246&amp;amp;sr=1-1&#34;&gt;Crystal Clear by Alistair Cockburn&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I got round to listening to the latest Radio TFS podcast today whilst out for a run, <a href="http://www.radiotfs.com/2009/03/25/AdoptingTeamSystemWithStevenBorg.aspx">Adopting Team System with Steve Borg</a>. If you are looking at adopting TFS, or even just critically looking at your development life cycle with a view to improving it (irrespective of the tools you use), then this podcast is well worth the time to listen to. It actually covers a lot of the points I was discussing at the Agile Yorkshire user group this week in <a href="http://www.agileyorkshire.org/2009-event-announcements/8thapril-richardfennellcrystalclearmethodology">my session on Crystal Clear</a>. By now I would usually have put my slide stack up for all to download, but in this case, as my session was in essence a book review, I would like you to read the original <a href="http://www.amazon.co.uk/Crystal-Clear-Human-Powered-Methodology-Small/dp/0201699478/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1239372246&amp;sr=1-1">Crystal Clear by Alistair Cockburn</a>.</p>
<p>In my opinion, the key point they both raise is that it is important to have a process that provides:</p>
<ul>
<li>Safety – provides a framework that means the project can safely be delivered</li>
<li>Efficiency – development should be carried out in an efficient manner</li>
<li>Habitable – that the team can live with the process (if they can’t the process will be avoided/subverted)</li>
</ul>
<p>Or to put it another way (and quoting here from the Crystal Clear book): “a little methodology does a lot of good, after that weight is costly”.</p>
<p>A point raised at the user group in the chat after my session was how to get senior people (such as the CEO, CFO etc.) to buy into the ‘new’ development process (a critical factor for success). Too often it is heard “I don’t care if you are agile or not, I just want it delivered”, and no support is provided by the business beyond the actual coding team. A good discussion of this type of problem is in <a href="http://www.neuri.com/publishing/">Gojko Adzic’s book Bridging the Communication Gap: Specification by Example and Agile Acceptance Testing</a>. This is written for non-software developers and discusses how to make sure that the whole business is involved in the development process, thus enabling the project to deliver what the business really needs, not what people think they need. I would say this book is essential for anyone involved in the software specification process – and that should be everyone in an agile project!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Testing SharePoint Workflows using TypeMock Isolator (Part 3)</title>
      <link>https://blog.richardfennell.net/posts/testing-sharepoint-workflows-using-typemock-isolator-part-3/</link>
      <pubDate>Tue, 07 Apr 2009 15:42:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/testing-sharepoint-workflows-using-typemock-isolator-part-3/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 12 June 2009&lt;/strong&gt; - I have been having problems using this technique of Typemock with Sharepoint Workflows; the workflows keep unexpectedly idling as opposed to activating. If you suffer a similar problem, please check later posts for any solutions I find.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/06/testing-sharepoint-workflows-using-typemock-isolator-part-2.aspx&#34;&gt;Now that I can test a basic workflow&lt;/a&gt;, it soon becomes obvious that you could end up with many tests for a single workflow, as a workflow can have any number of criteria that could cause branching to occur. Maybe a sensible way to write the tests is to use &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/10/04/running-fitness-net-tests-in-unit-test-some-tips.aspx&#34;&gt;Fit/FitNesse&lt;/a&gt; to provide the test cases in tabular form?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 12 June 2009</strong> - I have been having problems using this technique of Typemock with Sharepoint Workflows; the workflows keep unexpectedly idling as opposed to activating. If you suffer a similar problem, please check later posts for any solutions I find.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/06/testing-sharepoint-workflows-using-typemock-isolator-part-2.aspx">Now that I can test a basic workflow</a>, it soon becomes obvious that you could end up with many tests for a single workflow, as a workflow can have any number of criteria that could cause branching to occur. Maybe a sensible way to write the tests is to use <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/10/04/running-fitness-net-tests-in-unit-test-some-tips.aspx">Fit/FitNesse</a> to provide the test cases in tabular form?</p>
<p>So to this end I have added a bit more code to my Typemock/Sharepoint test project (after installing the Fit libraries as detailed in my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/07/18/running-fitnesse-net-tests-using-mstest.aspx">previous posts</a>). I now have a single test that loads a set of test criteria from an HTML file (my previous posts discuss why I am using HTML files as opposed to the Fit Wiki).</p>
<pre tabindex="0"><code>[TestMethod]
public void WorkFlow1SwitchOnTitle_DataViaFitnesse_Success()
{
    fit.Runner.FolderRunner runner = new fit.Runner.FolderRunner(new fit.Runner.ConsoleReporter());
    var errorCount = runner.Run(new string[] {
        &#34;-i&#34;, @&#34;WorkflowTestCases.htm&#34;, // the htm file that holds the tests
        &#34;-a&#34;, @&#34;TestProject.dll&#34;,       // we have the fit facade in this assembly
        &#34;-o&#34;, @&#34;results&#34; });            // the directory the results are dumped into as HTML

    // fit can fail silently giving no failures as no tests are run, so check for exceptions
    Assert.AreEqual(false, Regex.IsMatch(runner.Results, &#34;^0.+?0.+?0.+?0.+?$&#34;), &#34;No tests appear to have been run&#34;);

    // look for expected errors
    Assert.AreEqual(0, errorCount, runner.Results);
}
</code></pre>
<p>I then have an HTML file that contains the test cases (remember to make sure that this file is deployed to the output directory):</p>
<pre tabindex="0"><code>&lt;html&gt;&lt;head&gt;&lt;/head&gt;
  &lt;body&gt;
    &lt;table border=&#34;1&#34; cellspacing=&#34;0&#34;&gt;
      &lt;tr&gt;&lt;td&gt;import&lt;/td&gt;&lt;/tr&gt;
      &lt;tr&gt;&lt;td&gt;TestProject&lt;/td&gt;&lt;/tr&gt;
    &lt;/table&gt;

    &lt;table border=&#34;1&#34; cellspacing=&#34;0&#34;&gt;
      &lt;tr&gt;&lt;td colspan=&#34;2&#34;&gt;Workflow Fit Tests&lt;/td&gt;&lt;/tr&gt;
      &lt;tr&gt;&lt;td&gt;Upload Document With Title&lt;/td&gt;&lt;td&gt;Document Approved?&lt;/td&gt;&lt;/tr&gt;
      &lt;tr&gt;&lt;td&gt;ABC&lt;/td&gt;&lt;td&gt;True&lt;/td&gt;&lt;/tr&gt;
      &lt;tr&gt;&lt;td&gt;XYZ&lt;/td&gt;&lt;td&gt;False&lt;/td&gt;&lt;/tr&gt;
      &lt;tr&gt;&lt;td&gt;abc&lt;/td&gt;&lt;td&gt;True&lt;/td&gt;&lt;/tr&gt;
    &lt;/table&gt;
  &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>Finally we need to create the facade class that wraps the workflow for Fit to call. In this sample I just popped the class into the test project for simplicity. Notice that it is this facade class that contains all the Typemock bits, and that I make use of the helper class I created in my previous post to actually run the workflow.</p>
<pre tabindex="0"><code>using System;
using TypeMock.ArrangeActAssert;
using Microsoft.SharePoint.Workflow;

namespace TestProject
{
    public class WorkflowFitTests : fit.ColumnFixture
    {
        public string UploadDocumentWithTitle;

        public bool DocumentApproved()
        {
            // Arrange
            var fakeProperties = Isolate.Fake.Instance&lt;SPWorkflowActivationProperties&gt;();
            var fakeItem = fakeProperties.Item;
            Isolate.WhenCalled(() =&gt; fakeItem.Title).WillReturn(this.UploadDocumentWithTitle);

            // Act
            TypemockWorkflowTests.WorkflowRunner(typeof(SharePointWorkflow.Workflow1), fakeProperties);

            // Assert: if a document is approved the workflow must make the following two calls
            try
            {
                Isolate.Verify.WasCalledWithExactArguments(() =&gt; fakeItem.Update());
                Isolate.Verify.WasCalledWithExactArguments(() =&gt; fakeItem[&#34;Approved&#34;] = &#34;True&#34;);
                return true; // we called all the updates expected
            }
            catch (TypeMock.VerifyException)
            {
                // it did not call something expected, check if it was just the not-approved path
                Isolate.Verify.WasNotCalled(() =&gt; fakeItem.Update());
                return false;
            }
        }
    }
}
</code></pre>
<p>I am currently working on a sample of this testing technique that does a bit more than a simple branch on an if test; I will post a set of sample code when I am done.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2008 DBPRO GDR QFE (wow loads of TLAs there)</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2008-dbpro-gdr-qfe-wow-loads-of-tlas-there/</link>
      <pubDate>Tue, 07 Apr 2009 09:48:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2008-dbpro-gdr-qfe-wow-loads-of-tlas-there/</guid>
      <description>&lt;p&gt;At my session at &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/03/31/sqlbits-iv-slides.aspx&#34;&gt;SQLBits IV on Visual Studio 2008 DBPro GDR&lt;/a&gt; it was mentioned that there was a major patch just about to be released to address some known issues. Well, &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2009/04/06/update-to-the-dbpro-gdr.aspx&#34;&gt;Brian Harry has provided links&lt;/a&gt; to the release candidate of the &lt;a href=&#34;http://blogs.msdn.com/gertd/archive/2009/03/26/release-candidate-of-gdr-qfe.aspx&#34;&gt;GDR QFE&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At my session at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/03/31/sqlbits-iv-slides.aspx">SQLBits IV on Visual Studio 2008 DBPro GDR</a> it was mentioned that there was a major patch just about to be released to address some known issues. Well, <a href="http://blogs.msdn.com/bharry/archive/2009/04/06/update-to-the-dbpro-gdr.aspx">Brian Harry has provided links</a> to the release candidate of the <a href="http://blogs.msdn.com/gertd/archive/2009/03/26/release-candidate-of-gdr-qfe.aspx">GDR QFE</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Testing SharePoint Workflows using TypeMock Isolator (Part 2)</title>
      <link>https://blog.richardfennell.net/posts/testing-sharepoint-workflows-using-typemock-isolator-part-2/</link>
      <pubDate>Mon, 06 Apr 2009 21:48:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/testing-sharepoint-workflows-using-typemock-isolator-part-2/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 12 June 2009&lt;/strong&gt; - I have been having problems using this technique of Typemock with Sharepoint Workflows; the workflows keep unexpectedly idling as opposed to activating. If you suffer a similar problem, please check later posts for any solutions I find.&lt;/p&gt;
&lt;p&gt;After reading &lt;a href=&#34;http://blog.typemock.com/2009/04/writing-shorter-tests-don-build-tree.html&#34;&gt;Gil’s blog on writing simpler tests&lt;/a&gt; I have done some tidying of the code from my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/03/testing-sharepoint-workflows-using-typemock-isolator.aspx?CommentPosted=true#commentmessage&#34;&gt;previous post&lt;/a&gt;. In this version I have extracted the boilerplate code that runs the workflow into a static helper method and modified my tests to incorporate Gil’s comments; they are certainly more readable.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 12 June 2009</strong> - I have been having problems using this technique of Typemock with Sharepoint Workflows; the workflows keep unexpectedly idling as opposed to activating. If you suffer a similar problem, please check later posts for any solutions I find.</p>
<p>After reading <a href="http://blog.typemock.com/2009/04/writing-shorter-tests-don-build-tree.html">Gil’s blog on writing simpler tests</a> I have done some tidying of the code from my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/03/testing-sharepoint-workflows-using-typemock-isolator.aspx?CommentPosted=true#commentmessage">previous post</a>. In this version I have extracted the boilerplate code that runs the workflow into a static helper method and modified my tests to incorporate Gil’s comments; they are certainly more readable.</p>
<pre tabindex="0"><code>[TestMethod]
public void WorkFlowSwitchOnTitle_TitleStartsWithA_SetApprovelFieldAndUpdate()
{
    // Arrange
    var fakeProperties = Isolate.Fake.Instance&lt;SPWorkflowActivationProperties&gt;();
    var fakeItem = fakeProperties.Item;
    Isolate.WhenCalled(() =&gt; fakeItem.Title).WillReturn(&#34;ABC&#34;);

    // we actually don&#39;t need to create this field: MOSS lets us attempt to write to
    // it even if not declared, it would only need to be created if we checked the value in the workflow
    /*
    var fakeField = fakeItem.Fields[&#34;Approved&#34;];
    fakeField.DefaultValue = false.ToString();
    */

    // Act
    WorkflowRunner(typeof(SharePointWorkflow.Workflow1), fakeProperties);

    // Assert
    Isolate.Verify.WasCalledWithExactArguments(() =&gt; fakeItem.Update());
    Isolate.Verify.WasCalledWithExactArguments(() =&gt; fakeItem[&#34;Approved&#34;] = &#34;True&#34;);
}

[TestMethod]
public void WorkFlowSwitchOnTitle_TitleStartsWithZ_DoNothing()
{
    // Arrange
    var fakeProperties = Isolate.Fake.Instance&lt;SPWorkflowActivationProperties&gt;();
    var fakeItem = fakeProperties.Item;
    Isolate.WhenCalled(() =&gt; fakeItem.Title).WillReturn(&#34;XYZ&#34;);

    // Act
    WorkflowRunner(typeof(SharePointWorkflow.Workflow1), fakeProperties);

    // Assert
    Isolate.Verify.WasNotCalled(() =&gt; fakeItem.Update());
}

/// &lt;summary&gt;
/// A helper method to run a workflow for a test
/// &lt;/summary&gt;
/// &lt;param name=&#34;wfType&#34;&gt;The type of workflow to create&lt;/param&gt;
/// &lt;param name=&#34;fakeProperties&#34;&gt;The fake properties used to create the workflow&lt;/param&gt;
private static void WorkflowRunner(Type wfType, SPWorkflowActivationProperties fakeProperties)
{
    using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
    {
        AutoResetEvent waitHandle = new AutoResetEvent(false);
        workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e)
        {
            // don&#39;t put asserts here as they will be in the wrong thread
            waitHandle.Set();
        };

        workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
        {
            // don&#39;t put asserts here as they will be in the wrong thread
            waitHandle.Set();
        };

        // when this is called the constructor is called twice;
        // the first time is for validation of the workflow in this appdomain, see http://odetocode.com/Blogs/scott/archive/2006/03/30/3192.aspx
        // then the real construction is run. The problem is this double run means that Isolate.Swap.NextInstance
        // fails as it attaches to the first validation create, not the second real one
        WorkflowInstance instance = workflowRuntime.CreateWorkflow(wfType);

        // so for this reason we only do the swap after the first create has been done
        Isolate.Swap.NextInstance&lt;SPWorkflowActivationProperties&gt;().With(fakeProperties);

        // we then create the workflow again; this time it has already been validated
        // so the swap works
        instance = workflowRuntime.CreateWorkflow(wfType);

        instance.Start();

        waitHandle.WaitOne();

        // the workflow is finished; asserts could go here, but will be done
        // in the calling method
    }
}
</code></pre>
<p>The workflow remains the same as in the previous post.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking next week at Agile Yorkshire</title>
      <link>https://blog.richardfennell.net/posts/speaking-next-week-at-agile-yorkshire/</link>
      <pubDate>Sat, 04 Apr 2009 09:45:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-next-week-at-agile-yorkshire/</guid>
      <description>&lt;p&gt;I am speaking on Wednesday the 8th at &lt;a href=&#34;http://www.agileyorkshire.org/&#34;&gt;Agile Yorkshire&lt;/a&gt;, on &lt;a href=&#34;http://www.agileyorkshire.org/2009-event-announcements/8thapril-richardfennellcrystalclearmethodology&#34;&gt;Crystal Clear and lessons learnt in Agile projects&lt;/a&gt;. The user group is meeting at the Victoria Hotel in Leeds as usual, and thanks to our sponsors there is a free drink for all attendees.&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking on Wednesday the 8th at <a href="http://www.agileyorkshire.org/">Agile Yorkshire</a>, on <a href="http://www.agileyorkshire.org/2009-event-announcements/8thapril-richardfennellcrystalclearmethodology">Crystal Clear and lessons learnt in Agile projects</a>. The user group is meeting at the Victoria Hotel in Leeds as usual, and thanks to our sponsors there is a free drink for all attendees.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Live Writer crashing when adding hyperlinks</title>
      <link>https://blog.richardfennell.net/posts/live-writer-crashing-when-adding-hyperlinks/</link>
      <pubDate>Fri, 03 Apr 2009 21:52:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/live-writer-crashing-when-adding-hyperlinks/</guid>
      <description>&lt;p&gt;For a while I have had a problem that when I tried to add a hyperlink via the toolbar in Live Writer I get a dialog that Live Writer has stopped working; it doesn’t exit, it just does not open the modal window for adding a hyperlink. It was irritating, but as I could edit the HTML source and put in the link by hand I could not be bothered to work out how to fix it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For a while I have had a problem that when I tried to add a hyperlink via the toolbar in Live Writer I get a dialog that Live Writer has stopped working; it doesn’t exit, it just does not open the modal window for adding a hyperlink. It was irritating, but as I could edit the HTML source and put in the link by hand I could not be bothered to work out how to fix it.</p>
<p>Well, today I wanted to add a new plugin to Live Writer and it seems the options dialog suffers the same problem, so I had to fix it. I searched with Microsoft Live Search and it found nothing, so I tried with Google and found the solution on <a href="http://blog.lovelovemode.com/2009/03/22/windowslivewriterhtmleditorlinkingglossarymanager-problem/">Tom Soisoonthorn’s blog</a>: the problem is missing text in the linkglossary.xml file.</p>
<p><strong>Update 5 Mar 09</strong> – On checking the linkglossary.xml file again, I think the reason Live Writer stopped working was due to the installation of Fiddler2. I can’t see why they are related, but it is in its configuration section that the error lies.</p>
<p>Beyond the fact that this fixed my problems, it is interesting to note this is the first time in a good few months that I have not found what I wanted via Live Search – it is much improved, but Google still seems to get just that bit more.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Testing SharePoint Workflows using TypeMock Isolator</title>
      <link>https://blog.richardfennell.net/posts/testing-sharepoint-workflows-using-typemock-isolator/</link>
      <pubDate>Fri, 03 Apr 2009 16:28:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/testing-sharepoint-workflows-using-typemock-isolator/</guid>
      <description>&lt;p&gt;&lt;strong&gt;Updated 12 June 2009&lt;/strong&gt; - I have been having problems using this technique of Typemock with SharePoint workflows; the workflows keep unexpectedly idling as opposed to activating. If you suffer a similar problem, please check for later posts for any solutions I find.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Updated 6 April 2009&lt;/strong&gt; – Also see &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/06/testing-sharepoint-workflows-using-typemock-isolator-part-2.aspx&#34;&gt;Testing SharePoint Workflows using TypeMock Isolator (Part 2)&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I have for a while been trying to test SharePoint workflows using TypeMock Isolator to mock out the SharePoint fixtures, as I want to remove the dependency on having SharePoint on any test boxes where possible. I have at last got this working after getting a new version of TypeMock Isolator (5.3.0) plus a fix from the very helpful team at TypeMock.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong>Updated 12 June 2009</strong> - I have been having problems using this technique of Typemock with SharePoint workflows; the workflows keep unexpectedly idling as opposed to activating. If you suffer a similar problem, please check for later posts for any solutions I find.</p>
<p><strong>Updated 6 April 2009</strong> – Also see <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/04/06/testing-sharepoint-workflows-using-typemock-isolator-part-2.aspx">Testing SharePoint Workflows using TypeMock Isolator (Part 2)</a></p>
<p>I have for a while been trying to test SharePoint workflows using TypeMock Isolator to mock out the SharePoint fixtures, as I want to remove the dependency on having SharePoint on any test boxes where possible. I have at last got this working after getting a new version of TypeMock Isolator (5.3.0) plus a fix from the very helpful team at TypeMock.</p>
<p>My idea was to be able to build a workflow that could change list item properties for a document, e.g. the workflow could set a field called approved to true if certain criteria were met. Now, as a MOSS 2007 workflow is based on .NET WF, I knew I could try to build upon the work I documented in <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/10/03/testing-driven-development-for-workflow-foundation.aspx">my previous post</a> on TDD for WF.</p>
<p>My test system was as follows:</p>
<ol>
<li>I created a new SharePoint workflow; all it contained was a decision box that went down the true path if the document associated with the workflow had a title starting with the letter A</li>
<li>In a coded action for the true path, I then set an approved property to true.</li>
</ol>
<p>All very simple, but good enough for this test; the key methods are shown below.</p>
<pre tabindex="0"><code>private void IfTest(object sender, ConditionalEventArgs e)
{
    var currentItem = workflowProperties.Item;
    e.Result = currentItem.Title.StartsWith(&#34;A&#34;);
}

private void TrueTask(object sender, EventArgs e)
{
    var currentItem = workflowProperties.Item;
    currentItem[&#34;Approved&#34;] = true.ToString();
    currentItem.Update();
}
</code></pre>
<p>I then created a test using the same form I did for WF-based testing; I think the comments cover the key points.</p>
<pre tabindex="0"><code>[TestMethod]
public void WorkFlowSwitchOnTitle_TitleStartsWithA_SetApprovedField()
{
    using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
    {
        // Create our fake workflow properties and items
        var fakeProperties = Isolate.Fake.Instance&lt;SPWorkflowActivationProperties&gt;(Members.ReturnRecursiveFakes);
        var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;(Members.ReturnRecursiveFakes);

        var fakeField = Isolate.Fake.Instance&lt;SPField&gt;(Members.ReturnRecursiveFakes);
        fakeField.DefaultValue = false.ToString();
        Isolate.WhenCalled(() =&gt; fakeProperties.Item).WillReturn(fakeItem);
        // setup the if test
        Isolate.WhenCalled(() =&gt; fakeItem.Title).WillReturn(&#34;ABC&#34;);
        Isolate.WhenCalled(() =&gt; fakeItem[&#34;Approved&#34;]).WillReturn(fakeField);

        // setup the workflow handling
        AutoResetEvent waitHandle = new AutoResetEvent(false);
        workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e)
        {
            // don&#39;t put asserts here as they will be in the wrong thread
            waitHandle.Set();
        };

        workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
        {
            // don&#39;t put asserts here as they will be in the wrong thread
            waitHandle.Set();
        };

        // when this is called the constructor is called twice;
        // the first time is for validation of the workflow in this appdomain, see http://odetocode.com/Blogs/scott/archive/2006/03/30/3192.aspx
        // then the real construction is run. The problem is this double run means that Isolate.Swap.NextInstance
        // fails as it attaches to the first validation create, not the second real one
        WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(SharePointWorkflow.Workflow1));

        // SO for this reason we only set up the swap after the first create has been done
        Isolate.Swap.NextInstance&lt;SPWorkflowActivationProperties&gt;().With(fakeProperties);

        // we then create the workflow again; this time it has already been validated so
        // the swap works
        instance = workflowRuntime.CreateWorkflow(typeof(SharePointWorkflow.Workflow1));

        instance.Start();

        // wait for the workflow to complete and then check the methods expected were called
        waitHandle.WaitOne();
        Isolate.Verify.WasCalledWithExactArguments(() =&gt; fakeItem.Update());
        Isolate.Verify.WasCalledWithExactArguments(() =&gt; fakeItem[&#34;Approved&#34;] = &#34;True&#34;);
    }
}
</code></pre>
<p>If you try this without the fix TypeMock provided me with, the two verifies will fail; I am told this is due to a threading issue. It is interesting that this is SharePoint specific, as the same basic method works OK for standard WF workflows.</p>
<p>I also understand this fix will be in the next TypeMock release; I will update this post when I know for sure.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Book Review &#39;Software Testing with Visual Studio Team System 2008&#39;</title>
      <link>https://blog.richardfennell.net/posts/book-review-software-testing-with-visual-studio-team-system-2008/</link>
      <pubDate>Thu, 02 Apr 2009 16:08:38 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/book-review-software-testing-with-visual-studio-team-system-2008/</guid>
      <description>&lt;p&gt;A book arrived recently on my desk: &lt;a href=&#34;http://www.packtpub.com/software-testing-with-visual-studio-team-system-2008/book&#34;&gt;&amp;lsquo;Software Testing with Visual Studio Team System 2008&amp;rsquo; by Subashni. S and N Satheesh Kumar&lt;/a&gt;. On reading it, I found it provides workmanlike coverage of the testing features of Visual Studio 2008, including some of the API options in the testing namespace, but I can&amp;rsquo;t see what it adds to the subject beyond what a user already has access to in the general introductions on MSDN/Help files.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A book arrived recently on my desk: <a href="http://www.packtpub.com/software-testing-with-visual-studio-team-system-2008/book">&lsquo;Software Testing with Visual Studio Team System 2008&rsquo; by Subashni. S and N Satheesh Kumar</a>. On reading it, I found it provides workmanlike coverage of the testing features of Visual Studio 2008, including some of the API options in the testing namespace, but I can&rsquo;t see what it adds to the subject beyond what a user already has access to in the general introductions on MSDN/Help files.</p>
<p>My major problem with the book can be summed up by the blurb on the back: &lsquo;testing is neither an easy process nor remotely exciting for developers&rsquo;. OK, maybe true, but I would argue this is just because they are not <a href="http://junit.sourceforge.net/doc/testinfected/testing.htm">test infected</a> yet. I am not going to go all puritanical here over Test Driven Development, but any developer who feels comfortable about having code that is not as testable as possible is not professional in my eyes. As soon as you look on testing as your safety net it starts to impact all your design decisions and your ways of working; it becomes not boring but a stimulating mental challenge as to how to make your system as testable and robust as possible.</p>
<p>This book covers the basic tools and menu options for Visual Studio 2008 testing, but it does not really explain why to use the various tools; it does not provide a basis for the intent behind the various testing techniques. Knowing the tools exist is not enough; knowing how to apply the technique is the key. I don&rsquo;t actually care how a technique is implemented in a given IDE or testing framework; I can look that up in the vendor&rsquo;s manual when I need it. For example, as a programmer I need to know I could use row-based unit testing techniques, but I don&rsquo;t care about the actual format of the MbUnit or MSTest API; that is for intellisense.</p>
<p>This problem is apparent in the book&rsquo;s focus on the automatic unit test generation tools in VS2008. OK, the wizard exists, but I would never recommend it is used, as it tends to produce fragile tests of little business value. If you want to automatically generate test cases look at <a href="http://msdn.microsoft.com/en-us/devlabs/cc950525.aspx">Pex</a>, but that is too cutting edge for most projects (and beyond the scope of the book), so it is better to focus on good, simple, manually coded unit tests using a good design model, which this book does not cover - I suggest you check a book like <a href="http://www.amazon.co.uk/Art-Unit-Testing-Examples-NET/dp/1933988274/ref=sr_1_6?ie=UTF8&amp;s=books&amp;qid=1238675278&amp;sr=1-6">Roy Osherove&rsquo;s &lsquo;Art of Unit Testing</a>&rsquo; for this type of material.</p>
<p>For me this book, like so many, is coming from the wrong end of the problem. The tools are important and some are better than others, but it is far better to drive the knowledge from the technique to the tool as opposed to the other way round. In my opinion the only case where a book on a tool is that useful is when it provides tips and tricks (like Sara Ford&rsquo;s excellent <a href="http://www.amazon.co.uk/Microsoft-Visual-Studio-Tips-PRO-Developer/dp/0735626405/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1238688380&amp;sr=8-1">Visual Studio Tips: 251 Ways to Improve Your Productivity</a>), but this book does not really provide any good tips or tricks either.</p>
<p>So if you are looking for a printed, bound copy of basically the same content as the MSDN on your desk then have a look at this book, but you could just read the introductions in MSDN online. However, in my opinion, neither is the place to start if you want to learn how to write good quality tests; go back to basics and learn the theory and techniques.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SQLBits IV Slides</title>
      <link>https://blog.richardfennell.net/posts/sqlbits-iv-slides/</link>
      <pubDate>Tue, 31 Mar 2009 11:54:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sqlbits-iv-slides/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session at SQLBits IV. &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2009/SQLBits%20IV%20-%20&#34;&gt;My slides on Visual Studio 2008 Database edition GDR release  can be downloaded from the Black Marble website&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Also a big thank you to the organisers for putting on such a successful event; it is really good not to have to travel to London or Reading for IT events.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session at SQLBits IV. <a href="http://www.blackmarble.co.uk/ConferencePapers/2009/SQLBits%20IV%20-%20">My slides on Visual Studio 2008 Database edition GDR release  can be downloaded from the Black Marble website</a>.</p>
<p>Also a big thank you to the organisers for putting on such a successful event; it is really good not to have to travel to London or Reading for IT events.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Alt.net &#39;in the north&#39; still has some spaces</title>
      <link>https://blog.richardfennell.net/posts/alt-net-in-the-north-still-has-some-spaces/</link>
      <pubDate>Tue, 31 Mar 2009 11:43:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alt-net-in-the-north-still-has-some-spaces/</guid>
      <description>&lt;p&gt;As I am sure you noticed, there was an unfortunate clash of dates between &lt;a href=&#34;http://www.altdotnetuknorth.info&#34;&gt;Alt.net &amp;lsquo;in the North&lt;/a&gt;&amp;rsquo; in Bradford and &lt;a href=&#34;http://developerdeveloperdeveloper.com/webdd09&#34;&gt;WebDD&lt;/a&gt; in Reading on the 18th of April. Well, I see that over the weekend WebDD has published its agenda, opened registrations and is now full.&lt;/p&gt;
&lt;p&gt;So if you are disappointed that you can&amp;rsquo;t get a place for the WebDD event, have you thought of trying an &lt;a href=&#34;http://en.wikipedia.org/wiki/Open-space_meeting&#34;&gt;Open Spaces event&lt;/a&gt; like Alt.net? We still have some spaces left. The key difference of this type of event, as opposed to the more traditional lecture format of WebDD, is that the attendees decide the agenda when they arrive and most sessions take the format of an open discussion of people&amp;rsquo;s real work experiences and opinions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As I am sure you noticed, there was an unfortunate clash of dates between <a href="http://www.altdotnetuknorth.info">Alt.net &lsquo;in the North</a>&rsquo; in Bradford and <a href="http://developerdeveloperdeveloper.com/webdd09">WebDD</a> in Reading on the 18th of April. Well, I see that over the weekend WebDD has published its agenda, opened registrations and is now full.</p>
<p>So if you are disappointed that you can&rsquo;t get a place for the WebDD event, have you thought of trying an <a href="http://en.wikipedia.org/wiki/Open-space_meeting">Open Spaces event</a> like Alt.net? We still have some spaces left. The key difference of this type of event, as opposed to the more traditional lecture format of WebDD, is that the attendees decide the agenda when they arrive and most sessions take the format of an open discussion of people&rsquo;s real work experiences and opinions.</p>
<p>Go on, give it a try, it might be fun!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking on Visual Studio Team System Database Edition GDR</title>
      <link>https://blog.richardfennell.net/posts/speaking-on-visual-studio-team-system-database-edition-gdr/</link>
      <pubDate>Wed, 25 Mar 2009 16:22:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-on-visual-studio-team-system-database-edition-gdr/</guid>
      <description>&lt;p&gt;As well as speaking on the GDR release of VSTS Database Edition at &lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;SQLbits&lt;/a&gt; this weekend I will also be at &lt;a href=&#34;http://www.vbug.co.uk/events/default.aspx&#34;&gt;VBug Newcastle on the 22nd April.&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As well as speaking on the GDR release of VSTS Database Edition at <a href="http://www.sqlbits.com/">SQLbits</a> this weekend I will also be at <a href="http://www.vbug.co.uk/events/default.aspx">VBug Newcastle on the 22nd April.</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Visual Studio 2008 Database Edition GDR release - Createdeployment</title>
      <link>https://blog.richardfennell.net/posts/visual-studio-2008-database-edition-gdr-release-createdeployment/</link>
      <pubDate>Sun, 22 Mar 2009 22:28:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/visual-studio-2008-database-edition-gdr-release-createdeployment/</guid>
      <description>&lt;p&gt;Whilst preparing for my session at &lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;SQLBits&lt;/a&gt; next weekend I re-watched &lt;a href=&#34;http://channel9.msdn.com/pdc2008/TL45/Default.aspx&#34;&gt;Gert Drapers&amp;rsquo; PDC session (TL45) where he used a command tool to deploy a database via a USB pen drive (about 30 minutes into the session)&lt;/a&gt;. Now it seems that the &lt;strong&gt;createdeployment&lt;/strong&gt; command line tool he used is not currently available outside Microsoft, but the same effect can be achieved using the &lt;a href=&#34;http://msdn.microsoft.com/en-us/library/dd193258.aspx&#34;&gt;VSDBCMD&lt;/a&gt; command.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt; - get the files onto the distribution device&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst preparing for my session at <a href="http://www.sqlbits.com/">SQLBits</a> next weekend I re-watched <a href="http://channel9.msdn.com/pdc2008/TL45/Default.aspx">Gert Drapers&rsquo; PDC session (TL45) where he used a command tool to deploy a database via a USB pen drive (about 30 minutes into the session)</a>. Now it seems that the <strong>createdeployment</strong> command line tool he used is not currently available outside Microsoft, but the same effect can be achieved using the <a href="http://msdn.microsoft.com/en-us/library/dd193258.aspx">VSDBCMD</a> command.</p>
<p><strong>Step 1</strong> - get the files onto the distribution device</p>
<p>The first step is to build the distribution media, this is just an XCOPY process. As the MSDN documentation says you need to end up with the following directory structure on your USB drive; for this example I used <strong>G:</strong> for USB drive letter and <strong>Database1</strong> for the name of the database I want to distribute</p>
<p><strong>G:</strong> copy the contents of [Program Files]\Microsoft Visual Studio 9.0\VSTSDB\Deploy &amp; sub directories<br>
<strong>G:</strong> copy the dlls from the [Program Files]\Microsoft SQL Server Compact Edition\v3.5 folder<br>
<strong>G:\Database1</strong> copy the contents of the [ProjectsFolder]\DataBase1Solution\Database1\sql\debug or release directory after the DB project is built</p>
<p><strong>Step 2</strong> - A script to do the deploy</p>
<p>It is now a simple process of running the command line tool, but this is a bit long to type each time so I used a batch file. My command usage was</p>
<p><em>deploy [DB name] [SQL server instance]</em><br>
e.g. <em>deploy database1 .\sqlexpress</em></p>
<p>The contents of the actual <strong>deploy.bat</strong> batch file is as follows. Note there are many more options you can set but this seems to be the basic minimum</p>
<p><em>VSDBCMD /a:Deploy /dsp:Sql /cs:&#34;Server=%2;Database=%1;Trusted_Connection=yes;&#34; /model:%1\%1.dbschema /manifest:%1\%1.deploymanifest /script:%1\%1.sql /dd</em></p>
<p>Using this batch file a new instance of a database can be created or an existing one updated.</p>
<p><strong>Note:</strong> When I first tried to get this going I kept getting SQL file create errors, which appeared as <strong>TSD01268</strong> errors in the deployment log. Eventually I realised the problem. I was running on a 64-bit Windows 7 PC. My default SQLExpress instance, running as the Network Service account, was set up to, and had rights to, create files in <em>C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Data</em> but not in <em>C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data</em>. Unfortunately VSDBCMD tried to use the second location. Once the SQL instance was set to default to the second location and suitable rights were provided, all worked correctly.</p>
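<p>For reference, here is a sketch of what the complete <strong>deploy.bat</strong> described above might look like. It assumes the Step 1 layout, with the .dbschema and .deploymanifest files sitting in a sub directory named after the database; adjust the paths if your layout differs.</p>
<pre tabindex="0"><code>@echo off
REM deploy.bat - run from the root of the distribution drive (G: in this example)
REM Usage: deploy [DB name] [SQL server instance]
REM e.g.:  deploy Database1 .\sqlexpress
VSDBCMD /a:Deploy /dsp:Sql /cs:"Server=%2;Database=%1;Trusted_Connection=yes;" /model:%1\%1.dbschema /manifest:%1\%1.deploymanifest /script:%1\%1.sql /dd
</code></pre>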
]]></content:encoded>
    </item>
    <item>
      <title>Free hotel internet - if you dare touch it</title>
      <link>https://blog.richardfennell.net/posts/free-hotel-internet-if-you-dare-touch-it/</link>
      <pubDate>Fri, 20 Mar 2009 10:39:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/free-hotel-internet-if-you-dare-touch-it/</guid>
      <description>&lt;p&gt;I have been away at the MVP summit and then on holiday for a couple of weeks. On this trip every hotel I stayed in had free internet, though often only wired. It seems many hotels have WiFi contracts with mobile phone companies which still charge for access.&lt;/p&gt;
&lt;p&gt;So, top tip: make sure you have a UTP cable in your bag.&lt;/p&gt;
&lt;p&gt;The &amp;lsquo;most interesting&amp;rsquo; system I had was at a hotel in Whistler.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been away at the MVP summit and then on holiday for a couple of weeks. On this trip every hotel I stayed in had free internet, though often only wired. It seems many hotels have WiFi contracts with mobile phone companies which still charge for access.</p>
<p>So, top tip: make sure you have a UTP cable in your bag.</p>
<p>The &lsquo;most interesting&rsquo; system I had was at a hotel in Whistler.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/IMAG0105_457FCD65.jpg"><img alt="IMAG0105" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/IMAG0105_thumb_7E2A7772.jpg" title="IMAG0105"></a></p>
<p>I decided not to touch it and used the free Wifi from a nearby coffee shop!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Do you need work item hierarchy in TFS? (Part 2)</title>
      <link>https://blog.richardfennell.net/posts/do-you-need-work-item-hierarchy-in-tfs-part-2/</link>
      <pubDate>Fri, 20 Mar 2009 10:33:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/do-you-need-work-item-hierarchy-in-tfs-part-2/</guid>
      <description>&lt;p&gt;I did a post before going to the MVP summit called &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/02/25/do-you-need-work-item-hierarchy-in-tfs.aspx&#34;&gt;Do you need work item hierarchy in TFS?&lt;/a&gt; which mentioned the new tools from Notion for TFS.&lt;/p&gt;
&lt;p&gt;Well, there are many ways to skin a cat, and also to produce a hierarchy of TFS work items. Also have a look at &lt;a href=&#34;http://www.alm-tools.de/#Page=10&#34;&gt;Artiso Workitem Manager&lt;/a&gt;. This product is interesting as it provides tools that work outside of Visual Studio to manage hierarchies of work items. This might be just what you need for people such as project managers who do not live in a development environment. The nicest feature for me is that it allows the import and export of work items into Word documents, thus allowing production of specification documents from TFS with the click of a button.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I did a post before going to the MVP summit called <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2009/02/25/do-you-need-work-item-hierarchy-in-tfs.aspx">Do you need work item hierarchy in TFS?</a> which mentioned the new tools from Notion for TFS.</p>
<p>Well, there are many ways to skin a cat, and also to produce a hierarchy of TFS work items. Also have a look at <a href="http://www.alm-tools.de/#Page=10">Artiso Workitem Manager</a>. This product is interesting as it provides tools that work outside of Visual Studio to manage hierarchies of work items. This might be just what you need for people such as project managers who do not live in a development environment. The nicest feature for me is that it allows the import and export of work items into Word documents, thus allowing production of specification documents from TFS with the click of a button.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Error 1722 when installing MVC 1.0</title>
      <link>https://blog.richardfennell.net/posts/error-1722-when-installing-mvc-1-0/</link>
      <pubDate>Thu, 19 Mar 2009 17:09:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/error-1722-when-installing-mvc-1-0/</guid>
      <description>&lt;p&gt;As I am sure you will have heard, &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?FamilyID=53289097-73ce-43bf-b6a6-35e00103cb4b&amp;amp;displaylang=en&#34;&gt;MVC 1.0&lt;/a&gt; has been released. I downloaded this on my Windows 7 machine and tried to install it, but got an Error 1722; the MSI then rolled back.&lt;/p&gt;
&lt;p&gt;Turns out the problem was simply that I had Visual Studio open at the time I installed, it needs to be closed. It seems the MSI does not check for this before it attempts the copy.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As I am sure you will have heard, <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=53289097-73ce-43bf-b6a6-35e00103cb4b&amp;displaylang=en">MVC 1.0</a> has been released. I downloaded this on my Windows 7 machine and tried to install it, but got an Error 1722; the MSI then rolled back.</p>
<p>Turns out the problem was simply that I had Visual Studio open at the time I installed, it needs to be closed. It seems the MSI does not check for this before it attempts the copy.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Request for help with PhD Research</title>
      <link>https://blog.richardfennell.net/posts/request-for-help-with-phd-research/</link>
      <pubDate>Thu, 26 Feb 2009 21:34:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/request-for-help-with-phd-research/</guid>
      <description>&lt;p&gt;Whilst at SC2009 today a request was made by Emad Ghoshen for attendees, and any other developers they knew, to assist him in his PhD research into maintainability of web applications.&lt;/p&gt;
&lt;p&gt;He asked if people could download some Java/JSP code and answer a few questions on it. Don&amp;rsquo;t worry if these are not your usual languages; this is one of the questions he is researching.&lt;/p&gt;
&lt;p&gt;All the details can be found at &lt;a href=&#34;http://www.sueblack.co.uk/clarosexp.html&#34; title=&#34;http://www.sueblack.co.uk/clarosexp.html&#34;&gt;http://www.sueblack.co.uk/clarosexp.html&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst at SC2009 today a request was made by Emad Ghoshen for attendees, and any other developers they knew, to assist him in his PhD research into maintainability of web applications.</p>
<p>He asked if people could download some Java/JSP code and answer a few questions on it. Don&rsquo;t worry if these are not your usual languages; this is one of the questions he is researching.</p>
<p>All the details can be found at <a href="http://www.sueblack.co.uk/clarosexp.html" title="http://www.sueblack.co.uk/clarosexp.html">http://www.sueblack.co.uk/clarosexp.html</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Intent is the key - thoughts on the way home from Software Craftsmanship 2009</title>
      <link>https://blog.richardfennell.net/posts/intent-is-the-key-thoughts-on-the-way-home-form-software-craftsmanship-2009/</link>
      <pubDate>Thu, 26 Feb 2009 21:33:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/intent-is-the-key-thoughts-on-the-way-home-form-software-craftsmanship-2009/</guid>
      <description>&lt;p&gt;Today has been interesting. I have been to conferences where you sit and listen, such as DDD, TechEd etc., and I have been to conferences where everyone is encouraged to talk, open spaces style, such as Alt.Net; but today has fallen between the two styles.&lt;/p&gt;
&lt;p&gt;The Software Craftsmanship 2009 conference has been more of a workshop style; most sessions have started with a short presentation to set the scene, then the attendees split to form small groups to do an exercise or chat, reporting back later in the session. A sort of led open spaces feel, if you want.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today has been interesting. I have been to conferences where you sit and listen, such as DDD, TechEd etc., and I have been to conferences where everyone is encouraged to talk, open spaces style, such as Alt.Net; but today has fallen between the two styles.</p>
<p>The Software Craftsmanship 2009 conference has been more of a workshop style; most sessions have started with a short presentation to set the scene, then the attendees split to form small groups to do an exercise or chat, reporting back later in the session. A sort of led open spaces feel, if you want.</p>
<p>As usual with events you need to let what you heard sink in, but I think it will be useful. Not so much in the &lsquo;I must do X to fix project Y&rsquo; sense, but in the general approach to development issues. This was a conference on craftsmanship and best practice in general, not magic bullets. A good example was in the session on responsibility driven design with mock objects, where a good deal of time was spent discussing the importance of variable/object names in the design. From this session you should not take away that &lsquo;View&rsquo; is a bad name and &lsquo;Display&rsquo; is a good one; but that the choice of name is important to how you will view the intent of the test and the code you are writing.</p>
<p>I suppose this was the theme for the day: in development, intent is key; why you do something is more critical than how. It is only through a clear understanding of the intent of the business users that a developer can hope to design the best system. So often what the client asks for is based on what they think can be done, and unless this requirement is challenged to get at the underlying intent, the best solution (whatever best means to the project) will be missed. The same holds true with writing tests: it is vital that the test conveys the intent of what is being tested, else there is little hope for any future maintenance work when all the original staff have moved on. This means to me that the most important part of the user story is the &lsquo;so that they can&rsquo; clause at the end; it is so often the window onto the real intent behind the story.</p>
<p>So an excellent day all round. Thanks to Jason Gorman and everyone else who helped to organise the event. I look forward to next year&rsquo;s, and so should you if you are interested in your craft&hellip;</p>
]]></content:encoded>
    </item>
    <item>
      <title>Do you need work item hierarchy in TFS?</title>
      <link>https://blog.richardfennell.net/posts/do-you-need-work-item-hierarchy-in-tfs/</link>
      <pubDate>Wed, 25 Feb 2009 16:14:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/do-you-need-work-item-hierarchy-in-tfs/</guid>
      <description>&lt;p&gt;If so, have a look at &lt;a href=&#34;http://www.notionsolutions.com/Products/Pages/default.aspx&#34;&gt;Notion Tools for Team System&lt;/a&gt;. This set of tools provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A timesheet based on work items that you access inside Visual Studio&lt;/li&gt;
&lt;li&gt;A work organizer to manage work items and documents, allowing creation of hierarchies&lt;/li&gt;
&lt;li&gt;A work planner to help schedule resources for future iterations.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Well worth a look as a means to extend the reach of TFS into your Agile projects.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If so, have a look at <a href="http://www.notionsolutions.com/Products/Pages/default.aspx">Notion Tools for Team System</a>. This set of tools provides:</p>
<ul>
<li>A timesheet based on work items that you access inside Visual Studio</li>
<li>A work organizer to manage work items and documents, allowing creation of hierarchies</li>
<li>A work planner to help schedule resources for future iterations.</li>
</ul>
<p>Well worth a look as a means to extend the reach of TFS into your Agile projects.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SLExtensions HTMLEditor</title>
      <link>https://blog.richardfennell.net/posts/slextensions-htmleditor/</link>
      <pubDate>Wed, 25 Feb 2009 14:32:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/slextensions-htmleditor/</guid>
      <description>&lt;p&gt;I have been looking at porting an old content editor I wrote from WinForms to Silverlight and hit the problem that there was no HTML editor control available in the standard Silverlight 2 control set. Roll in the excellent &lt;a href=&#34;http://www.codeplex.com/SLExtensions&#34;&gt;SLExtensions controls on CodePlex&lt;/a&gt; to save the day&amp;hellip;&lt;/p&gt;
&lt;p&gt;Now I did hit one problem with the HTMLEditor, which was addressed very quickly in their support forums. The point to watch out for is that for the HTMLEditor control to work, the Silverlight object must be loaded into the web page with the windowless setting enabled.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been looking at porting an old content editor I wrote from WinForms to Silverlight and hit the problem that there was no HTML editor control available in the standard Silverlight 2 control set. Roll in the excellent <a href="http://www.codeplex.com/SLExtensions">SLExtensions controls on CodePlex</a> to save the day&hellip;</p>
<p>Now I did hit one problem with the HTMLEditor, which was addressed very quickly in their support forums. The point to watch out for is that for the HTMLEditor control to work, the Silverlight object must be loaded into the web page with the windowless setting enabled:</p>
<pre><code>&lt;object id="appId" data="data:application/x-silverlight," type="application/x-silverlight-2" width="100%" height="100%"&gt;
    &lt;!-- all the other parameters --&gt;
    &lt;param name="windowless" value="true" /&gt;
&lt;/object&gt;
</code></pre>
<p>If you just add the object with default settings to a new web page, or let Visual Studio generate a dynamic test page, then this is not set. You end up rendering the editor but cannot enter text.</p>
<p>Hope this saves you some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Alt.Net UK &#39;In the North&#39; Registration is now open</title>
      <link>https://blog.richardfennell.net/posts/alt-net-uk-in-the-north-registration-is-now-open/</link>
      <pubDate>Tue, 24 Feb 2009 12:29:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alt-net-uk-in-the-north-registration-is-now-open/</guid>
      <description>&lt;p&gt;You can now register at &lt;a href=&#34;http://www.altdotnetuknorth.info/&#34; title=&#34;http://www.altdotnetuknorth.info/&#34;&gt;http://www.altdotnetuknorth.info/&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can now register at <a href="http://www.altdotnetuknorth.info/" title="http://www.altdotnetuknorth.info/">http://www.altdotnetuknorth.info/</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>New URL for Agile Yorkshire (nee XPClub)</title>
      <link>https://blog.richardfennell.net/posts/new-url-for-agile-yorkshire-nee-xpclub/</link>
      <pubDate>Mon, 23 Feb 2009 21:28:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-url-for-agile-yorkshire-nee-xpclub/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://agileyorkshire.org&#34;&gt;agileyorkshire.org&lt;/a&gt; is the new home for the Agile Yorkshire user group. If you follow the link you will see that next month&amp;rsquo;s subject is &lt;strong&gt;Test Doubles: An Introduction To Unit Test Patterns.&lt;/strong&gt; Unfortunately I won&amp;rsquo;t be able to make this session as I will be out of the country, but it sounds interesting.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://agileyorkshire.org">agileyorkshire.org</a> is the new home for the Agile Yorkshire user group. If you follow the link you will see that next month&rsquo;s subject is <strong>Test Doubles: An Introduction To Unit Test Patterns.</strong> Unfortunately I won&rsquo;t be able to make this session as I will be out of the country, but it sounds interesting.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Excellent selection of speakers at the May Progressive .NET Tutorials</title>
      <link>https://blog.richardfennell.net/posts/excellent-selection-of-speakers-at-the-may-progressive-net-tutorials/</link>
      <pubDate>Mon, 23 Feb 2009 21:05:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/excellent-selection-of-speakers-at-the-may-progressive-net-tutorials/</guid>
      <description>&lt;p&gt;You may have seen on a few blogs that Skills Matter are organising the Progressive .NET Tutorials, a 3-day event in May. I have to say that the selection of speakers is excellent, including Hammett, Ayende Rahien, David Laribee, Gojko Adzic, Ian Cooper, Mike Hadlow, Scott Belware and Sebastien Lambla, on subjects such as NHibernate, Castle, Monorail, Agile Testing, Web Testing, DSLs in C#, OpenRasta, Windsor WCF, MEF (Microsoft&amp;rsquo;s Managed Extensibility Framework) and more&amp;hellip;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You may have seen on a few blogs that Skills Matter are organising the Progressive .NET Tutorials, a 3-day event in May. I have to say that the selection of speakers is excellent, including Hammett, Ayende Rahien, David Laribee, Gojko Adzic, Ian Cooper, Mike Hadlow, Scott Belware and Sebastien Lambla, on subjects such as NHibernate, Castle, Monorail, Agile Testing, Web Testing, DSLs in C#, OpenRasta, Windsor WCF, MEF (Microsoft&rsquo;s Managed Extensibility Framework) and more&hellip;</p>
<p>For the full programme and description see <a href="http://skillsmatter.com/event/open-source-dot-net/progressive-dot-net-exchange">http://skillsmatter.com/event/open-source-dot-net/progressive-dot-net-exchange</a>. There is a massive discount available to blog readers for these <a href="http://skillsmatter.com/event/open-source-dot-net/progressive-dot-net-exchange">May .NET workshops</a>. If you were thinking of signing up, quote SM1368-622459-33L in the promo code field, which allows you to book the workshops for £350 (normal ticket price is £1000).</p>
]]></content:encoded>
    </item>
    <item>
      <title>Update on Alt.Net &#39;In the North&#39; Conference</title>
      <link>https://blog.richardfennell.net/posts/update-on-alt-net-in-the-north-conference/</link>
      <pubDate>Fri, 20 Feb 2009 09:06:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-on-alt-net-in-the-north-conference/</guid>
      <description>&lt;p&gt;There is a bit more information on the Alt.Net conference now at &lt;a href=&#34;http://www.altdotnetuknorth.info/&#34; title=&#34;http://www.altdotnetuknorth.info/&#34;&gt;http://www.altdotnetuknorth.info/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Registration will open next week at noon on Tuesday the 24th February.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is a bit more information on the Alt.Net conference now at <a href="http://www.altdotnetuknorth.info/" title="http://www.altdotnetuknorth.info/">http://www.altdotnetuknorth.info/</a></p>
<p>Registration will open next week at noon on Tuesday the 24th February.</p>
]]></content:encoded>
    </item>
    <item>
      <title>If Alt.Net is not to your taste.....</title>
      <link>https://blog.richardfennell.net/posts/if-alt-net-is-not-to-your-taste/</link>
      <pubDate>Wed, 18 Feb 2009 15:58:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/if-alt-net-is-not-to-your-taste/</guid>
      <description>&lt;p&gt;On the same day as the Alt.Net event I am organising, there is a community event at TVP: &amp;lsquo;&lt;a href=&#34;http://developerdeveloperdeveloper.com/webdd09/Default.aspx&#34;&gt;WebDD&#39;09&lt;/a&gt; - &lt;em&gt;With all the latest stuff from MIX 09&lt;/em&gt;&amp;rsquo;.&lt;/p&gt;
&lt;p&gt;We are spoilt; there is so much choice in community events at this time of year.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>On the same day as the Alt.Net event I am organising, there is a community event at TVP: &lsquo;<a href="http://developerdeveloperdeveloper.com/webdd09/Default.aspx">WebDD'09</a> - <em>With all the latest stuff from MIX 09</em>&rsquo;.</p>
<p>We are spoilt; there is so much choice in community events at this time of year.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Announcing the Alt.Net.UK &#39;in the North&#39; Conference</title>
      <link>https://blog.richardfennell.net/posts/announcing-the-alt-net-uk-in-the-north-conference/</link>
      <pubDate>Wed, 18 Feb 2009 10:50:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/announcing-the-alt-net-uk-in-the-north-conference/</guid>
      <description>&lt;p&gt;I am pleased to be able to announce that there will be an Alt.Net Open Space Conference in Bradford on the 17th/18th April this year. I had mentioned my intention of organising such an event at the last London conference, but it has taken a bit longer than expected to get sorted due to problems with securing the venue.&lt;/p&gt;
&lt;p&gt;The event will be hosted by &lt;strong&gt;Black Marble at their office in Bradford, West Yorkshire&lt;/strong&gt;, and there will be space for 50 attendees. The format will be the same as previous UK Alt.net conferences.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am pleased to be able to announce that there will be an Alt.Net Open Space Conference in Bradford on the 17th/18th April this year. I had mentioned my intention of organising such an event at the last London conference, but it has taken a bit longer than expected to get sorted due to problems with securing the venue.</p>
<p>The event will be hosted by <strong>Black Marble at their office in Bradford, West Yorkshire</strong>, and there will be space for 50 attendees. The format will be the same as previous UK Alt.net conferences.</p>
<ul>
<li>A planning session on <strong>Friday the 17th April from 7pm to 8:30pm</strong>, followed by a trip to a bar to socialise</li>
<li>The open spaces sessions from <strong>9am to 4:30pm on Saturday 18th April</strong>.</li>
</ul>
<p>As well as providing a venue for the event, Black Marble has kindly agreed to also sponsor lunch on the Saturday.</p>
<p>Other offers of sponsorship will be greatly appreciated.</p>
<p>I will get some more detail posted on the web, such as registration, local hotels etc. ASAP. Check this blog for details [Update - See <a href="http://www.altdotnetuknorth.info/" title="http://www.altdotnetuknorth.info/">http://www.altdotnetuknorth.info/</a>]</p>
<p><strong>What is the conference format?</strong><br>
The conference will be based on the open spaces format.</p>
<p>An Open Space conference&rsquo;s agenda is decided upon by the conference participants during the opening of the event. Whoever shows up is the right group. Whatever happens is the only thing that could have. Whenever it starts is the right time. When it&rsquo;s over, it&rsquo;s over. </p>
<p><strong>What is Alt.Net?</strong><br>
Various blog posts have defined <a href="http://en.wikipedia.org/wiki/Alt.net">Alt.Net</a>. The term was originally coined by <a href="http://codebetter.com/blogs/david_laribee/">David Laribee on his blog</a>.</p>
<p><strong>Who are the organisers?</strong><br>
<a href="http://blogs.blackmarble.co.uk/blogs/rfennell">Richard Fennell</a>, <a href="http://blogs.blackmarble.co.uk/blogs/iangus/default.aspx">Iain Angus</a>  and <a href="http://mckennatribe.com/">Nick McKenna</a></p>
<p><a href="http://www.blackmarble.co.uk">Black Marble</a> is providing sponsorship and logistical support.</p>
<p><img loading="lazy" src="/wp-content/uploads/sites/2/historic/altdotnet_2.jpg"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Buzzword bingo in the cloud</title>
      <link>https://blog.richardfennell.net/posts/buzzword-bingo-in-the-cloud/</link>
      <pubDate>Fri, 13 Feb 2009 13:26:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/buzzword-bingo-in-the-cloud/</guid>
      <description>&lt;p&gt;At today&amp;rsquo;s Azure event I heard a new word for my occasional buzzword posts.&lt;/p&gt;
&lt;p&gt;Marketechture - an architecture designed by marketing for use in PowerPoint.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At today&rsquo;s Azure event I heard a new word for my occasional buzzword posts.</p>
<p>Marketechture - an architecture designed by marketing for use in PowerPoint.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SQLBits IV registration is open</title>
      <link>https://blog.richardfennell.net/posts/sqlbits-iv-registration-is-open/</link>
      <pubDate>Wed, 11 Feb 2009 14:33:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sqlbits-iv-registration-is-open/</guid>
      <description>&lt;p&gt;You can now &lt;a href=&#34;http://www.sqlbits.com/information/Registration.aspx&#34;&gt;register for SQLBits IV in Manchester on the 28th March&lt;/a&gt;. Hurry if you want to attend, as these free conferences do tend to fill up quickly.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://sqlbits.com/&#34;&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://sqlbits.com/images/sqlbits/SQLBItsNewLogo%20IV.png&#34; title=&#34;Vote for the new SQLBits Logo in the Logo Competition&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can now <a href="http://www.sqlbits.com/information/Registration.aspx">register for SQLBits IV in Manchester on the 28th March</a>. Hurry if you want to attend, as these free conferences do tend to fill up quickly.</p>
<p><a href="http://sqlbits.com/"><img loading="lazy" src="http://sqlbits.com/images/sqlbits/SQLBItsNewLogo%20IV.png" title="Vote for the new SQLBits Logo in the Logo Competition"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>My session has been selected for SQLBits IV</title>
      <link>https://blog.richardfennell.net/posts/my-session-has-been-selected-for-sqlbits-iv/</link>
      <pubDate>Sun, 08 Feb 2009 20:44:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-session-has-been-selected-for-sqlbits-iv/</guid>
      <description>&lt;p&gt;Thanks to everyone who voted for my session &amp;lsquo;Making the SQL developer one of the family with Visual Studio Team System&amp;rsquo;. It was successful in the selection process and so will be on the agenda at SQLBits IV on 28th March 2009 in Manchester.&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who voted for my session &lsquo;Making the SQL developer one of the family with Visual Studio Team System&rsquo;. It was successful in the selection process and so will be on the agenda at SQLBits IV on 28th March 2009 in Manchester.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Missing .NET framework installing a VSTO application</title>
      <link>https://blog.richardfennell.net/posts/missing-net-framework-installing-a-vsto-application/</link>
      <pubDate>Sat, 07 Feb 2009 18:36:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/missing-net-framework-installing-a-vsto-application/</guid>
      <description>&lt;p&gt;I have been getting the error &lt;em&gt;&amp;ldquo;The required version of the .NET Framework is not installed on this computer.&amp;rdquo;&lt;/em&gt; (event id 4096 in the event log) when trying to install a VSTO application from both a ClickOnce deployment and a local copy. This is interesting as the .NET framework is installed (on my 64-bit Windows 7 PC) and the VSTO application was developed on the selfsame machine (and works in Visual Studio 2008).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been getting the error <em>&ldquo;The required version of the .NET Framework is not installed on this computer.&rdquo;</em> (event id 4096 in the event log) when trying to install a VSTO application from both a ClickOnce deployment and a local copy. This is interesting as the .NET framework is installed (on my 64-bit Windows 7 PC) and the VSTO application was developed on the selfsame machine (and works in Visual Studio 2008).</p>
<p>The fix, it turned out, was to uninstall Microsoft Visual Studio Tools for the Microsoft Office system (version 3.0 Runtime) (x86) and then reinstall it; once this was done the install worked fine.</p>
<p><strong>Update 17th Feb 09</strong> A good list of what should be installed can be found at <a href="http://blogs.msdn.com/vsto/archive/2008/02/19/deploying-prerequisites-for-your-visual-studio-tools-for-office-solution.aspx">http://blogs.msdn.com/vsto/archive/2008/02/19/deploying-prerequisites-for-your-visual-studio-tools-for-office-solution.aspx</a>. However, there does seem to be an install issue around the order of installation that can leave incorrect development resources behind, so a look at the diagnostic tools at <a href="http://www.microsoft.com/downloads/details.aspx?familyid=46b6bf86-e35d-4870-b214-4d7b72b02bf9&amp;displaylang=en">http://www.microsoft.com/downloads/details.aspx?familyid=46b6bf86-e35d-4870-b214-4d7b72b02bf9&amp;displaylang=en</a> would not go amiss. This should be an easy process when using VS2008 SP1, but it does seem problematic.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Free events in the coming week</title>
      <link>https://blog.richardfennell.net/posts/free-events-in-the-coming-week/</link>
      <pubDate>Sat, 07 Feb 2009 11:51:09 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/free-events-in-the-coming-week/</guid>
      <description>&lt;p&gt;I know of two free events in Yorkshire next week, for those interested:&lt;/p&gt;
&lt;p&gt;On Wednesday the 11th is the regular meeting of the &lt;a href=&#34;http://xpclub.erudine.com/2009/01/11th-february-2009-test-driven.html&#34;&gt;Agile Yorkshire user group&lt;/a&gt;, where we will be enjoying a presentation on Test Driven Development by user group regulars Adam and Neil from Masternaut ThreeX. This will start at 7pm as normal, but some of us are planning to arrive at about 6:30 to discuss some administrative issues of the user group. So if you are interested, just turn up at the Victoria Hotel.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I know of two free events in Yorkshire next week, for those interested:</p>
<p>On Wednesday the 11th is the regular meeting of the <a href="http://xpclub.erudine.com/2009/01/11th-february-2009-test-driven.html">Agile Yorkshire user group</a>, where we will be enjoying a presentation on Test Driven Development by user group regulars Adam and Neil from Masternaut ThreeX. This will start at 7pm as normal, but some of us are planning to arrive at about 6:30 to discuss some administrative issues of the user group. So if you are interested, just turn up at the Victoria Hotel.</p>
<p>On Friday the 13th we at Black Marble are hosting a Microsoft Azure technical briefing, where <a href="http://blogs.msdn.com/david_gristwood/">Dave Gristwood</a> and other members of the DPE team will be covering how this new technology can be used in your ongoing IT strategy. For this event you need to <a href="http://blogs.msdn.com/ukisvdev/archive/2009/01/14/new-dates-for-azure-technical-briefing-announced.aspx">book in advance, so check the DPE blog for details</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Which way to fake an item in Typemock</title>
      <link>https://blog.richardfennell.net/posts/which-way-to-fake-an-item-in-typemock/</link>
      <pubDate>Thu, 05 Feb 2009 21:22:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/which-way-to-fake-an-item-in-typemock/</guid>
      <description>&lt;p&gt;&lt;em&gt;I raised a&lt;/em&gt; &lt;a href=&#34;http://www.typemock.com/community/viewtopic.php?topic=1147&amp;amp;forum=5&#34;&gt;&lt;em&gt;question on the Typemock forum&lt;/em&gt;&lt;/a&gt; &lt;em&gt;concerning a problem I was having mocking Sharepoint SPFarm objects. It was all down to which way to fake items using the various techniques in Isolator. It was interesting enough, I thought, to repeat here as a blog post.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I had written some tests for a method that got a list of SiteCollections that a user had rights to access. The key point being the need to access the static property &lt;strong&gt;SPFarm.Local&lt;/strong&gt; to get a list of Sharepoint services to iterate across. If I ran each test by itself it worked; but if run as a batch in TestDriven.Net or MSTest the first passed and the rest failed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>I raised a</em> <a href="http://www.typemock.com/community/viewtopic.php?topic=1147&amp;forum=5"><em>question on the Typemock forum</em></a> <em>concerning a problem I was having mocking Sharepoint SPFarm objects. It was all down to which way to fake items using the various techniques in Isolator. It was interesting enough, I thought, to repeat here as a blog post.</em></p>
<p>I had written some tests for a method that got a list of SiteCollections that a user had rights to access. The key point being the need to access the static property <strong>SPFarm.Local</strong> to get a list of Sharepoint services to iterate across. If I ran each test by itself it worked; but if run as a batch in TestDriven.Net or MSTest the first passed and the rest failed.</p>
<p>The problem was down to how I was creating the fake SPFarm. I was using:</p>
<pre><code>SPFarm fakeFarm = Isolate.Fake.Instance&lt;SPFarm&gt;(Members.ReturnRecursiveFakes);
Isolate.Swap.NextInstance&lt;SPFarm&gt;().With(fakeFarm);
</code></pre>
<p>when I should have used:</p>
<pre><code>SPFarm fakeFarm = Isolate.Fake.Instance&lt;SPFarm&gt;(Members.ReturnRecursiveFakes);
Isolate.WhenCalled(() =&gt; SPFarm.Local).WillReturn(fakeFarm);
</code></pre>
<p>OK, I used the wrong call, but wait a minute: each of my tests was marked with the <strong>[Isolated]</strong> attribute. My understanding was this meant the Typemock system was reset between each test, and as I only call <strong>SPFarm.Local</strong> once per test, were these two forms not equivalent?</p>
<p>This is the answer from Doron at Typemock; hope it clears up any confusion of the type I was suffering from&hellip;</p>
<p><em>You are right in that the Isolated attribute resets fake behavior between each and every test. However, SwapNextInstance&lt;T&gt; is triggered if and only if a constructor for T has been called. It is not equivalent to WhenCalled() but rather complementary.<br>
Generally speaking, you set behaviors on your fake object (or static methods) using WhenCalled() and then you can choose how to inject that fake behaviour into the code under test:<br>
- If the code under test receives a reference to the fake behavior, you just pass it in.<br>
- If the code under test receives it from a third party, you fake that third party to return the fake object you set up.<br>
- If the code under test uses &lsquo;new&rsquo; to instantiate the dependent behavior, use SwapNextInstance to replace the next &lsquo;new&rsquo; with the faked object.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Developer Day Scotland Voting Opens</title>
      <link>https://blog.richardfennell.net/posts/developer-day-scotland-voting-opens/</link>
      <pubDate>Wed, 04 Feb 2009 20:46:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/developer-day-scotland-voting-opens/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://developerdayscotland.com/main/Home/tabid/74/Default.aspx&#34;&gt;voting process has opened for Developer Day Scotland&lt;/a&gt;, being held on the 2nd of May. I would like to draw your attention to my proposed session on testing for Sharepoint developers, where I will show what can be done with Typemock and good use of design patterns to allow the building of complex tests that can be run on build servers that do not require Sharepoint to be installed.&lt;/p&gt;
&lt;p&gt;But even if you don&amp;rsquo;t fancy this session please still vote, and when it opens sign up to attend DDS. In my opinion DDS was the best of the DDD events I went to last year. A great venue and atmosphere.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://developerdayscotland.com/main/Home/tabid/74/Default.aspx">voting process has opened for Developer Day Scotland</a>, being held on the 2nd of May. I would like to draw your attention to my proposed session on testing for Sharepoint developers, where I will show what can be done with Typemock and good use of design patterns to allow the building of complex tests that can be run on build servers that do not require Sharepoint to be installed.</p>
<p>But even if you don&rsquo;t fancy this session please still vote, and when it opens sign up to attend DDS. In my opinion DDS was the best of the DDD events I went to last year. A great venue and atmosphere.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/GetReady2smallCAOFIQNI_0C100A2C.png"><img alt="GetReady2-smallCAOFIQNI" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/GetReady2smallCAOFIQNI_thumb_444E8144.png" title="GetReady2-smallCAOFIQNI"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Old assemblies appearing in the _PublishedWebsites directory with Team Build</title>
      <link>https://blog.richardfennell.net/posts/old-assemblies-appearing-in-the-_publishedwebsites-directory-with-team-build/</link>
      <pubDate>Thu, 29 Jan 2009 15:37:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/old-assemblies-appearing-in-the-_publishedwebsites-directory-with-team-build/</guid>
      <description>&lt;p&gt;I have been having a problem with a new automated CI build under Team Build that I have added to an old Visual Studio solution. The solution is fairly big, but in essence it contains a shared data type assembly, a web site front end and a back end web service. The problem was that on the Team Build drop share, in the &lt;strong&gt;_PublishedWebsites&lt;/strong&gt; directory produced by Team Build, I was finding an old version of the shared data type assembly. However, in the &lt;strong&gt;release&lt;/strong&gt; directory of the same build I found the correct newly built version of the assembly.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been having a problem with a new automated CI build under Team Build that I have added to an old Visual Studio solution. The solution is fairly big, but in essence it contains a shared data type assembly, a web site front end and a back end web service. The problem was that on the Team Build drop share, in the <strong>_PublishedWebsites</strong> directory produced by Team Build, I was finding an old version of the shared data type assembly. However, in the <strong>release</strong> directory of the same build I found the correct newly built version of the assembly.</p>
<p>After much fiddling with build order and dependencies I found the problem. It was that the web service project was originally created as a VS2003 or 2005 website a good few years ago (so long ago I can&rsquo;t remember which); the point is it was not a web application, hence there had been a <strong>bin</strong> directory under the root in Visual Studio.</p>
<p>When this project was converted to a VS2008 web application this was tidied up in Visual Studio. However, in TFS source control the <strong>bin</strong> directory remained. This was not an issue when the project was built locally in Visual Studio as:</p>
<ol>
<li>Visual Studio did not pull the files down as the directory was not in the project</li>
<li>They would have been overwritten anyway as part of the build</li>
</ol>
<p>However on Team Build :</p>
<ol>
<li>The files did come down to the build server (due to the workspace scope)</li>
<li>They were not overwritten, due to Team Build having separate <strong>source</strong> and <strong>binary</strong> paths.</li>
</ol>
<p>So these files, in the unwanted bin folder, ended up being copied to the drop location after the newly built version of the shared assembly, as part of the copy of web site resources (e.g. graphic files). The net effect was to leave me with an old DLL in the <strong>_PublishedWebsites</strong> folder, whilst having the correct new version in the <strong>release</strong> directory.</p>
<p>This was only found in the end by detailed checking of the copy lines in the build log.</p>
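<p>One way to clear this up is to delete the orphaned <strong>bin</strong> folder from TFS source control, so it can never reach the build server again. A rough sketch using the tf.exe command line follows; the server path is hypothetical, so substitute your own team project path:</p>
<pre tabindex="0"><code>rem Pend a delete of the orphaned bin folder (run from a mapped workspace)
tf delete "$/MyTeamProject/WebService/bin" /recursive

rem Check the delete in, so build server workspaces no longer receive the stale files
tf checkin /comment:"Remove orphaned bin folder left over from the web site conversion"
</code></pre>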
]]></content:encoded>
    </item>
    <item>
      <title>PostBuild events not running on TFS Team Build</title>
      <link>https://blog.richardfennell.net/posts/postbuild-events-not-running-on-tfs-team-build/</link>
      <pubDate>Wed, 28 Jan 2009 22:46:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/postbuild-events-not-running-on-tfs-team-build/</guid>
      <description>&lt;p&gt;I have been struggling today with a problem that a PostBuild event on a C# project works fine on my development PC but failed on a Team Build box. The project is based on the &lt;a href=&#34;http://www.codeplex.com/sptemplateland&#34;&gt;Codeplex  SharePoint Visual Studio Project Template&lt;/a&gt; that uses post build scripts to create a deployment WSP.&lt;/p&gt;
&lt;p&gt;It turns out the problem was an unwanted condition on the PostBuildEvent in the project&amp;rsquo;s .csproj file. It was like this:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been struggling today with a problem that a PostBuild event on a C# project works fine on my development PC but failed on a Team Build box. The project is based on the <a href="http://www.codeplex.com/sptemplateland">Codeplex  SharePoint Visual Studio Project Template</a> that uses post build scripts to create a deployment WSP.</p>
<p>It turns out the problem was an unwanted condition on the PostBuildEvent in the project&rsquo;s .csproj file. It was like this:</p>
<pre tabindex="0"><code>&lt;PropertyGroup&gt;
    &lt;PostBuildEvent Condition=&#34; &#39;$(TeamBuildConstants)&#39;==&#39;&#39; &#34;&gt;
        echo POSTBUILD STARTED
    &lt;/PostBuildEvent&gt;
&lt;/PropertyGroup&gt;
</code></pre>
<p>when it should have been</p>
<pre tabindex="0"><code>&lt;PropertyGroup&gt;
    &lt;PostBuildEvent&gt;
        echo POSTBUILD STARTED
    &lt;/PostBuildEvent&gt;
&lt;/PropertyGroup&gt;
</code></pre>
<p>Where this condition came from I am not sure, as if I create a new project of this type from the sample template it is not there. As this was an existing project I guess it got edited in at some point in the past, though I can’t think why. Anyway, it is awkward to spot if you don’t look at the .csproj file in Notepad.</p>
]]></content:encoded>
    </item>
    <item>
      <title>We are hosting Microsoft&#39;s Azure Technical Briefing</title>
      <link>https://blog.richardfennell.net/posts/we-are-hosting-microsofts-azure-technical-briefing/</link>
      <pubDate>Wed, 28 Jan 2009 18:20:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/we-are-hosting-microsofts-azure-technical-briefing/</guid>
      <description>&lt;p&gt;Interested in finding out more about Microsoft&amp;rsquo;s Cloud computing strategy and the technology behind Azure? Yes? Well, you are in luck: there are more UK Microsoft road show events in February. They are in Cambridge, Edinburgh, and the one we are hosting in Bradford on the 13th.&lt;/p&gt;
&lt;p&gt;For more details and links to registration see &lt;a href=&#34;http://blogs.msdn.com/ukisvdev/archive/2009/01/14/new-dates-for-azure-technical-briefing-announced.aspx&#34; title=&#34;http://blogs.msdn.com/ukisvdev/archive/2009/01/14/new-dates-for-azure-technical-briefing-announced.aspx&#34;&gt;http://blogs.msdn.com/ukisvdev/archive/2009/01/14/new-dates-for-azure-technical-briefing-announced.aspx&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Interested in finding out more about Microsoft&rsquo;s Cloud computing strategy and the technology behind Azure? Yes? Well, you are in luck: there are more UK Microsoft road show events in February. They are in Cambridge, Edinburgh, and the one we are hosting in Bradford on the 13th.</p>
<p>For more details and links to registration see <a href="http://blogs.msdn.com/ukisvdev/archive/2009/01/14/new-dates-for-azure-technical-briefing-announced.aspx" title="http://blogs.msdn.com/ukisvdev/archive/2009/01/14/new-dates-for-azure-technical-briefing-announced.aspx">http://blogs.msdn.com/ukisvdev/archive/2009/01/14/new-dates-for-azure-technical-briefing-announced.aspx</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Writing a webpart to work inside and outside Sharepoint when talking to WCF</title>
      <link>https://blog.richardfennell.net/posts/writing-a-webpart-to-work-inside-and-outside-sharepoint-when-talking-to-wcf/</link>
      <pubDate>Wed, 28 Jan 2009 16:02:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/writing-a-webpart-to-work-inside-and-outside-sharepoint-when-talking-to-wcf/</guid>
      <description>&lt;p&gt;I have posted in the past on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/07/06/bug-tracking-with-tfs.aspx&#34;&gt;porting our ASP.NET bug tracking systems into Sharepoint and also linking it to TFS&lt;/a&gt;. The idea being that the initial customer support contact is tracked in the call tracking system (e.g. is it switched on, is there paper in it&amp;hellip;), and then escalated to TFS when development effort is required. To do this escalation my webpart calls a WCF service to access the TFS API - why, you ask? The TFS API is 32bit only and you cannot call it directly inside a Sharepoint 64bit server farm.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have posted in the past on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/07/06/bug-tracking-with-tfs.aspx">porting our ASP.NET bug tracking systems into Sharepoint and also linking it to TFS</a>. The idea being that the initial customer support contact is tracked in the call tracking system (e.g. is it switched on, is there paper in it&hellip;), and then escalated to TFS when development effort is required. To do this escalation my webpart calls a WCF service to access the TFS API - why, you ask? The TFS API is 32bit only and you cannot call it directly inside a Sharepoint 64bit server farm.</p>
<p>It is good to be able to test this webpart/WCF setup outside of Sharepoint; it is faster and avoids any deployment issues. However, I did hit upon an issue when doing this. When running in Visual Studio it reports a problem that it cannot step into the WCF code; in fact this is a bit of a red herring, as you cannot connect in any way at all.</p>
<p>The fix is simple: you need to set the WSHttpBinding settings, which you use to pass the AD authenticated user, differently in Sharepoint and outside it in Visual Studio running the webpart on Cassini. Basically the default is fine for Cassini (assuming the WCF service web.config is set for Windows authentication), but you need to set some extra settings if hosted in Sharepoint.</p>
<p>For us I fixed it by using the DEBUG compilation flag, so when running locally it uses the Cassini settings (or lack of them) and when built for deployment (via a WSP) the extra SharePoint settings are used.</p>
<pre tabindex="0"><code>var binding = new WSHttpBinding();
#if !DEBUG
// We were using the following lines, but these cause debug to fail.
// However we need them to get the AD context for the connection to TFS if in Sharepoint.
// As we only deploy a release build this #if is OK.
binding.Security.Mode = SecurityMode.None;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
binding.Security.Message.EstablishSecurityContext = false;
#endif

var tfsClient = new BlackMarble.Sabs.WebParts.SabsTfsLinkReference.WorkItemLinkClient(binding, new EndpointAddress(tfsLinkURL));
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>New name for the XP Club and next months event</title>
      <link>https://blog.richardfennell.net/posts/new-name-for-the-xp-club-and-next-months-event/</link>
      <pubDate>Tue, 27 Jan 2009 16:22:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-name-for-the-xp-club-and-next-months-event/</guid>
      <description>&lt;p&gt;The details of the &lt;a href=&#34;http://xpclub.erudine.com/2009/01/11th-february-2009-test-driven.html&#34;&gt;February meeting of the Agile Yorkshire&lt;/a&gt; User Group have been published (new name; a new web site is on the way, watch out for posts).&lt;/p&gt;
&lt;p&gt;The session is on developers&amp;rsquo; experiences of TDD, another view from the coal face. See you there, 2nd Wednesday of the month as usual&amp;hellip;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The details of the <a href="http://xpclub.erudine.com/2009/01/11th-february-2009-test-driven.html">February meeting of the Agile Yorkshire</a> User Group have been published (new name; a new web site is on the way, watch out for posts).</p>
<p>The session is on developers&rsquo; experiences of TDD, another view from the coal face. See you there, 2nd Wednesday of the month as usual&hellip;</p>
]]></content:encoded>
    </item>
    <item>
      <title>Call for speakers for DDD South West</title>
      <link>https://blog.richardfennell.net/posts/call-for-speakers-for-ddd-south-west/</link>
      <pubDate>Tue, 27 Jan 2009 13:38:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/call-for-speakers-for-ddd-south-west/</guid>
      <description>&lt;p&gt;DDD community events are getting another venue this year, Queen&amp;rsquo;s College in Taunton on the 23rd of May.&lt;/p&gt;
&lt;p&gt;It has been decided that some of the speakers will be regulars from other DDD events, including myself speaking on Scrum. However, all the other slots will be filled in the usual submission and voting manner, with the added rule that the speaker should be new to the conference circuit (speaking to user groups is allowed and encouraged). So is this you? Do you fancy speaking? If so, then &lt;a href=&#34;http://www.dddsouthwest.com/CallForNewSpeakers/tabid/61/Default.aspx&#34;&gt;submit a session.&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>DDD community events are getting another venue this year, Queen&rsquo;s College in Taunton on the 23rd of May.</p>
<p>It has been decided that some of the speakers will be regulars from other DDD events, including myself speaking on Scrum. However, all the other slots will be filled in the usual submission and voting manner, with the added rule that the speaker should be new to the conference circuit (speaking to user groups is allowed and encouraged). So is this you? Do you fancy speaking? If so, then <a href="http://www.dddsouthwest.com/CallForNewSpeakers/tabid/61/Default.aspx">submit a session.</a></p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWestBadgeSmall1_15A5133A.png"><img alt="DDDSouthWestBadgeSmall[1]" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/DDDSouthWestBadgeSmall1_thumb_758A067C.png" title="DDDSouthWestBadgeSmall[1]"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Voting opens for SQLBits IV</title>
      <link>https://blog.richardfennell.net/posts/voting-opens-for-sqlbits-iv/</link>
      <pubDate>Mon, 26 Jan 2009 10:55:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/voting-opens-for-sqlbits-iv/</guid>
      <description>&lt;p&gt;You can vote for sessions at SQLBits IV as of today at &lt;a href=&#34;http://www.sqlbits.com/information/PublicSessions.aspx&#34; title=&#34;http://www.sqlbits.com/information/PublicSessions.aspx&#34;&gt;http://www.sqlbits.com/information/PublicSessions.aspx&lt;/a&gt;, but you do need to join the site first. Note that joining the site is not the same as registering to attend the event, which has not opened yet.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can vote for sessions at SQLBits IV as of today at <a href="http://www.sqlbits.com/information/PublicSessions.aspx" title="http://www.sqlbits.com/information/PublicSessions.aspx">http://www.sqlbits.com/information/PublicSessions.aspx</a>, but you do need to join the site first. Note that joining the site is not the same as registering to attend the event, which has not opened yet.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Problem hosting WCF using Cassini on Windows 7</title>
      <link>https://blog.richardfennell.net/posts/problem-hosting-wcf-using-cassini-on-windows-7/</link>
      <pubDate>Sat, 24 Jan 2009 10:41:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/problem-hosting-wcf-using-cassini-on-windows-7/</guid>
      <description>&lt;p&gt;I have been working on an internal legacy application at our place that is being slowly moved from ASMX to WCF services. At present the services are a mixture of the two and there is a new WCF based client making connections to both types as needed. This work has been going on for a while on Vista development boxes without any problems. However when I opened the solution on a Windows 7 box (Beta 7000 build) I found I could not access the WCF services hosted locally using Visual Studio 2008’s WebDev.Webserver.exe (Cassini) server.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working on an internal legacy application at our place that is being slowly moved from ASMX to WCF services. At present the services are a mixture of the two and there is a new WCF based client making connections to both types as needed. This work has been going on for a while on Vista development boxes without any problems. However when I opened the solution on a Windows 7 box (Beta 7000 build) I found I could not access the WCF services hosted locally using Visual Studio 2008’s WebDev.Webserver.exe (Cassini) server.</p>
<p>After a bit of digging I stripped the problem back to this:</p>
<ul>
<li>In VS2008 SP1 create a new ASP.NET hosted WCF service. Don’t alter any of the content from the demo/sample that is in there, i.e. the GetData method etc.</li>
<li>Run this from within VS2008 so it is using WebDev.Webserver.exe on some port number chosen by Visual Studio; a browser will be opened and you will be able to see the WSDL</li>
<li>Connect to the service using the WCF test client (C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\WcfTestClient.exe).</li>
<li>It gets back the WSDL, but when you try to execute a method you get the following error: <em>The remote server returned an unexpected response: (400) Bad Request.</em></li>
</ul>
<p>I tried disabling the firewall, anti-virus etc. but it all had no effect. I looked with <a href="http://www.fiddler2.com/fiddler2/version.asp">Fiddler2</a> and there appears to be no communication to the service (remember that to look at localhost in Fiddler you have to use 127.0.0.1. with the trailing dot, or see the <a href="http://www.fiddlertool.com/Fiddler/help/hookup.asp#Q-LocalTraffic">FAQ for other techniques</a>).</p>
<p>I then repeated the process, but created a self-hosted WCF service (which uses C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\WcfSvcHost.exe) as opposed to an ASP.NET one, and this worked perfectly. Also, if I published the ASP.NET WCF service to an IIS server it also worked fine. So this definitely looked like a Cassini issue.</p>
<p>I fed these result back to Microsoft and just heard that there is a problem with one of the security initiation messages when Cassini is involved, and they are looking into it.</p>
<p>So for now avoid the combination of Cassini, WCF and Windows 7 Beta; the simple workarounds on Windows 7 are:</p>
<ol>
<li>to make your WCF services self hosted during development,</li>
<li>or to host them in IIS on your development PC.</li>
</ol>
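<p>For the first workaround, self hosting only takes a few lines with ServiceHost. The sketch below is illustrative only; the contract, service and port number are made up, not from any real project:</p>
<pre tabindex="0"><code>// A minimal self-hosted WCF service; names and address are illustrative only
using System;
using System.ServiceModel;

[ServiceContract]
public interface IDemoService
{
    [OperationContract]
    string GetData(int value);
}

public class DemoService : IDemoService
{
    public string GetData(int value)
    {
        return "You entered: " + value;
    }
}

class Program
{
    static void Main()
    {
        // Host in a console process rather than Cassini
        var host = new ServiceHost(typeof(DemoService),
            new Uri("http://localhost:8731/DemoService"));
        host.AddServiceEndpoint(typeof(IDemoService), new WSHttpBinding(), "");
        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
</code></pre>
<p>As the client only cares about the endpoint address and binding, you can still host the same service in IIS for production without changing the client.</p>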
<p>In both cases you can choose how to host them in production; it does not have to be the same, and that is the great advantage of WCF.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Installing an MSI on Windows 7 fails after 16th January 2009</title>
      <link>https://blog.richardfennell.net/posts/installing-an-msi-on-windows-7-fails-after-16th-january-2009/</link>
      <pubDate>Mon, 19 Jan 2009 21:18:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/installing-an-msi-on-windows-7-fails-after-16th-january-2009/</guid>
      <description>&lt;p&gt;I am using Windows 7 Beta on my main PC, whilst trying to install an application today I hit a problem, the installer failed on start-up. Firstly I thought it was a corrupt MSI so I tried another application, but it did the same. When I checked the Windows Application event log I found the following&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Faulting application name: msiexec.exe, version: 5.0.7000.0, time stamp: 0x49432105&lt;br&gt;
Faulting module name: ntdll.dll, version: 6.1.7000.0, time stamp: 0x49434898&lt;br&gt;
Exception code: 0xc0000005&lt;br&gt;
Fault offset: 0x00000000000ebbaa&lt;br&gt;
Faulting process id: 0x7dc&lt;br&gt;
Faulting application start time: 0x01c97a77da8e8b3e&lt;br&gt;
Faulting application path: C:\Windows\System32\msiexec.exe&lt;br&gt;
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am using Windows 7 Beta on my main PC, whilst trying to install an application today I hit a problem, the installer failed on start-up. Firstly I thought it was a corrupt MSI so I tried another application, but it did the same. When I checked the Windows Application event log I found the following</p>
<p><em>Faulting application name: msiexec.exe, version: 5.0.7000.0, time stamp: 0x49432105<br>
Faulting module name: ntdll.dll, version: 6.1.7000.0, time stamp: 0x49434898<br>
Exception code: 0xc0000005<br>
Fault offset: 0x00000000000ebbaa<br>
Faulting process id: 0x7dc<br>
Faulting application start time: 0x01c97a77da8e8b3e<br>
Faulting application path: C:\Windows\System32\msiexec.exe<br>
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll</em></p>
<p>I next hit the forums and found that I was not alone with this problem, it seemed to start around the 16th of January, some people were putting it down to a corrupt Windows Defender update.</p>
<p>The solution was to use the Control Panel Restore tools to roll back to a restore point before the 16th. As this is a newly installed PC I have installed something most days, so it was easy to find such a restore point and roll back.</p>
<p>Once this was done the MSI installed fine; let us see if the problem re-occurs.</p>
<p><strong>Update (day of post):</strong> I reran Windows Update and it wanted to install ‘Definition Update for Microsoft Forefront Client Security (Antimalware 1.49.2086.0)’. I let it do this and I can still install MSIs, so it seems the rollback is a valid solution.</p>
<p><strong>Update 22nd Jan</strong>: Here are other reports of the same problem and some reported workaround <a href="http://www.neowin.net/news/main/09/01/19/windows-7-beta-testers-find-critical-windows-installer-bug" title="http://www.neowin.net/news/main/09/01/19/windows-7-beta-testers-find-critical-windows-installer-bug">http://www.neowin.net/news/main/09/01/19/windows-7-beta-testers-find-critical-windows-installer-bug</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Another call for speakers</title>
      <link>https://blog.richardfennell.net/posts/another-call-for-speakers/</link>
      <pubDate>Thu, 15 Jan 2009 11:18:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/another-call-for-speakers/</guid>
      <description>&lt;p&gt;Just seen that there has been a &lt;a href=&#34;http://developerdeveloperdeveloper.com/belfast/&#34;&gt;call for speakers for DDD Belfast,&lt;/a&gt; another chance to meet up with like-minded developers.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;DDDBelfast&#34; loading=&#34;lazy&#34; src=&#34;http://idunno.org/images/idunno_org/WindowsLiveWriter/DDDBelfastCallforSpeakersOpen_D8A8/DDDBelfast_3.png&#34; title=&#34;DDDBelfast&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just seen that there has been a <a href="http://developerdeveloperdeveloper.com/belfast/">call for speakers for DDD Belfast,</a> another chance to meet up with like-minded developers.</p>
<p><img alt="DDDBelfast" loading="lazy" src="http://idunno.org/images/idunno_org/WindowsLiveWriter/DDDBelfastCallforSpeakersOpen_D8A8/DDDBelfast_3.png" title="DDDBelfast"></p>
]]></content:encoded>
    </item>
    <item>
      <title>XP Club January 2009 Meeting</title>
      <link>https://blog.richardfennell.net/posts/xp-club-january-2009-meeting/</link>
      <pubDate>Tue, 13 Jan 2009 21:48:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/xp-club-january-2009-meeting/</guid>
      <description>&lt;p&gt;The XP Club did not manage to book the usual venue, the Victoria Hotel Pub in Leeds, for this month&amp;rsquo;s meeting as it was already booked. So, for one month only, on the 14th of January, we will be next door to the Victoria Hotel Pub at O&amp;rsquo;Neils from 7pm.&lt;/p&gt;
&lt;p&gt;Also, it has been decided there will be no technical presentation this month; we will meet to discuss the issues and politics around the club, including the club&amp;rsquo;s constitution, bank accounts, rough strategy for the future and so on. You are all welcome to come along and help to shape the future of the club, help with some duties or just come to meet up with fellow geeks.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The XP Club did not manage to book the usual venue, the Victoria Hotel Pub in Leeds, for this month&rsquo;s meeting as it was already booked. So, for one month only, on the 14th of January, we will be next door to the Victoria Hotel Pub at O&rsquo;Neils from 7pm.</p>
<p>Also, it has been decided there will be no technical presentation this month; we will meet to discuss the issues and politics around the club, including the club&rsquo;s constitution, bank accounts, rough strategy for the future and so on. You are all welcome to come along and help to shape the future of the club, help with some duties or just come to meet up with fellow geeks.</p>
<p>As of February we will be back at the Victoria Hotel Pub.</p>
]]></content:encoded>
    </item>
    <item>
      <title>First thoughts on Windows 7 Beta</title>
      <link>https://blog.richardfennell.net/posts/first-thoughts-on-windows-7-beta/</link>
      <pubDate>Mon, 12 Jan 2009 20:59:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-thoughts-on-windows-7-beta/</guid>
      <description>&lt;p&gt;I had the PDC &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/14/windows-7-on-the-dell-mini.aspx&#34;&gt;CTP on my Netbook&lt;/a&gt; and that was OK, so I had not expected any major issues. That said, it has not been without problems, but all the issues I have logged as part of the beta program have been related to hardware detection (missing base stations and an ignored physical Wifi switch state) on my Acer laptop. However, these issues can be worked around (i.e. don’t use sleep or hibernate), so they have not stopped me using the beta on my primary PC.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I had the PDC <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/14/windows-7-on-the-dell-mini.aspx">CTP on my Netbook</a> and that was OK, so I had not expected any major issues. That said, it has not been without problems, but all the issues I have logged as part of the beta program have been related to hardware detection (missing base stations and an ignored physical Wifi switch state) on my Acer laptop. However, these issues can be worked around (i.e. don’t use sleep or hibernate), so they have not stopped me using the beta on my primary PC.</p>
<p>As to using Windows 7, I like it. I am finding the revised UI easy to use, and it certainly seems faster than my Vista build on the same PC, but this might just be down to the fact that it is a fresh install on a formatted disk.</p>
<p>I will report more when I have used it for a few days in the real world.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Session submission for SQLBits IV has been opened</title>
      <link>https://blog.richardfennell.net/posts/session-submission-for-sqlbits-iv-has-been-opened/</link>
      <pubDate>Sun, 04 Jan 2009 14:18:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/session-submission-for-sqlbits-iv-has-been-opened/</guid>
      <description>&lt;p&gt;It is good to see how fast &lt;a href=&#34;http://www.sqlbits.com/information/PublicSessions.aspx&#34;&gt;sessions have been submitted for SQLBits&lt;/a&gt;, a nice range already. And great news that the event on the 28th March is in Manchester.&lt;/p&gt;
&lt;p&gt;I have submitted an updated version of the session I did for SQLBits II on Team System, hope some people find it interesting.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Submit a session for SQLBits IV&#34; loading=&#34;lazy&#34; src=&#34;http://www.sqlbits.com/images/SQLBits/IveSubmmitted.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is good to see how fast <a href="http://www.sqlbits.com/information/PublicSessions.aspx">sessions have been submitted for SQLBits</a>, a nice range already. And great news that the event on the 28th March is in Manchester.</p>
<p>I have submitted an updated version of the session I did for SQLBits II on Team System, hope some people find it interesting.</p>
<p><img alt="Submit a session for SQLBits IV" loading="lazy" src="http://www.sqlbits.com/images/SQLBits/IveSubmmitted.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Running TypeMock based test in Team Build</title>
      <link>https://blog.richardfennell.net/posts/running-typemock-based-test-in-team-build/</link>
      <pubDate>Tue, 23 Dec 2008 16:34:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-typemock-based-test-in-team-build/</guid>
      <description>&lt;p&gt;If you have TypeMock Isolator based MSTests in a solution you will want them to be run as part of any CI build process.&lt;/p&gt;
&lt;p&gt;To get this to work with Team Build you have to make sure Isolator is started on the build box at the right time (something that is done automagically behind the scenes by Visual Studio during developer testing). This is not actually that difficult as TypeMock provide some tasks for just this purpose.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you have TypeMock Isolator based MSTests in a solution you will want them to be run as part of any CI build process.</p>
<p>To get this to work with Team Build you have to make sure Isolator is started on the build box at the right time (something that is done automagically behind the scenes by Visual Studio during developer testing). This is not actually that difficult as TypeMock provide some tasks for just this purpose.</p>
<p>Firstly you have to install Isolator on the build box (and of course license it). Then edit your tfsbuild.proj build script to include the overrides for the BeforeTest and AfterTest targets:</p>
<pre tabindex="0"><code>&lt;!-- Import the Typemock list of tasks --&gt;  &lt;PropertyGroup\&gt;     &lt;TypeMockLocation\&gt;C:Program FilesTypemockIsolator5.1&lt;/TypeMockLocation\&gt; &lt;/PropertyGroup\&gt; &lt;Import Project \=&#34;$(TypeMockLocation)TypeMock.MSBuild.Tasks&#34;/&gt;  &lt;!-- Before the tests are run start TypeMock --&gt; &lt;Target Name\=&#34;BeforeTest&#34;\&gt;     &lt;TypeMockStart/&gt; &lt;/Target\&gt;  &lt;!-- And stop it when the are finished --&gt; &lt;Target Name\=&#34;AfterTest&#34;\&gt;     &lt;TypeMockStop/&gt; &lt;/Target\&gt;
</code></pre><p>Once this is done your tests should run OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>My problems with Live Messenger inside Visual Studio are fixed.</title>
      <link>https://blog.richardfennell.net/posts/my-problems-with-live-messenger-inside-visual-studio-are-fixed/</link>
      <pubDate>Mon, 22 Dec 2008 22:19:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-problems-with-live-messenger-inside-visual-studio-are-fixed/</guid>
      <description>&lt;p&gt;One of the cool features of the October 08 release of &lt;a href=&#34;http://msdn.microsoft.com/en-us/tfs2008/bb980963.aspx&#34;&gt;TFS Power Tools&lt;/a&gt; has been that the members of a Team Project are shown inside Team Explorer.&lt;/p&gt;
&lt;p&gt;One of the ideas of this is that you can use Live Messenger from inside Team Explorer to see team members&amp;rsquo; status, but I and many others were seeing the error shown below as Team Explorer refreshed.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_7BA54BFF.jpg&#34;&gt;&lt;img alt=&#34;clip_image002&#34; loading=&#34;lazy&#34; src=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_thumb_45C056E5.jpg&#34; title=&#34;clip_image002&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;There had been much talk of it being settings in the registry, UAC being used etc. but none of the fixes detailed worked for me.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>One of the cool features of the October 08 release of <a href="http://msdn.microsoft.com/en-us/tfs2008/bb980963.aspx">TFS Power Tools</a> has been that the members of a Team Project are shown inside Team Explorer.</p>
<p>One of the ideas of this is that you can use Live Messenger from inside Team Explorer to see team members&rsquo; status, but I and many others were seeing the error shown below as Team Explorer refreshed.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_7BA54BFF.jpg"><img alt="clip_image002" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/clip_image002_thumb_45C056E5.jpg" title="clip_image002"></a></p>
<p>There had been much talk of it being down to registry settings, UAC, etc., but none of the fixes detailed worked for me.</p>
<p>However, today it has all started working after I updated to the new version of Live Messenger released in the past few days, Version 2009 (Build 14.0.8050.1202). I suspect the problem in my case was that I had been using a beta version of Live Messenger.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Update on using StyleCop in TFS Team Build</title>
      <link>https://blog.richardfennell.net/posts/update-on-using-stylecop-in-tfs-team-build/</link>
      <pubDate>Mon, 22 Dec 2008 21:51:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/update-on-using-stylecop-in-tfs-team-build/</guid>
      <description>&lt;p&gt;I posted a while ago about trying to wire the results from &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/10/15/using-stylecop-in-tfs-team-build.aspx&#34;&gt;StyleCop into a Team Build&lt;/a&gt;; the problem I had was that I could not get the StyleCop violations into the build summary.&lt;/p&gt;
&lt;p&gt;Well, I still can’t: after much checking and asking around I was reliably informed that the build summary is not editable, and there are no immediate plans for it to be in future versions of TFS.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted a while ago about trying to wire the results from <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/10/15/using-stylecop-in-tfs-team-build.aspx">StyleCop into a Team Build</a>; the problem I had was that I could not get the StyleCop violations into the build summary.</p>
<p>Well, I still can’t: after much checking and asking around I was reliably informed that the build summary is not editable, and there are no immediate plans for it to be in future versions of TFS.</p>
<p>However, <a href="http://www.woodwardweb.com/">Martin Woodward, another Team System MVP</a>, made the suggestion to add the violation information into the build information object. This would not allow the information to be seen in the build summary in Visual Studio, but it would allow me to programmatically recover it from the IBuildInformation object inside my build wallboard application, which shows the current state of all our CI Team Builds as a scrolling list of rows like the one below.</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_572705C5.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_557639F1.png" title="image"></a></p>
<p>Where the key items are:</p>
<ul>
<li>Big graphic showing Building, Success, Partial Success or Failure</li>
<li>The name of the build and the time it finished</li>
<li>CE – Compiler errors</li>
<li>CW – Compiler warnings</li>
<li>FW – FXCop warnings</li>
<li>SW – StyleCop violations</li>
<li>TP – Tests passed</li>
<li>TF – Tests failed</li>
<li>and the rabbit shows if the build status is reported by a Nabaztag Build Bunny</li>
</ul>
<p>So to do this I had to write an MSBuild task; the code is fairly simple, as Martin had suggested.</p>
<pre tabindex="0"><code>//-----------------------------------------------------------------------
// &lt;copyright file=&#34;StyleCopResultsMerge.cs&#34; company=&#34;Black Marble&#34;&gt;
//     Black Marble Copyright 2008
// &lt;/copyright&gt;
//-----------------------------------------------------------------------
namespace BlackMarble.MSBuild.CodeQuality
{
    using System;
    using Microsoft.Build.Framework;
    using Microsoft.Build.Utilities;
    using Microsoft.TeamFoundation.Build.Client;
    using Microsoft.TeamFoundation.Client;

    /// &lt;summary&gt;
    /// Merges the StyleCop results into the build results for TFS
    /// &lt;/summary&gt;
    public class StyleCopResultsMerge : Task
    {
        /// &lt;summary&gt;
        /// The TFS server to report to
        /// &lt;/summary&gt;
        private TeamFoundationServer tfs;

        /// &lt;summary&gt;
        /// The build server doing the work
        /// &lt;/summary&gt;
        private IBuildServer buildServer;

        /// &lt;summary&gt;
        /// The current build
        /// &lt;/summary&gt;
        private IBuildDetail build;

        /// &lt;summary&gt;
        /// Gets or sets the Url of the Team Foundation Server.
        /// &lt;/summary&gt;
        [Required]
        public string TeamFoundationServerUrl
        {
            get;
            set;
        }

        /// &lt;summary&gt;
        /// Gets or sets the Uri of the Build for which this task is executing.
        /// &lt;/summary&gt;
        [Required]
        public string BuildUri
        {
            get;
            set;
        }

        /// &lt;summary&gt;
        /// Gets or sets the number of StyleCop violations found.
        /// &lt;/summary&gt;
        [Required]
        public int Violations
        {
            get;
            set;
        }

        /// &lt;summary&gt;
        /// Gets or sets the number of files StyleCop failed to parse.
        /// &lt;/summary&gt;
        [Required]
        public int Failures
        {
            get;
            set;
        }

        /// &lt;summary&gt;
        /// Gets the lazy init property that gives access to the TF Server specified by TeamFoundationServerUrl.
        /// &lt;/summary&gt;
        protected TeamFoundationServer Tfs
        {
            get
            {
                if (this.tfs == null)
                {
                    if (String.IsNullOrEmpty(this.TeamFoundationServerUrl))
                    {
                        // Throw some exception.
                    }

                    this.tfs = TeamFoundationServerFactory.GetServer(this.TeamFoundationServerUrl);
                }

                return this.tfs;
            }
        }

        /// &lt;summary&gt;
        /// Gets the lazy init property that gives access to the BuildServer service of the TF Server.
        /// &lt;/summary&gt;
        protected IBuildServer BuildServer
        {
            get
            {
                if (this.buildServer == null)
                {
                    this.buildServer = (IBuildServer)this.Tfs.GetService(typeof(IBuildServer));
                }

                return this.buildServer;
            }
        }

        /// &lt;summary&gt;
        /// Gets the lazy init property that gives access to the Build specified by BuildUri.
        /// &lt;/summary&gt;
        protected IBuildDetail Build
        {
            get
            {
                if (this.build == null)
                {
                    this.build = (IBuildDetail)this.BuildServer.GetBuild(new Uri(this.BuildUri), null, QueryOptions.None);
                }

                return this.build;
            }
        }

        /// &lt;summary&gt;
        /// ITask implementation - Execute method.
        /// &lt;/summary&gt;
        /// &lt;returns&gt;
        /// True if the task succeeded, false otherwise.
        /// &lt;/returns&gt;
        public override bool Execute()
        {
            try
            {
                IBuildInformation info = this.Build.Information;

                Log.LogMessage(&#34;StyleCopResultsMerge for build {0} with {1} violations&#34;, this.Build.Uri.ToString(), this.Violations.ToString());

                IBuildInformationNode infoNode = info.CreateNode();
                infoNode.Type = &#34;org.stylecop&#34;;
                infoNode.Fields.Add(&#34;total-violations&#34;, this.Violations.ToString());
                info.Save();

                return true;
            }
            catch (Exception ex)
            {
                Log.LogError(ex.Message);
                return false;
            }
        }
    }
}
</code></pre><p>This can then be wired into the build process I detailed in the older post, repeated below with the new additions. The choice you have to make is whether StyleCop violations will cause the build to fail or not; both options are detailed below.</p>
<pre tabindex="0"><code>&lt;!-- the imports needed --&gt; &lt;Import Project\=&#34;$(MSBuildExtensionsPath)ExtensionPackMSBuild.ExtensionPack.tasks&#34;/&gt; &lt;!-- this could be a task file if you wanted --&gt; &lt;UsingTask AssemblyFile\=&#34;$(BMTasksPath)BlackMarble.MSBuild.CodeQuality.StyleCopResultsMerge.dll&#34; TaskName\=&#34;BlackMarble.MSBuild.CodeQuality.StyleCopResultsMerge&#34;/&gt;  &lt;!-- All the other Target go here --&gt;  &lt;Target Name\=&#34;AfterCompile&#34;\&gt;      &lt;!-- Create a build step to say we are starting StyleCop --&gt;     &lt;BuildStep TeamFoundationServerUrl\=&#34;$(TeamFoundationServerUrl)&#34;         BuildUri\=&#34;$(BuildUri)&#34;              Name\=&#34;StyleCopStep&#34;               Message\=&#34;StyleCop step is executing.&#34;\&gt;       &lt;Output TaskParameter\=&#34;Id&#34; PropertyName\=&#34;StyleCopStep&#34; /&gt;     &lt;/BuildStep\&gt;      &lt;!-- Create a collection of files to scan, \*\* means and sub directories --&gt;     &lt;CreateItem Include\=&#34;$(SolutionRoot)My Project\*\*\*.cs&#34;\&gt;       &lt;Output TaskParameter\=&#34;Include&#34; ItemName\=&#34;StyleCopFiles&#34;/&gt;     &lt;/CreateItem\&gt;      &lt;!-- Run the StyleCop MSBuild Extensions task using the setting file in the same directory as sln file and also stored in TFS --&gt;     &lt;MSBuild.ExtensionPack.CodeQuality.StyleCop         TaskAction=&#34;Scan&#34;         SourceFiles=&#34;@(StyleCopFiles)&#34;         ShowOutput=&#34;true&#34;         ForceFullAnalysis=&#34;true&#34;         CacheResults=&#34;false&#34;         logFile=&#34;$(DropLocation)$(BuildNumber)StyleCopLog.txt&#34;         SettingsFile=&#34;$(SolutionRoot)My ProjectSettings.StyleCop&#34;         ContinueOnError=&#34;false&#34;\&gt;       &lt;Output TaskParameter\=&#34;Succeeded&#34; PropertyName\=&#34;AllPassed&#34;/&gt;       &lt;Output TaskParameter\=&#34;ViolationCount&#34; PropertyName\=&#34;Violations&#34;/&gt;       &lt;Output 
TaskParameter\=&#34;FailedFiles&#34; ItemName\=&#34;Failures&#34;/&gt;     &lt;/MSBuild.ExtensionPack.CodeQuality.StyleCop\&gt;      &lt;!-- Run the new results merge task --&gt;     &lt;BlackMarble.MSBuild.CodeQuality.StyleCopResultsMerge       TeamFoundationServerUrl=&#34;$(TeamFoundationServerUrl)&#34;       BuildUri=&#34;$(BuildUri)&#34;       Violations=&#34;$(Violations)&#34;       Failures =&#34;0&#34;      /&gt;       &lt;!-- Put up a message in the build log to show results irrespective of what we do next --&gt;     &lt;Message Text\=&#34;StyleCop Succeeded: $(AllPassed), Violations: $(Violations)&#34;/&gt;      &lt;!-- FailedFile format is:         &lt;ItemGroup&gt;             &lt;FailedFile Include=&#34;filename&#34;&gt;                 &lt;CheckId&gt;SA Rule Number&lt;/CheckId&gt;                 &lt;RuleDescription&gt;Rule Description&lt;/RuleDescription&gt;                 &lt;RuleName&gt;Rule Name&lt;/RuleName&gt;                 &lt;LineNumber&gt;Line the violation appears on&lt;/LineNumber&gt;                 &lt;Message&gt;SA violation message&lt;/Message&gt;             &lt;/FailedFile&gt;         &lt;/ItemGroup&gt;--&gt;      &lt;Warning Text\=&#34;%(Failures.Identity) - Failed on Line %(Failures.LineNumber). %(Failures.CheckId): %(Failures.Message)&#34;/&gt;      &lt;!-- The StyleCop task does not throw an error if the analysis failed,           so we need to check the return value and if we choose to treat errors as warnngs           we need to set the error state --&gt;     &lt;Error Text\=&#34;StyleCop analysis warnings occured&#34; Condition\=&#34;&#39;$(AllPassed)&#39; == &#39;False&#39;&#34;  /&gt;      &lt;!-- List out the issues, you only need this if we are not forcing the error above --&gt;     &lt;!--&lt;BuildStep TeamFoundationServerUrl=&#34;$(TeamFoundationServerUrl)&#34;             BuildUri=&#34;$(BuildUri)&#34;             Message=&#34;%(Failures.Identity) - Failed on Line %(Failures.LineNumber). 
%(Failures.CheckId): %(Failures.Message)&#34;/&gt;--&gt;      &lt;!-- Log the fact that we have finished the StyleCop build step, as we had no error  --&gt;     &lt;BuildStep TeamFoundationServerUrl\=&#34;$(TeamFoundationServerUrl)&#34;                   BuildUri\=&#34;$(BuildUri)&#34;                   Id\=&#34;$(StyleCopStep)&#34;                   Status\=&#34;Succeeded&#34;                   Message\=&#34;StyleCop Succeeded: $(AllPassed), Violations: $(Violations)&#34;/&gt;      &lt;!-- If an error has been raised we call this target           You might have thought you could so the same as the error line above and this followng          OnError line by adding a condition as shown below. However this does not work          as the OnError condition is not evaluated unless an error as previously occured--&gt;     &lt;OnError ExecuteTargets\=&#34;FailTheBuild&#34; /&gt;     &lt;!--&lt;OnError ExecuteTargets=&#34;FailTheBuild&#34; Condition=&#34;&#39;$(AllPassed)&#39; == &#39;False&#39;&#34;  /&gt;--&gt;    &lt;/Target\&gt;    &lt;Target Name\=&#34;FailTheBuild&#34;\&gt;     &lt;!-- We are failing the build due to stylecop issues --&gt;     &lt;BuildStep TeamFoundationServerUrl\=&#34;$(TeamFoundationServerUrl)&#34;             BuildUri\=&#34;$(BuildUri)&#34;             Id\=&#34;$(StyleCopStep)&#34;             Status\=&#34;Failed&#34;             Message\=&#34;StyleCop Failed: $(AllPassed), Violations: $(Violations) \[See $(DropLocation)$(BuildNumber)StyleCopLog.txt\]&#34;/&gt;      &lt;!-- List out the issues--&gt;     &lt;BuildStep TeamFoundationServerUrl\=&#34;$(TeamFoundationServerUrl)&#34;             BuildUri\=&#34;$(BuildUri)&#34;             Message\=&#34;%(Failures.Identity) - Failed on Line %(Failures.LineNumber). %(Failures.CheckId): %(Failures.Message)&#34;/&gt;    &lt;/Target\&gt;
</code></pre><p>Finally, to get the new information out of the build and into the build wallboard:</p>
<pre tabindex="0"><code>public void UpdateStatus(IBuildDetail detail)
{
    // any other results fields updates

    // and now the custom nodes
    IBuildInformation info = detail.Information;
    foreach (IBuildInformationNode infoNode in info.Nodes)
    {
        if (infoNode.Type == &#34;org.stylecop&#34;)
        {
            // we have the correct node
            this.SetStyleCopWarnings(infoNode.Fields[&#34;total-violations&#34;]);
            break;
        }
    }
}
</code></pre><p>So not a perfect solution, but it does everything I need at present.</p>
]]></content:encoded>
    </item>
    <item>
      <title>MSB3155 errors in Team build when publishing to click once</title>
      <link>https://blog.richardfennell.net/posts/msb3155-errors-in-team-build-when-publishing-to-click-once/</link>
      <pubDate>Mon, 22 Dec 2008 14:08:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/msb3155-errors-in-team-build-when-publishing-to-click-once/</guid>
      <description>&lt;p&gt;If your team build project uses the Publish Target option (to create a ClickOnce deploy) you may see the error&lt;/p&gt;
&lt;p&gt;BuildWallboard.csproj&amp;quot; (Publish target) (3:5) -&amp;gt;&lt;br&gt;
(_DeploymentGenerateBootstrapper target) -&amp;gt;&lt;br&gt;
MSB3155: Item &amp;lsquo;Microsoft.Net.Framework.3.5.SP1&amp;rsquo; could not be located in BuildWallboard&amp;rsquo;.&lt;br&gt;
MSB3155: Item &amp;lsquo;Microsoft.Windows.Installer.3.1&amp;rsquo; could not be located in BuildWallboard&amp;rsquo;.&lt;/p&gt;
&lt;p&gt;This is because the build server needs a ‘default installation’ of Visual Studio Developer (or Suite). The publish function, like the MSTest function, is not something the Team Build server can do by itself; it needs Visual Studio to do the heavy lifting.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If your team build project uses the Publish Target option (to create a ClickOnce deploy) you may see the error</p>
<p>BuildWallboard.csproj&quot; (Publish target) (3:5) -&gt;<br>
(_DeploymentGenerateBootstrapper target) -&gt;<br>
MSB3155: Item &lsquo;Microsoft.Net.Framework.3.5.SP1&rsquo; could not be located in BuildWallboard&rsquo;.<br>
MSB3155: Item &lsquo;Microsoft.Windows.Installer.3.1&rsquo; could not be located in BuildWallboard&rsquo;.</p>
<p>This is because the build server needs a ‘default installation’ of Visual Studio Developer (or Suite). The publish function, like the MSTest function, is not something the Team Build server can do by itself; it needs Visual Studio to do the heavy lifting.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Should my TFS Build Server be 32bit or 64bit?</title>
      <link>https://blog.richardfennell.net/posts/should-my-tfs-build-server-be-32bit-or-64bit/</link>
      <pubDate>Mon, 22 Dec 2008 10:53:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/should-my-tfs-build-server-be-32bit-or-64bit/</guid>
      <description>&lt;p&gt;I would say at this time, unless you need 64bit-specific assemblies built, you are best staying on a 32bit operating system. This will happily build MSIL .NET assemblies, which I guess for most of us is the bulk of our work. OK, you lose a bit of performance if you have 64bit hardware (or virtual hardware in our case), but I doubt this will be critical; shaving a few seconds off an automated build is not normally important.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I would say at this time, unless you need 64bit-specific assemblies built, you are best staying on a 32bit operating system. This will happily build MSIL .NET assemblies, which I guess for most of us is the bulk of our work. OK, you lose a bit of performance if you have 64bit hardware (or virtual hardware in our case), but I doubt this will be critical; shaving a few seconds off an automated build is not normally important.</p>
<p>My main reason for saying this is that as you extend your build process you will no doubt start to use community-developed build activities, and some of these seem to get a bit confused if you are on a 64bit OS. The issue seems to be that they cannot easily find the TFS Client assemblies in <strong>C:\Program Files (x86)</strong> as opposed to <strong>C:\Program Files</strong>. For example we have a build that automatically updates version numbers and deploys via ClickOnce; it works fine on a 32bit W2k8 build server, but on an identically configured 64bit W2K8 build server it gives the error:</p>
<p>error MSB4018: The &ldquo;MSBuild.Community.Tasks.Tfs.TfsVersion&rdquo; task failed unexpectedly.<br>
error MSB4018: System.IO.FileNotFoundException: Could not load file or assembly &lsquo;file:///C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\Microsoft.TeamFoundation.Client.dll&rsquo; or one of its dependencies. The system cannot find the file specified.</p>
<p>So it appears to be just a simple path issue, probably fixable in an XML configuration file or with search paths – but is it worth the effort? I would say not, in general: I want to keep my installation as near default as possible, which I can with 32bit.</p>
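<p>If you do have to run on a 64bit build server, one possible workaround (just a sketch – the VsIdePath property name here is mine, purely for illustration, and how you feed it to a given task depends on that task) is to probe both Program Files locations in your TFSBuild.proj:</p>
<pre tabindex="0"><code>&lt;!-- Sketch: pick whichever Visual Studio IDE folder exists on this build server --&gt;
&lt;PropertyGroup&gt;
  &lt;VsIdePath&gt;C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE&lt;/VsIdePath&gt;
  &lt;VsIdePath Condition=&#34;Exists('C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE')&#34;&gt;C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE&lt;/VsIdePath&gt;
&lt;/PropertyGroup&gt;</code></pre>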
]]></content:encoded>
    </item>
    <item>
      <title>Getting MSB6006 errors for MSTest under TFS Team Build 2008</title>
      <link>https://blog.richardfennell.net/posts/getting-msb6006-errors-for-mstest-under-tfs-team-build-2008/</link>
      <pubDate>Mon, 22 Dec 2008 09:25:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-msb6006-errors-for-mstest-under-tfs-team-build-2008/</guid>
      <description>&lt;p&gt;I have been rebuilding our TFS build systems on Hyper-V based virtualised hardware. The long-term plan is to hold a configured build server as a Hyper-V template so we can provision extra ones quickly, or rebuild all of them if we need to upgrade some library or tool; in effect giving us revision control over our build servers.&lt;/p&gt;
&lt;p&gt;All seemed to be going OK; initially, existing builds ran fine when targeted at the new server. However, I soon saw that tests were failing with the error&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been rebuilding our TFS build systems on Hyper-V based virtualised hardware. The long-term plan is to hold a configured build server as a Hyper-V template so we can provision extra ones quickly, or rebuild all of them if we need to upgrade some library or tool; in effect giving us revision control over our build servers.</p>
<p>All seemed to be going OK; initially, existing builds ran fine when targeted at the new server. However, I soon saw that tests were failing with the error</p>
<p><em>MSBUILD : warning MSB6006: &ldquo;MSTest.exe&rdquo; exited with code 1</em></p>
<p>Further digging into the build log showed the tests were being run but the copy to the drop location was failing.</p>
<p><em><strong>Side note:</strong> if you read older TFS documentation and many blogs, they say to add the flag</em></p>
<blockquote>
<p><em>/v:diagnostic</em></p></blockquote>
<p><em>to the TFSbuild.rsp file to get more logging – this is wrong with MSBuild 3.5 as used by TFS 2008. This now defaults to the highest level of logging, so to reduce it you must use</em></p>
<blockquote>
<p><em>/fileLoggerParameters:verbosity=normal</em></p></blockquote>
<p><em>so no help in debugging there. Anyway, back to the plot…</em></p>
<p>In the past our single build server used a share on its own disk as the file drop, but now, as we intend to have multiple build servers, I decided to have a central build share on our main data store server. This had been set up with read/write access on both the folder and the associated share for the <strong>tfsbuild</strong> domain user that the Team Build service runs as.</p>
<p>Turns out this is not enough: you also have to give read/write access to the <strong>tfsservice</strong> domain user. It seems the publishing of the test results comes from the TFS server, not the build process, hence the extra rights. Once the change was made all worked fine.</p>
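<p>For reference, the rights can be granted from a script. This is only a sketch – the drop path and the <strong>tfsbuild</strong>/<strong>tfsservice</strong> account names are from my setup, so substitute your own:</p>
<pre tabindex="0"><code>rem NTFS rights on the drop folder for both service accounts (W2K8 icacls)
icacls &#34;D:\Drops&#34; /grant DOMAIN\tfsbuild:(OI)(CI)M /grant DOMAIN\tfsservice:(OI)(CI)M

rem And matching change permissions when creating the share
net share Drops=D:\Drops /grant:DOMAIN\tfsbuild,change /grant:DOMAIN\tfsservice,change</code></pre>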
]]></content:encoded>
    </item>
    <item>
      <title>TFS TeamBuild and Sharepoint WSP deployment (and any post build events for that matter)</title>
      <link>https://blog.richardfennell.net/posts/tfs-teambuild-and-sharepoint-wsp-deployment-and-any-post-build-events-for-that-matter/</link>
      <pubDate>Fri, 19 Dec 2008 12:45:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-teambuild-and-sharepoint-wsp-deployment-and-any-post-build-events-for-that-matter/</guid>
      <description>&lt;p&gt;We use the &lt;a href=&#34;http://www.codeplex.com/sptemplateland&#34;&gt;SharePoint Visual Studio Project Template on CodePlex&lt;/a&gt; to create WSP deployment packages for our SharePoint features. I tend to think of this WSP creation project in the same way as an MSI installer: we don’t put SharePoint components into the WSP project itself; it is an extra project in the solution that assembles the components from a variety of other projects (e.g. web parts, workflows, event receivers, shared libraries for the GAC etc.) and builds a single deployable WSP file.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>We use the <a href="http://www.codeplex.com/sptemplateland">SharePoint Visual Studio Project Template on CodePlex</a> to create WSP deployment packages for our SharePoint features. I tend to think of this WSP creation project in the same way as an MSI installer: we don’t put SharePoint components into the WSP project itself; it is an extra project in the solution that assembles the components from a variety of other projects (e.g. web parts, workflows, event receivers, shared libraries for the GAC etc.) and builds a single deployable WSP file.</p>
<p>Running locally on a developer’s PC inside Visual Studio, this template has worked well; the only change I make from the default is to alter the WSP project’s pre-build event script to xcopy all the files into the correct directories to allow the VBScript files to create the WSP.</p>
<p>In our drive to automation and automatic testing I have been looking at getting the WSP created as part of our TFS Team Build process. It turns out you get a few problems because Visual Studio and Team Build do macro expansion differently.</p>
<p>So my Pre-build event becomes</p>
<pre tabindex="0"><code>echo PREBUILD STARTED

rem Check if we are running in VS or Team Build
if not exist &#34;..\..\..\CLIENTLIBRARY\SharedLibProject\bin\$(ConfigurationName)\SharedLibProject.dll&#34; goto tfsbuild

echo Copy from VS locations, in this sample we assume a shared library, a webpart and some javascript
xcopy &#34;..\..\..\CLIENTLIBRARY\SharedLibProject\bin\$(ConfigurationName)\SharedLibProject.dll&#34; &#34;$(ProjectDir)DLLS\GAC&#34; /F /R /Y
xcopy &#34;..\..\..\Web Part\bin\$(ConfigurationName)\*.dll&#34; &#34;$(ProjectDir)DLLS\GAC&#34; /F /R /Y
xcopy &#34;$(SolutionDir)HOST\bin\HOST.dll&#34; &#34;$(ProjectDir)DLLS\GAC&#34; /F /R /Y
xcopy &#34;$(SolutionDir)HOST\json\*&#34; &#34;$(ProjectDir)TEMPLATE\LAYOUTS&#34; /F /R /Y
xcopy &#34;$(SolutionDir)HOST\*.js&#34; &#34;$(ProjectDir)TEMPLATE\LAYOUTS&#34; /F /R /Y

goto end

:tfsbuild
echo Copy from TFS build locations
xcopy &#34;$(outdir)SharedLibproject.dll&#34; &#34;$(ProjectDir)DLLS\GAC&#34; /F /R /Y
xcopy &#34;$(outdir)WebPart.Core.dll&#34; &#34;$(ProjectDir)DLLS\GAC&#34; /F /R /Y
xcopy &#34;$(outdir)WebPart.UI.dll&#34; &#34;$(ProjectDir)DLLS\GAC&#34; /F /R /Y
xcopy &#34;$(outdir)Host.dll&#34; &#34;$(ProjectDir)DLLS\GAC&#34; /F /R /Y
xcopy &#34;$(SolutionDir)HOST\json\*&#34; &#34;$(ProjectDir)TEMPLATE\LAYOUTS\json\*&#34; /F /R /Y
xcopy &#34;$(SolutionDir)HOST\*.js&#34; &#34;$(ProjectDir)TEMPLATE\LAYOUTS\*.js&#34; /F /R /Y

:end

echo PREBUILD COMPLETE
</code></pre><p>Key points to note here are</p>
<ul>
<li>For Visual Studio you can use xcopy /s; it makes no difference, as there are no sub-directories (so you might ask why use it at all – I guess in some cases a generic copy-all is easier than specifying a fixed file and directory). This is not the case for Team Build: if you use /s you can get multiple copies of DLLs created in sub-directories. This is because of the way Team Build structures its directories. The $(outdir) is not a subdirectory of the $(solutiondir) as it is in Visual Studio; it is an absolute path, defined in the build agent’s settings, where the outputs for all the projects in the build are assembled. So, depending on the project type, you seem to get sub-directories. It is best to be very specific about what to copy, and to avoid wildcards and recursion.</li>
<li>When doing a wildcard xcopy, as with the json* files, on Team Build you must specify the copy-to file name (i.e. json*); if you don’t, you get the ‘is the target a file or a directory’ question, which obviously kills the build. This does not occur within Visual Studio.</li>
</ul>
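<p>To make the second point concrete, here are the two forms side by side, using the sample paths from above (just a sketch of the distinction):</p>
<pre tabindex="0"><code>rem On Team Build this form stops to ask whether the target is a file or a directory, and the build hangs
rem xcopy &#34;$(SolutionDir)HOST\json\*&#34; &#34;$(ProjectDir)TEMPLATE\LAYOUTS&#34; /F /R /Y

rem Naming the target wildcard tells xcopy the destination is a set of files, so no question is asked
xcopy &#34;$(SolutionDir)HOST\json\*&#34; &#34;$(ProjectDir)TEMPLATE\LAYOUTS\json\*&#34; /F /R /Y</code></pre>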
<p>It is also worth altering the post-build event: by default the WSP is created in the project root, but if it is copied to the $(outdir) it ends up in the Team Build drop location, so it can be picked up by anyone, just like a DLL.</p>
<pre tabindex="0"><code>echo POSTBUILD STARTED

rem Commented out as the build box does not have SharePoint installed.
rem This could be wrapped in the same directory-exists check used to detect whether we are on Team Build or not.
rem XCOPY &#34;$(ProjectDir)TEMPLATE\*&#34; &#34;C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE&#34; /S /F /R /Y

echo Run the VBScripts to create the XML files
&#34;$(ProjectDir)CreateManifest.vbs&#34; &#34;$(ProjectDir)&#34; &#34;$(ProjectName)&#34;
&#34;$(ProjectDir)CreateCabDDF.vbs&#34; &#34;$(ProjectDir)&#34; &#34;$(ProjectName)&#34;

echo Build the WSP
cd &#34;$(ProjectDir)&#34;
makecab.exe /F cab.ddf

echo Copy it to the out directory
xcopy *.wsp &#34;$(TargetDir)*.wsp&#34; /y

echo POSTBUILD COMPLETE
</code></pre><p>However, your problems do not end here. If you build this WSP project locally on a development PC all is fine; however (depending upon your project) it may fail on Team Build – well, not actually fail, just pause forever. This is due to the way that Team Build checks out folders. The WSP project has a folder structure you drop files into, which the VBScript files scan to create the manifest and then the WSP. If one of these directories is empty then it is not created on the build box, and the VBScript stalls.</p>
<p>The solution is simply to add an extra folder-exists check in the CreateCabDDF.vbs file’s EnumFolder method.</p>
<pre tabindex="0"><code>sub EnumFolder(sFolder, sRelativePath)
    dim oFolder, oFolders, oSub, oFile

    rem this is the extra line
    If oFS.FolderExists(sFolder) Then

        set oFolder = oFS.GetFolder(sFolder)

        if (sRelativePath = &#34;TEMPLATE&#34;) then sRelativePath = &#34;&#34;
        if (sRelativePath = &#34;FEATURES&#34;) then sRelativePath = &#34;&#34;

        if (sRelativePath &lt;&gt; &#34;&#34;) then sRelativePath = sRelativePath + &#34;\&#34;

        for each oFile in oFolder.Files
            oDDF.WriteLine &#34;&#34;&#34;&#34; + oFile.Path + &#34;&#34;&#34;&#34; + vbTab + &#34;&#34;&#34;&#34; + sRelativePath + oFile.Name + &#34;&#34;&#34;&#34;
        next

        if (sRelativePath &lt;&gt; &#34;&#34; and InStr(1, sFolder, &#34;FEATURES&#34;) &gt; 0) then
            sRelativePath = &#34;FEATURES\&#34; + sRelativePath
        end if
        for each oSub in oFolder.SubFolders
            EnumFolder oSub.Path, sRelativePath + oSub.Name
        next

    end if
end sub
</code></pre><p>Once this is all done you can build the project in Team Build.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Steve Ballmer’s MVP Live Search Challenge</title>
      <link>https://blog.richardfennell.net/posts/steve-ballmers-mvp-live-search-challenge/</link>
      <pubDate>Tue, 09 Dec 2008 23:02:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/steve-ballmers-mvp-live-search-challenge/</guid>
      <description>&lt;p&gt;At the last MVP Summit Steve Ballmer said “I’m going to ask you one week switch your default [search engine], one week. At the end of the week…I’ll want feedback, how was your week, what happened, what did you like, what didn’t you like … Can I make that deal with you? (Cheers and applause.) That’s the deal.”&lt;/p&gt;
&lt;p&gt;Well the week was last week, and how did I find Live Search?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the last MVP Summit Steve Ballmer said “I’m going to ask you one week switch your default [search engine], one week. At the end of the week…I’ll want feedback, how was your week, what happened, what did you like, what didn’t you like … Can I make that deal with you? (Cheers and applause.) That’s the deal.”</p>
<p>Well the week was last week, and how did I find Live Search?</p>
<p>I have to say it is vastly improved; in the past I just assumed Live Search would find nothing of use, especially if I was after something I would expect to find on a Microsoft site like TechNet.</p>
<p>This week I have found that though it does not return exactly the same results as Google, it is just as useful; in fact the two are fairly complementary. For most searches it does not now seem to matter which one I use, but when really digging one might turn up something the other does not.</p>
<p>So am I going to move back to Google? Well, I am just not sure it matters for day-to-day searching. I certainly don’t now feel the need to change my default search engine to Google immediately when I set up a PC, as I used to.</p>
<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/live_500DE77A.png"><img alt="live" loading="lazy" src="http://blogs.blackmarble.co.uk/blogs/rfennell/live_thumb_245D1D7E.png" title="live"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Developer testing of Sharepoint Webparts using Typemock Isolator and Ivonna</title>
      <link>https://blog.richardfennell.net/posts/developer-testing-of-sharepoint-webparts-using-typemock-isolator-and-ivonna/</link>
      <pubDate>Thu, 04 Dec 2008 14:20:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/developer-testing-of-sharepoint-webparts-using-typemock-isolator-and-ivonna/</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;strong&gt;Updated 3 Dec 2008 –&lt;/strong&gt; I got an email from Artem Smirnov the author of Ivonna pointing out a couple of things, so I have updated this post&lt;br&gt;
&lt;em&gt;&lt;strong&gt;Updated 3 May 2009 –&lt;/strong&gt; I altered the code samples as the previous ones did not seem to work with Typemock Isolator 5.3.0 .&lt;/em&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I have previously written a &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/13/typemock-isolator-sptypemock-and-sharepoint-testing.aspx&#34;&gt;post on using Isolator with Sharepoint&lt;/a&gt;, also &lt;a href=&#34;http://www.21apps.com/tdd-getting-into-sharepoint-om/&#34;&gt;Andrew Woodward has written a good and more detailed tutorial on the subject&lt;/a&gt;, so I don’t intend to go over old ground here.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><em><strong>Updated 3 Dec 2008 –</strong> I got an email from Artem Smirnov, the author of Ivonna, pointing out a couple of things, so I have updated this post.</em><br>
<em><strong>Updated 3 May 2009 –</strong> I altered the code samples as the previous ones did not seem to work with Typemock Isolator 5.3.0.</em></p>
<p>I have previously written a <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/13/typemock-isolator-sptypemock-and-sharepoint-testing.aspx">post on using Isolator with Sharepoint</a>, also <a href="http://www.21apps.com/tdd-getting-into-sharepoint-om/">Andrew Woodward has written a good and more detailed tutorial on the subject</a>, so I don’t intend to go over old ground here.</p>
<p>What I want to look at in this post is the testing of webparts. A webpart, whether in Sharepoint or not, is fundamentally a data viewer: something is rendered to HTML. As a developer a good deal of time is spent making sure what is rendered is what is required. Usually this means making sure the correct controls are rendered and the right CSS applied. Now, due to the Sharepoint deployment model, the process of editing the webpart, compiling it, building a WSP or manually deploying it can be slow, often requiring the use of a VPC-based development system. In this post I discuss ways to mitigate these problems.</p>
<h3 id="if-there-are-no-calls-to-sharepoint">If there are no calls to Sharepoint</h3>
<p>If your webpart makes no reference to the Sharepoint object model you can write an ASP.NET test harness to load the webpart as below</p>
<pre tabindex="0"><code>&lt;%@ Page Language=&#34;C#&#34; AutoEventWireup=&#34;true&#34; CodeBehind=&#34;BasicTest.aspx.cs&#34; Inherits=&#34;TestWebSite.BasicTest&#34; %&gt;  &lt;%@ Register Assembly=&#34;DemoWebParts&#34; Namespace=&#34;DemoWebParts&#34; TagPrefix=&#34;wp&#34; %&gt; &lt;!DOCTYPE html PUBLIC &#34;-//W3C//DTD XHTML 1.0 Transitional//EN&#34; &#34;http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd&#34;\&gt; &lt;html xmlns\=&#34;http://www.w3.org/1999/xhtml&#34;\&gt; &lt;head runat\=&#34;server&#34;\&gt;     &lt;title\&gt;Untitled Page&lt;/title\&gt; &lt;/head\&gt; &lt;body\&gt;     &lt;form id\=&#34;form1&#34; runat\=&#34;server&#34;\&gt;     &lt;div\&gt;     &lt;asp:TextBox ID\=&#34;textbox1&#34; runat\=&#34;server&#34; Text\=&#34;Demo text&#34; /&gt;         &lt;asp:WebPartManager ID\=&#34;WebPartManager1&#34; runat\=&#34;server&#34;\&gt;         &lt;/asp:WebPartManager\&gt;         &lt;asp:WebPartZone ID\=&#34;WebPartZone1&#34; runat\=&#34;server&#34; \&gt;             &lt;ZoneTemplate\&gt;                 &lt;wp:HelloWorldWebPart id\=&#34;wp1&#34; runat\=&#34;server&#34; /&gt;             &lt;/ZoneTemplate\&gt;         &lt;/asp:WebPartZone\&gt;     &lt;/div\&gt;     &lt;/form\&gt; &lt;/body\&gt; &lt;/html\&gt;
</code></pre><p>This means I can load the page containing the webpart as fast as any other ASP.NET page and do whatever manual tests I want. However, this technique does not work if you need to get data from Sharepoint.</p>
<h3 id="mocking-out-sharepoint">Mocking out Sharepoint</h3>
<p>To address the case when I have to get data from Sharepoint I have been using Typemock Isolator inside the ASP.NET page load, as shown below</p>
<pre tabindex="0"><code>using System;  
using System.Collections;  
using System.Configuration;  
using System.Data;  
using System.Linq;  
using System.Web;  
using System.Web.Security;  
using System.Web.UI;  
using System.Web.UI.HtmlControls;  
using System.Web.UI.WebControls;  
using System.Web.UI.WebControls.WebParts;  
using System.Xml.Linq;  
using TypeMock.ArrangeActAssert;  
using Microsoft.SharePoint;  
using System.Collections.Generic;  
  
namespace TestWebSite  
{  
    public partial class SpSimpleTest : System.Web.UI.Page  
    {  
        public const string ListName = &#34;Test List&#34;;  
  
        protected void Page_Load(object sender, EventArgs e)  
        {  
            // set the name of the list to read data from  
            wp1.DataList = ListName;  
  
            // set the fake return value for the currently running context  
            // we can use null as the current parameter as this is what this web page will return  
            Isolate.WhenCalled(() =&gt; Microsoft.SharePoint.WebControls.SPControl.GetContextSite(null).Url).WillReturn(&#34;http://mockedsite.com&#34;);  
  
            // Now the site  
            SPSite fakeSite = Isolate.Fake.Instance&lt;SPSite&gt;();  
            Isolate.Swap.NextInstance&lt;SPSite&gt;().With(fakeSite);  
  
            var itemCollection = new List&lt;SPListItem&gt;();  
            for (int i = 0; i &lt; 3; i++)  
            {  
                var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();  
                itemCollection.Add(fakeItem);  
  
                Isolate.WhenCalled(() =&gt; fakeItem[&#34;Title&#34;]).WillReturn(string.Format(&#34;Title {0}&#34;, i));  
                Isolate.WhenCalled(() =&gt; fakeItem[&#34;Email Address&#34;]).WillReturn(string.Format(&#34;email{0}@email.com&#34;, i));  
  
            }  
  
            Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[ListName].Items).WillReturnCollectionValuesOf(itemCollection);  
  
  
        }  
    }  
}
</code></pre><p>In effect I do the same as I did in the previous post but have placed the fake object creation in the page load. Now when I tried this with <a href="http://www.typemock.com/Typemock_software_development_tools.html">Typemock 5.1.2</a> it did not work, so I put a query on the <a href="http://www.typemock.com/community/viewtopic.php?t=1034">product forum</a>; it turns out there was a namespace configuration file issue. Typemock quickly issued a patch, which I am told will be included in 5.1.3.</p>
<p><strong>Updated 3 May 2009</strong> I altered this code sample as the previous form, which had worked with 5.1.3, did not work with 5.3.0. This new form of faking should be OK with all versions.</p>
<p>So with this setup we can place a Sharepoint dependant webpart in a test ASP.NET page and get it to render, thus again making for a fast development/design/manual test framework that does not require Sharepoint to be installed on the development PC. Great for sorting out all those CSS issues.</p>
<h3 id="mocking-out-the-web-server-too">Mocking out the web server too</h3>
<p>However in a TDD world it would be nice to automate some of the webpart testing, so we could encode a question like ‘if there are three items in a Sharepoint list does the webpart render a combo box with three items in it?’.</p>
<p>Now the purist might say this is not a unit test but an integration test. I am coming to the conclusion that, especially in the land of Sharepoint, this semantic difference is not worth arguing about as all tests tend towards integration. For this reason I tend to think more of developer tests as opposed to acceptance tests – developer tests being the ones the developer can run repeatedly in the TDD style during the development and refactor process, as opposed to slower tests that are part of the automated build or QA process.</p>
<p>So to this end I have been looking at <a href="http://www.sm-art.biz/Ivonna.aspx">Ivonna</a>. This allows the developer, using Typemock beneath it, to create a mock web server. So you can programmatically in a test load a web page that holds a webpart (I use the same test page/site I used above), press some buttons etc. and probe the contents of the webpart.</p>
<p>You end up with tests that look like this</p>
<p><strong>Updated 3 May 2009</strong> Again I altered this code sample to work for Isolator 5.3.0.</p>
<pre tabindex="0"><code>using System;  
using System.Collections.Generic;  
using System.Linq;  
using System.Text;  
using TypeMock.ArrangeActAssert;  
using Microsoft.SharePoint;  
using Microsoft.VisualStudio.TestTools.UnitTesting;  
using Ivonna.Framework;  
using System.Web.UI.WebControls;  
using System.Web.UI.WebControls.WebParts;  
  
namespace TestProject  
{  
    [TestClass, RunOnWeb]  
    public class IvonnaTest  
    {  
        public const string ListName = &#34;Test List&#34;;  
  
        [TestMethod]  
        public void LoadWebPage_RenderWebPart_3EntriesInList()  
        {  
            // the fake site is now created inside the test not in the aspx page, remember not to fake it twice!  
            // create the mock SP Site we are using  
            Isolate.WhenCalled(() =&gt; Microsoft.SharePoint.WebControls.SPControl.GetContextSite(null).Url).WillReturn(&#34;http://mockedsite.com&#34;);  
            SPSite fakeSite = Isolate.Fake.Instance&lt;SPSite&gt;();  
            Isolate.Swap.NextInstance&lt;SPSite&gt;().With(fakeSite);  
  
            var itemCollection = new List&lt;SPListItem&gt;();  
            for (int i = 0; i &lt; 3; i++)  
            {  
                var fakeItem = Isolate.Fake.Instance&lt;SPListItem&gt;();  
                itemCollection.Add(fakeItem);  
  
                Isolate.WhenCalled(() =&gt; fakeItem[&#34;Title&#34;]).WillReturn(string.Format(&#34;Title {0}&#34;, i));  
                Isolate.WhenCalled(() =&gt; fakeItem[&#34;Email Address&#34;]).WillReturn(string.Format(&#34;email{0}@email.com&#34;, i));  
  
            }  
  
            Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[ListName].Items).WillReturnCollectionValuesOf(itemCollection);  
  
            TestSession session = new TestSession(); //Start each test with this  
            WebRequest request = new WebRequest(&#34;SpMvcTest.aspx&#34;); //Create a WebRequest object  
            WebResponse response = session.ProcessRequest(request); //Process the request  
            System.Web.UI.Page page = response.Page;  
            //Check the page loaded  
            Assert.IsNotNull(page);  
  
            // you would hope you could get to a given control using the following lines  
            // but they do not work  
            //var txt = page.FindControl(&#34;WebPartManager1$wp1$ctl09&#34;);  
            //var txt = page.FindControl(&#34;WebPartManager1_wp1_ctl09&#34;);  
  
            // check the webpart, we have to get at this via the zone  
            WebPartZone wpzone = page.FindControl(&#34;WebPartZone1&#34;) as WebPartZone;  
            Assert.IsNotNull(wpzone);  
            var wp = wpzone.WebParts[0] as DemoWebParts.SpMvcWebPart;  
            Assert.IsNotNull(wp);  
  
            // so we have to use the following structure and dig knowing the format  
            // webpart/panel/table/row/cell/control  
            var txt = ((TableRow)wp.Controls[0].Controls[0].Controls[0]).Cells[1].Controls[0] as Label;  
            Assert.IsNotNull(txt);  
            Assert.AreEqual(&#34;http://mockedsite.com&#34;, txt.Text);  
  
            var list = ((TableRow)wp.Controls[0].Controls[0].Controls[1]).Cells[1].Controls[0] as DropDownList;  
            Assert.IsNotNull(list);  
            Assert.AreEqual(3, list.Items.Count);  
  
        }  
  
    }  
}
</code></pre><p><em>Update after email from Artem</em></p>
<ul>
<li>
<p><em>In the code sample I have used the long-winded way of loading a page, for clarity of what is going on, but you could just write</em></p>
<blockquote>
<p><em>System.Web.UI.Page page = session.GetPage(&ldquo;SpMvcTest.aspx&rdquo;)</em></p></blockquote>
</li>
<li>
<p><em>I stated you cannot write</em></p>
<blockquote>
<p><em>var txt = page.FindControl(&ldquo;WebPartManager1$wp1$ctl09&rdquo;);</em></p></blockquote>
<p><em>this is a limitation/feature of Asp.Net naming containers, not Ivonna&rsquo;s, but you can write</em></p>
<p><em>var wp = (new ControlHelper(page)).FindControl(&ldquo;wp1&rdquo;) as DemoWebParts.SpMvcWebPart;</em></p>
<p><em>and in the version 1.2.0 of Ivonna you can use an extension method:</em></p>
<p><em>var wp = page.FindRecursive&lt;DemoWebParts.SpMvcWebPart&gt;(&ldquo;wp1&rdquo;);</em></p>
<p><em>If you are sure about the &ldquo;ctl09&rdquo; id (that could change if you change the layout), you can also write:</em></p>
<p><em>var txt = (new ControlHelper(page)).FindControl(&ldquo;WebPartManager1&rdquo;, &ldquo;wp1&rdquo;, &ldquo;ctl09&rdquo;) as Label</em></p>
<p><em>so that it looks for something with ID of &ldquo;ctl09&rdquo; inside something with ID of &ldquo;wp1&rdquo; inside something with ID of &ldquo;WebPartManager1&rdquo;</em></p>
</li>
</ul>
<p>There are a couple of gotchas with this system:</p>
<ul>
<li>the ‘path’ to the controls within the test page is a little nasty; you don’t seem to be able to just use FindControl(string), but you should know what you are after so it is not that limiting.</li>
<li>you have to hard code the webpart into the test page. In theory you could programmatically add webparts, but this would require a personalisation provider running behind the WebPart manager, which in turn would require a SQL provider, so it is not realistic for a test in a mock framework (maybe we could mock this too?). Again I don’t see this as a major limitation.</li>
</ul>
<h3 id="a-better-design">A better design</h3>
<p>Up to this point I have been assuming a very naively written webpart with all the logic in the CreateChildControls method and behind button events. Without Typemock and Ivonna this is all but un-testable, but I hope I have shown we now have options to develop and test outside Sharepoint.</p>
<p>At this point I think it is important to also consider a better design for the webpart. Using an MVC model we get many more potential points to test. Now it is an interesting discussion whether a webpart can be MVC, as MVC is a design for a whole page (and associated underlying framework), not just a small part of a page. However we can use the basic MVC principle of separation of roles, allowing all our Sharepoint calls to be placed in the Sharepoint implementation of some IDataprovider model, which we could manually mock out etc.</p>
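<p>As a rough sketch, that sort of separation might look like the following. The names here are purely illustrative assumptions of mine, not the actual interfaces in the demo project:</p>
<pre tabindex="0"><code>// Illustrative sketch only - the interface and class names are assumptions.
// All Sharepoint object model calls hide behind a narrow interface,
// so the rest of the webpart can be exercised against a hand-rolled fake.
public interface IDataProvider
{
    IList&lt;string&gt; GetEmailAddresses();
}

// Production implementation - the only place SPSite/SPList calls live
public class SharePointDataProvider : IDataProvider
{
    private readonly string siteUrl;
    private readonly string listName;

    public SharePointDataProvider(string siteUrl, string listName)
    {
        this.siteUrl = siteUrl;
        this.listName = listName;
    }

    public IList&lt;string&gt; GetEmailAddresses()
    {
        var results = new List&lt;string&gt;();
        using (var site = new SPSite(this.siteUrl))
        {
            foreach (SPListItem item in site.RootWeb.Lists[this.listName].Items)
            {
                results.Add((string)item[&#34;Email Address&#34;]);
            }
        }
        return results;
    }
}

// Hand-written fake for developer tests - no Sharepoint install needed
public class FakeDataProvider : IDataProvider
{
    public IList&lt;string&gt; GetEmailAddresses()
    {
        return new List&lt;string&gt; { &#34;email0@email.com&#34;, &#34;email1@email.com&#34;, &#34;email2@email.com&#34; };
    }
}
</code></pre>
<p>With the webpart’s controller taking an IDataProvider through its constructor, the fake can be injected by hand without any mocking framework at all.</p>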
<p>This is all good, allowing manual mocking via dependency injection, but again we can use Typemock to dynamically mock out the model, view or controller, or just to <a href="http://blog.typemock.com/2008/12/difference-between-duck-typing-and.html">duck type</a> items, thus creating tests as shown below. This should be a great saving in time and effort.</p>
<pre tabindex="0"><code>[TestMethod]
public void WebPartController_LoadFromSharePointIntoManuallyMockedView_Returns3Items()
{
    TestHelpers.CreateFakeURL();
    TestHelpers.CreateFakeSPSite();

    var datalayer = new DemoWebParts.Models.SPDataSource(
        Microsoft.SharePoint.WebControls.SPControl.GetContextSite(null).Url,
        TestHelpers.ListName);

    // the controller will create a view, the fake should be swapped in
    var controller = new DemoWebParts.Controllers.Controller(datalayer);

    // create a local view that exposes data for test
    TestView view = new TestView();
    // and swap it in
    Isolate.Swap.CallsOn(controller.View).WithCallsTo(view);

    // get the data, there should be data in the view
    controller.Init();

    // check the data is there as expected
    Assert.AreEqual(3, view.TestData.EmailAddresses.Count);
}
</code></pre><p>So in summary, if you are looking at Sharepoint and, as I have, wondered how to test or how to speed up your developer/test cycle, have a serious look at Typemock and Ivonna. I think you will like what you find.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Software Craftsmanship 2009</title>
      <link>https://blog.richardfennell.net/posts/software-craftsmanship-2009/</link>
      <pubDate>Tue, 02 Dec 2008 10:18:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/software-craftsmanship-2009/</guid>
      <description>&lt;p&gt;Applications to attend &lt;a href=&#34;http://parlezuml.com/softwarecraftsmanship/&#34;&gt;Software Craftsmanship 2009&lt;/a&gt; have opened, this is a free conference that aims to discuss ‘&lt;em&gt;the &amp;ldquo;hard skills&amp;rdquo; that programmers and teams require to deliver high quality working software’.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;If you have not heard of Software Craftsmanship take a look at Peter McBreen’s book &lt;a href=&#34;http://books.google.co.uk/books?id=C9vvHV1lIawC&amp;amp;dq=peter&amp;#43;mcbreen&amp;#43;%22software&amp;#43;Craftsmanship%22&amp;amp;pg=PP1&amp;amp;ots=pL-y3qeQjJ&amp;amp;source=bn&amp;amp;sig=rSn0wpaw65eFRQ8nOx8HiumKZfg&amp;amp;hl=en&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;resnum=4&amp;amp;ct=result#PPP1,M1&#34;&gt;Software Craftsmanship: The New Imperative&lt;/a&gt; well worth the read.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://parlezuml.com/softwarecraftsmanship/images/logo.gif&#34;&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Applications to attend <a href="http://parlezuml.com/softwarecraftsmanship/">Software Craftsmanship 2009</a> have opened. This is a free conference that aims to discuss ‘<em>the &ldquo;hard skills&rdquo; that programmers and teams require to deliver high quality working software</em>’.</p>
<p>If you have not heard of Software Craftsmanship, take a look at Peter McBreen’s book <a href="http://books.google.co.uk/books?id=C9vvHV1lIawC&amp;dq=peter&#43;mcbreen&#43;%22software&#43;Craftsmanship%22&amp;pg=PP1&amp;ots=pL-y3qeQjJ&amp;source=bn&amp;sig=rSn0wpaw65eFRQ8nOx8HiumKZfg&amp;hl=en&amp;sa=X&amp;oi=book_result&amp;resnum=4&amp;ct=result#PPP1,M1">Software Craftsmanship: The New Imperative</a>; well worth the read.</p>
<p><img loading="lazy" src="http://parlezuml.com/softwarecraftsmanship/images/logo.gif"></p>
]]></content:encoded>
    </item>
    <item>
      <title>December XPClub Meeting</title>
      <link>https://blog.richardfennell.net/posts/december-xpclub-meeting/</link>
      <pubDate>Sat, 29 Nov 2008 13:28:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/december-xpclub-meeting/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://xpclub.erudine.com/2008/11/december-meeting-design-patterns-and.html&#34;&gt;next meeting is on the 10th of December at Victoria Hotel&lt;/a&gt; in central Leeds at 7pm as usual.&lt;/p&gt;
&lt;p&gt;The speaker is Gary Short who is speaking on Design Patterns. Come to this free event to find out more about this vital subject to developers in any language from an excellent speaker.&lt;/p&gt;
&lt;p&gt;As an added bonus we will all head off to &lt;a href=&#34;http://www.festiveleeds.com/christmasmarket/&#34;&gt;Leeds Christmas Market&lt;/a&gt;, to have some wurst, sauerkraut and continental lager.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://xpclub.erudine.com/2008/11/december-meeting-design-patterns-and.html">next meeting is on the 10th of December at Victoria Hotel</a> in central Leeds at 7pm as usual.</p>
<p>The speaker is Gary Short who is speaking on Design Patterns. Come to this free event to find out more about this vital subject to developers in any language from an excellent speaker.</p>
<p>As an added bonus we will all head off to <a href="http://www.festiveleeds.com/christmasmarket/">Leeds Christmas Market</a>, to have some wurst, sauerkraut and continental lager.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Interesting news on test SharePoint</title>
      <link>https://blog.richardfennell.net/posts/interesting-news-on-test-sharepoint/</link>
      <pubDate>Tue, 25 Nov 2008 12:48:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/interesting-news-on-test-sharepoint/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://weblogs.asp.net/rosherove/archive/2008/11/24/announcing-isolator-for-sharepoint-with-a-free-full-license-for-bloggers.aspx&#34;&gt;Typemock announced today&lt;/a&gt; a new product &lt;a href=&#34;http://www.typemock.com/sharepointpage.php?utm_source=sp_bb&amp;amp;utm_medium=blog_4sp&amp;amp;utm_campaign=sp_bb&#34;&gt;Isolator for Sharepoint&lt;/a&gt; – which allows unit testing of Sharepoint code without needing Sharepoint installed. Now this is something I have been using the full version of Isolator for of &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/13/typemock-isolator-sptypemock-and-sharepoint-testing.aspx&#34;&gt;late&lt;/a&gt;, and there are more blog posts on the way from me, so watch this space.&lt;/p&gt;
&lt;p&gt;So if you are a Sharepoint developer this is an important product you should a least have a look at.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p><a href="http://weblogs.asp.net/rosherove/archive/2008/11/24/announcing-isolator-for-sharepoint-with-a-free-full-license-for-bloggers.aspx">Typemock announced today</a> a new product, <a href="http://www.typemock.com/sharepointpage.php?utm_source=sp_bb&amp;utm_medium=blog_4sp&amp;utm_campaign=sp_bb">Isolator for Sharepoint</a>, which allows unit testing of Sharepoint code without needing Sharepoint installed. Now this is something I have been doing with the full version of Isolator of <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/13/typemock-isolator-sptypemock-and-sharepoint-testing.aspx">late</a>, and there are more blog posts on the way from me, so watch this space.</p>
<p>So if you are a Sharepoint developer this is an important product you should at least have a look at.</p>
<p>It is also interesting to see the <a href="http://blog.typemock.com/2008/11/newisolatorforsharepointtoolforunittest.html">promotion mechanism</a> being used to get the word out onto the blogs: a free license to the first 50 bloggers. So I would like to state my position here: I already have a Typemock Isolator license, so this post has been written without the carrot of a free license, just written by a very impressed user.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Back from DDD7</title>
      <link>https://blog.richardfennell.net/posts/back-from-ddd7/</link>
      <pubDate>Mon, 24 Nov 2008 10:40:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/back-from-ddd7/</guid>
      <description>&lt;p&gt;Another long day over the weekend at DDD7, but worth it. An excellent selection of sessions; I particularly liked &lt;a href=&#34;http://blog.benhall.me.uk/2008/11/ddd7-slides-and-code-pex-future-of-unit.html&#34;&gt;Ben Hall’s&lt;/a&gt; on &lt;a href=&#34;http://research.microsoft.com/projects/Pex/wiki/book.html&#34;&gt;Pex&lt;/a&gt; and &lt;a href=&#34;http://msmvps.com/blogs/jon_skeet/default.aspx&#34;&gt;Jon Skeet’s on Linq&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;A big thank you to the organisers for putting on such a successful event. It was noticeably fuller than past events, as we know DDD7 filled up in about four hours. Maybe time for a bigger venue, but would we lose the atmosphere?&lt;/p&gt;
&lt;p&gt;The other option is for DDD style events around the country and to this end there were a few announcements yesterday. We already know about &lt;a href=&#34;http://developerdayscotland.com/main/Default.aspx&#34;&gt;DD Scotland in May 2009&lt;/a&gt;, the new ones announced for Q2 2009 were&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Another long day over the weekend at DDD7, but worth it. An excellent selection of sessions; I particularly liked <a href="http://blog.benhall.me.uk/2008/11/ddd7-slides-and-code-pex-future-of-unit.html">Ben Hall’s</a> on <a href="http://research.microsoft.com/projects/Pex/wiki/book.html">Pex</a> and <a href="http://msmvps.com/blogs/jon_skeet/default.aspx">Jon Skeet’s on Linq</a>.</p>
<p>A big thank you to the organisers for putting on such a successful event. It was noticeably fuller than past events, as we know DDD7 filled up in about four hours. Maybe time for a bigger venue, but would we lose the atmosphere?</p>
<p>The other option is for DDD-style events around the country, and to this end there were a few announcements yesterday. We already know about <a href="http://developerdayscotland.com/main/Default.aspx">DDD Scotland in May 2009</a>; the new ones announced for Q2 2009 were</p>
<ul>
<li><a href="http://www.sqlbits.com/">SQLBits</a> 4 (Manchester)</li>
<li><a href="http://www.dddsouthwest.com/">Developer Day South West</a> (Exeter)</li>
</ul>
<p>Look out for more details and the organisers’ calls for speakers.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Call for speakers for Developer Day Scotland is open</title>
      <link>https://blog.richardfennell.net/posts/call-for-speakers-for-developer-day-scotland-is-open/</link>
      <pubDate>Fri, 21 Nov 2008 11:27:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/call-for-speakers-for-developer-day-scotland-is-open/</guid>
      <description>&lt;p&gt;The call for speakers at the DDS event on the 2nd of May 2009 is now open. So get your session proposals up and see if there is any interest.&lt;/p&gt;
&lt;p&gt;I am not sure what I will propose, maybe something about testing Sharepoint&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://developerdayscotland.com/images/badges/GetReady2-small.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The call for speakers at the DDS event on the 2nd of May 2009 is now open. So get your session proposals up and see if there is any interest.</p>
<p>I am not sure what I will propose, maybe something about testing Sharepoint</p>
<p><img loading="lazy" src="http://developerdayscotland.com/images/badges/GetReady2-small.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Entering a license key for Typemock Isolator if you are not administrator</title>
      <link>https://blog.richardfennell.net/posts/entering-a-license-key-for-typemock-isolator-if-you-are-not-administrator/</link>
      <pubDate>Mon, 17 Nov 2008 13:44:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/entering-a-license-key-for-typemock-isolator-if-you-are-not-administrator/</guid>
      <description>&lt;p&gt;To license an installation of Typemock Isolator you run the Configuration tools and type in the key, you don’t get the option to enter the key during the installation. When I tried to run this tool today I got the error&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_0AD2F873.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_6AB7EBB5.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Now at first I thought it might be that I was on a64bit OS and it was looking in a portion of the registry for 32bit applications. However I was wrong it was far simpler than that.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>To license an installation of Typemock Isolator you run the Configuration tool and type in the key; you don’t get the option to enter the key during the installation. When I tried to run this tool today I got the error</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_0AD2F873.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_6AB7EBB5.png" title="image"></a></p>
<p>Now at first I thought it might be that I was on a 64bit OS and it was looking in a portion of the registry for 32bit applications. However I was wrong; it was far simpler than that.</p>
<p>I am no longer running as administrator on my development box, so when I installed Typemock I was asked for elevated privileges via UAC and all was OK. The configuration tool also needs to run with these privileges so it can update the registry; the simple fix is to call the tool with the <em>RunAs</em> option and all is OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Windows 7 on the Dell Mini</title>
      <link>https://blog.richardfennell.net/posts/windows-7-on-the-dell-mini/</link>
      <pubDate>Fri, 14 Nov 2008 11:35:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/windows-7-on-the-dell-mini/</guid>
      <description>&lt;p&gt;I don’t recommend having automatic update switched on (which is the default) for Windows 7. Yesterday my Windows 7 install decided to install 20 updates. It rebooted and then rebooted again and again. My guess is that a driver updated and killed the boot process.&lt;/p&gt;
&lt;p&gt;It did try to go into the automatic fix, but this just said it could not fix the issue; maybe it was a driver issue. I could not find an equivalent to safe mode to try to delete the updates, so today I am re-installing, this time setting a system restore point so I can roll back if the same happens again.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I don’t recommend having automatic update switched on (which is the default) for Windows 7. Yesterday my Windows 7 install decided to install 20 updates. It rebooted and then rebooted again and again. My guess is that a driver updated and killed the boot process.</p>
<p>It did try to go into the automatic fix, but this just said it could not fix the issue; maybe it was a driver issue. I could not find an equivalent to safe mode to try to delete the updates, so today I am re-installing, this time setting a system restore point so I can roll back if the same happens again.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TypeMock Isolator, SPTypeMock and SharePoint testing</title>
      <link>https://blog.richardfennell.net/posts/typemock-isolator-sptypemock-and-sharepoint-testing/</link>
      <pubDate>Thu, 13 Nov 2008 13:01:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/typemock-isolator-sptypemock-and-sharepoint-testing/</guid>
      <description>&lt;p&gt;I had to work unexpectedly from home yesterday, this has given me a chance to look at &lt;a href=&#34;http://blog.typemock.com&#34;&gt;TypeMock Isolator&lt;/a&gt; and &lt;a href=&#34;http://www.codeplex.com/SPTypeMock&#34;&gt;SPTypeMock&lt;/a&gt; to aid in the testing of SharePoint without the normal disturbances of being in the office.&lt;/p&gt;
&lt;p&gt;First thing I have to say is TypeMock is an amazing tool, OK it costs some money, unlike &lt;a href=&#34;http://ayende.com/projects/rhino-mocks.aspx&#34;&gt;RhinoMocks&lt;/a&gt;, but it’s ability to mock out sealed classes that have no public constructors is essential when testing SharePoint (which seems to contain nothing but sealed classes with no public constructors).&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I had to work unexpectedly from home yesterday; this gave me a chance to look at <a href="http://blog.typemock.com">TypeMock Isolator</a> and <a href="http://www.codeplex.com/SPTypeMock">SPTypeMock</a> to aid in the testing of SharePoint, without the normal disturbances of being in the office.</p>
<p>First thing I have to say is that TypeMock is an amazing tool. OK, it costs money, unlike <a href="http://ayende.com/projects/rhino-mocks.aspx">RhinoMocks</a>, but its ability to mock out sealed classes that have no public constructors is essential when testing SharePoint (which seems to contain nothing but sealed classes with no public constructors).</p>
<p>I decided to try to retro-fit some tests to an old existing WebPart. This was a simple contact form that populated some combo-boxes from SPList collections, then saved its results to another SPList. This had all been coded making direct calls to the SPList objects as required from within the WebPart. All very old school VB6 style, no <a href="http://en.wikipedia.org/wiki/Model-view-controller">MVC</a> pattern, so a block of legacy code that is hard to test. However, the beauty of using this mocking framework is that all your production code remains unaltered (though as we will see there is a good argument for designing/refactoring to aid testing).</p>
<p>I first started with the SPTypeMock library. This CodePlex project has been produced by a pair of MOSS MVPs, Carlos Segura (<a href="http://www.ideseg.com">http://www.ideseg.com</a>) and Gustavo Velez (<a href="http://www.gavd.net">http://www.gavd.net</a>). It provides a set of wrapper classes, in the form <em>MockSPList</em> etc., that you can use to construct the SharePoint mocks; basically they hide some of the TypeMock constructions. This means that test logic ends up as follows (using the sample from CodePlex)</p>
<p><strong>Mocking a List Collection</strong><br>
-- Method to be tested:</p>
<pre tabindex="0"><code>public static string TestMock_02()
{
    string strReturn = String.Empty;
    try
    {
        using (SPSite mySite = new SPSite(&#34;http://MiServidor&#34;))
        {
            using (SPWeb myWeb = mySite.OpenWeb())
            {
                int intTeller = 0;
                foreach (SPList oneList in myWeb.Lists)
                {
                    Debug.WriteLine(oneList.Title);
                    intTeller++;
                }
                strReturn = intTeller.ToString();
            }
        }
    }
    catch (Exception ex)
    {
        strReturn = ex.ToString();
    }
    return strReturn;
}
</code></pre><p>-- Mocking method:</p>
<pre tabindex="0"><code>[TestMethod]
public void TestMethod2()
{
    MockSPSite mockSite = new MockSPSite(&#34;TestSite&#34;);
    MockSPWeb mockWeb = new MockSPWeb(&#34;TestWeb&#34;);
    MockSPList mockList0 = new MockSPList(&#34;MyList0&#34;);
    MockSPList mockList1 = new MockSPList(&#34;MyList1&#34;);
    MockSPList mockList2 = new MockSPList(&#34;MyList2&#34;);
    mockWeb.Lists = new MockSPListCollection(new[]
    {
        mockList0,
        mockList1,
        mockList2
    });
    mockSite.Mock.ExpectGetAlways(&#34;RootWeb&#34;, mockWeb.GetInstance());
    SPWeb WebMocked = mockWeb.GetInstance();
    using (RecordExpectations recorder = RecorderManager.StartRecording())
    {
        SPSite SiteMocked = new SPSite(&#34;&#34;);
        recorder.ExpectAndReturn(SiteMocked.OpenWeb(), WebMocked);
    }
    string expected = &#34;3&#34;;
    string actual;
    actual = Program.TestMock_02();
    Assert.AreEqual(expected, actual);
}</code></pre>
<p>This works well; they have done a good job. You get a more readable way to express the standard TypeMock structure. Yes, the SPTypeMock library is missing some bits, but it is a first release and they point out themselves that there is work to do. You can always just write the mocks yourself as in basic TypeMock tests.</p>
<p>However, I did not stop looking here. After doing a bit more reading I started to look at <a href="http://blog.typemock.com/2008/08/isolator-aaa-api-basics.html">Isolator’s new AAA library</a> (Arrange, Act, Assert), which I think first shipped with 5.1.0. This aims to hide much of the mocking process inside Isolator: by default an ‘empty fake’ is created for everything in the object tree being mocked and you just set values for the bits you care about. This makes it very easy to create a fake of a <a href="http://blog.typemock.com/2008/09/testing-sharepoint-now-easier-with-new.html">large system such as SharePoint</a> by using the magic <strong>Members.ReturnRecursiveFakes</strong> option.</p>
<p>This allowed me to greatly reduce the code required to set up my tests. I create a fake SPSite (and all the objects under it) and then set the values for just the items I care about for the test.</p>
<pre tabindex="0"><code>SPSite fakeSite = Isolate.Fake.Instance&lt;SPSite&gt;(Members.ReturnRecursiveFakes);
Isolate.Swap.NextInstance&lt;SPSite&gt;().With(fakeSite);

Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[&#34;Centre Locations&#34;].Items).WillReturnCollectionValuesOf(
    new List&lt;SPItem&gt; {
        Isolate.Fake.Instance&lt;SPItem&gt;(),
        Isolate.Fake.Instance&lt;SPItem&gt;(),
        Isolate.Fake.Instance&lt;SPItem&gt;() });

Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[&#34;Centre Locations&#34;].Items[0][&#34;Title&#34;]).WillReturn(&#34;Title1&#34;);
Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[&#34;Centre Locations&#34;].Items[0][&#34;Email Address&#34;]).WillReturn(&#34;email1@email.com&#34;);

Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[&#34;Centre Locations&#34;].Items[1][&#34;Title&#34;]).WillReturn(&#34;Title2&#34;);
Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[&#34;Centre Locations&#34;].Items[1][&#34;Email Address&#34;]).WillReturn(&#34;email2@email.com&#34;);

Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[&#34;Centre Locations&#34;].Items[2][&#34;Title&#34;]).WillReturn(&#34;Title3&#34;);
Isolate.WhenCalled(() =&gt; fakeSite.RootWeb.Lists[&#34;Centre Locations&#34;].Items[2][&#34;Email Address&#34;]).WillReturn(&#34;email3@email.com&#34;);
</code></pre>
<p>This I think makes the unit testing of business logic within SharePoint viable without having to jump through too many hoops.</p>
<p>Given the choice between SPTypeMock and the AAA syntax I think I would stick with the latter, but you never know in the future. I suppose it will all come down to which syntax gives the quickest (and maybe more importantly, easiest to read) tests.</p>
<p>I did say I would come back to the application under test’s architecture. This simple WebPart, which contains a good selection of calls to SPLists and has client-side validation, has proved to be very hard to test, as you would expect. OK, you have used TypeMock to get data into it, but how do you test it ends up in the right field? You can try to render the control, but there are issues of WebPartManagers and script registration that frankly are not worth trying to fix. The better solution is some design for test, which I think for WebParts means MVC. In some ways this pattern would negate the need for TypeMock, as you would create an IDataModel interface and have any test version you require, though of course you could mock this with TypeMock.</p>
<p>All this said, I am hugely impressed by TypeMock Isolator, a really powerful tool for the testing of complex platforms like SharePoint.</p>
<p><a href="http://www.typemock.com/?id=1"><img src="http://www.typemock.com/images/mockfan.gif" alt=""></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Strange guide to Ruby</title>
      <link>https://blog.richardfennell.net/posts/strange-guide-to-ruby/</link>
      <pubDate>Thu, 13 Nov 2008 11:06:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/strange-guide-to-ruby/</guid>
      <description>&lt;p&gt;At the XP Club last night I was pointed at a web site that contains a very strange guide to Ruby “&lt;a href=&#34;http://poignantguide.net/ruby/index.html&#34;&gt;Why’s (poignant) guide to Ruby&lt;/a&gt;”. This is one of the strangest language books I have read in a while, probably since “&lt;a href=&#34;http://www.mrbunny.com/mbgtax.html&#34;&gt;Mr Bunny’s Guide to Active X&lt;/a&gt;”, which was described in its own blurb…&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“This is the first technology book by Carlton Egremont III, author of numerous lengthy grocery lists (unpublished), one or two letters to his mom (unsent), and a doodle on page 117 of the Rochester Public Library&amp;rsquo;s copy of Moby Dick (overdue). Mr. Bunny&amp;rsquo;s Guide to ActiveX makes a lovely gift for the nerd who has everything, and is perfect for propping up uneven table legs. For the high-tech parent there is simply no better antidote to yet another bedtime reading of &amp;ldquo;The Velveteen Rabbit&amp;rdquo; or the &amp;ldquo;OLE 2 Programmer&amp;rsquo;s Reference&amp;rdquo;. Just like Carlton, you and your children will come to believe in a talking bunny, a befuddled farmer, and a technology called ActiveX. “&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the XP Club last night I was pointed at a web site that contains a very strange guide to Ruby “<a href="http://poignantguide.net/ruby/index.html">Why’s (poignant) guide to Ruby</a>”. This is one of the strangest language books I have read in a while, probably since “<a href="http://www.mrbunny.com/mbgtax.html">Mr Bunny’s Guide to Active X</a>”, which was described in its own blurb…</p>
<p><em>“This is the first technology book by Carlton Egremont III, author of numerous lengthy grocery lists (unpublished), one or two letters to his mom (unsent), and a doodle on page 117 of the Rochester Public Library&rsquo;s copy of Moby Dick (overdue). Mr. Bunny&rsquo;s Guide to ActiveX makes a lovely gift for the nerd who has everything, and is perfect for propping up uneven table legs. For the high-tech parent there is simply no better antidote to yet another bedtime reading of &ldquo;The Velveteen Rabbit&rdquo; or the &ldquo;OLE 2 Programmer&rsquo;s Reference&rdquo;. Just like Carlton, you and your children will come to believe in a talking bunny, a befuddled farmer, and a technology called ActiveX. “</em></p>
<p>The strangest thing about these books is they are both a really good introduction to their subjects. As we have seen with the <a href="http://oreilly.com/store/series/headfirst.csp">Head First series of books</a>, training material can come in many forms, because people learn in many ways.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Power Toys</title>
      <link>https://blog.richardfennell.net/posts/tfs-power-toys/</link>
      <pubDate>Sun, 09 Nov 2008 10:42:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-power-toys/</guid>
      <description>&lt;p&gt;I am not a fan of blog posts that are just a repeat of an announcement on other blogs, but in this case I think it is worth noting that the &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2008/11/08/oct-08-tfs-power-tools-are-available.aspx&#34;&gt;TFS October 2008 Release of the Power Toys are out&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The power toys are always interesting but the point of note here is the new shell integration for TFS. This means you can check in/out from Windows Explorer, thus in effect making it far easier to integrate third party products with TFS, like Dreamweaver or Expression Blend (OK not third party but has no TFS integration until version 3).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am not a fan of blog posts that are just a repeat of an announcement on other blogs, but in this case I think it is worth noting that the <a href="http://blogs.msdn.com/bharry/archive/2008/11/08/oct-08-tfs-power-tools-are-available.aspx">TFS October 2008 Release of the Power Toys are out</a>.</p>
<p>The power toys are always interesting but the point of note here is the new shell integration for TFS. This means you can check in/out from Windows Explorer, thus in effect making it far easier to integrate third party products with TFS, like Dreamweaver or Expression Blend (OK not third party but has no TFS integration until version 3).</p>
]]></content:encoded>
    </item>
    <item>
      <title>XPClub meeting on 12th November</title>
      <link>https://blog.richardfennell.net/posts/xpclub-meeting-on-12th-november/</link>
      <pubDate>Fri, 07 Nov 2008 16:44:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/xpclub-meeting-on-12th-november/</guid>
      <description>&lt;p&gt;Next week’s meeting is at the Victoria Hotel in Leeds, as usual at 7pm. It is going to be a group discussion sort of session, the subjects being:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Daniel Drozdzewski is going to present the future of computing (based on an article read in a recent New Scientist about processors built on logic gates utilising the chaos phenomenon)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;plus&lt;/p&gt;
&lt;p&gt;&lt;em&gt;moderated conversation about design in software projects&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;plus the usual gossip from the industry.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Next week’s meeting is at the Victoria Hotel in Leeds, as usual at 7pm. It is going to be a group discussion sort of session, the subjects being:</p>
<p><em>Daniel Drozdzewski is going to present the future of computing (based on an article read in a recent New Scientist about processors built on logic gates utilising the chaos phenomenon)</em></p>
<p>plus</p>
<p><em>moderated conversation about design in software projects</em></p>
<p>plus the usual gossip from the industry.</p>
<p>Hope to see you there; remember the event is free and so is at least the first beer.</p>
<p><strong>NB.</strong> Remember next month’s meeting on the 10th of December is Gary Short’s excellent session on Patterns in software development.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The future of paid conferences and other thoughts</title>
      <link>https://blog.richardfennell.net/posts/the-future-of-paid-conferences-and-other-thoughts/</link>
      <pubDate>Thu, 06 Nov 2008 16:31:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-future-of-paid-conferences-and-other-thoughts/</guid>
      <description>&lt;p&gt;Whilst at PDC and the VBug conference I have heard a good deal of chat over the future of paying for conferences and user groups. This is in the light of all the PDC sessions being available on &lt;a href=&#34;http://channel9.msdn.com/posts/pdc2008/RSS/Default.aspx&#34;&gt;Channel9&lt;/a&gt; in under 24 hours and that the content at the Vbug conference is also available at free events like &lt;a href=&#34;http://www.developerday.co.uk/ddd/default.asp&#34;&gt;DDD&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The question boils down to: can a person or company justify paying a good few thousand Pounds, Euros or Dollars to fly half way round the world when they could see the same content at home? In my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/02/post-pdc-2008-thoughts.aspx&#34;&gt;previous post&lt;/a&gt; on the PDC I suggested it was worth it for the networking, and I still think this is so. However, I have heard an interesting slant on this from more than one person; this is to go to the city where the conference is but not to the actual conference, just taking in the parties and maybe watching content via the Internet where available.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst at PDC and the VBug conference I have heard a good deal of chat over the future of paying for conferences and user groups. This is in the light of all the PDC sessions being available on <a href="http://channel9.msdn.com/posts/pdc2008/RSS/Default.aspx">Channel9</a> in under 24 hours and that the content at the Vbug conference is also available at free events like <a href="http://www.developerday.co.uk/ddd/default.asp">DDD</a>.</p>
<p>The question boils down to: can a person or company justify paying a good few thousand Pounds, Euros or Dollars to fly half way round the world when they could see the same content at home? In my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/11/02/post-pdc-2008-thoughts.aspx">previous post</a> on the PDC I suggested it was worth it for the networking, and I still think this is so. However, I have heard an interesting slant on this from more than one person; this is to go to the city where the conference is but not to the actual conference, just taking in the parties and maybe watching content via the Internet where available.</p>
<p>For some people I think this might be a viable option, as long as you get the right party invites! For example, at TechEd Europe there are many community-orientated events organised outside the conference, because this is the one time most of the relevant people are in the same city. Also, if you are in this group then you may struggle to find time to go to the actual conference, so maybe this plan is viable or even preferred. However, for the average developer I am not certain it is the case; too much of the networking happens randomly inside the conference corridors and at meal tables. For this ‘outside the conference’ model to work you have to know who you want to meet and get invited to the right places/parties, i.e. you need some profile in the community.</p>
<p>As to the other point, whether Vbug-like events will continue, I think we need to consider who they are aimed at. I had expected at the Vbug conference to see a lot of faces in the audience who I see at DDD, but this was not the case. There were a few but not a majority. Then again, I don’t see the same faces at DDD as at Alt.net. We have a number of distinct communities going on here; there is some crossover but not that much. I think the three broad groups are:</p>
<ul>
<li>People who go to events (free or otherwise) during office hours – VBug attendees, and people who <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events">come to the events we host</a> with Microsoft.</li>
<li>People who will go to an event in their own time, but it is a passive learning experience – like DDD on a Saturday or a speaker at a user group</li>
<li>People who want to discuss what they do either in a user group over a beer or at an Open Spaces format conference – like Alt.net</li>
</ul>
<p>We are never going to get all three groups merged into one. People will move from one to another and maybe attend all three, but that is their choice.</p>
<p>We are lucky in the UK that we have such an active and high quality community so all three groups can be supported; it will be interesting to see if any one type prevails (judged by attendance) as time goes on. However, I do not expect to see any type disappear soon.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Page views not updated in community server</title>
      <link>https://blog.richardfennell.net/posts/page-views-not-updated-in-community-server/</link>
      <pubDate>Thu, 06 Nov 2008 16:02:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/page-views-not-updated-in-community-server/</guid>
      <description>&lt;p&gt;Since we added the new themes to our community server we have not been getting any updates on the Blogs control panel as to the number of times a post has been viewed (but the aggregate views via RSS are incremented OK)&lt;/p&gt;
&lt;p&gt;After a bit of digging it seems that we were missing the &lt;strong&gt;IncrementViewCount&lt;/strong&gt; flag in the &lt;strong&gt;post.aspx&lt;/strong&gt; file. It should be as shown below.&lt;/p&gt;
&lt;p&gt;&amp;lt;CSBlog:WeblogPostData Property=&amp;ldquo;FormattedBody&amp;rdquo; runat=&amp;ldquo;server&amp;rdquo; IncrementViewCount=&amp;ldquo;true&amp;rdquo; /&amp;gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since we added the new themes to our community server we have not been getting any updates on the Blogs control panel as to the number of times a post has been viewed (but the aggregate views via RSS are incremented OK)</p>
<p>After a bit of digging it seems that we were missing the <strong>IncrementViewCount</strong> flag in the <strong>post.aspx</strong> file. It should be as shown below.</p>
<p>&lt;CSBlog:WeblogPostData Property=&ldquo;FormattedBody&rdquo; runat=&ldquo;server&rdquo; IncrementViewCount=&ldquo;true&rdquo; /&gt;</p>
<p>If you miss this flag out your page shows OK but the statistics are not updated.</p>
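<p>For comparison, a theme where the flag has been missed out would contain the control without the attribute. The two variants below are illustrative (only the <strong>IncrementViewCount</strong> attribute differs):</p>

```html
<%-- Renders the post body but does NOT update the view count --%>
<CSBlog:WeblogPostData Property="FormattedBody" runat="server" />

<%-- Renders the post body AND updates the view statistics --%>
<CSBlog:WeblogPostData Property="FormattedBody" runat="server" IncrementViewCount="true" />
```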
]]></content:encoded>
    </item>
    <item>
      <title>My VBug conference session on TFS</title>
      <link>https://blog.richardfennell.net/posts/my-vbug-conference-session-on-tfs/</link>
      <pubDate>Wed, 05 Nov 2008 21:07:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/my-vbug-conference-session-on-tfs/</guid>
      <description>&lt;p&gt;You can find the slides for my Vbug sessions on the &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;amp;subsection=Conference&#34;&gt;Black Marble web site&lt;/a&gt; (Conference Papers).&lt;/p&gt;
&lt;p&gt;I hope those of you who attended found it useful.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>You can find the slides for my Vbug sessions on the <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;subsection=Conference">Black Marble web site</a> (Conference Papers).</p>
<p>I hope those of you who attended found it useful.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Going to conferences is worth it, well the chats in the corridor certainly are.</title>
      <link>https://blog.richardfennell.net/posts/going-to-conferences-is-worth-it-well-the-chats-in-the-corridor-certainly-are/</link>
      <pubDate>Tue, 04 Nov 2008 22:54:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/going-to-conferences-is-worth-it-well-the-chats-in-the-corridor-certainly-are/</guid>
      <description>&lt;p&gt;I am down in Reading for the VBug conference where I am speaking on TFS tomorrow.&lt;/p&gt;
&lt;p&gt;Whilst in the bar chatting to &lt;a href=&#34;http://weblogs.asp.net/ROsherove/&#34;&gt;Roy Osherove&lt;/a&gt; from Typemock, the keynote speaker for the conference, he asked if I had looked at the &lt;a href=&#34;http://www.codeplex.com/spg&#34;&gt;Sharepoint patterns and practices document&lt;/a&gt; that details using &lt;a href=&#34;http://www.typemock.com/free_open_source_license_form.php&#34;&gt;Typemock Isolator&lt;/a&gt; for unit testing in Sharepoint.&lt;/p&gt;
&lt;p&gt;On a first look it seems very interesting; as usual at this point I just wonder how I missed the announcement of this document last month! Is it just me or does everyone struggle to keep up with the blogs and sites you should read?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am down in Reading for the VBug conference where I am speaking on TFS tomorrow.</p>
<p>Whilst in the bar chatting to <a href="http://weblogs.asp.net/ROsherove/">Roy Osherove</a> from Typemock, the keynote speaker for the conference, he asked if I had looked at the <a href="http://www.codeplex.com/spg">Sharepoint patterns and practices document</a> that details using <a href="http://www.typemock.com/free_open_source_license_form.php">Typemock Isolator</a> for unit testing in Sharepoint.</p>
<p>On a first look it seems very interesting; as usual at this point I just wonder how I missed the announcement of this document last month! Is it just me or does everyone struggle to keep up with the blogs and sites you should read?</p>
<p><strong>Update</strong> 7th Nov - <a href="http://www.typemock.com/sharepointpage.php">http://www.typemock.com/sharepointpage.php</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>patterns &amp;amp; practices Acceptance Test Engineering Guidance</title>
      <link>https://blog.richardfennell.net/posts/patterns-practices-acceptance-test-engineering-guidance/</link>
      <pubDate>Mon, 03 Nov 2008 21:11:20 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/patterns-practices-acceptance-test-engineering-guidance/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/boss/archive/2008/11/02/patterns-and-practices-acceptance-test-engineering-guidance.aspx&#34;&gt;Robert blogged&lt;/a&gt; about the new beta release of the &lt;a href=&#34;http://www.codeplex.com/TestingGuidance&#34;&gt;patterns &amp;amp; practices Acceptance Test Engineering Guidance document&lt;/a&gt;. I have had a chance to do a quick read now and I have to say I am impressed. If nothing else it gives a great comparative look at waterfall and agile methods for delivery, and a review of many types of acceptance testing.&lt;/p&gt;
&lt;p&gt;As with many of the p&amp;amp;p documents it is not exhaustive in what it covers, but what it does give is an excellent and detailed starting point for you to make the decisions that are right for your project. It does not give all the answers, just most of the right questions.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/boss/archive/2008/11/02/patterns-and-practices-acceptance-test-engineering-guidance.aspx">Robert blogged</a> about the new beta release of the <a href="http://www.codeplex.com/TestingGuidance">patterns &amp; practices Acceptance Test Engineering Guidance document</a>. I have had a chance to do a quick read now and I have to say I am impressed. If nothing else it gives a great comparative look at waterfall and agile methods for delivery, and a review of many types of acceptance testing.</p>
<p>As with many of the p&amp;p documents it is not exhaustive in what it covers, but what it does give is an excellent and detailed starting point for you to make the decisions that are right for your project. It does not give all the answers, just most of the right questions.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Post PDC 2008 thoughts</title>
      <link>https://blog.richardfennell.net/posts/post-pdc-2008-thoughts/</link>
      <pubDate>Sun, 02 Nov 2008 13:29:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-pdc-2008-thoughts/</guid>
      <description>&lt;p&gt;So, back home now after a reasonable journey back from LA, all things considered; I have had a bit of time to reflect: was the PDC good?&lt;/p&gt;
&lt;p&gt;Well, I think I enjoyed my previous PDC in 2005 more; your first time always sticks in your memory. I think that this might be due to the fact that at the 2005 PDC LINQ was announced and it was a real left field thing, nobody seemed to see it coming. Due to the prior announcements (leaks) there was nothing that was not expected at this year’s PDC.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>So, back home now after a reasonable journey back from LA, all things considered; I have had a bit of time to reflect: was the PDC good?</p>
<p>Well, I think I enjoyed my previous PDC in 2005 more; your first time always sticks in your memory. I think that this might be due to the fact that at the 2005 PDC LINQ was announced and it was a real left field thing, nobody seemed to see it coming. Due to the prior announcements (leaks) there was nothing that was not expected at this year’s PDC.</p>
<p>This said, it does not mean the announcements were not important; for the future of Microsoft they are probably far more important than LINQ was. You could argue that <a href="http://www.microsoft.com/azure/">Azure</a> is more of an IT pro announcement; on a day to day basis it will certainly affect them, due to remote hosting of core services, more than developers, who will still basically be using .NET via WCF, EF etc., just altering a connection string or two. So an announcement at the Professional <em><strong>Developers</strong></em> Conference was in itself interesting, but where else would Microsoft do it?</p>
<p>On a more step-change-for-developers front, will <a href="http://www.microsoft.com/soa/products/oslo.aspx">Oslo</a> change the world? Well, in my opinion not yet, but this was a PDC so we expect the new ‘real’ product to be a few years out. Next year’s PDC 2009 I think will see the Oslo and Dublin technologies <em>productify</em> (is that a word?). It is worth commenting that a PDC two years running is rare, so Microsoft must have something up their corporate sleeve.</p>
<p>Since getting back I have done the conference survey and I found one question interesting: ‘does the fact that the sessions were all available via <a href="http://www.microsoftpdc.com">www.microsoftpdc.com</a> affect your choice to go to the conference in the future?’. I have to say yes, but on reflection it was worth the trip. A conference is more than the keynote and breakout sessions; maybe there is a future in fully online conferences but it is not there for me yet. Whether I want to travel the best part of half way round the world is an interesting point; a 2 hour flight to Barcelona did seem attractive when sitting in Heathrow Terminal 5 prior to my 11 hour flight. But I was lucky I suppose; a friend was off to Hawaii for an air compressor conference the same week I was in LA (each to their own I suppose). The location for me does not warrant the travel; the inside of a conference centre is much the same in any country. I suppose a factor here is how much time you spend in the conference against the beach; for me, if you are going to a conference it is to learn, not have a holiday (or hunt for swag!), but I am not sure all people have the same opinion here. Some people seem to see a conference as a reward for work done in the year, so a treat not a learning experience.</p>
<p>So will I be at PDC 2009 – I expect so. Whether Black Marble send as many to the PDC as opposed to the various TechEds I am not sure; this is a discussion we need to have post conference season. We definitely need to cover both types of conference, as does any forward looking Microsoft partner, but the ratio is the question.</p>
<p>As for me, next it is the <a href="http://www.vbug.com/News/August-2008/VBUGS-11th-Annual-Developer-Conference.aspx">Vbug conference</a> where I am speaking on VSTS; I believe there are still spaces available, maybe I will see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Team System Database GDR Edition RTM Released</title>
      <link>https://blog.richardfennell.net/posts/team-system-database-gdr-edition-rtm-released/</link>
      <pubDate>Sat, 01 Nov 2008 22:19:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/team-system-database-gdr-edition-rtm-released/</guid>
      <description>&lt;p&gt;While I was at the PDC I missed the announcement that the release candidate for Database Edition had been made, as I was not following blogs too much. You can find all the &lt;a href=&#34;http://blogs.msdn.com/gertd/archive/2008/10/27/the-gdr-rc-is-here.aspx&#34;&gt;download links for the RTM at Gert Drapers Blog&lt;/a&gt;; you can also see his &lt;a href=&#34;http://channel9.msdn.com/pdc2008/TL45/&#34;&gt;PDC session where he made the announcement&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Why did I miss the session you might ask? Well, I was in the &lt;a href=&#34;http://channel9.msdn.com/pdc2008/TL09/&#34;&gt;Agile development with Team System session&lt;/a&gt; – some big improvements in Office integration with TFS&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>While I was at the PDC I missed the announcement that the release candidate for Database Edition had been made, as I was not following blogs too much. You can find all the <a href="http://blogs.msdn.com/gertd/archive/2008/10/27/the-gdr-rc-is-here.aspx">download links for the RTM at Gert Drapers Blog</a>; you can also see his <a href="http://channel9.msdn.com/pdc2008/TL45/">PDC session where he made the announcement</a>.</p>
<p>Why did I miss the session you might ask? Well, I was in the <a href="http://channel9.msdn.com/pdc2008/TL09/">Agile development with Team System session</a> – some big improvements in Office integration with TFS</p>
<p>Anyway I have now installed the GDR edition and it is easy and fast, at least if you are upgrading from the previous version. I have not tried a virgin install which I am sure many people will be doing given that any person with Team System Developer license can download the Database GDR edition.</p>
]]></content:encoded>
    </item>
    <item>
      <title>PDC Day 3</title>
      <link>https://blog.richardfennell.net/posts/pdc-day-3/</link>
      <pubDate>Thu, 30 Oct 2008 15:33:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pdc-day-3/</guid>
<description>&lt;p&gt;The keynote today was all about &lt;a href=&#34;http://research.microsoft.com/&#34;&gt;MSR&lt;/a&gt;, interesting as ever. I particularly liked the demos of &lt;em&gt;&lt;a href=&#34;http://www.techflash.com/Video_Microsofts_SecondLight_project.html&#34;&gt;Second Light&lt;/a&gt;&lt;/em&gt; (Surface computing that reaches beyond the surface of the physical PC) and &lt;em&gt;Boku&lt;/em&gt; (a system to help children program). The latter is close to our hearts at Black Marble due to the work we have done on &lt;a href=&#34;http://www.myfpl.co.uk/&#34;&gt;FPL&lt;/a&gt;, another system to teach children to program (watch out for free downloads of this application soon).&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>The keynote today was all about <a href="http://research.microsoft.com/">MSR</a>, interesting as ever. I particularly liked the demos of <em><a href="http://www.techflash.com/Video_Microsofts_SecondLight_project.html">Second Light</a></em> (Surface computing that reaches beyond the surface of the physical PC) and <em>Boku</em> (a system to help children program). The latter is close to our hearts at Black Marble due to the work we have done on <a href="http://www.myfpl.co.uk/">FPL</a>, another system to teach children to program (watch out for free downloads of this application soon).</p>
<p>It was also interesting to see that there was a date for a <a href="http://www.microsoftpdc.com/View.aspx?post=http://channel9.msdn.com/posts/PDCNews/Save-the-Date-PDC09/&amp;tag=">PDC 2009</a> – shows that Microsoft have plenty of new things in the pipeline.</p>
<p>Outside of the keynote, what could be most directly useful to me is Visual Studio Team Lab, a new SKU for 2010 (sorry, still can’t find a link with more details) that will manage the provisioning of test environments: Hyper-V VHDs are stored in a repository and created using pre-defined rules as part of a build process. Tests can then be run either automatically or manually using the new VS2010 test tools. Test results are then fed back into the TFS work item tracking system, including screenshots and error text information – in fact enough information to allow a developer to connect to the Hyper-V PC at the point of error and debug. This idea is something we are working on internally with the current VS2008 and Hyper-V tools. Unfortunately we are unlikely to see Team Lab until VS2010 reaches beta, so a good way off – so it seems I am going to have to persist with our own internal projects.</p>
]]></content:encoded>
    </item>
    <item>
      <title>PDC 2008 Day 2 Keynote</title>
      <link>https://blog.richardfennell.net/posts/pdc-2008-day-2-keynote/</link>
      <pubDate>Tue, 28 Oct 2008 19:47:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pdc-2008-day-2-keynote/</guid>
<description>&lt;p&gt;Well, it was all end user focused today: Windows 7 and the experience in &lt;a href=&#34;https://www.mesh.com/Welcome/Welcome.aspx&#34;&gt;Live systems&lt;/a&gt;. It all looks very nice, though it raises the usual questions over personal data security in a connected environment. I am sure Microsoft have done a good job of physical and logical data security, but the whole concept of mesh networks opens up a huge potential for social attacks. No developer can protect against the user clicking on an ill-advised email or, now, a mesh link; I know I have fixed too many friends’ PCs with the &lt;em&gt;XP Antivirus 2008&lt;/em&gt; Trojan of late, where they clicked on a link because an email said their antivirus was out of date – they thought they were doing good.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Well, it was all end user focused today: Windows 7 and the experience in <a href="https://www.mesh.com/Welcome/Welcome.aspx">Live systems</a>. It all looks very nice, though it raises the usual questions over personal data security in a connected environment. I am sure Microsoft have done a good job of physical and logical data security, but the whole concept of mesh networks opens up a huge potential for social attacks. No developer can protect against the user clicking on an ill-advised email or, now, a mesh link; I know I have fixed too many friends’ PCs with the <em>XP Antivirus 2008</em> Trojan of late, where they clicked on a link because an email said their antivirus was out of date – they thought they were doing good.</p>
<p>It was interesting that the major third-party demos were both from the UK: Tesco and the BBC. Is it me, or does the fact that Tesco plan to offer a WPF application that handles your online orders but also manages family photos seem a little scary? Where is their reach going to end?</p>
<p>The second half of the keynote was the Don Box and Chris Anderson show – excellent as ever. A whistle-stop tour of programming against Azure. And all the demos coded against the live web sites even worked!</p>
]]></content:encoded>
    </item>
    <item>
      <title>PDC Buzzword bingo</title>
      <link>https://blog.richardfennell.net/posts/pdc-buzzword-bingo/</link>
      <pubDate>Tue, 28 Oct 2008 19:36:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pdc-buzzword-bingo/</guid>
      <description>&lt;p&gt;It has been a while but I knew a conference would turn up some good buzzwords&lt;/p&gt;
&lt;p&gt;Meshify – to add Live mesh functions to a site&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It has been a while but I knew a conference would turn up some good buzzwords</p>
<p>Meshify – to add Live mesh functions to a site</p>
]]></content:encoded>
    </item>
    <item>
      <title>It made me laugh…</title>
      <link>https://blog.richardfennell.net/posts/it-made-me-laugh/</link>
      <pubDate>Tue, 28 Oct 2008 16:30:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/it-made-me-laugh/</guid>
      <description>&lt;p&gt;At the PDC expo drinks last night I was asked to show age ID to get a drink! Just me it seems, nobody else from the company.&lt;/p&gt;
&lt;p&gt;I thought I was doing well to be carded on my 30th birthday whilst in the US, but 10 years on it is getting silly – you might ask how I keep my youthful countenance? Wish I knew, but they say it is the youth culture here in LA.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the PDC expo drinks last night I was asked to show age ID to get a drink! Just me it seems, nobody else from the company.</p>
<p>I thought I was doing well to be carded on my 30th birthday whilst in the US, but 10 years on it is getting silly – you might ask how I keep my youthful countenance? Wish I knew, but they say it is the youth culture here in LA.</p>
<p>So my new aim is to get asked to prove my age to get a drink with an OAP bus pass.</p>
]]></content:encoded>
    </item>
    <item>
      <title>VSTS 2010 at the PDC</title>
      <link>https://blog.richardfennell.net/posts/vsts-2010-at-the-pdc/</link>
      <pubDate>Tue, 28 Oct 2008 00:59:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vsts-2010-at-the-pdc/</guid>
<description>&lt;p&gt;Though not really mentioned in the keynote, there are a lot of sessions on VSTS 2010 at the PDC; it is going to be a really major release.&lt;/p&gt;
&lt;p&gt;Chatting in between the sessions with other delegates, there seems to be loads of interest in the new testing features, but we know this is a pain point from &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/14/alt-net-the-day-after.aspx&#34;&gt;Alt.net meetings&lt;/a&gt;. It will be interesting to see how these new tools deliver. I am sure the &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/bb725993.aspx&#34;&gt;manual testing tools&lt;/a&gt; will be useful, but I am a bit more doubtful over the UI testing tools. We have all seen the demo promise of these products before and hit problems when we try to use them for real.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Though not really mentioned in the keynote, there are a lot of sessions on VSTS 2010 at the PDC; it is going to be a really major release.</p>
<p>Chatting in between the sessions with other delegates, there seems to be loads of interest in the new testing features, but we know this is a pain point from <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/14/alt-net-the-day-after.aspx">Alt.net meetings</a>. It will be interesting to see how these new tools deliver. I am sure the <a href="http://msdn.microsoft.com/en-us/vstudio/bb725993.aspx">manual testing tools</a> will be useful, but I am a bit more doubtful over the UI testing tools. We have all seen the demo promise of these products before and hit problems when we try to use them for real.</p>
<p>I think the improved integration with Office is also going to be important. At the moment we use the eScrum template for VSTS; the main reasons for this are the easy project visibility it gives to non-developer users via its web site and the way it manages the relationship between the product backlog and sprint tasks (given the lack of hierarchical work items in the current VSTS). With 2010 I am not sure we are going to need the eScrum web site. The ease of reporting (and live work item updating) in Excel, plus hierarchical work items, will make it superfluous, so the basic Agile template in 2010 may well be able to do the job.</p>
]]></content:encoded>
    </item>
    <item>
      <title>PDC 2008 thoughts day 1</title>
      <link>https://blog.richardfennell.net/posts/pdc-2008-thoughts-day-1/</link>
      <pubDate>Mon, 27 Oct 2008 22:59:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/pdc-2008-thoughts-day-1/</guid>
      <description>&lt;p&gt;So it seems we are going to have themed days at the 2008 PDC and day one is all about  the &lt;a href=&#34;http://www.microsoft.com/azure/default.mspx&#34;&gt;Azure&lt;/a&gt; services platform. Though judging by the expo stands the key announcements for tomorrow - Oslo and Dublin are out there too. As conferences go it seems a bit confused to me, I guess Microsoft are aiming for three big bangs at three keynotes; but we seem to have had a big bang today and splutter of future items.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>So it seems we are going to have themed days at the 2008 PDC, and day one is all about the <a href="http://www.microsoft.com/azure/default.mspx">Azure</a> services platform. Though judging by the expo stands, the key announcements for tomorrow – Oslo and Dublin – are out there too. As conferences go it seems a bit confused to me; I guess Microsoft are aiming for three big bangs at three keynotes, but we seem to have had a big bang today and a splutter of future items.</p>
<p>However, I might be wrong; there could be stuff we have not even suspected. Let’s wait and see what we get tomorrow. Don Box and Chris Anderson are usually good value whenever they present, so I await their keynote session with anticipation!</p>
<p>So, as to Azure – a new move for Microsoft? Well, not really; it seems the logical next step, especially given the offerings of Amazon and Google. It will be interesting to see how this develops, but on a first quick look Azure seems a very strong offering. I guess it will all come down to price in the end. Oh, and whether you trust Microsoft to host your business data.</p>
]]></content:encoded>
    </item>
    <item>
      <title>All life is here</title>
      <link>https://blog.richardfennell.net/posts/all-life-is-here/</link>
      <pubDate>Mon, 27 Oct 2008 18:05:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/all-life-is-here/</guid>
      <description>&lt;p&gt;I won’t repeat &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/boss/archive/2008/10/27/black-marble-founders-day.aspx&#34;&gt;Robert’s blog entry on the Black Marble founders reunion&lt;/a&gt;, but just to add the missing photo&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_656052A3.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_1E5974BD.png&#34; title=&#34;image&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I won’t repeat <a href="http://blogs.blackmarble.co.uk/blogs/boss/archive/2008/10/27/black-marble-founders-day.aspx">Robert’s blog entry on the Black Marble founders reunion</a>, but just to add the missing photo</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_656052A3.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1E5974BD.png" title="image"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TF53010 &amp;amp; TF213000 unable to load</title>
      <link>https://blog.richardfennell.net/posts/tf53010-tf213000-unable-to-load/</link>
      <pubDate>Fri, 24 Oct 2008 10:16:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf53010-tf213000-unable-to-load/</guid>
<description>&lt;p&gt;I got this error when installing TFS 2008. In the error log I could see the problem occurred when &lt;strong&gt;TfsGssInit.exe&lt;/strong&gt; was run; it said&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Detailed Message: TF213000: A required user account could not be added during installation or repair of Team Foundation Server. The installation or repair failed with the following exception message: System.TypeInitializationException…&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I found the answer in the &lt;a href=&#34;http://social.msdn.microsoft.com/Forums/en-US/tfssetup/thread/1a484f4d-9b14-44e1-b3a3-cb23e62f1e4e/&#34;&gt;TFS forum&lt;/a&gt;. The thread (and others) did suggest there were DNS lookup issues, but the thing that fixed it for me was removing VS2008 Team Client (and its SP1) from the server. I had installed these whilst I had been waiting for the IT department to do some work on the SQL data tier. I thought I was saving time; I was wrong!&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I got this error when installing TFS 2008. In the error log I could see the problem occurred when <strong>TfsGssInit.exe</strong> was run; it said</p>
<p><strong>Detailed Message: TF213000: A required user account could not be added during installation or repair of Team Foundation Server. The installation or repair failed with the following exception message: System.TypeInitializationException…</strong></p>
<p>I found the answer in the <a href="http://social.msdn.microsoft.com/Forums/en-US/tfssetup/thread/1a484f4d-9b14-44e1-b3a3-cb23e62f1e4e/">TFS forum</a>. The thread (and others) did suggest there were DNS lookup issues, but the thing that fixed it for me was removing VS2008 Team Client (and its SP1) from the server. I had installed these whilst I had been waiting for the IT department to do some work on the SQL data tier. I thought I was saving time; I was wrong!</p>
<p>I think the issue was the VS2008 SP1, but I removed both and the install of TFS then worked OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TF220050 error in TFS install</title>
      <link>https://blog.richardfennell.net/posts/tf220050-error-in-tfs-install/</link>
      <pubDate>Fri, 24 Oct 2008 10:16:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf220050-error-in-tfs-install/</guid>
<description>&lt;p&gt;Whilst doing a new TFS 2008 dual tier install I was getting a failure with a TF220050 error when I entered the data tier DB instance name. The setup wizard just said ‘failed to connect to data tier’, but you can find the actual error number in the install logs under &lt;em&gt;C:\Documents and Settings\[setup user]\Local Settings\Temp&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This error seems to be just a generic low level ‘cannot connect to the DB’; in my case it was caused by one of two issues:&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Whilst doing a new TFS 2008 dual tier install I was getting a failure with a TF220050 error when I entered the data tier DB instance name. The setup wizard just said ‘failed to connect to data tier’, but you can find the actual error number in the install logs under <em>C:\Documents and Settings\[setup user]\Local Settings\Temp</em></p>
<p>This error seems to be just a generic low level ‘cannot connect to the DB’; in my case it was caused by one of two issues:</p>
<ul>
<li>The SQL server I was pointing at was SQL 2000, not SQL 2005 (I had been given the wrong server name) – the moral is to always check the version yourself.</li>
<li>Also, Analysis Services was not running on the SQL 2000 box.</li>
</ul>
<p>So I am not sure which was the actual cause of the error message here, the version or the missing OLAP service. In my case the fix was simply to connect to the correct SQL 2005 instance.</p>
]]></content:encoded>
    </item>
    <item>
      <title>... and I understand DDD7 is now full</title>
      <link>https://blog.richardfennell.net/posts/and-i-understand-ddd7-is-now-full/</link>
      <pubDate>Wed, 22 Oct 2008 18:40:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-i-understand-ddd7-is-now-full/</guid>
      <description>&lt;p&gt;So that took about 6 hours by my estimation. Popular or what!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>So that took about 6 hours by my estimation. Popular or what!</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD7 Registration is open</title>
      <link>https://blog.richardfennell.net/posts/ddd7-registration-is-open/</link>
      <pubDate>Wed, 22 Oct 2008 10:22:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd7-registration-is-open/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://codebetter.com/blogs/ian_cooper/archive/2008/10/22/developer-day-registration-is-open.aspx&#34;&gt;Ian Cooper has just announced&lt;/a&gt; that you can now &lt;a href=&#34;http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032393874&amp;amp;Culture=en-GB&#34;&gt;register for DDD7&lt;/a&gt;, but be quick it is expected to be full in 24 hours.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://codebetter.com/blogs/ian_cooper/archive/2008/10/22/developer-day-registration-is-open.aspx">Ian Cooper has just announced</a> that you can now <a href="http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032393874&amp;Culture=en-GB">register for DDD7</a>, but be quick it is expected to be full in 24 hours.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Iterations not appearing in IterationPath</title>
      <link>https://blog.richardfennell.net/posts/tfs-iterations-not-appearing-in-iterationpath/</link>
      <pubDate>Thu, 16 Oct 2008 12:35:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-iterations-not-appearing-in-iterationpath/</guid>
<description>&lt;p&gt;I have been working on a site that has had to do a disaster recovery of their TFS application tier (AT) due to hardware failure. For a short period they have had to use a spare PC as their AT. Due to the haste required to get the developers working, this AT was only configured for source control and work item editing.&lt;/p&gt;
&lt;p&gt;So I was onsite to put the proper replacement AT in place. All seemed to go OK until we added a new Iteration to a team project. It did not appear in the &lt;strong&gt;IterationPath&lt;/strong&gt; field for work items.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have been working on a site that has had to do a disaster recovery of their TFS application tier (AT) due to hardware failure. For a short period they have had to use a spare PC as their AT. Due to the haste required to get the developers working, this AT was only configured for source control and work item editing.</p>
<p>So I was onsite to put the proper replacement AT in place. All seemed to go OK until we added a new Iteration to a team project. It did not appear in the <strong>IterationPath</strong> field for work items.</p>
<p>This problem actually manifested itself for us in the inability to add a new sprint from inside <a href="http://www.microsoft.com/downloads/details.aspx?familyid=55a4bde6-10a7-4c41-9938-f388c1ed15e9">eScrum</a>. Unlike most team process templates, the eScrum front end creates sprints by creating an iteration and an associated work item (to hold extra information) all in one go. This was failing because, after the iteration was created, its creation was not propagated, so a work item could not be associated with it.</p>
<p>After checking the AT’s event log we saw TF53010 and TF51338 errors. I then ran the TFS <a href="http://msdn.microsoft.com/en-us/vs2005/aa718340.aspx">Best Practice Analyser</a> (BPA) and this showed two issues:</p>
<ul>
<li>the <strong>MyDomain\TFSService</strong> account not being in the TFS <strong>[Server]\Service Accounts</strong> group. I think this was due to the fact that the temporary AT system had used different accounts and the installation of the new AT had left some behind.</li>
<li>due to this the TFS Scheduler was not running reliably, which would explain why the new iterations were not being propagated.</li>
</ul>
<p>We fixed this using the <a href="http://msdn.microsoft.com/en-us/library/ms252504%28VS.80%29.aspx">tfssecurity /g</a> command to add the <strong>MyDomain\TFSService</strong> account to the TFS <strong>[Server]\Service Accounts</strong> group and then restarted the server.</p>
<p>Once this was done we checked the configuration was right using the BPA again, and finally checked we could create sprints in eScrum.</p>
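<p>As a footnote, the group change can be scripted. The command below is a sketch only: the server URL, domain, and account name are placeholders for your own environment, so check them (and the exact switches) against the tfssecurity documentation for your TFS version before running it.</p>

```shell
REM Sketch only - placeholder server URL and account names.
REM Run from a Visual Studio 2008 / TFS command prompt on the application tier.
REM /g+ adds a member to a group; n: identifies a Windows account by name.
tfssecurity /g+ "[Server]\Service Accounts" n:MyDomain\TFSService /server:http://tfsserver:8080
```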
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2008 SP1 resets service accounts</title>
      <link>https://blog.richardfennell.net/posts/tfs-2008-sp1-resets-service-accounts/</link>
      <pubDate>Thu, 16 Oct 2008 12:23:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2008-sp1-resets-service-accounts/</guid>
      <description>&lt;p&gt;I installed the &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?familyid=9e40a5b6-da41-43a2-a06d-3cee196bfe3d&#34;&gt;TFS 2008 SP1&lt;/a&gt; on a site that was using custom accounts for the identities that run the application pools for the WSS instance and Report Services.&lt;/p&gt;
&lt;p&gt;These user accounts got reset back to &lt;strong&gt;Network Service&lt;/strong&gt; when the service pack was installed; I had not seen this occur on any site I had upgraded previously. This meant you could not start WSS or Reporting Services.&lt;/p&gt;
&lt;p&gt;Manually resetting them back to their old correct accounts fixed the problem.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I installed the <a href="http://www.microsoft.com/downloads/details.aspx?familyid=9e40a5b6-da41-43a2-a06d-3cee196bfe3d">TFS 2008 SP1</a> on a site that was using custom accounts for the identities that run the application pools for the WSS instance and Report Services.</p>
<p>These user accounts got reset back to <strong>Network Service</strong> when the service pack was installed; I had not seen this occur on any site I had upgraded previously. This meant you could not start WSS or Reporting Services.</p>
<p>Manually resetting them back to their old correct accounts fixed the problem.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using StyleCop in TFS Team Build</title>
      <link>https://blog.richardfennell.net/posts/using-stylecop-in-tfs-team-build/</link>
      <pubDate>Wed, 15 Oct 2008 22:19:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-stylecop-in-tfs-team-build/</guid>
<description>&lt;p&gt;The recent release of the &lt;a href=&#34;http://www.codeplex.com/MSBuildExtensionPack&#34;&gt;MSBuild Extensions&lt;/a&gt; includes a task for &lt;a href=&#34;http://code.msdn.microsoft.com/sourceanalysis&#34;&gt;StyleCop 4.3&lt;/a&gt;. I have been trying to get this integrated into our TFS Team Build; I think it is a far preferable way to do it than editing the various project files in our solution to link in &lt;a href=&#34;http://blogs.msdn.com/sourceanalysis/pages/stylecop-4-2-msbuild-integration.aspx&#34;&gt;StyleCop as you had to do in 4.2&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There are a good few steps I had to follow to get it working:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install the StyleCop 4.3 Msi&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>The recent release of the <a href="http://www.codeplex.com/MSBuildExtensionPack">MSBuild Extensions</a> includes a task for <a href="http://code.msdn.microsoft.com/sourceanalysis">StyleCop 4.3</a>. I have been trying to get this integrated into our TFS Team Build; I think it is a far preferable way to do it than editing the various project files in our solution to link in <a href="http://blogs.msdn.com/sourceanalysis/pages/stylecop-4-2-msbuild-integration.aspx">StyleCop as you had to do in 4.2</a>.</p>
<p>There are a good few steps I had to follow to get it working:</p>
<ul>
<li>
<p>Install the StyleCop 4.3 Msi</p>
</li>
<li>
<p>Install the MSBuild Extensions Msi</p>
</li>
<li>
<p>Now we have to do some fixes/changes. First copy the <strong>MSBuild.ExtensionPack.StyleCop.dll</strong> from <strong>C:\Program Files\MSBuild\ExtensionPack</strong> to <strong>C:\Program Files\MSBuild\Microsoft\StyleCop\v4.3</strong>. We need to do this as the StyleCop DLLs are not <em>automagically</em> found (you could fix this using a search path I suppose)</p>
</li>
<li>
<p>Next we need to modify the <strong>C:\Program Files\MSBuild\ExtensionPack\MSBuild.ExtensionPack.tasks</strong> file to <a href="http://www.codeplex.com/MSBuildExtensionPack/Thread/View.aspx?ThreadId=37382">fix a typo that is a known issue</a>. The StyleCop line at the end of the file should read</p>
<blockquote>
<p>&lt;UsingTask AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\StyleCop\v4.3\MSBuild.ExtensionPack.StyleCop.dll" TaskName="MSBuild.ExtensionPack.CodeQuality.StyleCop"/&gt;</p></blockquote>
</li>
<li>
<p>Now edit your Team Build <strong>tfsbuild.proj</strong> file; import the extension tasks</p>
<blockquote>
<p>&lt;Import Project="$(MSBuildExtensionsPath)\ExtensionPack\MSBuild.ExtensionPack.tasks"/&gt;</p></blockquote>
</li>
<li>
<p>Now you need to edit or add the <em>AfterCompile</em> target, something like as shown below. I have added comments for each block.</p>
</li>
</ul>
<blockquote>
<pre>
&lt;Target Name="AfterCompile"&gt;

  &lt;!-- Put up the start processing message – we clear it later --&gt;
  &lt;BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
      BuildUri="$(BuildUri)"
      Name="StyleCopStep"
      Message="StyleCop step is executing."&gt;
    &lt;Output TaskParameter="Id" PropertyName="StyleCopStep" /&gt;
  &lt;/BuildStep&gt;

  &lt;!-- Create a collection of files to scan, the ** means subdirectories --&gt;
  &lt;CreateItem Include="$(SolutionRoot)\MyTeamProject\MySolution\**\*.cs"&gt;
    &lt;Output TaskParameter="Include" ItemName="StyleCopFiles"/&gt;
  &lt;/CreateItem&gt;

  &lt;!-- Run the StyleCop MSBuild task using the settings file in the same directory
       as the sln file (and also stored in TFS) --&gt;
  &lt;MSBuild.ExtensionPack.CodeQuality.StyleCop
      TaskAction="Scan"
      SourceFiles="@(StyleCopFiles)"
      ShowOutput="true"
      ForceFullAnalysis="true"
      CacheResults="false"
      logFile="$(DropLocation)\$(BuildNumber)\StyleCopLog.txt"
      SettingsFile="$(SolutionRoot)\MyTeamProject\MySolution\Settings.StyleCop"
      ContinueOnError="false"&gt;
    &lt;Output TaskParameter="Succeeded" PropertyName="AllPassed"/&gt;
    &lt;Output TaskParameter="ViolationCount" PropertyName="Violations"/&gt;
    &lt;Output TaskParameter="FailedFiles" ItemName="Failures"/&gt;
  &lt;/MSBuild.ExtensionPack.CodeQuality.StyleCop&gt;

  &lt;!-- Log the summary of the results --&gt;
  &lt;Message Text="StyleCop Succeeded: $(AllPassed), Violations: $(Violations)"/&gt;

  &lt;!-- FailedFile format is:
      &lt;ItemGroup&gt;
          &lt;FailedFile Include="filename"&gt;
              &lt;CheckId&gt;SA Rule Number&lt;/CheckId&gt;
              &lt;RuleDescription&gt;Rule Description&lt;/RuleDescription&gt;
              &lt;RuleName&gt;Rule Name&lt;/RuleName&gt;
              &lt;LineNumber&gt;Line the violation appears on&lt;/LineNumber&gt;
              &lt;Message&gt;SA violation message&lt;/Message&gt;
          &lt;/FailedFile&gt;
      &lt;/ItemGroup&gt; --&gt;

  &lt;!-- Log the details of any violations --&gt;
  &lt;Warning Text="%(Failures.Identity) - Failed on Line %(Failures.LineNumber). %(Failures.CheckId): %(Failures.Message)"/&gt;

  &lt;!-- The StyleCop task does not throw an error if the analysis failed, so we need
       to check the return value; if we choose not to treat errors as warnings we
       need to set the error state, which will cause us to jump to the failure target --&gt;
  &lt;Error Text="StyleCop analysis warnings occurred" Condition="'$(AllPassed)' == 'False'" /&gt;

  &lt;!-- List out the issues; we only need this if we are not forcing the error above,
       as if we have we never get here. You would normally have either this OR the
       error line uncommented, as if there are no errors this following line can
       generate an empty failure line --&gt;
  &lt;BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
      BuildUri="$(BuildUri)"
      Message="%(Failures.Identity) - Failed on Line %(Failures.LineNumber). %(Failures.CheckId): %(Failures.Message)"/&gt;

  &lt;!-- Complete the StyleCop step; we get here if we have not thrown an error --&gt;
  &lt;BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
      BuildUri="$(BuildUri)"
      Id="$(StyleCopStep)"
      Status="Succeeded"
      Message="StyleCop Succeeded: $(AllPassed), Violations: $(Violations)"/&gt;

  &lt;!-- If an error has been raised we need to call the failure target.
       You might have thought you could get the same effect as the error line above by
       adding a condition to the OnError as shown commented out below. However this does
       not work, as the OnError condition is not evaluated unless an error has previously
       occurred in a task; the condition clause is secondary --&gt;
  &lt;OnError ExecuteTargets="FailTheBuild" /&gt;
  &lt;!-- &lt;OnError ExecuteTargets="FailTheBuild" Condition="'$(AllPassed)' == 'False'" /&gt; --&gt;

&lt;/Target&gt;

&lt;Target Name="FailTheBuild"&gt;
  &lt;!-- We are failing the build due to StyleCop issues --&gt;
  &lt;BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
      BuildUri="$(BuildUri)"
      Id="$(StyleCopStep)"
      Status="Failed"
      Message="StyleCop Failed: $(AllPassed), Violations: $(Violations) [See $(DropLocation)\$(BuildNumber)\StyleCopLog.txt]"/&gt;

  &lt;!-- List out the issues --&gt;
  &lt;BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
      BuildUri="$(BuildUri)"
      Message="%(Failures.Identity) - Failed on Line %(Failures.LineNumber). %(Failures.CheckId): %(Failures.Message)"/&gt;
&lt;/Target&gt;
</pre>
</blockquote>
<p>So this runs StyleCop as a separate build step, and by commenting out one line you have the option of failing the build or not, depending on your view of StyleCop violations. Either way, violations will be listed as rows in the list of build steps.</p>
<p>I had really wanted to get the number of violations added to either the total errors or warnings listed in the Results Details at the end of the build, and I had also wanted a simple way to access the StyleCopLog.txt created in the drops directory. However, I have not worked out this final step yet. If I manage to work it out I will blog on the solution – or if you know how to do it, please post a comment.</p>
<p><strong>Updated</strong> – 17 &amp; 21 Oct 08 – Added some extra details to the comments in the XML sample and removed hard-coded paths; everything is now referenced via the build agent's root location using team build parameters.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD7 Agenda Published</title>
      <link>https://blog.richardfennell.net/posts/ddd7-agenda-published/</link>
      <pubDate>Mon, 13 Oct 2008 22:15:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd7-agenda-published/</guid>
      <description>&lt;p&gt;Well the votes are in and my proposed session for &lt;a href=&#34;http://www.developerday.co.uk/ddd/agendaddd7lineup.asp&#34;&gt;DDD7 on automated testing did not make the cut&lt;/a&gt;, but thanks to anyone who voted for it. I can’t say I am surprised that I am not on the list given the larger number of very interesting sessions proposed.&lt;/p&gt;
&lt;p&gt;However, I am a little disappointed that even though there were a good percentage of the proposed sessions on testing related subjects only &lt;a href=&#34;http://codebetter.com/blogs/ian_cooper/&#34;&gt;Ian’s on TDD&lt;/a&gt; and  &lt;a href=&#34;http://blog.benhall.me.uk/2008/10/ddd7-microsoft-pex-future-of-unit.html&#34;&gt;Ben’s on Pex&lt;/a&gt; made it through; I really had expected to see &lt;a href=&#34;http://gojko.net/about/&#34;&gt;Gojko&lt;/a&gt;’s on Fitnesse.Net on the list.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Well the votes are in and my proposed session for <a href="http://www.developerday.co.uk/ddd/agendaddd7lineup.asp">DDD7 on automated testing did not make the cut</a>, but thanks to anyone who voted for it. I can’t say I am surprised that I am not on the list given the larger number of very interesting sessions proposed.</p>
<p>However, I am a little disappointed that even though there were a good percentage of the proposed sessions on testing related subjects only <a href="http://codebetter.com/blogs/ian_cooper/">Ian’s on TDD</a> and  <a href="http://blog.benhall.me.uk/2008/10/ddd7-microsoft-pex-future-of-unit.html">Ben’s on Pex</a> made it through; I really had expected to see <a href="http://gojko.net/about/">Gojko</a>’s on Fitnesse.Net on the list.</p>
<p>Given that much of the last <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/14/alt-net-the-day-after.aspx">Alt.Net conference</a> was focused on acceptance testing, the relative lack of testing related sessions surprised me. There seems to be a difference in interests between the DDD voting community and the Alt.Net attendees. Does this mean that the average person attending (or at least voting for) DDD sessions does not care about testing, or thinks they have nothing to learn? Or have I missed something about the nature of the two events?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Like the new Blog theme?</title>
      <link>https://blog.richardfennell.net/posts/like-the-new-blog-theme/</link>
      <pubDate>Thu, 09 Oct 2008 20:29:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/like-the-new-blog-theme/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/lfear/default.aspx&#34;&gt;Lauren&lt;/a&gt; has has restyled our &lt;a href=&#34;http://blogs.blackmarble.co.uk/&#34;&gt;blog server&lt;/a&gt; to match the &lt;a href=&#34;http://www.blackmarble.co.uk/&#34;&gt;new company web site&lt;/a&gt; and also done a matching blog template.&lt;/p&gt;
&lt;p&gt;Do you like it?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/lfear/default.aspx">Lauren</a> has restyled our <a href="http://blogs.blackmarble.co.uk/">blog server</a> to match the <a href="http://www.blackmarble.co.uk/">new company web site</a> and has also done a matching blog template.</p>
<p>Do you like it?</p>
]]></content:encoded>
    </item>
    <item>
      <title>XP Club meeting</title>
      <link>https://blog.richardfennell.net/posts/xp-club-meeting/</link>
      <pubDate>Wed, 08 Oct 2008 18:50:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/xp-club-meeting/</guid>
      <description>&lt;p&gt;An excellent turnout for tonights XP Club meeting. As I write Nick McKenna is talking about his experiences in adopting agile processes.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>An excellent turnout for tonight’s XP Club meeting. As I write, Nick McKenna is talking about his experiences in adopting agile processes.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Flash problems</title>
      <link>https://blog.richardfennell.net/posts/flash-problems/</link>
      <pubDate>Tue, 07 Oct 2008 12:29:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/flash-problems/</guid>
      <description>&lt;p&gt;Are you seeing the error &amp;ldquo;&lt;em&gt;Cannot play media. You do not have the correct version of the flash player. Download the correct version&lt;/em&gt;&amp;rdquo; on the BBC web site or on YouTube  &amp;ldquo;&lt;em&gt;Hello, you either have JavaScript turned off or an old version of Macromedia&amp;rsquo;s Flash Player. Get the latest Flash player&lt;/em&gt;&amp;rdquo;?&lt;/p&gt;
&lt;p&gt;I have been seeing this on my Dell Mini; I suspect the problem was the upgrade route I took from XP-Home -&amp;gt; IE8 Beta -&amp;gt; XP Prof meant the registry was a mess. Repeated re-installation of Flash and Shockwave had no effect.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Are you seeing the error &ldquo;<em>Cannot play media. You do not have the correct version of the flash player. Download the correct version</em>&rdquo; on the BBC web site or on YouTube  &ldquo;<em>Hello, you either have JavaScript turned off or an old version of Macromedia&rsquo;s Flash Player. Get the latest Flash player</em>&rdquo;?</p>
<p>I have been seeing this on my Dell Mini; I suspect the problem was the upgrade route I took from XP-Home -&gt; IE8 Beta -&gt; XP Prof meant the registry was a mess. Repeated re-installation of Flash and Shockwave had no effect.</p>
<p>After much fiddling I fixed it by downgrading to IE7, running the <a href="http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_14157">Adobe Uninstaller</a>, then installing Flash and upgrading to IE8 again. However I suspect I did not need the IE downgrade first, but I had done this previously.</p>
<p>Hope this saves you some time.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Running fitness.Net tests in unit test – some tips</title>
      <link>https://blog.richardfennell.net/posts/running-fitness-net-tests-in-unit-test-some-tips/</link>
      <pubDate>Sat, 04 Oct 2008 21:39:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-fitness-net-tests-in-unit-test-some-tips/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/07/18/running-fitnesse-net-tests-using-mstest.aspx&#34;&gt;posted a while ago on wiring in Fitness.Net into a unit test framework using HTML files to hold the tests&lt;/a&gt;. Well I have been using the technique for some workflow acceptance testing and hit a couple of gotta’s that are easy to forget:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Make sure the HTML files containing the user story tests are set to copy to the project output directory in the IDE&lt;/strong&gt; – if they are not then the framework cannot find the tests, so obviously none are run. The danger is you think the problem is an incorrect class or method name, when it is a simple missing file problem.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;If you edit the user story HTML file make sure you rebuild the solution.&lt;/strong&gt; If you don’t do this the copy to the output directory might not be triggered, as a simple build may be skipped when the IDE does not see any changes to source files it needs to compile the project. This is especially easy to forget if you are using a test add-in such as Testdriven.net as opposed to clicking on build yourself.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Be careful with the HTML editor you use to create the user story file in case it reformats the page&lt;/strong&gt;. This is important for parameters (but it seems not for the method name fields). You need to make sure they are formatted&lt;/li&gt;
&lt;/ul&gt;
&lt;blockquote&gt;
&lt;p&gt;           &lt;td&gt;My Value&lt;/td&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/07/18/running-fitnesse-net-tests-using-mstest.aspx">posted a while ago on wiring Fitnesse.Net into a unit test framework using HTML files to hold the tests</a>. Well I have been using the technique for some workflow acceptance testing and hit a couple of gotchas that are easy to forget:</p>
<ul>
<li><strong>Make sure the HTML files containing the user story tests are set to copy to the project output directory in the IDE</strong> – if they are not then the framework cannot find the tests, so obviously none are run. The danger is you think the problem is an incorrect class or method name, when it is a simple missing file problem.</li>
<li><strong>If you edit the user story HTML file make sure you rebuild the solution.</strong> If you don’t do this the copy to the output directory might not be triggered, as a simple build may be skipped when the IDE does not see any changes to source files it needs to compile the project. This is especially easy to forget if you are using a test add-in such as Testdriven.net as opposed to clicking on build yourself.</li>
<li><strong>Be careful with the HTML editor you use to create the user story file in case it reformats the page</strong>. This is important for parameters (but it seems not for the method name fields). You need to make sure they are formatted</li>
</ul>
<blockquote>
<p><code>&lt;td&gt;My Value&lt;/td&gt;</code></p>
<p>as opposed to</p>
<p><code>&lt;td&gt;<br>
My Value<br>
&lt;/td&gt;</code></p>
<p>as the latter will include the carriage return characters in the parameters passed into the test, so will probably fail.</p></blockquote>
<p>I think that is all for now; I will post any others I find as they crop up.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upcoming events in October</title>
      <link>https://blog.richardfennell.net/posts/upcoming-events-in-october/</link>
      <pubDate>Sat, 04 Oct 2008 15:54:44 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upcoming-events-in-october/</guid>
      <description>&lt;p&gt;It is getting to that conference time of year again; I can’t believe the &lt;a href=&#34;http://www.microsoftpdc.com/&#34;&gt;PDC&lt;/a&gt; is only 3 weeks away, then &lt;a href=&#34;http://www.vbug.co.uk/Events/October-2008/THE-VBUG-NET-ANNUAL-CONFERENCE-2008.aspx&#34;&gt;VBug&lt;/a&gt;  the next week and &lt;a href=&#34;http://www.developerday.co.uk/&#34;&gt;DDD7&lt;/a&gt; just after that.&lt;/p&gt;
&lt;p&gt;A bit closer to home there are some free events coming up this month:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;XPClub - &lt;a href=&#34;http://xpclub.erudine.com/2008/09/october-meeting-agile-narrative-by-nick.html&#34;&gt;October Meeting: &amp;ldquo;Agile Narrative&amp;rdquo; by Nick McKenna, Wednesday, 8th October, 7pm for 7:30pm&lt;/a&gt; at the Victoria Hotel in the centre of Leeds (free event inc. free beer!), no need to register just turn up&lt;/li&gt;
&lt;li&gt;Black Marble community events – &lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=Silverlight&#34;&gt;Silverlight, XNA and Gaming&lt;/a&gt; – Pete Mcgann (XNA) and Richard Costall (Silverlight) talk about XNA, Gaming and Silverlight in their own unique style. 22nd October 6pm for 6:15pm at The Holiday Inn, Tong (between Leeds and Bradford). This free event also provides free food; you can just turn up, but it really helps if you register online.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Look forward to seeing you at one of the events in Yorkshire&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is getting to that conference time of year again; I can’t believe the <a href="http://www.microsoftpdc.com/">PDC</a> is only 3 weeks away, then <a href="http://www.vbug.co.uk/Events/October-2008/THE-VBUG-NET-ANNUAL-CONFERENCE-2008.aspx">VBug</a>  the next week and <a href="http://www.developerday.co.uk/">DDD7</a> just after that.</p>
<p>A bit closer to home there are some free events coming up this month:</p>
<ul>
<li>XPClub - <a href="http://xpclub.erudine.com/2008/09/october-meeting-agile-narrative-by-nick.html">October Meeting: &ldquo;Agile Narrative&rdquo; by Nick McKenna, Wednesday, 8th October, 7pm for 7:30pm</a> at the Victoria Hotel in the centre of Leeds (free event inc. free beer!), no need to register just turn up</li>
<li>Black Marble community events – <a href="http://www.blackmarble.co.uk/events.aspx?event=Silverlight">Silverlight, XNA and Gaming</a> – Pete Mcgann (XNA) and Richard Costall (Silverlight) talk about XNA, Gaming and Silverlight in their own unique style. 22nd October 6pm for 6:15pm at The Holiday Inn, Tong (between Leeds and Bradford). This free event also provides free food; you can just turn up, but it really helps if you register online.</li>
</ul>
<p>Look forward to seeing you at one of the events in Yorkshire</p>
<p>A bit further from home, Guy has had a bit of a coup at the Bristol .Net Usergroup. He has got <a href="http://www.dotnetdevnet.com/Meetings/tabid/54/EntryID/26/Default.aspx">Oren Eini, the developer of RhinoMocks, speaking on the 13th of October</a> during a rare UK visit. For those of you to whom the name Oren is unfamiliar, he is the man behind the blog <a href="http://ayende.com/blog/" title="http://ayende.com/blog/">http://ayende.com/blog/</a>, a major resource for all things Agile.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The October Power Tools Release</title>
      <link>https://blog.richardfennell.net/posts/the-october-power-tools-release/</link>
      <pubDate>Fri, 03 Oct 2008 18:44:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-october-power-tools-release/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/18/blend-and-source-control.aspx&#34;&gt;posted about the problems of using Blend with source control&lt;/a&gt;, specifically TFS. Well the next version of TFS &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2008/10/01/preview-of-the-next-tfs-power-tools-release.aspx&#34;&gt;Power tools gives a partial answer&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;They are to include Windows Shell Extensions so at least the check-in/out process can be managed without having another application open other than an Explorer window.&lt;/p&gt;
&lt;p&gt;Well, it is a step forward, until Blend 3 appears&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/18/blend-and-source-control.aspx">posted about the problems of using Blend with source control</a>, specifically TFS. Well the next version of TFS <a href="http://blogs.msdn.com/bharry/archive/2008/10/01/preview-of-the-next-tfs-power-tools-release.aspx">Power tools gives a partial answer</a>.</p>
<p>They are to include Windows Shell Extensions so at least the check-in/out process can be managed without having another application open other than an Explorer window.</p>
<p>Well, it is a step forward, until Blend 3 appears</p>
]]></content:encoded>
    </item>
    <item>
      <title>Testing Driven Development for Workflow Foundation</title>
      <link>https://blog.richardfennell.net/posts/testing-driven-development-for-workflow-foundation/</link>
      <pubDate>Fri, 03 Oct 2008 18:28:22 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/testing-driven-development-for-workflow-foundation/</guid>
      <description>&lt;p&gt;As we move into the SOA world workflows will become more common and so the need to test them will increase. If we take the most simplistic view these are a sets if statements and loops so should be amenable to automated testing and TDD.&lt;/p&gt;
&lt;p&gt;However you have to careful how you try to do this, it is too easy to forget that your workflow will run the the WF thread. Why is this problem?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As we move into the SOA world workflows will become more common and so the need to test them will increase. If we take the most simplistic view these are sets of statements and loops, so should be amenable to automated testing and TDD.</p>
<p>However you have to be careful how you try to do this; it is too easy to forget that your workflow will run on the WF thread. Why is this a problem?</p>
<p>Consider this scenario: if you create a simple sequential WF console application you get a block of code like this in the <strong>Main(args)</strong> method</p>
<pre><code>using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
{
   AutoResetEvent waitHandle = new AutoResetEvent(false);
   workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e)
   {
      // could put asserts here (A)
      waitHandle.Set();
   };

   workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
   {
      // could put asserts here (B)
      waitHandle.Set();
   };

   WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(WorkflowConsoleApplication1.Workflow1));
   instance.Start();

   waitHandle.WaitOne();
}

// could put asserts here (C)</code></pre>
<p>This block of code will be the basis of each of your unit tests, but as indicated you could put your test asserts in one of three places:</p>
<ul>
<li>In the success delegate</li>
<li>In the fail/terminate delegate</li>
<li>After the workflow has completed</li>
</ul>
<p>The best place is the third option, when the workflow has finished, even though you know in a test whether it should have completed OK or failed. The problem with the previous two locations is that the anonymous delegate will be run in the WF worker thread. Interestingly, if the assert passes in this location all will be OK, but if it fails the thrown assert exception (the mechanism used to trap failure in nUnit and, to my knowledge, in other test frameworks) will be in the wrong thread and so will not be picked up by the test harness; in effect the test runner stalls, waiting for a completion or an exception, neither of which arrives. This issue is not the case if the asserts are done when the workflow completes. This does mean that you have to pass results out of the workflow, but this should not be a major issue as the results should be simple objects (workflow parameters) or checks on external (mock?) systems, such as file systems, DBs or SMTP servers.</p>
<p>Though not always standard practice in TDD, I think it is a good idea here to move the code in the above sample&rsquo;s <strong>using</strong> statement into a static method in your test class; it is boilerplate that will probably be used in many tests. As tests are meant to be readable, I think a single line call to the <strong>RunWorkFlow()</strong> method is better than a block of repeated loop and delegate code. Remember you do have to be careful here, as you may need to pass in parameters and get a return value to check the end state of the workflow; the requirement will depend on what your workflow does and how it interacts with other systems. It may be that you need more than one version of the <strong>RunWorkFlow()</strong> static method depending on what is under test, e.g. one to harvest output parameters from completed workflow runs and another to harvest the inner exceptions that are inside the workflow terminated delegate.</p>
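<p>As an illustration, such a helper might look like the sketch below. This is a hypothetical shape rather than my exact code: it assumes the <strong>WorkflowRuntime</strong> API from the sample above, harvests one named entry from the <strong>OutputParameters</strong> collection on completion, and rethrows any termination exception on the test thread so the harness can trap it.</p>

```csharp
// Hypothetical helper: run a workflow to completion and return one named
// output parameter, so asserts can be made on the test thread afterwards.
private static object RunWorkFlow(Type workflowType, string outputName)
{
    object output = null;
    Exception terminationReason = null;

    using (WorkflowRuntime runtime = new WorkflowRuntime())
    {
        AutoResetEvent waitHandle = new AutoResetEvent(false);
        runtime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e)
        {
            output = e.OutputParameters[outputName]; // harvest a result for the test thread
            waitHandle.Set();
        };
        runtime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
        {
            terminationReason = e.Exception; // harvest the inner exception
            waitHandle.Set();
        };

        WorkflowInstance instance = runtime.CreateWorkflow(workflowType);
        instance.Start();
        waitHandle.WaitOne();
    }

    if (terminationReason != null)
    {
        throw terminationReason; // rethrow where the test harness can trap it
    }
    return output;
}
```

<p>A test then reduces to a single call, e.g. an assert against <code>RunWorkFlow(typeof(Workflow1), "Result")</code> (the <code>"Result"</code> parameter name is illustrative), with the assert safely on the test runner&rsquo;s own thread.</p>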
<p>So now there is no reason not to test your WF workflows – assuming you can mock out any external connections, and if you are using web services or IoC to talk to data stores this should be fairly easy.</p>
<p>Next I am off to try to do the same with SharePoint workflows.</p>
]]></content:encoded>
    </item>
    <item>
      <title>XP Home to XP Professional when IE8 is involved</title>
      <link>https://blog.richardfennell.net/posts/xp-home-to-xp-professional-when-ie8-is-involved/</link>
      <pubDate>Thu, 02 Oct 2008 08:55:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/xp-home-to-xp-professional-when-ie8-is-involved/</guid>
      <description>&lt;p&gt;The one major thing that I did not like with my Dell Mini was the fact it had installed XP Home. My main grip with this was the fact my main user account had to be an admin user, and I could not login as a user called administrator unless in safe mode. So I decided to do an in place upgrade to XP Professional (which also meant i could join the company domain). This seemed straight forward, I attached an external USB DVD and got the XP Professional SP3 slipstream media. The process continued as you would expect until the final reboot where it sat on the splash screen saying &amp;lsquo;please wait&amp;rsquo; for a hour or two.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The one major thing that I did not like with my Dell Mini was the fact it had XP Home installed. My main gripe with this was the fact my main user account had to be an admin user, and I could not login as a user called administrator unless in safe mode. So I decided to do an in place upgrade to XP Professional (which also meant I could join the company domain). This seemed straightforward: I attached an external USB DVD drive and got the XP Professional SP3 slipstream media. The process continued as you would expect until the final reboot, where it sat on the splash screen saying &lsquo;please wait&rsquo; for an hour or two.</p>
<p>Normally you would have a disk activity light to give you confidence something was happening (oh, that's the other thing I miss on the Dell Mini). Anyway, after a couple of hours I decided nothing was happening so did a hard reboot, and up came the upgraded PC. I could login as administrator (in normal mode as opposed to just in safe mode) and set file encryption - so all seemed OK.</p>
<p>However I then tried to access the Internet. Initially this seemed to work, but as soon as I tried to follow a link on a page I got a message about &rsquo;no activation context in the registry&rsquo;. Also when I tried to activate the new Windows license I got no dialog; it seemed the MSOOBE.EXE that is run behind the scenes was failing. It became obvious I had a corrupt IE install; the in place upgrade had left me with an interesting mix of IE 6, 7 and 8 beta. This is why I think it stalled on the splash screen.</p>
<p>So I downloaded the IE8 beta again, reinstalled it and everything started to work.</p>
<p>So the technical tip is: if you are doing an in place Windows upgrade, remember that you are probably going to alter the version of IE, and as it is intrinsic to how Windows operates you might get problems. Make sure you have the known version you want at hand to reinstall it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Further thoughts on mocking out a SMTP Server</title>
      <link>https://blog.richardfennell.net/posts/further-thoughts-on-mocking-out-a-smtp-server/</link>
      <pubDate>Tue, 30 Sep 2008 20:31:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/further-thoughts-on-mocking-out-a-smtp-server/</guid>
      <description>&lt;p&gt;I posted on the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/27/mocking-out-an-email-server.aspx&#34;&gt;problems I had had mocking out an SMTP server&lt;/a&gt;, well I have moved on a bit. As I said in the update note in the last post, I had given up on &lt;a href=&#34;http://sourceforge.net/projects/ndumbster/&#34;&gt;nDumbster&lt;/a&gt; and moved over to &lt;a href=&#34;http://www.lumisoft.ee/lswww/ENG/Products/Mail_Server/mail_index_eng.aspx?type=info&#34;&gt;LumiSoft&amp;rsquo;s freeware mail server&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Lumisoft server is a far better starting point as it is a &amp;lsquo;real&amp;rsquo; server that supports all the major mail protocols. As all the source (and binaries if you can&amp;rsquo;t be bothered to build it yourself) are shipped it is easy to create a &lt;a href=&#34;http://www.lumisoft.ee/forum/default.aspx?g=posts&amp;amp;m=1494&#34;&gt;wrapper class&lt;/a&gt; for unit testing purposes that can do whatever you need.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted on the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/27/mocking-out-an-email-server.aspx">problems I had had mocking out an SMTP server</a>, well I have moved on a bit. As I said in the update note in the last post, I had given up on <a href="http://sourceforge.net/projects/ndumbster/">nDumbster</a> and moved over to <a href="http://www.lumisoft.ee/lswww/ENG/Products/Mail_Server/mail_index_eng.aspx?type=info">LumiSoft&rsquo;s freeware mail server</a>.</p>
<p>The Lumisoft server is a far better starting point as it is a &lsquo;real&rsquo; server that supports all the major mail protocols. As all the source (and binaries if you can&rsquo;t be bothered to build it yourself) are shipped it is easy to create a <a href="http://www.lumisoft.ee/forum/default.aspx?g=posts&amp;m=1494">wrapper class</a> for unit testing purposes that can do whatever you need.</p>
<p>However even with this much improved server I still had a problem with <strong>System.Net.Mail</strong> calls. I had four tests</p>
<ul>
<li>Send a single email with <strong>System.Web.Mail</strong></li>
<li>Send a single email with <strong>System.Net.Mail</strong></li>
<li>Send a batch of three emails with <strong>System.Web.Mail</strong></li>
<li>Send a batch of three emails with <strong>System.Net.Mail</strong></li>
</ul>
<p>Before each test a new SMTP server was created, and when the test completed it was disposed of. Each test worked if run by itself.</p>
<p>The problem was that if the tests were run as a batch the final test failed. When the SMTP server was checked it was found to have no messages recorded, not the expected three. However, the logging said three messages had been sent OK. Swapping the order of the tests did not affect the basic issue that the second <strong>System.Net.Mail</strong> test reported no emails received, whilst the <strong>System.Web.Mail</strong> tests were fine.</p>
<p>By adding a unique identifier to each created SMTP server it could be seen that the fourth test was sending its mail to the second SMTP server (which should have been disposed of well before the fourth test was started) hence the final test failing.</p>
<p>The problem appears to be that the threading model inside <strong>System.Net.Mail</strong> holds the TcpClient object in memory for longer than you would expect, so somehow allows the fourth test to reuse the connection (and server) from the second test. Though it is unclear how you are able to have two servers both on port 25 at the same time. I guess this theory could also go some way to explaining the issues I had with the nDumbster implementation.</p>
<p>Though not perfect, the solution I used was to make the Smtp Server instance static for the test class so for all the tests I created just one instance of the server. Before each test I cleared down the received messages store. Thus far this is working reliably for both single tests and batches of tests.</p>
<p><strong>Update 6 Oct 2008</strong> - The use of a static Smtp server per class is not a full solution, but just moves the problem on, i.e. you get the same problem if you have two classes in the batch, each with their own server. I need to find a better way to rip the Smtp server out of memory - maybe an AppDomain?</p>
<p><strong>Update 10 Oct 2008</strong> - Well AppDomains don&rsquo;t help. The best solution I have found is to not use a static Smtp server, but to instantiate a non-static member one and then start it in the test system&rsquo;s TestInit method. In the test system&rsquo;s TestCleardown I stop the server and dispose of it. However this alone does not fix the problem. I therefore then send an email, using System.Net.Mail, to the server I have just disposed of. This obviously fails, but it does clear down the internal cached items inside the System.Net.Mail structure; I just catch the expected exception and carry on. I know it is not elegant but it works, both within a test class and in an assembly containing many classes</p>
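<p>For reference, the per-test arrangement described in the updates looks roughly like this sketch (the <strong>MockSmtpServer</strong> wrapper name and its methods are hypothetical stand-ins for the LumiSoft wrapper class; the MSTest attributes and the System.Net.Mail calls are the real ones):</p>

```csharp
private MockSmtpServer smtpServer; // hypothetical wrapper around the LumiSoft server

[TestInitialize]
public void TestInit()
{
    // A fresh, non-static server instance per test
    smtpServer = new MockSmtpServer("localhost", 25);
    smtpServer.Start();
}

[TestCleanup]
public void TestCleardown()
{
    smtpServer.Stop();
    smtpServer.Dispose();

    // Force System.Net.Mail to drop its cached connection by sending to the
    // now-disposed server and swallowing the expected failure.
    try
    {
        System.Net.Mail.SmtpClient client = new System.Net.Mail.SmtpClient("localhost");
        client.Send(new System.Net.Mail.MailMessage(
            "somebody@foo.com", "everybody@bar.com", "flush", "flush"));
    }
    catch (System.Net.Mail.SmtpException)
    {
        // Expected: the server is gone; the failed send clears the cached state
    }
}
```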
]]></content:encoded>
    </item>
    <item>
      <title>&#39;Datadude&#39; merged with Team Developer</title>
      <link>https://blog.richardfennell.net/posts/datadude-merged-with-team-developer/</link>
      <pubDate>Tue, 30 Sep 2008 12:45:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/datadude-merged-with-team-developer/</guid>
      <description>&lt;p&gt;It was &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2008/09/29/shining-the-light-on-rosario.aspx&#34;&gt;announced overnight by Microsoft&lt;/a&gt; that the Database Professional SKU for Visual Studio will be made available to all people who have a licensed copy of Team Developer.&lt;/p&gt;
&lt;p&gt;This is great news as it addresses the problem of where to put the expensive copy of DataDude (which I think has been a barrier to its uptake); in most companies there is no clear distinction between code and DB devs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It was <a href="http://blogs.msdn.com/bharry/archive/2008/09/29/shining-the-light-on-rosario.aspx">announced overnight by Microsoft</a> that the Database Professional SKU for Visual Studio will be made available to all people who have a licensed copy of Team Developer.</p>
<p>This is great news as it addresses the problem of where to put the expensive copy of DataDude (which I think has been a barrier to its uptake); in most companies there is no clear distinction between code and DB devs.</p>
<p>So as of tomorrow you can get DataDude via MSDN. You can read more about the recent announcements here: <a href="http://msdn.microsoft.com/en-us/vstudio/products/cc948977.aspx">http://msdn.microsoft.com/en-us/vstudio/products/cc948977.aspx</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Mocking out an email server</title>
      <link>https://blog.richardfennell.net/posts/mocking-out-an-email-server/</link>
      <pubDate>Sat, 27 Sep 2008 19:37:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mocking-out-an-email-server/</guid>
      <description>&lt;p&gt;I am currently looking at automation of acceptance testing for workflows and a common task is to make sure an email has been sent. To do this in a TDD style I have been using the mock SMTP server &lt;a href=&#34;http://sourceforge.net/projects/ndumbster&#34;&gt;nDumbster&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Now this was a port from Java in the days of .NET 1.1 and does not seem to have had any development since. This can be seen because the following test using the .NET 1.1 System.Web.Mail call works beautifully, returning in a second or so.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am currently looking at automation of acceptance testing for workflows and a common task is to make sure an email has been sent. To do this in a TDD style I have been using the mock SMTP server <a href="http://sourceforge.net/projects/ndumbster">nDumbster</a>.</p>
<p>Now this was a port from Java in the days of .NET 1.1 and does not seem to have had any development since. This can be seen because the following test using the .NET 1.1 System.Web.Mail call works beautifully, returning in a second or so:</p>
<pre><code>[TestMethod]
public void CanRecieveSingleWebMail()
{
    System.Web.Mail.SmtpMail.SmtpServer = "localhost";
    System.Web.Mail.SmtpMail.Send("somebody@foo.com", "everybody@bar.com", "This is the subject", "This is the body.");
    Assert.AreEqual(1, smtpServer.ReceivedEmail.Length);
}
</code></pre>
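<p>For reference, the <code>smtpServer</code> used in these tests is a field wired up in the test setup; a minimal sketch of that wiring, assuming nDumbster&rsquo;s <code>SimpleSmtpServer.Start</code> and <code>Stop</code> API, looks like this:</p>
<pre><code>private SimpleSmtpServer smtpServer;

[TestInitialize]
public void Setup()
{
    // Start the mock SMTP server listening on the standard SMTP port
    smtpServer = SimpleSmtpServer.Start(25);
}

[TestCleanup]
public void Teardown()
{
    // Stop the server so the port is released for the next test
    smtpServer.Stop();
}
</code></pre>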
<p>However, the test using the .NET 2.0 System.Net.Mail API is not so good:</p>
<pre><code>[TestMethod]
public void CanRecieveSingleNetMail()
{
    System.Net.Mail.SmtpClient client = new System.Net.Mail.SmtpClient("localhost");
    System.Net.Mail.MailMessage msg = new System.Net.Mail.MailMessage("somebody@foo.com", "everybody@bar.com", "This is the subject", "This is the body.");
    client.Send(msg);
    Assert.AreEqual(1, smtpServer.ReceivedEmail.Length);
}
</code></pre>
<p>It does work but takes 100 seconds to return, a bit of a pain for TDD!</p>
<p>After a bit of debugging I found the problem was the final input.ReadLine() in HandleSmtpTranslation, which we expect to return null at the end of a message; for System.Web.Mail this returns instantly, but for System.Net.Mail it takes 100 seconds, after which all is fine.</p>
<p>As a workaround I have put in the following logic:</p>
<pre><code>string line = null;
if (smtpState != SmtpState.QUIT)
{
    line = input.ReadLine();
}
if (line == null)
{
    break;
}
</code></pre>
<p>This works fine for single mail messages, using both the old and new API.</p>
<p>There is still a problem: with the System.Web.Mail API you can send multiple emails in a test, but this still fails for System.Net.Mail – the first goes OK, but the second fails, saying the server is not present.</p>
<p>By the way, I have ruled out anti-virus software and anything specific to port 25 as the cause of the problem. I suspect a threading issue, as I believe System.Web.Mail was a single-threaded wrapper for COM whereas System.Net.Mail is multi-threaded pure .NET logic.</p>
<p>If I get a better solution I will post about it, but I have enough functionality for me at present.</p>
<p><strong>Update 28th Sep</strong> – Got an email today via the <a href="https://sourceforge.net/forum/message.php?msg_id=5334518">SourceForge forum</a> that pointed me at the freeware <a href="http://www.lumisoft.ee/lsWWW/ENG/Products/Mail_Server/mail_index_eng.aspx?type=info">LumiSoft mail server</a>. On a first look this certainly appears to do the job. You just need to wire it into your tests in <a href="http://www.lumisoft.ee/forum/default.aspx?g=posts&amp;m=1494">a wrapper class</a> of your own.</p>
]]></content:encoded>
    </item>
    <item>
      <title>First thoughts on the Dell Mini 9</title>
      <link>https://blog.richardfennell.net/posts/first-thoughts-on-the-dell-mini-9/</link>
      <pubDate>Wed, 24 Sep 2008 18:45:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-thoughts-on-the-dell-mini-9/</guid>
<description>&lt;p&gt;After Alt.net I had thought getting an ultra light notebook was a good idea. At most conferences I need to browse, blog and read email; I don’t need to carry around a full development desktop replacement laptop.&lt;/p&gt;
&lt;p&gt;Whilst at ReMix I had had enough of the old battery in my Acer laptop, so just before it ran out of juice again I ordered a &lt;a href=&#34;http://www.dell.com/content/products/productdetails.aspx/laptop-inspiron-9&#34;&gt;Dell Mini 9&lt;/a&gt;, which was actually cheaper than my &lt;a href=&#34;http://www.htc.com/www/default.aspx&#34;&gt;HTC phone&lt;/a&gt;! Today it arrived, a lot sooner than I was expecting, as Dell had said it would be next month.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>After Alt.net I had thought getting an ultra light notebook was a good idea. At most conferences I need to browse, blog and read email; I don’t need to carry around a full development desktop replacement laptop.</p>
<p>Whilst at ReMix I had had enough of the old battery in my Acer laptop, so just before it ran out of juice again I ordered a <a href="http://www.dell.com/content/products/productdetails.aspx/laptop-inspiron-9">Dell Mini 9</a>, which was actually cheaper than my <a href="http://www.htc.com/www/default.aspx">HTC phone</a>! Today it arrived, a lot sooner than I was expecting, as Dell had said it would be next month.</p>
<p><a href="http://www.dell.com/"><img alt="Inspiron Mini 9" loading="lazy" src="http://i.dell.com/resize.aspx/laptop-inspiron-9-hero-295/295"></a></p>
<p>My first thoughts? Well, the keyboard is small; typing is an interesting experience as I write this post. Also, I would like a disk activity light; during the XP setup there was a good period when I was not sure what was going on, and a flickering LED is always reassuring.</p>
<p>Other than these minor gripes it seems very good: it is light, the battery life seems as advertised, and it is fast enough for Office 2007 (so I can use it for presentations) and for messing around with development (I popped on <a href="http://www.microsoft.com/express/">C# Express</a> but there is disk space left for VS2008 if I need it).</p>
<p>I will report further when I have lived with it a while.</p>
]]></content:encoded>
    </item>
    <item>
      <title>So what will we need to run &#39;Rosario&#39;?</title>
      <link>https://blog.richardfennell.net/posts/so-what-will-we-need-to-run-rosario/</link>
      <pubDate>Tue, 23 Sep 2008 20:10:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/so-what-will-we-need-to-run-rosario/</guid>
      <description>&lt;p&gt;Brian Harry has published an interesting &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2008/09/23/charting-a-course-for-tfs-rosario.aspx&#34;&gt;post on the platform decisions for &amp;lsquo;Rosario&lt;/a&gt;&amp;rsquo;.&lt;/p&gt;
&lt;p&gt;The most interesting item is that it will only support SQL 2008; so get planning a migration for that central enterprise SQL server!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Brian Harry has published an interesting <a href="http://blogs.msdn.com/bharry/archive/2008/09/23/charting-a-course-for-tfs-rosario.aspx">post on the platform decisions for &lsquo;Rosario</a>&rsquo;.</p>
<p>The most interesting item is that it will only support SQL 2008; so get planning a migration for that central enterprise SQL server!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Post ReMix thoughts</title>
      <link>https://blog.richardfennell.net/posts/post-remix-thoughts/</link>
      <pubDate>Sat, 20 Sep 2008 14:34:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/post-remix-thoughts/</guid>
<description>&lt;p&gt;So how was ReMix? Well, a bit like last year, not earth-shattering, but what can you expect? We have PDC in a month so there are going to be no major announcements; also this is a rerun event from the &lt;a href=&#34;http://visitmix.com/&#34;&gt;US MIX conference&lt;/a&gt;, again reducing the chances of anything new.&lt;/p&gt;
&lt;p&gt;So if you are a developer and have been to TechEd or just to local community events (and as &lt;a href=&#34;http://www.andrewwestgarth.co.uk/Blog/&#34;&gt;Andy Westgarth&lt;/a&gt; said, we are very lucky with the quality and number of community events and speakers in the UK) you will have heard nothing new about &lt;a href=&#34;http://silverlight.net/&#34;&gt;Silverlight&lt;/a&gt; et al. Though it is fair to say that this may not be true for designers, and I felt the development track was pitched at this level.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>So how was ReMix? Well, a bit like last year, not earth-shattering, but what can you expect? We have PDC in a month so there are going to be no major announcements; also this is a rerun event from the <a href="http://visitmix.com/">US MIX conference</a>, again reducing the chances of anything new.</p>
<p>So if you are a developer and have been to TechEd or just to local community events (and as <a href="http://www.andrewwestgarth.co.uk/Blog/">Andy Westgarth</a> said, we are very lucky with the quality and number of community events and speakers in the UK) you will have heard nothing new about <a href="http://silverlight.net/">Silverlight</a> et al. Though it is fair to say that this may not be true for designers, and I felt the development track was pitched at this level.</p>
<p>It is in design that I think ReMix was different this year; there were noticeably more designers present and I thought the design track a lot stronger. As I had seen most of the developer track elsewhere I spent a good while in designer sessions, and as usual, going beyond your normal bounds makes you think.</p>
<p>What has been bubbling in my mind is how the design process fits into agile processes. <a href="http://www.billbuxton.com/">Bill Buxton</a> in his sessions (and <a href="http://www.amazon.co.uk/Sketching-User-Experiences-Getting-Design/dp/0123740371/ref=pd_bbs_sr_1/104-7396138-7307151?ie=UTF8&amp;s=books&amp;qid=1177046911&amp;sr=8-1">new book</a>) spoke about how it was important for the designer to provide a variety of concepts for the client/project and not invest all their effort in a single design, thus removing choice from the &lsquo;client&rsquo;. It got me thinking: should the same apply to the developers/architect? Is this even possible? At the grossest level it would mean developers would propose (prototype?) versions of a product in PHP, Java, Flash, ASP.NET, Silverlight etc. Now we all know this does not happen; early in a project a hopefully informed choice will be made as to the technology to use (often this choice is made in the choice of who gets the business - a PHP house or a Microsoft house), and it is rare that a change will be made in base technology once the choice is made. Even though a project is agile, there are limits as to how major a change of direction can be.</p>
<p>So is this a difference between design and development? In web design we can ask the client how it should look; you still need to provide quality design with sane user interactions, but the client can have some choice, e.g. &lsquo;I like a more cartoon style&rsquo;, &lsquo;I want it just like that but in red&rsquo;. You are not going to get the same interaction from the bulk of clients for development and architecture; it is rare you hear &lsquo;Just like that but can you use SOAP&rsquo;. Development constraints tend to be just that: things that must be done to meet a standard, not something open to choice at the non-specialist level.</p>
<p>So ReMix was an interesting event, and it was good to see many friends from the community. And if it opened my mind to something new it must have been worth attending.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Rebuild of GUITester</title>
      <link>https://blog.richardfennell.net/posts/rebuild-of-guitester/</link>
      <pubDate>Sat, 20 Sep 2008 13:39:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/rebuild-of-guitester/</guid>
<description>&lt;p&gt;I have eventually got round to rebuilding my &lt;a href=&#34;http://www.codeplex.com/guitester&#34;&gt;GUITester on Codeplex&lt;/a&gt; for VS2008. I must get round to looking to see if a similar declarative test markup system has any mileage for XAML. Something I was talking about at Alt.net in the spring - how time flies.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have eventually got round to rebuilding my <a href="http://www.codeplex.com/guitester">GUITester on Codeplex</a> for VS2008. I must get round to looking to see if a similar declarative test markup system has any mileage for XAML. Something I was talking about at Alt.net in the spring - how time flies.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Local user groups</title>
      <link>https://blog.richardfennell.net/posts/local-user-groups/</link>
      <pubDate>Fri, 19 Sep 2008 23:13:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/local-user-groups/</guid>
<description>&lt;p&gt;I have been asked a couple of times whilst at the &lt;a href=&#34;http://www.altnetuk.com/&#34;&gt;Alt.net&lt;/a&gt; and &lt;a href=&#34;http://www.microsoft.com/uk/remix08/default.aspx&#34;&gt;ReMix&lt;/a&gt; conferences about the local user groups around Leeds. Well, these are the regular free community events I know about:&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events&#34;&gt;Black Marble community events&lt;/a&gt; - my company runs free community events at a hotel between Leeds and Bradford roughly monthly from autumn to spring. We invite a variety of leading .NET community speakers.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://xpclub.erudine.com/&#34;&gt;The Yorkshire Extreme Programming and Agile Methods Club&lt;/a&gt; (XPClub) - meets on the 2nd Wednesday of the month in the centre of Leeds. This is a group of .NET and Java developers interested in agile development best practice.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have been asked a couple of times whilst at the <a href="http://www.altnetuk.com/">Alt.net</a> and <a href="http://www.microsoft.com/uk/remix08/default.aspx">ReMix</a> conferences about the local user groups around Leeds. Well, these are the regular free community events I know about:</p>
<p><a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events">Black Marble community events</a> - my company runs free community events at a hotel between Leeds and Bradford roughly monthly from autumn to spring. We invite a variety of leading .NET community speakers.</p>
<p><a href="http://xpclub.erudine.com/">The Yorkshire Extreme Programming and Agile Methods Club</a> (XPClub) - meets on the 2nd Wednesday of the month in the centre of Leeds. This is a group of .NET and Java developers interested in agile development best practice.</p>
<p><a href="http://www.westyorkshire.bcs.org/">BCS West Yorkshire Branch</a> - the local branch of the BCS meets in either Leeds or York, providing a wide range of IT-related sessions (they also have links to the events run by the local <a href="http://www.theiet.org/local/uk/yorks/west/index.cfm">Institution of Engineering and Technology</a> branches)</p>
<p><a href="http://www.sqlserverfaq.com/events/117/Leeds-Area-SQL-Server-User-Group-Meeting.aspx">Leeds branch of SQL Server User Group</a> - the local branch of the national SQL user groups. As the name suggests, it is targeted at SQL users.</p>
<p><a href="http://geekup.org/">Geek Up</a> - runs monthly events across the north of England targeted at web designers and web developers (I have not been to this group, but members of the XPClub have and say they enjoyed it)</p>
<p><a href="http://www.vbug.co.uk/">Vbug</a> - runs some events in Leeds, but I think these are organised by the Manchester branch</p>
<p>Now there may be others that I don&rsquo;t know about; if you know of one please let me know.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Not speaking at Vbug next week</title>
      <link>https://blog.richardfennell.net/posts/not-speaking-at-vbug-next-week/</link>
      <pubDate>Fri, 19 Sep 2008 22:13:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/not-speaking-at-vbug-next-week/</guid>
<description>&lt;p&gt;I have just found out that the session I was doing at &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/08/21/speaking-on-team-system-in-coventry.aspx&#34;&gt;Vbug in Coventry next week&lt;/a&gt; has been cancelled due to low numbers. Sorry if you were planning to come, but don&amp;rsquo;t worry, I will be doing a similar session (with the bonus of post-&lt;a href=&#34;http://www.microsoftpdc.com/&#34;&gt;PDC&lt;/a&gt; &lt;a href=&#34;http://msdn.microsoft.com/en-us/vstudio/bb725993.aspx&#34;&gt;Rosario&lt;/a&gt; updates) at the &lt;a href=&#34;http://www.vbug.co.uk/Events/October-2008/THE-VBUG-NET-ANNUAL-CONFERENCE-2008.aspx&#34;&gt;VBug conference&lt;/a&gt; in November.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have just found out that the session I was doing at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/08/21/speaking-on-team-system-in-coventry.aspx">Vbug in Coventry next week</a> has been cancelled due to low numbers. Sorry if you were planning to come, but don&rsquo;t worry, I will be doing a similar session (with the bonus of post-<a href="http://www.microsoftpdc.com/">PDC</a> <a href="http://msdn.microsoft.com/en-us/vstudio/bb725993.aspx">Rosario</a> updates) at the <a href="http://www.vbug.co.uk/Events/October-2008/THE-VBUG-NET-ANNUAL-CONFERENCE-2008.aspx">VBug conference</a> in November.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Blend and Source control</title>
      <link>https://blog.richardfennell.net/posts/blend-and-source-control/</link>
      <pubDate>Thu, 18 Sep 2008 11:50:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/blend-and-source-control/</guid>
<description>&lt;p&gt;It is all well and good Microsoft saying that a developer and a designer can share the same WPF/Silverlight project files in Visual Studio and &lt;a href=&#34;http://www.microsoft.com/expression/products/Overview.aspx?key=blend&#34;&gt;Expression Blend&lt;/a&gt;, but whilst Blend does not have the ability to use a source control repository (TFS, SVN or anything else for that matter) and actually strips out any source control binding it finds in a project file, this is for me an unworkable experience. How has this product got to V2 without this feature?&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>It is all well and good Microsoft saying that a developer and a designer can share the same WPF/Silverlight project files in Visual Studio and <a href="http://www.microsoft.com/expression/products/Overview.aspx?key=blend">Expression Blend</a>, but whilst Blend does not have the ability to use a source control repository (TFS, SVN or anything else for that matter) and actually strips out any source control binding it finds in a project file, this is for me an unworkable experience. How has this product got to V2 without this feature?</p>
<p>For any company interested in a quality development process source control must be the most basic safety net, irrespective of how agile their methodology is.</p>
<p>I suppose there is one agile model where it could almost work, and that is the designer and developer pair programming on the same PC with Blend and VS installed - but how realistic is that? It certainly does not scale beyond two people.</p>
<p>So as I am at Remix, I just asked <a href="http://www.microsoft.com/uk/remix08/speakers.aspx#guthrie">Scott Guthrie</a> when we would see source control in Blend - he said next version, some time next year, but on the plus side it should also include TFS work item integration. All good but we need it now, this is such a barrier to adoption of the Blend products.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Remix UK 08 Keynote</title>
      <link>https://blog.richardfennell.net/posts/remix-uk-08-keynote/</link>
      <pubDate>Thu, 18 Sep 2008 11:27:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/remix-uk-08-keynote/</guid>
      <description>&lt;p&gt;Just seen the keynote for Remix in Brighton, I must say that &lt;a href=&#34;http://www.microsoft.com/uk/remix08/speakers.aspx#buxton&#34;&gt;Bill Buxton&lt;/a&gt;&amp;rsquo;s part was one of the best keynotes I have seen (watch out for the streamed webcast). For me the most important thing for a keynote is to set the tone, the &lt;a href=&#34;http://en.wikipedia.org/wiki/Meme&#34;&gt;meme&lt;/a&gt;, for the conference and his session certainly did. OK it is good to see the demos of products but you can&amp;rsquo;t beat a good bit of thought provoking public speaking. First time I have seen this done well by Microsoft since the last &lt;a href=&#34;http://www.microsoftpdc.com/&#34;&gt;PDC&lt;/a&gt; a couple of years ago. Interestingly, setting the tone was something always done very well at the Sun &lt;a href=&#34;http://en.wikipedia.org/wiki/JavaOne&#34;&gt;JavaOne&lt;/a&gt; conferences I attended in the late 90s.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just seen the keynote for Remix in Brighton, I must say that <a href="http://www.microsoft.com/uk/remix08/speakers.aspx#buxton">Bill Buxton</a>&rsquo;s part was one of the best keynotes I have seen (watch out for the streamed webcast). For me the most important thing for a keynote is to set the tone, the <a href="http://en.wikipedia.org/wiki/Meme">meme</a>, for the conference and his session certainly did. OK it is good to see the demos of products but you can&rsquo;t beat a good bit of thought provoking public speaking. First time I have seen this done well by Microsoft since the last <a href="http://www.microsoftpdc.com/">PDC</a> a couple of years ago. Interestingly, setting the tone was something always done very well at the Sun <a href="http://en.wikipedia.org/wiki/JavaOne">JavaOne</a> conferences I attended in the late 90s.</p>
<p>Bill&rsquo;s concept of strength through bringing together excellent people with different skills, but with a common understanding and language, to make strong teams was also a common theme from <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/09/14/alt-net-the-day-after.aspx">Alt.net</a> and the post-meeting bar drinks at last night&rsquo;s <a href="http://upcoming.yahoo.com/event/911641/">VBug meeting</a>. If the whole team does not buy into a concept, whether it be excellence in design or <a href="http://en.wikipedia.org/wiki/Test-driven_development">TDD</a>, then delivering a high quality product is down to luck.</p>
<p>I think I will be going to Bill&rsquo;s other sessions.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Not enough people locally for a TFS user group?</title>
      <link>https://blog.richardfennell.net/posts/not-enough-people-locally-for-a-tfs-user-group/</link>
      <pubDate>Tue, 16 Sep 2008 17:07:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/not-enough-people-locally-for-a-tfs-user-group/</guid>
      <description>&lt;p&gt;Do you feel alone, nobody nearby who you can talk to about TFS? Well try the new &lt;a href=&#34;http://www.tsug-ve.com/&#34;&gt;virtual user group&lt;/a&gt; meeting in Second Life.&lt;/p&gt;
&lt;p&gt;The only issue is that the meeting will be at 2am UK time by my reckoning.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Do you feel alone, nobody nearby who you can talk to about TFS? Well try the new <a href="http://www.tsug-ve.com/">virtual user group</a> meeting in Second Life.</p>
<p>The only issue is that the meeting will be at 2am UK time by my reckoning.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Alt.net - the day after</title>
      <link>https://blog.richardfennell.net/posts/alt-net-the-day-after/</link>
      <pubDate>Sun, 14 Sep 2008 15:16:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alt-net-the-day-after/</guid>
<description>&lt;p&gt;Back home now after an excellent two days at &lt;a href=&#34;http://altdotnet.org/events/5&#34;&gt;Alt.Net in London&lt;/a&gt;. As with the spring conference this was a thought-provoking event. I really like the whole open space format; though there were four &amp;lsquo;main sessions&amp;rsquo;, the event started the night before in the planning session and the bar and carried on without a pause, including the train trip home.&lt;/p&gt;
&lt;p&gt;The main sessions can be excellent; but it is usually the chat while walking to get a coffee, or over lunch, where you get a nugget of information that completes a picture for you. It is great to find out you are not alone in the problems you have and refreshing to hear people speak so openly about the challenges and successes they have had.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Back home now after an excellent two days at <a href="http://altdotnet.org/events/5">Alt.Net in London</a>. As with the spring conference this was a thought-provoking event. I really like the whole open space format; though there were four &lsquo;main sessions&rsquo;, the event started the night before in the planning session and the bar and carried on without a pause, including the train trip home.</p>
<p>The main sessions can be excellent; but it is usually the chat while walking to get a coffee, or over lunch, where you get a nugget of information that completes a picture for you. It is great to find out you are not alone in the problems you have and refreshing to hear people speak so openly about the challenges and successes they have had.</p>
<p>The main takeaways for me were ideas on acceptance testing:</p>
<ul>
<li>The gathering of the tests can be improved by using a three-pronged attack involving the client (or Business Analyst), the developers and the tester/QA team. It is three-pronged as each group has its own view of what the product should do: the client wants tests that prove the product meets a business need, the developer will test some edge cases around the business tests, and the tester will add tests that just try to break the product. Together this group should provide reasonable test coverage; not perfect, but better than the view of any single group.</li>
<li>I also got a number of ideas on techniques for writing the acceptance tests. The most immediately interesting was the idea of using log files via a product such as <a href="http://texttest.carmen.se/">TextTest</a>. My first thought is that this provides an interesting way to tackle automated testing of SharePoint Workflows - a current issue for me.</li>
</ul>
<p>At the event I mentioned that we at Black Marble were considering hosting a similar event, but I had been worried that there would not be the interest, as I perceived the attendees of the London event to be fairly London-based. I am pleased to say that this does not seem to be the case. And true to the principles of an open spaces event we have to recognise that &lsquo;whoever comes are the right people&rsquo;. Keep an eye out for announcements of an event in the new year.</p>
<p>So thanks again to <a href="http://codebetter.com/blogs/ian_cooper">Ian Cooper</a>, <a href="http://thoughtpad.net/alan-dean.html">Alan Dean</a> and <a href="http://blog.benhall.me.uk">Ben Hall</a> for organising the event, I look forward to the next one.</p>
<p>Technorati Tags: <a href="http://technorati.com/tags/altnetuk">altnetuk</a></p>
<p><a href="/wp-content/uploads/sites/2/historic/altdotnet_2.jpg"><img alt="altdotnet" loading="lazy" src="/altdotnet_thumb.jpg"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Signing Powershell scripts with a Thawte code signing certificate</title>
      <link>https://blog.richardfennell.net/posts/signing-powershell-scripts-with-a-thawte-code-signing-certificate/</link>
      <pubDate>Wed, 10 Sep 2008 22:19:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/signing-powershell-scripts-with-a-thawte-code-signing-certificate/</guid>
      <description>&lt;p&gt;I hit a problem today when trying to &lt;a href=&#34;http://www.hanselman.com/blog/SigningPowerShellScripts.aspx&#34;&gt;sign a powershell script as detailed on Scott Hanselman&amp;rsquo;s blog&lt;/a&gt; with a &lt;a href=&#34;https://www.thawte.com/code-signing/index.html?click=main-nav-products-codesigning&#34;&gt;Thawte code signing certificate&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The basic issue was that the certificate could be seen in the Personal section of the Certificates MMC snap-in. It was also listed if I issued the PowerShell command&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Get-ChildItem cert:\CurrentUser\My&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;but if I added the flag to only show usable codesigning (Class III) certificates it was not listed.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I hit a problem today when trying to <a href="http://www.hanselman.com/blog/SigningPowerShellScripts.aspx">sign a powershell script as detailed on Scott Hanselman&rsquo;s blog</a> with a <a href="https://www.thawte.com/code-signing/index.html?click=main-nav-products-codesigning">Thawte code signing certificate</a>.</p>
<p>The basic issue was that the certificate could be seen in the Personal section of the Certificates MMC snap-in. It was also listed if I issued the PowerShell command</p>
<blockquote>
<p><em>Get-ChildItem cert:\CurrentUser\My</em></p></blockquote>
<p>but if I added the flag to only show usable codesigning (Class III) certificates it was not listed.</p>
<blockquote>
<p><em>Get-ChildItem cert:\CurrentUser\My -codesigning</em></p></blockquote>
<p>It turns out the issue was the same as you see when trying to sign Office 2000 VBA scripts. You have to have imported the certificate with its key as <a href="https://www.thawte.com/ssl-digital-certificates/technical-support/code/multi.html#pk">detailed on the Thawte site</a>, using the PVKIMPRT.EXE tool. This means you need the MYCERT.P7B and MYKEY.PVK files for the import.</p>
<p>This is made a bit more complex if using the Thawte web site on a Vista client PC, as your purchased certificate is installed into your local certificate store automatically (you don&rsquo;t get a separate key file). So it would work on the PC used to purchase the certificate, but you could not export it. Hence the tip here is to purchase the certificate on an XP client PC so you get both the certificate and key files; OK, you have to manually install the certificate and the key, but it is easier in the long term.</p>
<p>Once this is done you can sign the PowerShell script using the command</p>
<blockquote>
<p><em>Set-AuthenticodeSignature file.ps1 @(Get-ChildItem cert:\CurrentUser\My -codesigning)[0]</em></p></blockquote>
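<p>To check the signature has been applied correctly you can then use the standard Get-AuthenticodeSignature cmdlet, which should report a Status of Valid for the signed script:</p>
<blockquote>
<p><em>Get-AuthenticodeSignature file.ps1</em></p></blockquote>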
]]></content:encoded>
    </item>
    <item>
      <title>September XPClub Meeting is this Wednesday</title>
      <link>https://blog.richardfennell.net/posts/september-xpclub-meeting-is-this-wednesday/</link>
      <pubDate>Sun, 07 Sep 2008 15:49:59 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/september-xpclub-meeting-is-this-wednesday/</guid>
<description>&lt;p&gt;The next XPClub meeting is a double bill: &lt;a href=&#34;http://xpclub.erudine.com/2008/09/next-meeting-is-double-bill-code.html&#34;&gt;Code Quality and Scala + Lift&lt;/a&gt;, and is being held at the usual venue, the &lt;a href=&#34;http://local.google.co.uk/maps?f=q&amp;amp;hl=en&amp;amp;geocode=&amp;amp;time=&amp;amp;date=&amp;amp;ttype=&amp;amp;q=victoria&amp;#43;hotel&amp;#43;LS1&amp;amp;ie=UTF8&amp;amp;ll=53.800549,-1.548729&amp;amp;spn=0.009175,0.028539&amp;amp;z=16&amp;amp;iwloc=A&amp;amp;om=1&#34;&gt;Victoria Hotel Leeds&lt;/a&gt;. This is a free event open to all.&lt;/p&gt;
&lt;p&gt;As usual drinks and snacks thanks to the club sponsors are free, also there will be a draw for one free personal license for any of the products by &lt;a href=&#34;http://www.jetbrains.com&#34;&gt;JetBrains&lt;/a&gt; - the new club sponsor.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The next XPClub meeting is a double bill: <a href="http://xpclub.erudine.com/2008/09/next-meeting-is-double-bill-code.html">Code Quality and Scala + Lift</a>. and is being held at the usual venue the <a href="http://local.google.co.uk/maps?f=q&amp;hl=en&amp;geocode=&amp;time=&amp;date=&amp;ttype=&amp;q=victoria&#43;hotel&#43;LS1&amp;ie=UTF8&amp;ll=53.800549,-1.548729&amp;spn=0.009175,0.028539&amp;z=16&amp;iwloc=A&amp;om=1">Victoria Hotel Leeds</a>. This is a free event open to all.</p>
<p>As usual, drinks and snacks are free thanks to the club sponsors, and there will also be a draw for one free personal license for any of the products by <a href="http://www.jetbrains.com">JetBrains</a> - the new club sponsor.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading Team Foundation Server 2008 with SP1</title>
      <link>https://blog.richardfennell.net/posts/upgrading-team-foundation-server-2008-with-sp1/</link>
      <pubDate>Sat, 06 Sep 2008 17:58:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-team-foundation-server-2008-with-sp1/</guid>
      <description>&lt;p&gt;Today I got round to upgrading our TFS 2008 (dual server setup) with &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?familyid=9e40a5b6-da41-43a2-a06d-3cee196bfe3d&amp;amp;displaylang=en&#34;&gt;Team Server 2008 SP1&lt;/a&gt;. I tried it first on my demo VPC I use for presentations (reminder: I am at &lt;a href=&#34;http://www.vbug.co.uk/Events/September-2008/VBUG-Coventry-Team-Foundation-Server-with-Richard-Fennell.aspx&#34;&gt;VBug Coventry at the end of the month talking about TFS&lt;/a&gt;, a free event open to all). Anyway, this update went fine and did not take too long, 5 or 10 minutes.&lt;/p&gt;
&lt;p&gt;On the main system all seemed to go OK, but it did take around 45 minutes. I assumed the time it takes is dependent on the volume of data in the DB, some sort of schema update I guess.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today I got round to upgrading our TFS 2008 (dual server setup) with <a href="http://www.microsoft.com/downloads/details.aspx?familyid=9e40a5b6-da41-43a2-a06d-3cee196bfe3d&amp;displaylang=en">Team Server 2008 SP1</a>. I tried it first on my demo VPC I use for presentations (reminder I am at <a href="http://www.vbug.co.uk/Events/September-2008/VBUG-Coventry-Team-Foundation-Server-with-Richard-Fennell.aspx">VBug Coventry at the end of the talking about TFS</a>, a free event open to all). Anyway, this update went fine and did not take too long, 5 or 10 minutes.</p>
<p>On the main system all seemed to go OK, but it did take around 45 minutes. I assumed the time it takes is dependent on the volume of data in the DB, some sort of schema update I guess.</p>
<p>All seemed to go OK - I got no errors, and after the upgrade I was able to manage work items, check in and out etc. However, there was one problem: in Team Explorer (I was on the server console) I saw a red cross next to the Reports section in all team projects.</p>
<p>The interesting part was that if I copied the URL from the properties window for the Reports and pasted it into a browser it worked without error and without being prompted for any login details. So I assumed that there was no issue with the Reporting Services or authentication (also the links in Team Explorer had worked prior to the update). As another check, we use the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=55A4BDE6-10A7-4C41-9938-F388C1ED15E9&amp;displaylang=en">eScrum process template</a> and its associated web site could access the Reporting Services without any errors. It all pointed to a problem inside the Team System client.</p>
<p>When I tried to create a new team project there was also a problem. It failed with the error:</p>
<p><em>TF30224: Failed to retrieve projects from the report server.</em></p>
<p>I Googled around a bit to no real effect; all the people who had seen this error before turned out to have rights issues in Reporting Services, and mine were fine.</p>
<p>I then had a thought: I had not upgraded the VS2008 Team Explorer on the server (the Team Server SP1 update does not include this). I therefore ran the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyId=FBEE1648-7106-44A7-9649-6D9F6D58056E%20&amp;displaylang=en">VS2008 SP1</a> update, rebooted the server and, lo and behold, the error went away.</p>
<p>So the technical tip is - if you put TFS 2008 SP1 on a server, make sure you also update the Team Explorer clients to SP1. If you don&rsquo;t you will probably be OK doing day-to-day source control and work item editing, but expect problems creating new projects.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Controlling a Ajax Control ModalPopupExtender from Code behind</title>
      <link>https://blog.richardfennell.net/posts/controlling-a-ajax-control-modalpopupextender-from-code-behind/</link>
      <pubDate>Tue, 02 Sep 2008 10:21:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/controlling-a-ajax-control-modalpopupextender-from-code-behind/</guid>
      <description>&lt;p&gt;I have had a nightmare getting this going.&lt;/p&gt;
&lt;p&gt;Adding the ModalPopupExtender to a form is easy, you drop it on and tell it the two required controls parameters&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PopupControlID=&amp;ldquo;MyModalPanel&amp;rdquo;&lt;/li&gt;
&lt;li&gt;TargetControlID=&amp;ldquo;ButtonToLoadIt&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And it just works fine, but is triggered by a client side click of the Target Control.&lt;/p&gt;
&lt;p&gt;If you want to do some server side code behind first you are meant to set the&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;TargetControlID to a fake control that is visible on form e.g. &amp;ldquo;FakeButtonToLoadIt&amp;rdquo; IT IS VITAL THIS CONTROL IS VISIBLE AND HENCE RENDERED, but the control  can be effectively hidden via CSS styling with something like Style=&amp;ldquo;display: none&amp;rdquo;&lt;/li&gt;
&lt;li&gt;In the code behind after your own processing you are just meant to call this.MPE.Show(); where MPE is the ID of your extender&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Well this did not work for me. If I clicked my fake control (when it was not hidden, obviously) it all worked, but the server side call just displayed my panel where it was in the underlying HTML flow - the extender never kicked in.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have had a nightmare getting this going.</p>
<p>Adding the ModalPopupExtender to a form is easy, you drop it on and tell it the two required controls parameters</p>
<ul>
<li>PopupControlID=&ldquo;MyModalPanel&rdquo;</li>
<li>TargetControlID=&ldquo;ButtonToLoadIt&rdquo;</li>
</ul>
<p>And it just works fine, but is triggered by a client side click of the Target Control.</p>
<p>If you want to do some server side code behind first you are meant to set the</p>
<ul>
<li>TargetControlID to a fake control that is visible on form e.g. &ldquo;FakeButtonToLoadIt&rdquo; IT IS VITAL THIS CONTROL IS VISIBLE AND HENCE RENDERED, but the control  can be effectively hidden via CSS styling with something like Style=&ldquo;display: none&rdquo;</li>
<li>In the code behind after your own processing you are just meant to call this.MPE.Show(); where MPE is the ID of your extender</li>
</ul>
<p>Well this did not work for me. If I clicked my fake control (when it was not hidden, obviously) it all worked, but the server side call just displayed my panel where it was in the underlying HTML flow - the extender never kicked in.</p>
<p>After too many hours of fiddling and reading posts I found the answer. IT IS ALSO VITAL that the whole set of panel, extender and all associated controls are inside an UpdatePanel. If any are not, it just does not work - this appears not to have been the case in older versions of the Ajax Control Toolkit, hence many web sites give instructions that no longer work.</p>
<p>So it should look something like:</p>
<pre><code>&lt;%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %&gt;
&lt;%@ Register TagPrefix="cc2" TagName="AddressList" Src="~/SelectAddressControl.ascx" %&gt;

&lt;!-- This update panel is vital else the codebehind cannot show the modal dialog --&gt;
&lt;asp:UpdatePanel runat="server" ID="UpdatePanel1"&gt;
    &lt;ContentTemplate&gt;
        &lt;asp:ScriptManager ID="ScriptManager1" runat="server"&gt;
        &lt;/asp:ScriptManager&gt;
        &lt;asp:Label ID="LabelPostcode" runat="server" Text="Postcode" AssociatedControlID="Postcode"&gt;&lt;/asp:Label&gt;
        &lt;asp:TextBox ID="Postcode" runat="server" MaxLength="9"&gt;&lt;/asp:TextBox&gt;

        &lt;!-- The button with the code behind --&gt;
        &lt;asp:Button ID="FindAddressButton" runat="server" Text="Find Address" CausesValidation="false" OnClick="FindAddress_Click" /&gt;

        &lt;!-- The modal panel --&gt;
        &lt;asp:Panel ID="ModalPanel" runat="server" CssClass="modalPopup"&gt;
            &lt;!-- Inside we have a user control --&gt;
            &lt;cc2:AddressList ID="AddressList" runat="server" /&gt;
        &lt;/asp:Panel&gt;

        &lt;!-- We have to have a dummy control to hold the start event we handle in code behind --&gt;
        &lt;asp:Button ID="MpeFakeTarget" runat="server" CausesValidation="False" Style="display: none" /&gt;

        &lt;!-- We have to use the long name for the cancel button as we have a user control --&gt;
        &lt;cc1:ModalPopupExtender ID="MPE" runat="server" TargetControlID="MpeFakeTarget"
            PopupControlID="ModalPanel" DropShadow="true" CancelControlID="ctl00$ctl00$ContentPlaceHolder1$ContentPlaceHolder1$AddressList$CancelButton"
            BackgroundCssClass="modalBackground" /&gt;
    &lt;/ContentTemplate&gt;
&lt;/asp:UpdatePanel&gt;</code></pre>
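<p>For completeness, the matching code behind is just the button&rsquo;s click handler doing its server side work and then calling Show() on the extender. A minimal sketch (the address lookup is left as a placeholder comment - it is your own processing, not something from the original post):</p>

```csharp
// Code behind for the markup above. This runs server side during the
// UpdatePanel's partial postback, then asks the extender to pop the modal.
protected void FindAddress_Click(object sender, EventArgs e)
{
    // Your own processing goes here first, e.g. looking up addresses
    // for the postcode entered in the Postcode TextBox (placeholder).

    // MPE is the ID of the ModalPopupExtender declared in the markup.
    this.MPE.Show();
}
```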
]]></content:encoded>
    </item>
    <item>
      <title>Speaking on Team System in Coventry</title>
      <link>https://blog.richardfennell.net/posts/speaking-on-team-system-in-coventry/</link>
      <pubDate>Thu, 21 Aug 2008 22:50:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-on-team-system-in-coventry/</guid>
      <description>&lt;p&gt;I am speaking at VBug Coventry on the 24th of September about Visual Studio Team System. For details have a look at the VBug events site; the venue is Elmbank Training, Juniper Room (Ground Floor), Mill Lane, Coventry, CV1 2LQ, GB. Look forward to seeing you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at VBug Coventry on the 24th of September about Visual Studio Team System. For details have a look at the [VBug events](Elmbank Training, Juniper Room (Ground Floor), Mill Lane, Coventry, CV1 2LQ, GB) site, look forward to seeing you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>eScrum 1.1 available</title>
      <link>https://blog.richardfennell.net/posts/escrum-11-available/</link>
      <pubDate>Thu, 21 Aug 2008 08:23:24 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/escrum-11-available/</guid>
      <description>&lt;p&gt;For those who have not seen it, &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?familyid=55a4bde6-10a7-4c41-9938-f388c1ed15e9&amp;amp;displaylang=en&#34;&gt;eScrum 1.1&lt;/a&gt; is now available. This is basically an update to the installer so it works with VS2008, so you don&amp;rsquo;t have to go through all the manual &lt;a href=&#34;http://www.sharepointblogs.com/johnwpowell/archive/2007/09/29/how-to-install-microsoft-escrum-1-0-process-template-on-tfs-2008-beta-2-quot-orcas-quot.aspx&#34;&gt;fiddling you had to in the past&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So if you want a nice simple agile team project process, with a nice web site to give progress visibility, give it a look.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For those who have not seen it <a href="http://www.microsoft.com/downloads/details.aspx?familyid=55a4bde6-10a7-4c41-9938-f388c1ed15e9&amp;displaylang=en">eScrum 1.1</a> is now available.  This basically a update for the installer so it works on VS2008 so you don&rsquo;t have to go through all the manual <a href="http://www.sharepointblogs.com/johnwpowell/archive/2007/09/29/how-to-install-microsoft-escrum-1-0-process-template-on-tfs-2008-beta-2-quot-orcas-quot.aspx">fiddling you had to in the past</a>.</p>
<p>So if you want a nice simple agile team project process, with a nice web site to give progress visibility, give it a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why doesn&#39;t my ActionsPane appear when I create a new document from a VSTO enabled template?</title>
      <link>https://blog.richardfennell.net/posts/why-doesnt-my-actionspane-appear-when-i-create-a-new-document-form-a-vsto-enabled-template/</link>
      <pubDate>Mon, 11 Aug 2008 18:47:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-doesnt-my-actionspane-appear-when-i-create-a-new-document-form-a-vsto-enabled-template/</guid>
      <description>&lt;p&gt;When you are using a VSTO enabled Word Template inside a SharePoint custom content type you have to be careful which actual .DOTX file you use within the SharePoint custom content type.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;DO NOT&lt;/strong&gt; - use the .DOTX from your VSTO Visual Studio project directory&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DO&lt;/strong&gt; - use the .DOTX from the project&amp;rsquo;s ClickOnce publish location.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The key point here is that until you have published the VSTO project via ClickOnce the .DOTX template does not know where to find the deployment of the associated assemblies. The copy in the project directory never actually knows this location as it is just a source file not a deliverable.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you are using a VSTO enabled Word Template inside a SharePoint custom content type you have to be careful which actual .DOTX file you use within the SharePoint custom content type.</p>
<ul>
<li><strong>DO NOT</strong> - use the .DOTX from your VSTO Visual Studio project directory</li>
<li><strong>DO</strong> - use the .DOTX from the project&rsquo;s ClickOnce publish location.</li>
</ul>
<p>The key point here is that until you have published the VSTO project via ClickOnce the .DOTX template does not know where to find the deployment of the associated assemblies. The copy in the project directory never actually knows this location as it is just a source file not a deliverable.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Argh... more media rights issues</title>
      <link>https://blog.richardfennell.net/posts/argh-more-media-rights-issues/</link>
      <pubDate>Sun, 10 Aug 2008 07:53:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/argh-more-media-rights-issues/</guid>
      <description>&lt;p&gt;I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/08/09/no-silverlight-nbc-olympics-for-us.aspx&#34;&gt;posted about NBC  Olympic coverage&lt;/a&gt; - today I discovered I cannot stream BBC provided media content from the Olympics on my Windows PDA using Opera via the Vodafone 3G service. Seems the BBC does not think I am in the UK. I know Yorkshire assumes it should be independent, but I don&amp;rsquo;t think it has happened yet.&lt;/p&gt;
&lt;p&gt;I suppose this is all to be expected, technology that allows reasonable media delivery over a mobile network at a vaguely sane price is new; especially when roaming between countries/networks. I doubt we will see this problem disappear soon unless the media rights are more commonly picked up by the telco providers such as Vodafone as opposed to the broadcast media companies like NBC and the BBC.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/08/09/no-silverlight-nbc-olympics-for-us.aspx">posted about NBC  Olympic coverage</a> - today I discovered I cannot stream BBC provided media content from the Olympics on my Windows PDA using Opera via the Vodafone 3G service. Seems the BBC does not think I am in the UK. I know Yorkshire assumes it should be independent, but I don&rsquo;t think it has happened yet.</p>
<p>I suppose this is all to be expected; technology that allows reasonable media delivery over a mobile network at a vaguely sane price is new, especially when roaming between countries/networks. I doubt we will see this problem disappear soon unless the media rights are more commonly picked up by the telco providers such as Vodafone as opposed to the broadcast media companies like NBC and the BBC.</p>
<p>Makes you think: how long can terrestrial broadcasters last in their current form as the boundaries blur between delivery mechanisms?</p>
<p>That said, the BBC has 7 media streams up at the moment across digital terrestrial, satellite, cable and the Internet, but I still find myself watching the primary BBC1 terrestrial coverage. After watching events on the other streams, which show coverage of just one sport, I realise that in general I want the editorial service the BBC provides on its primary coverage, unless I have a real dedicated interest in one sport.</p>
]]></content:encoded>
    </item>
    <item>
      <title>No Silverlight  NBC Olympics for us</title>
      <link>https://blog.richardfennell.net/posts/no-silverlight-nbc-olympics-for-us/</link>
      <pubDate>Sat, 09 Aug 2008 10:19:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/no-silverlight-nbc-olympics-for-us/</guid>
      <description>&lt;p&gt;After the &lt;a href=&#34;http://visitmix.com/blogs/News/Keynote-NBC/&#34;&gt;announcement at Mix&lt;/a&gt; of the &lt;a href=&#34;http://www.nbcolympics.com&#34;&gt;NBC Silverlight Olympic streaming service&lt;/a&gt; I was looking forward to having a look at what had been created. It turns out you have to be in the USA to watch it, I assume due to media rights issues, a bit of a shame. I hope something can be worked out with a small fraction of the 2200 hours of content so we can all see what Silverlight can do for such a major event.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After the <a href="http://visitmix.com/blogs/News/Keynote-NBC/">announcement at Mix</a> of the <a href="http://www.nbcolympics.com">NBC Silverlight Olympic streaming service</a> I was looking forward to have a look at what had been created. Turns out you have to be in the USA to watch it, I assume due to media rights issues, a bit of a shame. I hope something can be worked out with a small fraction of the 2200 hours of content so we can all see what Silverlight can do for such a major event.</p>
<p>Back to the <a href="http://news.bbc.co.uk/sport1/hi/olympics/default.stm">BBC Flash streams</a> then&hellip;</p>
<p>PS. I suppose it is to be expected; you get the same problem with the <a href="http://news.bbc.co.uk/sport1/hi/cricket/tms/default.stm">TMS cricket</a> coverage and other BBC Internet streams.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Blog mirror</title>
      <link>https://blog.richardfennell.net/posts/blog-mirror/</link>
      <pubDate>Thu, 07 Aug 2008 22:56:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/blog-mirror/</guid>
      <description>&lt;p&gt;My blog is now being mirrored at &lt;a href=&#34;http://msmvps.com/blogs/rfennell/Default.aspx&#34; title=&#34;http://msmvps.com/blogs/rfennell/Default.aspx&#34;&gt;http://msmvps.com/blogs/rfennell/Default.aspx&lt;/a&gt; as well as being available in its real location &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell&#34; title=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell&#34;&gt;http://blogs.blackmarble.co.uk/blogs/rfennell&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;To quote MSMVPS it is &lt;em&gt;The Ultimate Destination for Blogs by Current and Former Microsoft Most Valuable Professionals&lt;/em&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>My blog is now being mirrored at <a href="http://msmvps.com/blogs/rfennell/Default.aspx" title="http://msmvps.com/blogs/rfennell/Default.aspx">http://msmvps.com/blogs/rfennell/Default.aspx</a> as well as being available in it&rsquo;s real location <a href="http://blogs.blackmarble.co.uk/blogs/rfennell" title="http://blogs.blackmarble.co.uk/blogs/rfennell">http://blogs.blackmarble.co.uk/blogs/rfennell</a></p>
<p>To quote MSMVPS it is <em>The Ultimate Destination for Blogs by Current and Former Microsoft Most Valuable Professionals</em>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Moving the document store in TFS</title>
      <link>https://blog.richardfennell.net/posts/moving-the-document-store-in-tfs/</link>
      <pubDate>Thu, 07 Aug 2008 14:50:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/moving-the-document-store-in-tfs/</guid>
      <description>&lt;p&gt;I have at last got round to moving the location of the WSS sites used by our TFS server from the WSS instance running on our TFS Application Tier (AT) to our main 64bit MOSS 2007 server farm. A job I have been putting off for ages.&lt;/p&gt;
&lt;p&gt;I decided that there was nothing on the TFS WSS sites of importance, other than documents (which I could easily copy) so decided to create new WSS sites as opposed to trying a backup and restore model.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have at last got round to moving the location of the WSS sites used by our TFS server from the WSS instance running on our TFS Application Tier (AT) to our main 64bit MOSS 2007 server farm. A job I have been putting off for ages.</p>
<p>I decided that there was nothing on the TFS WSS sites of importance, other than documents (which I could easily copy) so decided to create new WSS sites as opposed to trying a backup and restore model.</p>
<p><strong>Add templates</strong></p>
<p>The first step is to make sure the templates for the WSS sites I needed were present on the MOSS farm. These are installed on the TFS AT WSS as part of the main install; you also get the option to run the MSI on a separate MOSS farm. Historically this has been a problem for us as the TFS installer did not support 64-bit WSS, however a <a href="http://www.microsoft.com/downloads/details.aspx?familyid=00803636-1d16-4df1-8a3d-ef1ad4f4bbab">power toy has been released for just this job</a>. The only problem was when I ran this I got an error:</p>
<p><em>&lsquo;No templates for Windows SharePoint Services Extensions were uploaded&hellip;.&rsquo;</em></p>
<p>This pointed to a WSS config issue, but the MOSS farm seemed to be working fine. Instead of fiddling with this I just decided to install the templates manually.</p>
<p>Now we use <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=55A4BDE6-10A7-4C41-9938-F388C1ED15E9&amp;displaylang=en">eScrum</a>, but I thought I would be good and install the two MSF standard ones as well. These are stored as .STP files. I copied these files to the MOSS web server (you can find them on the AT using search) and ran the stsadm command as below:</p>
<p><code>Stsadm.exe -o addtemplate -filename MSFAgile3.0.stp -title VSTS_MSFAgile30</code><br>
<code>Stsadm.exe -o addtemplate -filename MSFFormal3.0.stp -title VSTS_MSF_CMMI30</code><br>
<code>Stsadm.exe -o addtemplate -filename escrum.stp -title eScrum</code><br>
<code>iisreset</code></p>
<p><strong>Update</strong>: Note that the <em>-title</em> in the commands above is case sensitive; it must be <em>eScrum</em> not <em>escrum</em>.</p>
<p><strong>Site location</strong></p>
<p>Next I decided that I did not want the TFS sites to be created on the default <strong>/sites</strong> managed path, so we created a new managed path of <strong>/tfsprojects</strong> (Central admin, Application management, Define managed paths).</p>
<p><strong>Test MOSS</strong></p>
<p>Once the templates were installed and the managed path created I made sure all was OK  by creating a new site collection (using the Central admin, application management, create site collection).</p>
<p>It is worth doing this to make sure you used the correct STP files - there is a potential issue here: if you upgraded from TFS 2005 in the past you may have stray WSS 2.0 format STP files hanging around, and you need version 3.0 ones.</p>
<p><strong>Reconfigure TFS</strong></p>
<p>The TFS AT needs to be reconfigured to point at the new farm. This is done from the AT using the <em>TfsAdminUtil</em> command. <em>tfsadminutil configureconnections</em> will list the current settings.</p>
<p>You need to edit the <em>SharepointUri,  SharepointSitesUri, SharepointAdminUri</em> and <em>SharepointUnc</em>  to point at the MOSS farm as opposed to the WSS on the TFS AT. The general form to change these is:</p>
<p><em>tfsadminutil configureconnections /SharepointUri:http://server1.domain.com</em></p>
<p>Once all the changes are made, Visual Studio can be loaded to get at Team Explorer. In theory the new settings should be picked up, but you might need to delete the local cache (c:\documents and settings\[username]\local settings\application data\microsoft\team foundation).</p>
<p><strong>Creating New Team Project</strong></p>
<p>At this point I think it is a good idea to try to create a new team project and make sure the SharePoint site is created with the correct template and rights.</p>
<p><strong>Fixing up old Team projects</strong></p>
<p>If all is working you can now fix up the older projects. Basically all that is required is that a site exists with the correct URI, e.g.</p>
<p><em><a href="http://server1.domain.com/tfsprojects/project1">http://server1.domain.com/tfsprojects/project1</a></em></p>
<p>You create this site collection using the MOSS Central admin, just like the test site created previously after the templates were installed.</p>
<p>Once this is created you should be able to right click on the team project in Team Explorer and open the portal site. You might hit rights problems here depending who you are logged in as, so it is a good idea to check that the newly created site has the correct permissions set, i.e. make sure the TFS project contributors have equivalent rights on the newly created WSS site.</p>
<p>The final step is to copy any documents over from the old site (on the AT) to the new site on the MOSS farm.</p>
<p>Once this is all done your system should be using the MOSS server farm without any problems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A Digital Road Trip</title>
      <link>https://blog.richardfennell.net/posts/a-digital-road-trip/</link>
      <pubDate>Tue, 05 Aug 2008 21:31:13 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-digital-road-trip/</guid>
      <description>&lt;p&gt;Over the past few days I have been travelling around the country by car, something I have not done for a while, preferring the train. It was a good chance to try out a couple of gadgets. One I have had for a while, my &lt;a href=&#34;http://www.htc.com/uk/product.aspx?id=8768&#34;&gt;HTC Touch Cruise phone&lt;/a&gt;, but the other was new: a &lt;a href=&#34;http://www.puredigital.com/products/product.asp?Product=VL-60905&#34;&gt;Pure Highway DAB car radio&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So what did I learn?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The phone&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I have recently set my phone to use push email (so constantly sync&amp;rsquo;ing with Exchange during office hours). This seems to have had a serious adverse effect on battery life. I had not noticed this before as I usually connect my phone to my PC via USB when in the office, thus keeping it charged and up to date. Whilst away, without really thinking about it, I had expected my usual (pre push email) 3 day battery life, but the constant sync and using the phone as a camera knocked this down to about 8 hours! The simple solution seems to be to set the sync to every hour, thus reducing the data calls.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Over the past few days I have been travelling around the country by car, something I have not done for a while, preferring the train. It was a good chance to try out a couple of gadgets.  One I have had a for a while, my <a href="http://www.htc.com/uk/product.aspx?id=8768">HTC Touch Cruise phone</a>, but the other was new a <a href="http://www.puredigital.com/products/product.asp?Product=VL-60905">Pure Highway DAB car radio</a>.</p>
<p>So what did I learn?</p>
<p><strong>The phone</strong></p>
<p>I have recently set my phone to use push email (so constantly sync&rsquo;ing with Exchange during office hours). This seems to have had a serious adverse effect on battery life. I had not noticed this before as I usually connect my phone to my PC via USB when in the office, thus keeping it charged and up to date. Whilst away, without really thinking about it, I had expected my usual (pre push email) 3 day battery life, but the constant sync and using the phone as a camera knocked this down to about 8 hours! The simple solution seems to be to set the sync to every hour, thus reducing the data calls.</p>
<p>More seriously, the MicroSD card in the phone failed. If it were a &lsquo;real&rsquo; hard disk I would guess at MBR corruption (I don&rsquo;t know enough about SD technology to say if that is the case here). When I popped it into my PC I saw it was 75% full but could read no data (just like on the phone); I formatted it and it all worked fine again. My guess is that power loss on the phone occurred during a disk write and hence caused the corruption - not what I would expect; I expect kit to fail safe. This failure was a shame as I lost photos of the <a href="http://www.pacesetterevents.com/national-team-relay.php">Triathlon National Relays</a> and also my TomTom maps.</p>
<p>The loss of TomTom raised an interesting point: I had got so used to its ease of navigation that I had not bothered to write down anything other than the postcodes of my hotels. Luckily I still had a road atlas in the car and had my confirmation emails on the phone (not stored on the SD card), so I could dig out phone numbers and call for directions.</p>
<p><strong>The radio</strong></p>
<p>The <a href="http://www.bbc.co.uk/radio/waystolisten/digitalradio/">BBC has not stopped</a> going on about &lsquo;digital radio now available in cars&rsquo; for a while. I love radio and so decided to get one. The Pure model picks up the DAB signal and re-broadcasts it as FM to the standard built-in car radio. The key bit here is the special windscreen mount aerial (I had tried my handheld DAB radio in the car in the past to no effect).</p>
<p>Around Leeds it has been working OK, once you find a free FM frequency for the rebroadcast. The real test was how well it picked up DAB around the country; the <a href="http://www.digitalradionow.com/whatin.php">coverage map</a> showed coverage was good, but can you trust it?</p>
<p>The answer for me is, to paraphrase the poem, &lsquo;when it was good it was very very good, but when it was bad it was horrid&rsquo;. It was great to actually hear the <a href="http://news.bbc.co.uk/sport1/hi/cricket/tms/default.stm">TMS cricket</a> on a DAB station like Radio 5 Live Sports Extra, and equally good to clearly hear an AM talk station without the usual hiss. The problem was they just dropped out with no warning, sometimes due to obvious local geography such as a deep cutting, but usually for no obvious reason whilst driving down a straight flat motorway. This seemed particularly bad in South Yorkshire and the East Midlands, especially when leaving the motorway network but still on major A roads such as the A38.</p>
<p><strong>So what did I learn?</strong></p>
<p>A simple summary - as with all IT take a backup - whether it be an FM radio or a road atlas. Technology is good until it fails.</p>
]]></content:encoded>
    </item>
    <item>
      <title>&#39;Unable to find control&#39; when programmatically adding ASP.NET validation controls</title>
      <link>https://blog.richardfennell.net/posts/unable-to-find-control-when-programmatically-adding-asp-net-validation-controls/</link>
      <pubDate>Wed, 23 Jul 2008 22:11:07 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/unable-to-find-control-when-programmatically-adding-asp-net-validation-controls/</guid>
      <description>&lt;p&gt;I have been building a webpart that needs client side validation. I kept getting the error:&lt;/p&gt;
&lt;p&gt;&amp;ldquo;Unable to find control id txtNotes referenced by the &amp;lsquo;ControlToValidate&amp;rsquo; property&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Now most of the posts say just use the command to get the &amp;lsquo;real&amp;rsquo; ID at runtime&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;this.rfvNotes.ControlToValidate = this.txtNotes.ClientID;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;But this did not work, in the end the form I found worked was:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;protected override void OnInit(EventArgs e)&lt;br&gt;
    {&lt;br&gt;
        base.OnInit(e);&lt;br&gt;
        this.txtNotes.ID = &amp;ldquo;txtNotes&amp;rdquo;;&lt;br&gt;
        this.rfvNotes.ControlToValidate = this.txtNotes.ID;&lt;br&gt;
        this.rfvNotes.Enabled = true;&lt;br&gt;
        this.rfvNotes.Text = &amp;ldquo;* Required&amp;rdquo;;&lt;br&gt;
    }&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been building a webpart that needs client side validation. I kept getting the error:</p>
<p>&ldquo;Unable to find control id txtNotes referenced by the &lsquo;ControlToValidate&rsquo; property&rdquo;</p>
<p>Now most of the posts say to just use the following to get the &lsquo;real&rsquo; client ID at runtime:</p>
<blockquote>
<p>this.rfvNotes.ControlToValidate = this.txtNotes.ClientID;</p></blockquote>
<p>But this did not work; in the end, the form I found that worked was:</p>
<blockquote>
<p>protected override void OnInit(EventArgs e)<br>
    {<br>
        base.OnInit(e);<br>
        this.txtNotes.ID = "txtNotes";<br>
        this.rfvNotes.ControlToValidate = this.txtNotes.ID;<br>
        this.rfvNotes.Enabled = true;<br>
        this.rfvNotes.Text = "* Required";<br>
    }</p></blockquote>
<p>The key step was to set the textbox ID manually; it seems that if this is not done, it is the root of the error.</p>
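<p>For completeness, here is how the whole thing can hang together in a dynamically-built webpart. This is a minimal sketch, not the exact code from my webpart; the control names are just illustrative:</p>

```csharp
// Sketch: a webpart that creates the textbox and validator in code.
// Assigning txtNotes.ID explicitly before wiring up ControlToValidate
// is the step that avoids the 'Unable to find control' error.
using System;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public class NotesWebPart : WebPart
{
    private readonly TextBox txtNotes = new TextBox();
    private readonly RequiredFieldValidator rfvNotes = new RequiredFieldValidator();

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        this.txtNotes.ID = "txtNotes";   // set the ID manually - the key step
        this.rfvNotes.ControlToValidate = this.txtNotes.ID;
        this.rfvNotes.Enabled = true;
        this.rfvNotes.Text = "* Required";
        this.Controls.Add(this.txtNotes);
        this.Controls.Add(this.rfvNotes);
    }
}
```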
]]></content:encoded>
    </item>
    <item>
      <title>Running Fitnesse.NET tests using MSTest</title>
      <link>https://blog.richardfennell.net/posts/running-fitnesse-net-tests-using-mstest/</link>
      <pubDate>Fri, 18 Jul 2008 22:00:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-fitnesse-net-tests-using-mstest/</guid>
      <description>&lt;p&gt;Update 29 Mar 2010 - &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/29/running-fitnesse-net-tests-using-mstest-revisited.aspx&#34;&gt;See this post&lt;/a&gt; for some updated usage notes &lt;/p&gt;
&lt;p&gt;I have been looking at &lt;a href=&#34;http://fitnesse.org/FitNesse.DotNet&#34;&gt;Fitnesse.NET&lt;/a&gt; for a while; if you have not come across this testing tool I cannot recommend &lt;a href=&#34;http://gojko.net/fitnesse/book/&#34;&gt;Gojko Adzic book&lt;/a&gt; highly enough; an excellent real world introduction to Fitnesse.NET.&lt;/p&gt;
&lt;p&gt;For me, the problem with Fitnesse is that it&amp;rsquo;s WIKI architecture does not lend itself to working within our Team Foundation Server based development model. However, as I am not really looking for a tool to allow Business Analysts to edit a central repository of tests the WIKI was not that important; so I have been looking at a way to store the Fitnesse tests inside a Visual Studio solution, so they are managed like a unit test.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Update 29 Mar 2010 - <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2010/03/29/running-fitnesse-net-tests-using-mstest-revisited.aspx">See this post</a> for some updated usage notes </p>
<p>I have been looking at <a href="http://fitnesse.org/FitNesse.DotNet">Fitnesse.NET</a> for a while; if you have not come across this testing tool I cannot recommend <a href="http://gojko.net/fitnesse/book/">Gojko Adzic&rsquo;s book</a> highly enough; an excellent real-world introduction to Fitnesse.NET.</p>
<p>For me, the problem with Fitnesse is that its WIKI architecture does not lend itself to working within our Team Foundation Server based development model. However, as I am not really looking for a tool to allow Business Analysts to edit a central repository of tests, the WIKI was not that important; so I have been looking at a way to store the Fitnesse tests inside a Visual Studio solution, so they are managed like a unit test.</p>
<p>The first step was to get my head round using Fitnesse without a WIKI. This is achieved using FolderRunner.exe, which ships as part of Fitnesse.NET. This allows you to define a test as a HTML file as opposed to a WIKI page. At first I thought you could just copy the relevant <strong>contents.txt</strong> page out of the WIKI structure, but this does not work as this file holds the test not as a HTML table, as required, but in the WIKI&rsquo;s own format using | characters to define the table cells. So you have to create the test file containing the Fitnesse tables using your HTML editor of choice. Once you have done this you run the test using the following command line.</p>
<p><strong>..\fitnessedotnet2\FolderRunner.exe -i userstory1.htm -a MyAssembly.dll -o results</strong></p>
<p>A HTML results file ends up in the <strong>results</strong> directory and you get a simple results test count on the console.</p>
<p>All this is very good but how can you bolt this into Visual Studio? After a bit of thought I decided that the easiest option was to put the Fitnesse calls within a wrapper unit test. I chose MSTest as I was aiming to run the final solution within TFS Build (but any type of unit testing framework such as <a href="http://www.nunit.org/">nUnit</a> or <a href="http://www.mbunit.com/">mbUnit</a> would have done).</p>
<p>Next I decided to make the required calls to run the tests in C#, calling the methods in the Fitnesse DLLs. This was as opposed to trying to shell out and run the <strong>FolderRunner.exe</strong>.</p>
<p>The easiest way to find the correct calls was to use <a href="http://www.aisto.com/roeder/dotnet/">Reflector</a> to have a look at the code in the <strong>FolderRunner.exe</strong> (and the other Fit assemblies). After a brief <em>splunk</em> around I found the calls were fairly simple, so this is the process I ended up with to run the tests via MSTest:</p>
<ul>
<li>
<p>I started with a solution that contains an application assembly and a second assembly that had all the methods required for the Fit tests (and made sure it all worked via the WIKI and/or command line FolderRunner)</p>
</li>
<li>
<p>Next I added a test project to the solution</p>
</li>
<li>
<p>In the test project I add a reference to the assembly that contains the fit test methods and also to <strong>fit.dll</strong>. I had some problems here: when I ran the test, I got a <em>The location of the file or directory &lsquo;(path omitted)fit.dll&rsquo; is not trusted.</em> error. If you get this error follow the <a href="http://blog.donnfelker.com/2007/11/19/FrustratingMSTestIssueBlahBlahBlahIsNotTrusted.aspx">process on Don Felker&rsquo;s blog</a> to fix it. This blog post seems to have been removed; in summary the post said:</p>
</li>
<li>
<p><em>If you download a DLL from the Internet, or get it in an email or where ever, and you saved it to your disk (including a DLL in a zip too) it has some extra info attached to it called an &ldquo;ADS&rdquo; (alternate data stream).</em></p>
</li>
<li>
<p><em>To fix this annoying issue, go to the DLL in Windows Explorer, right click to view the properties and then click the &ldquo;Unblock&rdquo; button at the bottom of the General panel</em></p>
</li>
<li>
<p>Add an HTML file to the Test Project and in it create the Fit test tables.</p>
</li>
<li>
<p>Set the HTML file properties so it is copied to the output directory.</p>
</li>
<li>
<p>Create a new Test Method to run all the tests in the HTML file as shown below (obviously you will need to make sure the paths are right to the HTML files and assemblies for your project).</p>
</li>
</ul>
<blockquote>
<p>[TestMethod]<br>
public void RunFitnessTests()<br>
{<br>
   fit.Runner.FolderRunner runner = new fit.Runner.FolderRunner(new fit.Runner.ConsoleReporter());<br>
   var errorCount = runner.Run(new string[] {<br>
       "-i", @"userstory1.htm",<br>
       "-a", @"MyAssembly.dll",<br>
       "-o", @"results"});<br>
    Assert.AreEqual(0, errorCount, runner.Results);<br>
}</p></blockquote>
<ul>
<li>So now you can make sure this test works using <a href="http://www.testdriven.net/">TestDriven.NET</a> to run this test; this checks the test works when the application is built to the standard bin\debug directory.</li>
<li>However this does not mean it will work with MSTest under the Visual Studio test runner. This is because, by default, the HTML file that holds the test will not be deployed to the TestResults directory (and neither will the actual application assembly under test). This is critical: if the HTML file is missing you get a false success, as the Runner.Run method returns 0 because no test failed (there were none defined), so the assert passes. To get round this you need to add the HTML file (and any other files required) to the list of files to deploy prior to testing. This is done by editing the deployment file list via the menu option <strong>Test | Edit Test Run Configuration | Deployment tab</strong>. An option to guard against this issue is to add an extra assert to make sure at least some tests were run and we don&rsquo;t have four zeros in the results string; a bit of RegEx will do (this line is not in the sample project by the way)</li>
</ul>
<p>           Assert.AreEqual(false, Regex.IsMatch(runner.Results, "^0.+?0.+?0.+?0.+?$"), "No tests appear to have been run");</p>
<ul>
<li>Once this is all done you can run the test via the Visual Studio Test menu option and you should see the test run and get a simple pass or fail result with a count of any failures in the error message. (Another possible option would be to make better use of the detailed HTML results file; at present we just use the simple result string for test results details)</li>
</ul>
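<p>Putting the snippets above together, the complete wrapper test with the guard assert folded in looks something like this (a sketch only; <strong>userstory1.htm</strong> and <strong>MyAssembly.dll</strong> are the same sample placeholders used above):</p>

```csharp
// A consolidated version of the MSTest wrapper described in this post,
// with the all-zeros guard added before the error-count assert.
using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FitnesseWrapperTests
{
    [TestMethod]
    public void RunFitnessTests()
    {
        var runner = new fit.Runner.FolderRunner(new fit.Runner.ConsoleReporter());
        int errorCount = runner.Run(new string[] {
            "-i", @"userstory1.htm",   // HTML file holding the Fit tables
            "-a", @"MyAssembly.dll",   // assembly with the fixture methods
            "-o", @"results" });       // output directory for the HTML report

        // Guard: a results string of all zeros means no tests ran at all,
        // which would otherwise be reported as a (false) pass.
        Assert.IsFalse(Regex.IsMatch(runner.Results, "^0.+?0.+?0.+?0.+?$"),
            "No tests appear to have been run");
        Assert.AreEqual(0, errorCount, runner.Results);
    }
}
```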
<p>So where do we end up?  I think we have a workable solution. We can run Fitnesse tests as easily as any unit test, and as with unit tests the Fitnesse test files are under TFS source control and kept in sync with the application they test. All without the need for the WIKI.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Registration opened for SQLBits III (Cubed)</title>
      <link>https://blog.richardfennell.net/posts/registration-opened-for-sqlbits-iii-cubed/</link>
      <pubDate>Fri, 18 Jul 2008 08:59:41 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registration-opened-for-sqlbits-iii-cubed/</guid>
      <description>&lt;p&gt;I see that the &lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;registration&lt;/a&gt; (and hence session voting) has opened for the free community conference SQLBits III,&lt;/p&gt;
&lt;p&gt;I can&amp;rsquo;t make it as it is the same day as &lt;a href=&#34;http://altdotnet.org/events/5&#34;&gt;Alt.Net&lt;/a&gt;. However, I really do recommend this event if you ever work with data (some of you do that don&amp;rsquo;t you?)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I see that the <a href="http://www.sqlbits.com/">registration</a> (and hence session voting) has opened for the free community conference SQLBits III,</p>
<p>I can&rsquo;t make it as it is the same day as <a href="http://altdotnet.org/events/5">Alt.Net</a>. However, I really do recommend this event if you ever work with data (some of you do that don&rsquo;t you?)</p>
]]></content:encoded>
    </item>
    <item>
      <title>New Team System Power Tools</title>
      <link>https://blog.richardfennell.net/posts/new-team-system-power-tools/</link>
      <pubDate>Thu, 17 Jul 2008 10:19:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-team-system-power-tools/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?FamilyID=00803636-1d16-4df1-8a3d-ef1ad4f4bbab&amp;amp;displaylang=en&#34;&gt;July edition of the Team System power tools&lt;/a&gt; have just been released.&lt;/p&gt;
&lt;p&gt;To pick out just one of a great set of tools - have a look at the Alert Editor, a really easy way to wire up external events handler to do things when TFS events occur.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=00803636-1d16-4df1-8a3d-ef1ad4f4bbab&amp;displaylang=en">July edition of the Team System power tools</a> have just been released.</p>
<p>To pick out just one of a great set of tools - have a look at the Alert Editor, a really easy way to wire up external event handlers to do things when TFS events occur.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Leeds SQL Usergroup</title>
      <link>https://blog.richardfennell.net/posts/leeds-sql-usergroup/</link>
      <pubDate>Thu, 17 Jul 2008 09:32:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/leeds-sql-usergroup/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session at the Leeds SQL User group on Visual Studio Database Edition and to UPCO for hosting the event.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/2008/Leeds%20SQL%20UG%20-%20SQL%20DevelopmentLife%20Cycle%20using%20Visual%20Studio%20Team%20Edition.ppt&#34;&gt;slide stack is available&lt;/a&gt; on the Black Marble web site&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone one who attended my session at the Leeds SQL User group on Visual Studio Database Edition and to UPCO for hosting the event.</p>
<p>The <a href="http://www.blackmarble.co.uk/ConferencePapers/2008/Leeds%20SQL%20UG%20-%20SQL%20DevelopmentLife%20Cycle%20using%20Visual%20Studio%20Team%20Edition.ppt">slide stack is available</a> on the Black Marble web site</p>
]]></content:encoded>
    </item>
    <item>
      <title>And as predicted....</title>
      <link>https://blog.richardfennell.net/posts/and-as-predicted/</link>
      <pubDate>Sat, 12 Jul 2008 08:49:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-as-predicted/</guid>
      <description>&lt;p&gt;The Alt.net summer conference is now full after about 24 hours and that is with a larger capacity venue than the last one.&lt;/p&gt;
&lt;p&gt;Told it would be fast&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The Alt.net summer conference is now full after about 24 hours and that is with a larger capacity venue than the last one.</p>
<p>Told you it would be fast</p>
]]></content:encoded>
    </item>
    <item>
      <title>I&#39;ve registered have you?</title>
      <link>https://blog.richardfennell.net/posts/ive-registered-have-you/</link>
      <pubDate>Fri, 11 Jul 2008 08:07:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ive-registered-have-you/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://altdotnet.org/events/5&#34;&gt;Registration has opened&lt;/a&gt; for the &lt;a href=&#34;http://blog.benhall.me.uk/2008/07/altnet-uk-summer-conference-dates.html&#34;&gt;Alt.UK Summer conference&lt;/a&gt;. If the &lt;a href=&#34;http://www.altnetpedia.com/London%20Alt.Net.UK%202nd%20Feb%202008.ashx&#34;&gt;first UK Alt.net&lt;/a&gt; conference event is anything to go by it will fill up fast, so don&amp;rsquo;t hang around if you are interested&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/altdotnet_2.jpg&#34;&gt;&lt;img alt=&#34;altdotnet&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/altdotnet_thumb.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://altdotnet.org/events/5">Registration has opened</a> for the <a href="http://blog.benhall.me.uk/2008/07/altnet-uk-summer-conference-dates.html">Alt.UK Summer conference</a>. If the <a href="http://www.altnetpedia.com/London%20Alt.Net.UK%202nd%20Feb%202008.ashx">first UK Alt.net</a> conference event is anything to go by it will fill up fast, so don&rsquo;t hang around if you are interested</p>
<p><a href="/wp-content/uploads/sites/2/historic/altdotnet_2.jpg"><img alt="altdotnet" loading="lazy" src="/wp-content/uploads/sites/2/historic/altdotnet_thumb.jpg"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Publishing a ASP.NET WCF service encryption error</title>
      <link>https://blog.richardfennell.net/posts/publishing-a-asp-net-wcf-service-encryption-error/</link>
      <pubDate>Tue, 08 Jul 2008 09:30:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/publishing-a-asp-net-wcf-service-encryption-error/</guid>
      <description>&lt;p&gt;I was trying to publish a .NET 3.5 WCF service to a network share e.g.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;\\myserversharefolder\_to\_holdservice&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;and got the error &amp;ldquo;The specified file could not be encrypted&amp;rdquo; for all the files.&lt;/p&gt;
&lt;p&gt;I changed to a publish to the local disk it published fine, so what caused that?&lt;/p&gt;
&lt;p&gt;As part of some security impact modelling I have been doing development with my local source directories encrypted using standard Vista security (FYI does not seem to cause any significant performance impact)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was trying to publish a .NET 3.5 WCF service to a network share e.g.</p>
<p><code>\\myserver\share\folder_to_holdservice</code></p>
<p>and got the error &ldquo;The specified file could not be encrypted&rdquo; for all the files.</p>
<p>When I changed to publishing to the local disk it published fine, so what caused that?</p>
<p>As part of some security impact modelling I have been doing development with my local source directories encrypted using standard Vista security (FYI does not seem to cause any significant performance impact)</p>
<p>So when I copied the published files to the network share manually I got a message asking &lsquo;do I want to copy without encryption&rsquo;, which I answered yes to, and all was OK. Shame the VS publish method does not have a way to answer this question.</p>
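<p>If you want to script the manual copy rather than answer the prompt, xcopy has a switch for exactly this situation; copying encrypted files to a destination that does not support encryption. The paths below are placeholders, not my real ones:</p>

```shell
REM Copy the locally published output to the share, decrypting on the way.
REM /E copies all subdirectories (including empty ones); /G allows copying
REM encrypted files to a destination that does not support encryption.
xcopy C:\publish\MyService \\myserver\share\service /E /G
```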
]]></content:encoded>
    </item>
    <item>
      <title>Next XP Club meeting</title>
      <link>https://blog.richardfennell.net/posts/next-xp-club-meeting/</link>
      <pubDate>Mon, 07 Jul 2008 13:26:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-xp-club-meeting/</guid>
      <description>&lt;p&gt;This month&amp;rsquo;s meeting of the &lt;a href=&#34;http://xpclub.erudine.com/&#34;&gt;XPClub is on Wednesday&lt;/a&gt;. It is a general group chat about refactoring, branching and merging.&lt;/p&gt;
&lt;p&gt;This is a free event, usually with a few free beers, at the usual time &amp;amp; location, 7pm at the &lt;a href=&#34;http://maps.google.co.uk/maps?ie=UTF-8&amp;amp;oe=utf-8&amp;amp;rls=org.mozilla:en-GB:official&amp;amp;client=firefox-a&amp;amp;um=1&amp;amp;q=victoria&amp;#43;hotel&amp;#43;pub&amp;#43;&amp;amp;near=Leeds&amp;amp;fb=1&amp;amp;view=text&amp;amp;latlng=53800862,-1550393,8912007649988110241&#34;&gt;Victoria Hotel in Leeds&lt;/a&gt;, so hope to see you there&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This months meeting of the <a href="http://xpclub.erudine.com/">XPClub is on Wednesday</a>. It is a general group chat about refactoring, branching and merging.</p>
<p>This is a free event, usually with a few free beers, at the usual time &amp; location, 7pm at the <a href="http://maps.google.co.uk/maps?ie=UTF-8&amp;oe=utf-8&amp;rls=org.mozilla:en-GB:official&amp;client=firefox-a&amp;um=1&amp;q=victoria&#43;hotel&#43;pub&#43;&amp;near=Leeds&amp;fb=1&amp;view=text&amp;latlng=53800862,-1550393,8912007649988110241">Victoria Hotel in Leeds</a>, so hope to see you there</p>
]]></content:encoded>
    </item>
    <item>
      <title>Bug tracking with TFS</title>
      <link>https://blog.richardfennell.net/posts/bug-tracking-with-tfs/</link>
      <pubDate>Sun, 06 Jul 2008 22:37:25 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bug-tracking-with-tfs/</guid>
      <description>&lt;p&gt;I have posted in the past about my efforts to write a user facing bug tracking interface for TFS to integrate with our SharePoint based customer portal. I have had some &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/12/19/tfs-webpart-for-viewing-workitems-in-sharepoint-2007.aspx&#34;&gt;mixed success&lt;/a&gt;, but the end point is that I am just not happy with what I have written.&lt;/p&gt;
&lt;p&gt;Historically we have used our own home grown call tracking system (started as an Access DB, went via VB6 to ASP then ASP.NET, now is web service based) which our clients know (and love?). This gives a far richer audit trail for the actions performed on a support call than is possible with a work item in TFS. In the end this simple fact is what has forced me to conclude that TFS work items are not the thing to expose to end user/help desk staff for bug tracking.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have posted in the past about my efforts to write a user facing bug tracking interface for TFS to integrate with our SharePoint based customer portal. I have had some <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/12/19/tfs-webpart-for-viewing-workitems-in-sharepoint-2007.aspx">mixed success</a>, but the end point is that I am just not happy with what I have written.</p>
<p>Historically we have used our own home grown call tracking system (started as an Access DB, went via VB6 to ASP then ASP.NET, and is now web service based) which our clients know (and love?). This gives a far richer audit trail for the actions performed on a support call than is possible with a work item in TFS. In the end this simple fact is what has forced me to conclude that TFS work items are not the thing to expose to end user/help desk staff for bug tracking.</p>
<p>My new plan is to add a new feature to our existing call tracking system (oh and of course port it into a SharePoint webpart based UI) that allows a call to be escalated into TFS when it becomes a change request or requires a code bug fix. This means all the initial triage can be handled by the support desk in their call tracking system and TFS only gains a work item when it is a task for the development team, a product backlog item in Scrum terminology.</p>
<p>This seems a more sensible approach, much like the <a href="http://visuallounge.techsmith.com/2007/05/new_snagit_output_for_microsoft_visual_studio_team_system.html">Snagit add-in for TFS</a>, I will report back as to how it goes.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD7 Session Submissions</title>
      <link>https://blog.richardfennell.net/posts/ddd7-session-submissions/</link>
      <pubDate>Thu, 03 Jul 2008 12:22:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd7-session-submissions/</guid>
      <description>&lt;p&gt;It is really good to see so many &lt;a href=&#34;http://www.developerday.co.uk/ddd/agendaddd7.asp&#34;&gt;submissions for the DDD7&lt;/a&gt;, and many new names, always a good sign of the community working well.&lt;/p&gt;
&lt;p&gt;I just got round to putting one in myself on integrating testing into MSBuild.&lt;/p&gt;
&lt;p&gt;There is still time for more though. So have a think and put in a session proposal - trust me it is great fun.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It really good to see so many <a href="http://www.developerday.co.uk/ddd/agendaddd7.asp">submissions for the DDD7</a>, and many new names, always a good sign of a the community working well.</p>
<p>I just got round to putting one in myself on integrating testing into MSBuild.</p>
<p>There is still time for more though. So have a think and put in a session proposal - trust me it is great fun.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Wow I&#39;m an MVP</title>
      <link>https://blog.richardfennell.net/posts/wow-im-an-mvp/</link>
      <pubDate>Wed, 02 Jul 2008 08:42:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/wow-im-an-mvp/</guid>
      <description>&lt;p&gt;Found out last night that I have just been made an MVP for Team System, is that cool or what!&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/mvp_2.png&#34;&gt;&lt;img alt=&#34;mvp&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/mvp_thumb.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Found out last night that I have just been made an MVP for Team System, is that cool or what!</p>
<p><a href="/wp-content/uploads/sites/2/historic/mvp_2.png"><img alt="mvp" loading="lazy" src="/wp-content/uploads/sites/2/historic/mvp_thumb.png"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Team Build compile details</title>
      <link>https://blog.richardfennell.net/posts/tfs-team-build-compile-details/</link>
      <pubDate>Mon, 23 Jun 2008 20:31:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-team-build-compile-details/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://www.woodwardweb.com/gadgets/000434.html&#34;&gt;Martin Woodward posted a few weeks ago about his build bunny&lt;/a&gt;. Now this is something I had tried a &lt;a href=&#34;http://www.blackmarble.com/ConferencePapers/DDD4%20Presentation%20-%20%27But%20it%20works%20on%20my%20PC%27%20or%20Continuous%20Integration%20to%20improve%20quality.ppt&#34;&gt;while ago for DDD4&lt;/a&gt;, but hit the same problem Martin had that the old &lt;a href=&#34;http://api.nabaztag.com/docs/home.html&#34;&gt;Nabaztag API&lt;/a&gt; was too slow and messages could take hours to arrive, making it useless in the real world. Inspired by Martin and the new faster API I have been working on a new Team Build status monitor for the office.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://www.woodwardweb.com/gadgets/000434.html">Martin Woodward posted a few weeks ago about his build bunny</a>. Now this is something I had tried a <a href="http://www.blackmarble.com/ConferencePapers/DDD4%20Presentation%20-%20%27But%20it%20works%20on%20my%20PC%27%20or%20Continuous%20Integration%20to%20improve%20quality.ppt">while ago for DDD4</a>, but hit the same problem Martin had that the old <a href="http://api.nabaztag.com/docs/home.html">Nabaztag API</a> was too slow and messages could take hours to arrive, making it useless in the real world. Inspired by Martin and the new faster API I have been working on a new Team Build status monitor for the office.</p>
<p>Getting most of it going is straightforward as Martin said; the TFS API event model is easy to use. However I did have a problem getting the details of the build. I could see if it built or not, but I could not get any count of warnings, code analysis errors or test results etc.</p>
<p>It seems the problem is that when the <strong>BuildQueue_StatusChanged</strong> event fires after a build completes it does not have all the details in the <strong>IBuildDetail</strong> object you would expect. However, if you re-query the TFS server you can get the information.</p>
<p>I added the following to the end of the <strong>BuildQueue_StatusChanged</strong> event handler:</p>
<blockquote>
<p>// first get the handle to the queue (you might have this stored already)<br>
TeamFoundationServer tfs = new TeamFoundationServer("http://myserver:8080");<br>
IBuildServer buildServer = (IBuildServer)tfs.GetService(typeof(IBuildServer));<br>
IQueuedBuildsView buildQueue = buildServer.CreateQueuedBuildsView("MyProject");</p></blockquote>
<blockquote>
<p>// Now re-query using the project build URL to get the number of errors and warnings<br>
// the array of string list what details we want<br>
IBuildDetail buildDetails = buildServer.GetBuild(buildQueue.QueuedBuilds<br>
         [lastItem].Build.Uri, new string[] {<br>
              TeamFoundation.Build.Common.InformationTypes.ConfigurationSummary,<br>
              TeamFoundation.Build.Common.InformationTypes.TestSummary,<br>
              TeamFoundation.Build.Common.InformationTypes.CodeCoverageSummary,<br>
              TeamFoundation.Build.Common.InformationTypes.CompilationSummary},<br>
              QueryOptions.None);</p></blockquote>
<blockquote>
<p>// Now use the new IBuildDetail object to update the screen associated display<br>
// user control<br>
this.display.UpdateStatus(buildDetails);</p></blockquote>
<p>Now in my display usercontrol all I need to do is extract the compiler summary details using a standard TFS <strong>InformationNodeConverter</strong> (and you could do the same for test or code coverage).</p>
<blockquote>
<p>var buildSummaries = InformationNodeConverters.GetConfigurationSummaries(buildDetails);<br>
if (buildSummaries.Count &gt; 0)<br>
{<br>
  this.lblCompileReport.Text = "[Errors:" + buildSummaries[0].TotalCompilationErrors +<br>
  "] [Warnings:" + buildSummaries[0].TotalCompilationWarnings +<br>
  "] [Static Analysis Errors:" + buildSummaries[0].TotalStaticAnalysisErrors +<br>
  "] [Static Analysis Warnings:" + buildSummaries[0].TotalStaticAnalysisWarnings + "]";<br>
}</p></blockquote>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Leeds SQL User group</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-leeds-sql-user-group/</link>
      <pubDate>Sat, 21 Jun 2008 21:38:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-leeds-sql-user-group/</guid>
      <description>&lt;p&gt;For those who are interested, I am speaking on Visual Studio for Database Professionals at the Leeds SQL user group on the 16th July. This is a free event, &lt;a href=&#34;http://www.developerfusion.co.uk/show/7793/&#34;&gt;for more details see the event web site&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For those who are interested, I am speaking on Visual Studio for Database Professionals at the Leeds SQL user group on the 16th July. This is a free event, <a href="http://www.developerfusion.co.uk/show/7793/">for more details see the event web site</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>CA0055 error in FXCop</title>
      <link>https://blog.richardfennell.net/posts/ca0055-error-in-fxcop/</link>
      <pubDate>Fri, 20 Jun 2008 16:29:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ca0055-error-in-fxcop/</guid>
      <description>&lt;p&gt;I have been setting up a new Team Build server today. All our projects are being set to do code analysis (FXCop) after the build. For one project this worked on the developer PC but failed on the build machine.&lt;/p&gt;
&lt;p&gt;The CA0055 error means &amp;lsquo;file not found&amp;rsquo; or &amp;lsquo;could not load&amp;rsquo; the assembly to be analysed. Firstly I suspected there was a problem with path names being over 256 characters (both the assemblies and solutions names were long) which can be a problem MSBuild, but this was not the case.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been setting up a new Team Build server today. All our projects are being set to do code analysis (FXCop) after the build. For one project this worked on the developer PC but failed on the build machine.</p>
<p>The CA0055 error means &lsquo;file not found&rsquo; or &lsquo;could not load&rsquo; the assembly to be analysed. Firstly I suspected there was a problem with path names being over 256 characters (both the assembly and solution names were long), which can be a problem for MSBuild, but this was not the case.</p>
<p>In the end I looked in the *.CodeAnalysisLog.xml file in the build directory. This gave the inner exception and I found it was a missing DLL that was referenced by a DLL the main DLL referenced - so two hops away! I added this DLL to the project references to make sure it was in the build directory and all was OK.</p>
<p><strong>TIP</strong>: read the detailed FXCop log file - it shows far more than the summary in the IDE</p>
]]></content:encoded>
    </item>
    <item>
      <title>XPClub Meeting about Opera</title>
      <link>https://blog.richardfennell.net/posts/xpclub-meeting-about-opera/</link>
      <pubDate>Thu, 12 Jun 2008 11:40:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/xpclub-meeting-about-opera/</guid>
      <description>&lt;p&gt;Good session at the Yorkshire Extreme Programming Club last night. Chris Mills of Opera spoke on the mobile web.&lt;/p&gt;
&lt;p&gt;If this, or the general issue of standards in web development, is of interest, &lt;a href=&#34;http://dev.opera.com&#34;&gt;http://dev.opera.com&lt;/a&gt; is well worth a look.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Good session at the Yorkshire Extreme Programming Club last night. Chris Mills of Opera spoke on the mobile web.</p>
<p>If this, or the general issue of standards in web development, is of interest, <a href="http://dev.opera.com">http://dev.opera.com</a> is well worth a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Build server and running MSTest - directory creation error</title>
      <link>https://blog.richardfennell.net/posts/tfs-build-server-and-running-mstest-directory-creation-error/</link>
      <pubDate>Thu, 12 Jun 2008 11:37:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-build-server-and-running-mstest-directory-creation-error/</guid>
      <description>&lt;p&gt;When you create a build type for a team project on TFS you can enable testing using MSTest by saying &amp;lsquo;run any tests that are found in a given DLL&amp;rsquo;. I used this today to create a CI build for a project; I am looking at using Team Build as opposed to CruiseControl, which we have used historically.&lt;/p&gt;
&lt;p&gt;I hit a problem that the tests were running but the build was failing (or in Team Build speak, partially succeeding, i.e. compiling but not passing testing). On looking in the build log I saw the error was:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When you create a build type for a team project on TFS you can enable testing using MSTest by saying &lsquo;run any tests that are found in a given DLL&rsquo;. I used this today to create a CI build for a project; I am looking at using Team Build as opposed to CruiseControl, which we have used historically.</p>
<p>I hit a problem that the tests were running but the build was failing (or in Team Build speak, partially succeeding, i.e. compiling but not passing testing). On looking in the build log I saw the error was:</p>
<p><em>The results directory &ldquo;\\server\tfsdrop\Rel1.0.0\TestProject_20080612.1\TestResults&rdquo; could not be created for publishing</em></p>
<p>So this looked like a rights issue, but my Domain\TFSBuild user (which the build process runs under) has full permissions to the drop directory (and its associated share), and it was happily creating the other directories for the drop share.</p>
<p>Turns out the answer was to also give the Domain\TFSService account (used by the Application Tier) full access to the drop share. Once this is done all is OK.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Rescheduled XPClub Meeting</title>
      <link>https://blog.richardfennell.net/posts/rescheduled-xpclub-meeting/</link>
      <pubDate>Mon, 09 Jun 2008 20:49:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/rescheduled-xpclub-meeting/</guid>
      <description>&lt;p&gt;This Wednesday, the 11th, is the &lt;a href=&#34;http://xpclub.erudine.com/&#34;&gt;rescheduled XPClub&lt;/a&gt; meeting &amp;lsquo;Exploring Mobile Web Development&amp;rsquo; with Chris Mills, developer relations manager for Opera.&lt;/p&gt;
&lt;p&gt;This is a free event and open to all, hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This Wednesday, the 11th, is the <a href="http://xpclub.erudine.com/">rescheduled XPClub</a> meeting &lsquo;Exploring Mobile Web Development&rsquo; with Chris Mills, developer relations manager for Opera.</p>
<p>This is a free event and open to all, hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Interesting post on testing</title>
      <link>https://blog.richardfennell.net/posts/interesting-post-on-testing/</link>
      <pubDate>Mon, 09 Jun 2008 20:27:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/interesting-post-on-testing/</guid>
      <description>&lt;p&gt;Interesting post from &lt;a href=&#34;http://blog.benhall.me.uk/2008/06/community-call-to-action-where-are-all.html&#34;&gt;Ben Hall&lt;/a&gt; on the role of the tester, something I have &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/02/03/oh-to-be-a-tester.aspx&#34;&gt;posted on in the past&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;As Ben asks, what are other people&amp;rsquo;s views?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Interesting post from <a href="http://blog.benhall.me.uk/2008/06/community-call-to-action-where-are-all.html">Ben Hall</a> on the role of the tester, something I have <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/02/03/oh-to-be-a-tester.aspx">posted on in the past</a>.</p>
<p>As Ben asks, what are other people&rsquo;s views?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Go south young man.......and read a book on a SmartPhone</title>
      <link>https://blog.richardfennell.net/posts/go-south-young-man-and-read-a-book-on-a-smartphone/</link>
      <pubDate>Mon, 09 Jun 2008 20:23:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/go-south-young-man-and-read-a-book-on-a-smartphone/</guid>
      <description>&lt;p&gt;I have got into reading books off my HTC smart phone using the &lt;a href=&#34;http://www.microsoft.com/Reader/&#34;&gt;Microsoft Reader&lt;/a&gt;. It means you always have a book with you (as well as a web browser, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/04/29/first-release-of-blogwriter-for-smart-devices.aspx&#34;&gt;blog writer&lt;/a&gt;, phone etc&amp;hellip;)&lt;/p&gt;
&lt;p&gt;The problem has been getting books in a suitable format; yes, I know that you can buy electronic books, but there are so many out-of-copyright classics I have not read yet. You can download many from &lt;a href=&#34;http://www.gutenberg.org/wiki/Main_Page&#34;&gt;Project Gutenberg&lt;/a&gt; (and convert them to the right format using the &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?FamilyID=a9d2b0d4-a29e-4954-9641-8b079220d16c&#34;&gt;add-in for Word&lt;/a&gt;), so why buy newer ones?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have got into reading books off my HTC smart phone using the <a href="http://www.microsoft.com/Reader/">Microsoft Reader</a>. It means you always have a book with you (as well as a web browser, <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/04/29/first-release-of-blogwriter-for-smart-devices.aspx">blog writer</a>, phone etc&hellip;)</p>
<p>The problem has been getting books in a suitable format; yes, I know that you can buy electronic books, but there are so many out-of-copyright classics I have not read yet. You can download many from <a href="http://www.gutenberg.org/wiki/Main_Page">Project Gutenberg</a> (and convert them to the right format using the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=a9d2b0d4-a29e-4954-9641-8b079220d16c">add-in for Word</a>), so why buy newer ones?</p>
<p>So recently I have been re-reading one of my favourite books, a good one to dip in and out of, &ldquo;<a href="http://www.gutenberg.org/etext/14363">The Worst Journey in the World</a>&rdquo; about Scott&rsquo;s last expedition, which <a href="http://en.wikipedia.org/wiki/Paul_Theroux">Paul Theroux</a> describes as the best adventure book he&rsquo;s ever read. This obviously led me to a book on the site I have not read before, &ldquo;<a href="http://www.gutenberg.org/etext/6540">The South Pole</a>&rdquo; by <a href="http://en.wikipedia.org/wiki/Roald_Amundsen">Roald Amundsen</a>.</p>
<p>The first thing I have to say is that in my opinion <a href="http://en.wikipedia.org/wiki/Apsley_Cherry-Garrard">Apsley Cherry-Garrard</a> is a far better writer than Amundsen (this may be the translation, but I doubt it). Irrespective of the style, the two books read very differently; Amundsen just makes it sound so matter of fact, almost easy. Much of this I think is down to being brought up in a snowy land - being prepared. Scott, as has been much written, made some strange decisions, often based on his poor past experiences (such as the use of dogs), and was certainly unlucky as well. Scott&rsquo;s is a story of the Edwardian gentleman amateur.</p>
<p>However, it is also interesting to see that all Edwardians, not just the English, had a similar view of the world - to travel, find new creatures, kill them, then eat and/or stuff them.</p>
<p>So how do I like reading using the SmartPhone form factor? Well, I find it fine usually. The only problems are that the HTC is useless in direct sunlight, and on the cheap short-haul airlines you can&rsquo;t switch your phone on even in flight mode. So buy a magazine at the airport.</p>
]]></content:encoded>
    </item>
    <item>
      <title>On the train home</title>
      <link>https://blog.richardfennell.net/posts/on-the-train-home/</link>
      <pubDate>Thu, 05 Jun 2008 16:07:34 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/on-the-train-home/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session on DataDude in Edinburgh yesterday. I hope you found it useful. For those based in Yorkshire I will be doing the same session for the &lt;a href=&#34;http://www.developerfusion.co.uk/show/7793/&#34;&gt;local SQL user group on the 16th July&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The slides I used yesterday were virtually identical to the ones I used at &lt;a href=&#34;http://www.blackmarble.com/ConferencePapers/SQLBits%20II%20Presentation%20-Development%20Life%20Cycle%20using%20Visual%20Studio%20Team%20Edition.ppt&#34;&gt;SQLBits II and can be found on our server&lt;/a&gt;. The only major change was a bit about yesterday&amp;rsquo;s announcement of the GDR release that I wrote on the train up after watching the &lt;a href=&#34;http://channel9.msdn.com/posts/briankel/New-GDR-Announced-for-Visual-Studio-Team-System-2008-Database-Edition/&#34;&gt;Channel9 video&lt;/a&gt; - but &lt;a href=&#34;http://blogs.msdn.com/gertd/&#34;&gt;Gert&amp;rsquo;s blog&lt;/a&gt; is a better source of information for this.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session on DataDude in Edinburgh yesterday. I hope you found it useful. For those based in Yorkshire I will be doing the same session for the <a href="http://www.developerfusion.co.uk/show/7793/">local SQL user group on the 16th July</a>.</p>
<p>The slides I used yesterday were virtually identical to the ones I used at <a href="http://www.blackmarble.com/ConferencePapers/SQLBits%20II%20Presentation%20-Development%20Life%20Cycle%20using%20Visual%20Studio%20Team%20Edition.ppt">SQLBits II and can be found on our server</a>. The only major change was a bit about yesterday&rsquo;s announcement of the GDR release that I wrote on the train up after watching the <a href="http://channel9.msdn.com/posts/briankel/New-GDR-Announced-for-Visual-Studio-Team-System-2008-Database-Edition/">Channel9 video</a> - but <a href="http://blogs.msdn.com/gertd/">Gert&rsquo;s blog</a> is a better source of information for this.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New CTP Release for VS2008 Database Edition</title>
      <link>https://blog.richardfennell.net/posts/new-ctp-release-for-vs2008-database-edition/</link>
      <pubDate>Tue, 03 Jun 2008 19:55:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-ctp-release-for-vs2008-database-edition/</guid>
      <description>&lt;p&gt;Typical! I am giving a session on &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/06/02/speaking-in-edinburgh-this-week.aspx&#34;&gt;VS2008 DataDude tomorrow&lt;/a&gt; and Microsoft go and release a new version!&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;http://blogs.msdn.com/gertd/archive/2008/06/03/vsts-2008-database-edition-gdr-june-ctp.aspx&#34;&gt;Gert Drapers blog&lt;/a&gt; for the full details or &lt;a href=&#34;http://channel9.msdn.com/posts/briankel/New-GDR-Announced-for-Visual-Studio-Team-System-2008-Database-Edition/&#34;&gt;Channel 9 for a video&lt;/a&gt;, which I will be watching on the train tomorrow, I guess.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Typical! I am giving a session on <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/06/02/speaking-in-edinburgh-this-week.aspx">VS2008 DataDude tomorrow</a> and Microsoft go and release a new version!</p>
<p>See <a href="http://blogs.msdn.com/gertd/archive/2008/06/03/vsts-2008-database-edition-gdr-june-ctp.aspx">Gert Drapers blog</a> for the full details or <a href="http://channel9.msdn.com/posts/briankel/New-GDR-Announced-for-Visual-Studio-Team-System-2008-Database-Edition/">Channel 9 for a video</a>, which I will be watching on the train tomorrow, I guess.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking in Edinburgh this week</title>
      <link>https://blog.richardfennell.net/posts/speaking-in-edinburgh-this-week/</link>
      <pubDate>Mon, 02 Jun 2008 19:18:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-in-edinburgh-this-week/</guid>
      <description>&lt;p&gt;I am speaking at the &lt;a href=&#34;http://www.sqlserverfaq.com/portalmain.aspx?&amp;amp;EID=115&#34;&gt;Scottish SQL user group on Wednesday about VS2008 for DB Professionals&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at the <a href="http://www.sqlserverfaq.com/portalmain.aspx?&amp;EID=115">Scottish SQL user group on Wednesday about VS2008 for DB Professionals</a>.</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Developer Edition for Sharepoint</title>
      <link>https://blog.richardfennell.net/posts/developer-edition-for-sharepoint/</link>
      <pubDate>Fri, 30 May 2008 09:37:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/developer-edition-for-sharepoint/</guid>
      <description>&lt;p&gt;There has been a good deal of discussion on blogs and forums as to whether there is a need for a developer edition of SharePoint (WSS 3.0) so you can develop against it under Vista. Historically your options were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Run W2K3 on your client PC - licensing issues and maybe drivers if using a laptop&lt;/li&gt;
&lt;li&gt;Do all development in a VPC (or VMware) - can be a bit slow.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Now some people argue that the separation between client and server this causes is good - and it is a strong argument. However, I want to get the best performance out of my development PC, so WSS 3.0 on Vista would be really useful to me. I know it won&amp;rsquo;t address all my development issues (I will still need a VPC from time to time) but it will cover 90% of my day-to-day work and run at least 50% faster than a VPC.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There has been a good deal of discussion on blogs and forums as to whether there is a need for a developer edition of SharePoint (WSS 3.0) so you can develop against it under Vista. Historically your options were:</p>
<ul>
<li>Run W2K3 on your client PC - licensing issues and maybe drivers if using a laptop</li>
<li>Do all development in a VPC (or VMware) - can be a bit slow.</li>
</ul>
<p>Now some people argue that the separation between client and server this causes is good - and it is a strong argument. However, I want to get the best performance out of my development PC, so WSS 3.0 on Vista would be really useful to me. I know it won&rsquo;t address all my development issues (I will still need a VPC from time to time) but it will cover 90% of my day-to-day work and run at least 50% faster than a VPC.</p>
<p>Well, the waiting is over: <a href="http://community.bamboosolutions.com/blogs/bambooteamblog/archive/2008/05/21/how-to-install-windows-sharepoint-services-3-0-sp1-on-vista-x64-x86.aspx">Bamboo Solutions have released a tool to the community to allow you to install WSS 3.0 on Vista</a>.</p>
<p>Well done&hellip;</p>
]]></content:encoded>
    </item>
    <item>
      <title>and another point...</title>
      <link>https://blog.richardfennell.net/posts/and-another-point/</link>
      <pubDate>Wed, 28 May 2008 13:44:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-another-point/</guid>
      <description>&lt;p&gt;I forgot to say in the &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/05/26/an-idle-break.aspx&#34;&gt;last post&lt;/a&gt; the thing I noticed most about Spain - the quality of the roads and the courtesy of the drivers. Both excellent. As a cyclist in the UK I am used to pot-holes and being cut up all the time. None of this in Spain; drivers seem happy to wait for cyclists and overtake safely, giving loads of room. All this on super-smooth roads.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I forgot to say in the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/05/26/an-idle-break.aspx">last post</a> the thing I noticed most about Spain - the quality of the roads and the courtesy of the drivers. Both excellent. As a cyclist in the UK I am used to pot-holes and being cut up all the time. None of this in Spain; drivers seem happy to wait for cyclists and overtake safely, giving loads of room. All this on super-smooth roads.</p>
<p>A very pleasant change.</p>
<p><strong>[Update 30 May]</strong> Done one commute on the bike since I got back and had three near death experiences - welcome home!</p>
]]></content:encoded>
    </item>
    <item>
      <title>An Idle break</title>
      <link>https://blog.richardfennell.net/posts/an-idle-break/</link>
      <pubDate>Mon, 26 May 2008 17:49:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/an-idle-break/</guid>
      <description>&lt;p&gt;I have had a week off on holiday at a triathlon club training camp (some holiday, some might say - 26 hours of training sessions in 6 days). We were staying at &lt;a href=&#34;http://www.idlebreaks.com/&#34; title=&#34;http://www.idlebreaks.com/&#34;&gt;Idle Breaks&lt;/a&gt; just outside Malaga. A place I cannot recommend highly enough: great location, facilities, food and owners.&lt;/p&gt;
&lt;p&gt;However, I have to say you do get some strange looks as you get ready for a swim in the Mediterranean.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have had a week off on holiday at a triathlon club training camp (some holiday, some might say - 26 hours of training sessions in 6 days). We were staying at <a href="http://www.idlebreaks.com/" title="http://www.idlebreaks.com/">Idle Breaks</a> just outside Malaga. A place I cannot recommend highly enough: great location, facilities, food and owners.</p>
<p>However, I have to say you do get some strange looks as you get ready for a swim in the Mediterranean.</p>
<p> <a href="http://www.flickr.com/photos/26924801@N04/"><img alt="LBT Camp" loading="lazy" src="/wp-content/uploads/sites/2/historic/LBT%20Camp_3.png"></a></p>
<p>I wondered what state I would come back in, as I was entered for the <a href="http://www.pdsportsmanagement.co.uk/">Wetherby Triathlon</a> the day after I got back; would I be trained to perfection or exhausted? Well, I raced OK: heavy legs on a windy bike section, made worse by me being too lazy the day before to refit my <a href="http://en.wikipedia.org/wiki/Triathlon_equipment#Aerobars">aero-bars</a> when I re-assembled my bike from the flight, but an OK time overall.</p>
<p>The Wetherby Triathlon, though a smallish field, is well worth a look for next year. Even though it is early in the season, the River Wharfe was no colder than the Med, and it is a really well-thought-out course and a well-organised race.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New TFS stuff that has just been published</title>
      <link>https://blog.richardfennell.net/posts/new-tfs-stuff-that-has-just-been-published/</link>
      <pubDate>Tue, 13 May 2008 20:11:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-tfs-stuff-that-has-just-been-published/</guid>
      <description>&lt;p&gt;Yesterday we had the first drop of the VS2008 SP1 Beta; well, there is also &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?FamilyID=dcb535be-c32e-474c-9f64-282a2849acc5&amp;amp;DisplayLang=en&#34;&gt;one for TFS2008&lt;/a&gt;. As with the main VS2008 service pack there are loads of fixes and some new features.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Support for Windows Server 2008.&lt;/li&gt;
&lt;li&gt;Support for SQL Server Codename Katmai CTP6.&lt;/li&gt;
&lt;li&gt;The Add to Source Control dialogs have been improved to be easier to use and more scalable.&lt;/li&gt;
&lt;li&gt;Drag &amp;amp; Drop from Windows Explorer to add to Source Control.&lt;/li&gt;
&lt;li&gt;Support for Version control operations on files not in bound solutions.&lt;/li&gt;
&lt;li&gt;Right-click access to set Working Folder/Cloak of folders from within Source Control Explorer.&lt;/li&gt;
&lt;li&gt;Check in date/time column in Source Control Explorer.&lt;/li&gt;
&lt;li&gt;Editable path field for the Source Control Explorer.&lt;/li&gt;
&lt;li&gt;Email work items and queries to someone.&lt;/li&gt;
&lt;li&gt;A new API to download files to a stream.&lt;/li&gt;
&lt;li&gt;Links to Team System Web Access pages from notifications.&lt;/li&gt;
&lt;li&gt;Improvements to the number of projects per server.&lt;/li&gt;
&lt;li&gt;Performance and scale improvements.&lt;/li&gt;
&lt;li&gt;Improvements to the VSS converter to make it much more robust.&lt;/li&gt;
&lt;li&gt;Support for creating Team Projects from the command line.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Given my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/01/23/fun-upgrading-from-visual-studio-tfs-2008-beta2-to-rtm.aspx&#34;&gt;experiences with the VS2008 Beta&lt;/a&gt; I think I will be waiting for the real release before I put this near our main systems. As with any system-critical beta, only play with it in a sandbox!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Yesterday we had the first drop of the VS2008 SP1 Beta; well, there is also <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=dcb535be-c32e-474c-9f64-282a2849acc5&amp;DisplayLang=en">one for TFS2008</a>. As with the main VS2008 service pack there are loads of fixes and some new features.</p>
<ul>
<li>Support for Windows Server 2008.</li>
<li>Support for SQL Server Codename Katmai CTP6.</li>
<li>The Add to Source Control dialogs have been improved to be easier to use and more scalable.</li>
<li>Drag &amp; Drop from Windows Explorer to add to Source Control.</li>
<li>Support for Version control operations on files not in bound solutions.</li>
<li>Right-click access to set Working Folder/Cloak of folders from within Source Control Explorer.</li>
<li>Check in date/time column in Source Control Explorer.</li>
<li>Editable path field for the Source Control Explorer.</li>
<li>Email work items and queries to someone.</li>
<li>A new API to download files to a stream.</li>
<li>Links to Team System Web Access pages from notifications.</li>
<li>Improvements to the number of projects per server.</li>
<li>Performance and scale improvements.</li>
<li>Improvements to the VSS converter to make it much more robust.</li>
<li>Support for creating Team Projects from the command line.</li>
</ul>
<p>Given my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/01/23/fun-upgrading-from-visual-studio-tfs-2008-beta2-to-rtm.aspx">experiences with the VS2008 Beta</a> I think I will be waiting for the real release before I put this near our main systems. As with any system-critical beta, only play with it in a sandbox!</p>
<p>Also Brian Harry has published a revised <a href="http://www.microsoft.com/downloads/details.aspx?FamilyId=CE194742-A6E8-4126-AA30-5C4E969AF2A3&amp;displaylang=en">licensing white paper</a>, this might well help to explain some of the common problems we all have in this area.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Extreme Programming Club Meeting</title>
      <link>https://blog.richardfennell.net/posts/extreme-programming-club-meeting/</link>
      <pubDate>Tue, 13 May 2008 19:57:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/extreme-programming-club-meeting/</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;14 May UPDATE&lt;/em&gt;&lt;/strong&gt; - Due to illness the speaker cannot make it, so this session is being rescheduled to next month. Check the &lt;a href=&#34;http://xpclub.erudine.com/&#34;&gt;XP Club site&lt;/a&gt; for what is on in its place.&lt;/p&gt;
&lt;p&gt;Remember, tomorrow is the monthly meeting of the Yorkshire Extreme Programming Club at the Victoria Hotel in Leeds.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&#34;http://xpclub.erudine.com/2008/04/next-meeting-exploring-mobile-web.html&#34;&gt;session is by Chris Mills&lt;/a&gt;, a developer relations manager for the Opera browser.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><strong><em>14 May UPDATE</em></strong> - Due to illness the speaker cannot make it, so this session is being rescheduled to next month. Check the <a href="http://xpclub.erudine.com/">XP Club site</a> for what is on in its place.</p>
<p>Remember, tomorrow is the monthly meeting of the Yorkshire Extreme Programming Club at the Victoria Hotel in Leeds.</p>
<p>The <a href="http://xpclub.erudine.com/2008/04/next-meeting-exploring-mobile-web.html">session is by Chris Mills</a>, a developer relations manager for the Opera browser.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Developer Day Scotland</title>
      <link>https://blog.richardfennell.net/posts/developer-day-scotland/</link>
      <pubDate>Sun, 11 May 2008 21:11:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/developer-day-scotland/</guid>
      <description>&lt;p&gt;Thanks to everyone who attended my session in Glasgow yesterday, I hope you found it useful.&lt;/p&gt;
&lt;p&gt;The slides will appear on the &lt;a href=&#34;http://www.developerdayscotland.com/&#34;&gt;DDS site&lt;/a&gt; at some point but you can find them now on the &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/Developer%20Day%20Scotland%20-%20%27But%20it%20works%20on%20my%20PC%27%20or%20Continuous%20Integration%20to%20improve%20quality.ppt&#34;&gt;Black Marble site&lt;/a&gt;. There is also a web cast of a virtually identical presentation on the &lt;a href=&#34;http://xpclub.erudine.com/2008/03/continuous-integration-webcast.html&#34;&gt;Extreme Programming Club site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The noticeable difference at this event from DDD in Reading was what was going on at lunchtime. The grok talks seemed far better attended than at TVP, and there was also an interesting &lt;a href=&#34;http://altnetpedia.com/&#34;&gt;Alt.Net&lt;/a&gt; Open Spaces-style session, which I attended, where we had a general chat on tools and libraries.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to everyone who attended my session in Glasgow yesterday, I hope you found it useful.</p>
<p>The slides will appear on the <a href="http://www.developerdayscotland.com/">DDS site</a> at some point but you can find them now on the <a href="http://www.blackmarble.co.uk/ConferencePapers/Developer%20Day%20Scotland%20-%20%27But%20it%20works%20on%20my%20PC%27%20or%20Continuous%20Integration%20to%20improve%20quality.ppt">Black Marble site</a>. There is also a web cast of a virtually identical presentation on the <a href="http://xpclub.erudine.com/2008/03/continuous-integration-webcast.html">Extreme Programming Club site</a>.</p>
<p>The noticeable difference at this event from DDD in Reading was what was going on at lunchtime. The grok talks seemed far better attended than at TVP, and there was also an interesting <a href="http://altnetpedia.com/">Alt.Net</a> Open Spaces-style session, which I attended, where we had a general chat on tools and libraries.</p>
<p>Why did this seem to work better than at TVP?</p>
<ul>
<li>Was it the event was smaller?</li>
<li>The layout of the venue meant it was easy to circulate?</li>
</ul>
<p>I don&rsquo;t know the answer, but I would like to thank the organisers for putting on such a successful event.</p>
<p>I look forward to the next one and hope to see a few of you in <a href="http://www.sqlserverfaq.com/?eid=115">Edinburgh on the 4th of June</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fixed Sprint Burndown chart for eScrum on TFS 2008</title>
      <link>https://blog.richardfennell.net/posts/fixed-sprint-burndown-chart-for-escrum-on-tfs-2008/</link>
      <pubDate>Fri, 09 May 2008 14:01:56 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fixed-sprint-burndown-chart-for-escrum-on-tfs-2008/</guid>
      <description>&lt;p&gt;Since updating to TFS 2008 we have lost our Sprint Burndown chart in eScrum; not a major problem, as we use the cumulative flow in its place. However, I have eventually got round to fixing it.&lt;/p&gt;
&lt;p&gt;It turns out the problem is down to the way the dates for the start and end of the Sprint are converted to measure names for the main MDX query. There were both regional date format issues (mm/dd/yy as opposed to dd/mm/yy) and the fact that the MDX query was very particular over leading zeros for the end date of the range, e.g. 14/4/2008 did not work but 14/04/2008 did (but this was not the case for the start date!)&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since updating to TFS 2008 we have lost our Sprint Burndown chart in eScrum; not a major problem, as we use the cumulative flow in its place. However, I have eventually got round to fixing it.</p>
<p>It turns out the problem is down to the way the dates for the start and end of the Sprint are converted to measure names for the main MDX query. There were both regional date format issues (mm/dd/yy as opposed to dd/mm/yy) and the fact that the MDX query was very particular over leading zeros for the end date of the range, e.g. 14/4/2008 did not work but 14/04/2008 did (but this was not the case for the start date!)</p>
<p>The solution was to handle the dates as strings, whose format I could force, as opposed to dates. I am not sure how well this will work for others, depending on regional date formats; you might need to edit the RDL file a bit. However, my guess is it will cure many of the problems with <a href="http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1910393&amp;SiteID=1">missing eScrum burndown charts</a> that forums have been putting down to warehouse corruption.</p>
<p>I also added a table at the bottom to help with seeing what is going on.</p>
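<p>For anyone wanting to make a similar fix, the idea is simply to build the measure-name date from a string with an explicit format rather than relying on the report&rsquo;s date handling. As a rough C# sketch (the actual change lives in the RDL expressions, and the variable names here are illustrative only):</p>
<pre><code>// Force dd/MM/yyyy with leading zeros, regardless of regional settings,
// so the generated measure name always matches what the MDX query expects.
string endDateName = sprintEndDate.ToString("dd/MM/yyyy",
    System.Globalization.CultureInfo.InvariantCulture);
</code></pre>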
]]></content:encoded>
    </item>
    <item>
      <title>Registry Access Errors with the TFS API</title>
      <link>https://blog.richardfennell.net/posts/registry-access-errors-with-the-tfs-api/</link>
      <pubDate>Fri, 09 May 2008 09:45:02 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registry-access-errors-with-the-tfs-api/</guid>
      <description>&lt;p&gt;If you are using the TFS API within a WebApp with impersonation there is a good chance you will see the error below when you run the web site on IIS&lt;/p&gt;
&lt;p&gt;System.Security.SecurityException: Requested registry access is not allowed.    &lt;br&gt;
at System.ThrowHelper.ThrowSecurityException(ExceptionResource resource)    &lt;br&gt;
at Microsoft.Win32.RegistryKey.OpenSubKey(String name, Boolean writable)    &lt;br&gt;
at Microsoft.TeamFoundation.Client.RegisteredServers.OpenCurrentUser(Boolean writable, Boolean shouldCreate)    &lt;br&gt;
at Microsoft.TeamFoundation.Client.RegisteredServers.GetUriForServer(String serverName)    &lt;br&gt;
at Microsoft.TeamFoundation.Client.RegisteredServers.GetServerKeyForServer(String serverName, String subKey, Boolean writable, Boolean shouldCreate)    &lt;/p&gt;
&lt;p&gt;If you Google this error, forums tell you to add read access for the impersonated user to&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>If you are using the TFS API within a WebApp with impersonation there is a good chance you will see the error below when you run the web site on IIS:</p>
<pre><code>System.Security.SecurityException: Requested registry access is not allowed.
   at System.ThrowHelper.ThrowSecurityException(ExceptionResource resource)
   at Microsoft.Win32.RegistryKey.OpenSubKey(String name, Boolean writable)
   at Microsoft.TeamFoundation.Client.RegisteredServers.OpenCurrentUser(Boolean writable, Boolean shouldCreate)
   at Microsoft.TeamFoundation.Client.RegisteredServers.GetUriForServer(String serverName)
   at Microsoft.TeamFoundation.Client.RegisteredServers.GetServerKeyForServer(String serverName, String subKey, Boolean writable, Boolean shouldCreate)
</code></pre>
<p>If you Google this error, forums tell you to add read access for the impersonated user to:</p>
<p><code>HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\TeamFoundation\Servers</code></p>
<p>However, this did not fix the problem. So after much fiddling and re-reading <a href="http://blogs.msdn.com/narend/archive/2006/07/29/682032.aspx">Naren&rsquo;s Blog on configuring WIT</a> I looked further down the error log and saw</p>
<p>The Zone of the assembly that failed was:<br>
MyComputer<br>
Access to the path &lsquo;Microsoft\Team Foundation\2.0\Cache&rsquo; is denied.</p>
<p>So I created a cache directory and added the following to the web.config:</p>
<pre><code>&lt;configuration&gt;
  …
  &lt;appSettings&gt;
    &lt;add key="WorkItemTrackingCacheRoot" value="E:\FolderForCache" /&gt;
  &lt;/appSettings&gt;
&lt;/configuration&gt;
</code></pre>
<p>And it leapt into life, even with the added rights in the registry removed!</p>
<p>So it seems the first error is a red herring.</p>
]]></content:encoded>
    </item>
    <item>
      <title>PerformanceCounters not showing up in PerfMon</title>
      <link>https://blog.richardfennell.net/posts/performancecounters-not-showing-up-in-perfmon/</link>
      <pubDate>Thu, 08 May 2008 20:49:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/performancecounters-not-showing-up-in-perfmon/</guid>
      <description>&lt;p&gt;I have just adding some Performance Counters to instrument some code and had a few issues that are worth knowing about.&lt;/p&gt;
&lt;p&gt;I created two categories of counters using the following code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Create a category with a single counter
PerformanceCounterCategory.Create(
    "categoryName", "categoryDescription",
    PerformanceCounterCategoryType.SingleInstance,
    "counterName", "counterDescription");

// Create a category with more than one counter
System.Diagnostics.CounterCreationDataCollection counterCollection =
    new System.Diagnostics.CounterCreationDataCollection();
counterCollection.Add(new System.Diagnostics.CounterCreationData(
    "Name1",
    "Description1",
    PerformanceCounterType.NumberOfItems64));
counterCollection.Add(new System.Diagnostics.CounterCreationData(
    "Name2",
    "Description2",
    PerformanceCounterType.NumberOfItems64));
// Create the category and pass the collection to it.
System.Diagnostics.PerformanceCounterCategory.Create(
    "multicategoryName",
    "multicategoryDescription",
    PerformanceCounterCategoryType.SingleInstance,
    counterCollection);
&lt;/code&gt;&lt;/pre&gt;</description>
<content:encoded><![CDATA[<p>I have just added some Performance Counters to instrument some code and had a few issues that are worth knowing about.</p>
<p>I created two categories of counters using the following code:</p>
<pre><code>// Create a category with a single counter
PerformanceCounterCategory.Create(
    "categoryName", "categoryDescription",
    PerformanceCounterCategoryType.SingleInstance,
    "counterName", "counterDescription");

// Create a category with more than one counter
System.Diagnostics.CounterCreationDataCollection counterCollection =
    new System.Diagnostics.CounterCreationDataCollection();
counterCollection.Add(new System.Diagnostics.CounterCreationData(
    "Name1",
    "Description1",
    PerformanceCounterType.NumberOfItems64));
counterCollection.Add(new System.Diagnostics.CounterCreationData(
    "Name2",
    "Description2",
    PerformanceCounterType.NumberOfItems64));
// Create the category and pass the collection to it.
System.Diagnostics.PerformanceCounterCategory.Create(
    "multicategoryName",
    "multicategoryDescription",
    PerformanceCounterCategoryType.SingleInstance,
    counterCollection);
</code></pre>
<p>The problem I had was the new performance counters did not show up correctly in PerfMon, but seemed OK from Visual Studio. In PerfMon I was seeing the single counter category OK but the multi counter category was only showing the first counter.</p>
<p>If I sent data to one of the multi category counters it seemed to go OK but in Perfmon I saw nothing. If I sent to the single category counter it showed up on both the single and multi graph in PerfMon.</p>
<p>The fix was simple: unload PerfMon completely between tests and all is OK. Just loading the Add Counter dialog IS NOT ENOUGH.</p>
<p>Also it is worth calling</p>
<p><code>System.Diagnostics.PerformanceCounter.CloseSharedResources();</code></p>
<p>when adding or deleting categories as this removes stale data that can also be a problem.</p>
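<p>For completeness, writing to one of these counters looks something like the sketch below (the category and counter names are the illustrative ones from the snippet above, not real counters):</p>
<pre><code>// Open the counter read-write (the final false means not read-only)
// and push values to it.
using (PerformanceCounter counter =
    new PerformanceCounter("multicategoryName", "Name1", false))
{
    counter.Increment();      // add one
    counter.IncrementBy(10);  // add ten
    counter.RawValue = 42;    // or set an absolute value
}
</code></pre>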
]]></content:encoded>
    </item>
    <item>
      <title>Why aren&#39;t there more plays about quantum physics?</title>
      <link>https://blog.richardfennell.net/posts/why-arent-there-more-plays-about-quantum-physics/</link>
      <pubDate>Sun, 04 May 2008 19:36:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/why-arent-there-more-plays-about-quantum-physics/</guid>
      <description>&lt;p&gt;Saw an excellent play last night, &lt;a href=&#34;http://www.wyplayhouse.com/events/event_details.asp?event_ID=602&#34;&gt;Hapgood&lt;/a&gt; at the West Yorkshire Playhouse. For those who have no heard if it, it is a Tom Stoppard play about spies and physics set in the late 80s.&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;Promotional image for West Yorkshire Playhouse production of Hapgood, Josie Lawerence as Mrs Hapgood&#34; loading=&#34;lazy&#34; src=&#34;http://www.wyplayhouse.com/assets/production_images/0108_hapgood.jpg&#34;&gt;&lt;/p&gt;
&lt;p&gt;I know of two plays that take their theme from quantum uncertainty, this one and &lt;a href=&#34;http://en.wikipedia.org/wiki/Copenhagen_%28play%29&#34;&gt;Copenhagen&lt;/a&gt;, both nights out I have really enjoyed. They are plays that keep you guessing all the way as to what is really the truth (if such a thing can ever be known).&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>Saw an excellent play last night, <a href="http://www.wyplayhouse.com/events/event_details.asp?event_ID=602">Hapgood</a> at the West Yorkshire Playhouse. For those who have not heard of it, it is a Tom Stoppard play about spies and physics set in the late 80s.</p>
<p><img alt="Promotional image for West Yorkshire Playhouse production of Hapgood, Josie Lawerence as Mrs Hapgood" loading="lazy" src="http://www.wyplayhouse.com/assets/production_images/0108_hapgood.jpg"></p>
<p>I know of two plays that take their theme from quantum uncertainty, this one and <a href="http://en.wikipedia.org/wiki/Copenhagen_%28play%29">Copenhagen</a>, both nights out I have really enjoyed. They are plays that keep you guessing all the way as to what is really the truth (if such a thing can ever be known).</p>
<p>I have always liked stories within stories; one of my favourite books is the <a href="http://www.amazon.co.uk/Arabian-Nightmare-Robert-Irwin/dp/1873982739">Arabian Nightmare</a>, a story in which you never truly know who the narrator is until the very end due to the layered nature of the storytelling.</p>
<p>All these go to show you never can know as much as you would hope for.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I want a TFS 64bit API</title>
      <link>https://blog.richardfennell.net/posts/i-want-a-tfs-64bit-api/</link>
      <pubDate>Tue, 29 Apr 2008 13:26:19 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-want-a-tfs-64bit-api/</guid>
      <description>&lt;p&gt;The lack of 64Bit TFS API DLLs is becoming a real pain for me. We have committed to a 64Bit server architecture for all our IIS and hence MOSS servers; both of these seem unable to WOW64 the 32bit TFS DLLs (though Cassini can!) so I cannot load any web front ends that use TFS such as eScrum or anything home grown on my main servers.&lt;/p&gt;
&lt;p&gt;My only option is to run 32bit servers as well for the primary systems. This is not too bad for the IIS/ASP.NET bits, but it is not recommended to have a mixed 32/64bit frontend for MOSS.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The lack of 64Bit TFS API DLLs is becoming a real pain for me. We have committed to a 64Bit server architecture for all our IIS and hence MOSS servers; both of these seem unable to WOW64 the 32bit TFS DLLs (though Cassini can!) so I cannot load any web front ends that use TFS such as eScrum or anything home grown on my main servers.</p>
<p>My only option is to run 32bit servers as well for the primary systems. This is not too bad for the IIS/ASP.NET bits, but it is not recommended to have a mixed 32/64bit frontend for MOSS.</p>
<p>I have to ask why there is no 64bit version of these TFS DLLs; what the hell do they do that is processor-type related?</p>
<p>If my understanding is correct they are just a set of wrappers that make the TFS WebServices easier to use. The TFS WebServices are not designed for third party access and I was strongly warned off trying to use them by <a href="http://feeds.feedburner.com/MartinWoodward">Martin Woodward</a> at <a href="http://imtc.firstport.ie/">IMTC</a>, and he should know as he is part of the team that wrote the <a href="http://www.teamprise.com/products/">Java/Eclipse client for TFS</a>. He said the webservices do not provide atomic operations, so the API does a lot of work to make sure you don&rsquo;t corrupt the TFS system. However, as in the end the API uses the TFS WebServices, it must just be making SOAP calls, so why can&rsquo;t we have a 64bit set of DLLs?</p>
<p>So is this a problem for just me? Has anyone else <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/01/04/tfs-webparts-in-a-64bit-sharepoint-environment.aspx">got interesting workarounds</a>?</p>
]]></content:encoded>
    </item>
    <item>
      <title>First Release of BlogWriter for Smart Devices</title>
      <link>https://blog.richardfennell.net/posts/first-release-of-blogwriter-for-smart-devices/</link>
      <pubDate>Tue, 29 Apr 2008 10:08:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-release-of-blogwriter-for-smart-devices/</guid>
      <description>&lt;p&gt;I have just &lt;a href=&#34;http://blogs.blackmarble.co.uk/files/samples&#34;&gt;uploaded the first release of my BlogWriter for Smart Devices&lt;/a&gt; which allows you to post new messages to blog servers that uses the &lt;a href=&#34;http://www.xmlrpc.com/metaWeblogApi&#34;&gt;MetaBlog API&lt;/a&gt;. My aim was to provide a &lt;a href=&#34;http://windowslivewriter.spaces.live.com/&#34;&gt;LiveWriter&lt;/a&gt; like application for devices like my HTC PDA.&lt;/p&gt;
&lt;p&gt;In the zip file you will find these instructions and a .CAB file. The installation process is as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Make sure your Smart Device has &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?FamilyID=e3821449-3c6b-42f1-9fd9-0041345b3385&amp;amp;displaylang=en&#34;&gt;.NET Compact Framework 3.5&lt;/a&gt; installed&lt;/li&gt;
&lt;li&gt;Copy the CAB file to your smart device&lt;/li&gt;
&lt;li&gt;On the smart device double click on the CAB – you will get a message about unknown publisher, say OK. The EXE and the CAB are digitally signed but Black Marble is not a known publisher (we don&amp;rsquo;t have &lt;a href=&#34;http://msdn.microsoft.com/mobility/windowsmobile/partners/mobile2market/faq.aspx&#34;&gt;Mobile2Market certificate&lt;/a&gt; which is the only type that the installer can check publishers against).&lt;/li&gt;
&lt;li&gt;Answer the questions as to where you want to install the application.&lt;/li&gt;
&lt;li&gt;Once the program has been installed there should be an icon on the program menu for Blog Writer, click it to run&lt;/li&gt;
&lt;li&gt;You will see a splash screen; this should disappear after a short while and leave an empty page (this is because the blog server has not been configured yet)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_4.png&#34;&gt;&lt;img alt=&#34;image&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/image_thumb_1.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I have just <a href="http://blogs.blackmarble.co.uk/files/samples">uploaded the first release of my BlogWriter for Smart Devices</a> which allows you to post new messages to blog servers that use the <a href="http://www.xmlrpc.com/metaWeblogApi">MetaBlog API</a>. My aim was to provide a <a href="http://windowslivewriter.spaces.live.com/">LiveWriter</a> like application for devices like my HTC PDA.</p>
<p>In the zip file you will find these instructions and a .CAB file. The installation process is as follows:</p>
<ul>
<li>Make sure your Smart Device has <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=e3821449-3c6b-42f1-9fd9-0041345b3385&amp;displaylang=en">.NET Compact Framework 3.5</a> installed</li>
<li>Copy the CAB file to your smart device</li>
<li>On the smart device double click on the CAB – you will get a message about unknown publisher, say OK. The EXE and the CAB are digitally signed but Black Marble is not a known publisher (we don&rsquo;t have <a href="http://msdn.microsoft.com/mobility/windowsmobile/partners/mobile2market/faq.aspx">Mobile2Market certificate</a> which is the only type that the installer can check publishers against).</li>
<li>Answer the questions as to where you want to install the application.</li>
<li>Once the program has been installed there should be an icon on the program menu for Blog Writer, click it to run</li>
<li>You will see a splash screen; this should disappear after a short while and leave an empty page (this is because the blog server has not been configured yet)</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_4.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_1.png"></a></p>
<ul>
<li>On the tools menu select Options</li>
<li>Enter the URL of the community server&rsquo;s MetaBlog page (or any other type of blog server that support MetaBlog). In the case of CS it will be something like <em><a href="http://www.mydomain.co.uk/blogs/metablog.ashx">http://www.mydomain.co.uk/blogs/metablog.ashx</a></em></li>
<li>Enter the blog name. In CS it is what is after the <em>/blogs/</em> in the URL, so mine is rfennell</li>
<li>Enter your login and password.</li>
<li>Finally you can set if the application should auto-connect to the server on start-up. I would suggest not to auto refresh until you are happy it is working, there is a manual refresh option on the tools menu.</li>
<li>Once all is setup and saved use tools/refresh option and you should see a list of your last 20 posts.</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_6.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_2.png"></a></p>
<ul>
<li>To view a post click on it or click new post button to create a new one</li>
</ul>
<p><a href="/wp-content/uploads/sites/2/historic/image_8.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb_3.png"></a></p>
<ul>
<li>On the details page you get a browser preview and a text editor. As I have not found a good HTML editor for the .NET Compact Framework I just use a textbox and replace a newline with a <code>&lt;p&gt;</code> when you save.</li>
<li>You also get a categories list which is pulled from the server. You can assign a post to any category you want.</li>
<li>When you are happy use the publish option on the post actions menu.</li>
<li>Note: you also have an option to delete a post if you want.</li>
</ul>
<p><strong>So what is missing?</strong></p>
<ul>
<li>A better HTML editor is the critical thing, I might have to write one!</li>
<li>No context menus or help.</li>
<li>The ability to add new categories.</li>
<li>Image support - maybe linked to the PDA camera so you can blog a photo in one step.</li>
</ul>
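<p>For anyone curious what the application actually sends over the wire, a metaWeblog.newPost call is just an XML-RPC request along these lines (the values shown are illustrative only):</p>
<pre><code>&lt;methodCall&gt;
  &lt;methodName&gt;metaWeblog.newPost&lt;/methodName&gt;
  &lt;params&gt;
    &lt;param&gt;&lt;value&gt;&lt;string&gt;rfennell&lt;/string&gt;&lt;/value&gt;&lt;/param&gt;  &lt;!-- blogid --&gt;
    &lt;param&gt;&lt;value&gt;&lt;string&gt;username&lt;/string&gt;&lt;/value&gt;&lt;/param&gt;
    &lt;param&gt;&lt;value&gt;&lt;string&gt;password&lt;/string&gt;&lt;/value&gt;&lt;/param&gt;
    &lt;param&gt;&lt;value&gt;&lt;struct&gt;
      &lt;member&gt;&lt;name&gt;title&lt;/name&gt;&lt;value&gt;&lt;string&gt;My post&lt;/string&gt;&lt;/value&gt;&lt;/member&gt;
      &lt;member&gt;&lt;name&gt;description&lt;/name&gt;&lt;value&gt;&lt;string&gt;&amp;lt;p&amp;gt;Post body&amp;lt;/p&amp;gt;&lt;/string&gt;&lt;/value&gt;&lt;/member&gt;
    &lt;/struct&gt;&lt;/value&gt;&lt;/param&gt;
    &lt;param&gt;&lt;value&gt;&lt;boolean&gt;1&lt;/boolean&gt;&lt;/value&gt;&lt;/param&gt;  &lt;!-- publish --&gt;
  &lt;/params&gt;
&lt;/methodCall&gt;
</code></pre>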
]]></content:encoded>
    </item>
    <item>
      <title>Enigma &amp;amp; Friends</title>
      <link>https://blog.richardfennell.net/posts/enigma-friends/</link>
      <pubDate>Thu, 24 Apr 2008 08:35:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/enigma-friends/</guid>
      <description>&lt;p&gt;I went to very interesting &lt;a href=&#34;http://www.theiet.org/local/uk/yorks/north/enigma.cfm&#34;&gt;IET meeting&lt;/a&gt; last night entitled &amp;lsquo;Enigma &amp;amp; Friends&amp;rsquo; given by John Alexander. He is a private collector of encryption machines; the bulk of his collection is currently house in the &lt;a href=&#34;http://www.bletchleypark.org.uk/content/visit/attractions.rhtm&#34;&gt;Block-B exhibit at Bletchley Park&lt;/a&gt; and is open to the public.&lt;/p&gt;
&lt;p&gt;The difference between going to the museum and seeing his presentation is threefold. Firstly the you get his extensive knowledge of the subject, but I think even more interestingly hear of the adventures it takes to obtain what must be remembered were (or still are) top secret machines. And finally you get a chance to handle  the machines, probably something I will not get the chance to do again.&lt;/p&gt;</description>
<content:encoded><![CDATA[<p>I went to a very interesting <a href="http://www.theiet.org/local/uk/yorks/north/enigma.cfm">IET meeting</a> last night entitled &lsquo;Enigma &amp; Friends&rsquo; given by John Alexander. He is a private collector of encryption machines; the bulk of his collection is currently housed in the <a href="http://www.bletchleypark.org.uk/content/visit/attractions.rhtm">Block-B exhibit at Bletchley Park</a> and is open to the public.</p>
<p>The difference between going to the museum and seeing his presentation is threefold. Firstly you get his extensive knowledge of the subject, but I think even more interestingly you hear of the adventures it takes to obtain what must be remembered were (or still are) top secret machines. And finally you get a chance to handle the machines, probably something I will not get the chance to do again.</p>
<p>So what did I learn - <a href="http://en.wikipedia.org/wiki/Fialka">Russian cold war cypher machines</a> are improbably heavy, and you can buy anything on eBay!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Smart Device Blog Writer</title>
      <link>https://blog.richardfennell.net/posts/smart-device-blog-writer/</link>
      <pubDate>Sun, 20 Apr 2008 21:08:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/smart-device-blog-writer/</guid>
      <description>&lt;p&gt;This post has been written with a blog writer (same idea as Live Writer) that I have written for a Smart Device using the .NET Compact Framework 3.5&lt;/p&gt;
&lt;p&gt;The reason I wrote it was I find Community Server a bit awkward to use in a web browser on a small form factor device like my HTC Cruise phone. Being able to create a post offline just seemed an easier option.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>This post has been written with a blog writer (same idea as Live Writer) that I have written for a Smart Device using the .NET Compact Framework 3.5</p>
<p>The reason I wrote it was I find Community Server a bit awkward to use in a web browser on a small form factor device like my HTC Cruise phone. Being able to create a post offline just seemed an easier option.</p>
<p>Let&rsquo;s see if that is true.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Outlook Mobile Sync Issues over 3G</title>
      <link>https://blog.richardfennell.net/posts/outlook-mobile-sync-issues-over-3g/</link>
      <pubDate>Thu, 17 Apr 2008 20:20:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/outlook-mobile-sync-issues-over-3g/</guid>
      <description>&lt;p&gt;I posted a while ago about my new &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/03/11/after-months-on-backorder-my-htc-cruise-arrives-was-it-worth-the-wait.aspx&#34; title=&#34;After months on backorder my HTC Cruise arrives, was it worth the wait-&#34;&gt;HTC Cruise&lt;/a&gt;, well I am still really happy with it but I have come across a problem. I have found that from time to time I get problems trying to do a Send/Receive in Outlook Mobile via the 3G mobile phone network. Well today after a chat with &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/twardill&#34; title=&#34;Work Life - like park life, but paid better&#34;&gt;Tom&lt;/a&gt; I spotted the pattern.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted a while ago about my new <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/03/11/after-months-on-backorder-my-htc-cruise-arrives-was-it-worth-the-wait.aspx" title="After months on backorder my HTC Cruise arrives, was it worth the wait-">HTC Cruise</a>, well I am still really happy with it but I have come across a problem. I have found that from time to time I get problems trying to do a Send/Receive in Outlook Mobile via the 3G mobile phone network. Well today after a chat with <a href="http://blogs.blackmarble.co.uk/blogs/twardill" title="Work Life - like park life, but paid better">Tom</a> I spotted the pattern.</p>
<p>If I boot the phone I can pickup email via 3G/Internet no problems. However, if I USB active-sync the phone any further attempt to pickup email via 3G just gets a &lsquo;Connecting&rsquo; status message then nothing.</p>
<p>It seems that the active-sync (which also does an update of the email) leaves Outlook mobile in such a state that it cannot connect via 3G, maybe URL/IP Address cache issue? Interestingly a Web Browser has no problem getting to the mail server&rsquo;s URL via 3G so it looks to be a problem in Outlook Mobile and not the operating system.</p>
<p>So the simple fix is just to reboot the phone, something I have not had to do much with the HTC running Windows Mobile 6.0. My old QTek had to be rebooted on a regular basis.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at Scottish SQL Usergroup</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-scottish-sql-usergroup/</link>
      <pubDate>Thu, 17 Apr 2008 16:46:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-scottish-sql-usergroup/</guid>
      <description>&lt;p&gt;I will be speaking on Visual Studio for Database Professionals to the Scottish SQL usergroup on the 4th June in Edinburgh.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt; The event booking site is now available at &lt;a href=&#34;http://www.sqlserverfaq.com/?eid=115&#34;&gt;http://www.sqlserverfaq.com/?eid=115&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking on Visual Studio for Database Professionals to the Scottish SQL usergroup on the 4th June in Edinburgh.</p>
<p><strong>Update</strong> The event booking site is now available at <a href="http://www.sqlserverfaq.com/?eid=115">http://www.sqlserverfaq.com/?eid=115</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>VBUG Newcastle: VS2008, SQL Server 2008 and Win 2008 Launch Event</title>
      <link>https://blog.richardfennell.net/posts/vbug-newcastle-vs2008-sql-server-2008-and-win-2008-launch-event/</link>
      <pubDate>Tue, 15 Apr 2008 12:45:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vbug-newcastle-vs2008-sql-server-2008-and-win-2008-launch-event/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.vbug.com/Events/April-2008/VBUG-Newcastle-Heroes-Happen-Here--VS08-SQL08-Server08-Launch-Event.aspx&#34;&gt;details are out the for VBug&lt;/a&gt; event I am speaking at, it is going to be at &lt;a href=&#34;http://www.thecastlegate.co.uk/&#34;&gt;The Castlegate&lt;/a&gt;, Melbourne Street, Newcastle upon Tyne, NE1 2JQ&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.vbug.com/Events/April-2008/VBUG-Newcastle-Heroes-Happen-Here--VS08-SQL08-Server08-Launch-Event.aspx">details are out the for VBug</a> event I am speaking at, it is going to be at <a href="http://www.thecastlegate.co.uk/">The Castlegate</a>, Melbourne Street, Newcastle upon Tyne, NE1 2JQ</p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Community Launch Events</title>
      <link>https://blog.richardfennell.net/posts/community-launch-events/</link>
      <pubDate>Mon, 07 Apr 2008 13:28:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/community-launch-events/</guid>
      <description>&lt;p&gt;A busy time in the next few weeks for free community launch events for &lt;a href=&#34;http://www.microsoft.com/heroeshappenhere/&#34;&gt;Windows 2008, Visual Studio 2008 and SQL 2008&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This week we at Black Marble are hosting a series of three evening events (&lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events&#34;&gt;See online booking&lt;/a&gt;) where you can see virtually the whole set of Black Marble speakers.&lt;/p&gt;
&lt;p&gt;On the 30th of April there will be a VBug hosted &lt;a href=&#34;http://www.vbug.co.uk/events/default.aspx&#34;&gt;launch event&lt;/a&gt; in Newcastle where I, &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx&#34;&gt;Rik&lt;/a&gt; and &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/iangus/default.aspx&#34;&gt;Iain&lt;/a&gt; will be speaking.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A busy time in the next few weeks for free community launch events for <a href="http://www.microsoft.com/heroeshappenhere/">Windows 2008, Visual Studio 2008 and SQL 2008</a>.</p>
<p>This week we at Black Marble are hosting a series of three evening events (<a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events">See online booking</a>) where you can see virtually the whole set of Black Marble speakers.</p>
<p>On the 30th of April there will be a VBug hosted <a href="http://www.vbug.co.uk/events/default.aspx">launch event</a> in Newcastle where I, <a href="http://blogs.blackmarble.co.uk/blogs/rhepworth/default.aspx">Rik</a> and <a href="http://blogs.blackmarble.co.uk/blogs/iangus/default.aspx">Iain</a> will be speaking.</p>
<p>Hope to see you at one or more of these events.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Back home from the IMTC</title>
      <link>https://blog.richardfennell.net/posts/back-home-from-the-imtc/</link>
      <pubDate>Sat, 05 Apr 2008 13:31:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/back-home-from-the-imtc/</guid>
      <description>&lt;p&gt;After an uneventful journey I have got home from the &lt;a href=&#34;http://imtc.firstport.ie/default.aspx&#34;&gt;IMTC conference&lt;/a&gt; in Dublin. Thanks to the organiser for running the event, it seemed to go well.&lt;/p&gt;
&lt;p&gt;I hope I managed to answer any questions my sessions on SQL BI and Scrum raised; if not, send a message via this blog. I suspect there might be a few from the Scrum session as it seems the cinema (where the conference was held) had a movie scheduled to start at 6pm, the same time I was scheduled to finish, so there was not much chance to stand around for a Q&amp;amp;A and chat.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After an uneventful journey I have got home from the <a href="http://imtc.firstport.ie/default.aspx">IMTC conference</a> in Dublin. Thanks to the organiser for running the event, it seemed to go well.</p>
<p>I hope I managed to answer any questions my sessions on SQL BI and Scrum raised; if not, send a message via this blog. I suspect there might be a few from the Scrum session as it seems the cinema (where the conference was held) had a movie scheduled to start at 6pm, the same time I was scheduled to finish, so there was not much chance to stand around for a Q&amp;A and chat.</p>
<p>Keep an eye on the <a href="http://imtc.firstport.ie/default.aspx">IMTC conference</a> web site for webcasts of the sessions and other materials. Also I am sure photos and videos will appear on other speakers&rsquo; blogs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>The youth of today</title>
      <link>https://blog.richardfennell.net/posts/the-youth-of-today/</link>
      <pubDate>Tue, 01 Apr 2008 15:05:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-youth-of-today/</guid>
      <description>&lt;p&gt;It is the school holidays, so I am sitting in a children&amp;rsquo;s indoor jungle gym sorting out my &lt;a href=&#34;http://imtc.firstport.ie/agenda.aspx&#34;&gt;slides for Ireland&lt;/a&gt; while my son rushes around.&lt;/p&gt;
&lt;p&gt;Having someone working on a laptop here seems to be a bit of a novelty. I have had a number of children rush over to see what I am doing. They don&amp;rsquo;t stay watching long when they spot the VPC of SQL 2008 and a PowerPoint stack.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It is the school holidays, so I am sitting in a children&rsquo;s indoor jungle gym sorting out my <a href="http://imtc.firstport.ie/agenda.aspx">slides for Ireland</a> while my son rushes around.</p>
<p>Having someone working on a laptop here seems to be a bit of a novelty. I have had a number of children rush over to see what I am doing. They don&rsquo;t stay watching long when they spot the VPC of SQL 2008 and a PowerPoint stack.</p>
<p>What does it take to excite the youth of today? I despair if data structures do not interest them; whatever will - compiler theory?</p>
]]></content:encoded>
    </item>
    <item>
      <title>More sessions in Ireland next week</title>
      <link>https://blog.richardfennell.net/posts/more-sessions-in-ireland-next-week/</link>
      <pubDate>Thu, 27 Mar 2008 14:14:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/more-sessions-in-ireland-next-week/</guid>
      <description>&lt;p&gt;Just found out that Robert and I are both now doing two sessions at the Irish Microsoft Technologies Conference next week.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Thursday&lt;/strong&gt;&lt;br&gt;
13:45 - &lt;a href=&#34;http://imtc.firstport.ie/lecture.aspx?lid=128&#34;&gt;&lt;strong&gt;BI in SQL 08 (Richard Fennell)&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
16:45 - &lt;a href=&#34;http://imtc.firstport.ie/lecture.aspx?lid=141&#34;&gt;&lt;strong&gt;Microsoft Volta: Next Gen .NET Client-Server Apps (Robert Hogg)&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Friday&lt;/strong&gt;&lt;br&gt;
10:00 - &lt;a href=&#34;http://imtc.firstport.ie/lecture.aspx?lid=134&#34;&gt;&lt;strong&gt;Overview of Business Process Automation and implementing an Enterprise Service Bus using Microsoft BizTalk Server 2006 R2 (Robert Hogg)&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
16:45 - &lt;a href=&#34;http://imtc.firstport.ie/lecture.aspx?lid=140&#34;&gt;&lt;strong&gt;What is Scrum? (Richard Fennell)&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just found out that Robert and I are both now doing two sessions at the Irish Microsoft Technologies Conference next week.</p>
<p><strong>Thursday</strong><br>
13:45 - <a href="http://imtc.firstport.ie/lecture.aspx?lid=128"><strong>BI in SQL 08 (Richard Fennell)</strong></a><br>
16:45 - <a href="http://imtc.firstport.ie/lecture.aspx?lid=141"><strong>Microsoft Volta: Next Gen .NET Client-Server Apps (Robert Hogg)</strong></a></p>
<p><strong>Friday</strong><br>
10:00 - <a href="http://imtc.firstport.ie/lecture.aspx?lid=134"><strong>Overview of Business Process Automation and implementing an Enterprise Service Bus using Microsoft BizTalk Server 2006 R2 (Robert Hogg)</strong></a><br>
16:45 - <a href="http://imtc.firstport.ie/lecture.aspx?lid=140"><strong>What is Scrum? (Richard Fennell)</strong></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>New style web applications</title>
      <link>https://blog.richardfennell.net/posts/new-style-web-applications/</link>
      <pubDate>Wed, 26 Mar 2008 20:51:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-style-web-applications/</guid>
      <description>&lt;p&gt;With the advent of Silverlight we are going to see more and more web-delivered applications. A good example of what we can expect is the Microsoft &lt;strong&gt;&lt;a href=&#34;http://www.microsoft.com/virtualevents/uk/vle.aspx&#34;&gt;HEROES happen {here}&lt;/a&gt;&lt;/strong&gt; virtual launch site. If you did not make it to the event it is well worth a look, with all the slides and videos of the sessions.&lt;/p&gt;
&lt;p&gt;This site makes heavy use of multimedia, and it may be just me, but I find it hard to find things; it looks great, but I don&amp;rsquo;t like having to guess where the menu might be. In general I want information fast, not to sit through an intro video; but this could just be because I am a developer and not a designer. I think this is especially true of sites you go back to again and again. It strikes me that for sites like this, search becomes the key if you are ever to find relevant material.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With the advent of Silverlight we are going to see more and more web-delivered applications. A good example of what we can expect is the Microsoft <strong><a href="http://www.microsoft.com/virtualevents/uk/vle.aspx">HEROES happen {here}</a></strong> virtual launch site. If you did not make it to the event it is well worth a look, with all the slides and videos of the sessions.</p>
<p>This site makes heavy use of multimedia, and it may be just me, but I find it hard to find things; it looks great, but I don&rsquo;t like having to guess where the menu might be. In general I want information fast, not to sit through an intro video; but this could just be because I am a developer and not a designer. I think this is especially true of sites you go back to again and again. It strikes me that for sites like this, search becomes the key if you are ever to find relevant material.</p>
<p>I heard about a site that bridges the old and new worlds of web delivery on the radio today: <a href="http://mydeco.com/" title="http://mydeco.com/">mydeco.com</a>, from <a href="http://www.lastminute.com/">lastminute.com</a> co-founder <a href="http://en.wikipedia.org/wiki/Brent_Hoberman">Brent Hoberman</a>. It is Flash-based as opposed to Silverlight, but that is not the point. It allows you to visualize rooms in your own house with a vast range of home furnishings. This site seems easy to use and navigation is simple - maybe this is a better indicator of the next step in web-delivered applications. Well worth a look.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New release of VSTS Scrum Process Template</title>
      <link>https://blog.richardfennell.net/posts/new-release-of-vsts-scrum-process-template/</link>
      <pubDate>Wed, 26 Mar 2008 20:29:26 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-release-of-vsts-scrum-process-template/</guid>
      <description>&lt;p&gt;I see there has been a release of the VSTS Scrum Process Template for VS2008 on &lt;a href=&#34;http://www.codeplex.com/VSTSScrum/Release/ProjectReleases.aspx?ReleaseId=11972&#34;&gt;CodePlex&lt;/a&gt;, adding a way to run Scrum projects with the current release of TFS without the &lt;a href=&#34;http://www.sharepointblogs.com/johnwpowell/archive/2007/09/29/how-to-install-microsoft-escrum-1-0-process-template-on-tfs-2008-beta-2-quot-orcas-quot.aspx&#34;&gt;fiddling required for eScrum&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It looks good, but for me it is missing the best part of eScrum: the web site that allows quick updating of the Scrum project without the need for Visual Studio. If you are going to be doing Scrum you don&amp;rsquo;t want to have to spend loads of time on electronic scrum board administration.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I see there has been a release of the VSTS Scrum Process Template for VS2008 on <a href="http://www.codeplex.com/VSTSScrum/Release/ProjectReleases.aspx?ReleaseId=11972">CodePlex</a>, adding a way to run Scrum projects with the current release of TFS without the <a href="http://www.sharepointblogs.com/johnwpowell/archive/2007/09/29/how-to-install-microsoft-escrum-1-0-process-template-on-tfs-2008-beta-2-quot-orcas-quot.aspx">fiddling required for eScrum</a>.</p>
<p>It looks good, but for me it is missing the best part of eScrum: the web site that allows quick updating of the Scrum project without the need for Visual Studio. If you are going to be doing Scrum you don&rsquo;t want to have to spend loads of time on electronic scrum board administration.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Dot Net Rocks - Barry Dorrans revisits Security with OpenID and Cardspace</title>
      <link>https://blog.richardfennell.net/posts/dot-net-rocks-barry-dorrans-revisits-security-with-openid-and-cardspace/</link>
      <pubDate>Thu, 20 Mar 2008 15:07:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/dot-net-rocks-barry-dorrans-revisits-security-with-openid-and-cardspace/</guid>
      <description>&lt;p&gt;Join the surreal world of Barry Dorrans on &lt;a href=&#34;http://www.dotnetrocks.com/default.aspx?showNum=325&#34;&gt;DNR&lt;/a&gt; - if you listen long enough you could even hear something about Cardspace.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Join the surreal world of Barry Dorrans on <a href="http://www.dotnetrocks.com/default.aspx?showNum=325">DNR</a> - if you listen long enough you could even hear something about Cardspace.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Installing SQL 2008 Feb CTP on a VPC</title>
      <link>https://blog.richardfennell.net/posts/installing-sql-2008-feb-ctp-on-a-vpc/</link>
      <pubDate>Sun, 16 Mar 2008 21:41:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/installing-sql-2008-feb-ctp-on-a-vpc/</guid>
      <description>&lt;p&gt;I have been trying to install the Feb CTP of SQL 2008 on a Virtual PC, but kept getting the following error:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;SQL Server Browser Install for feature &amp;lsquo;SQL_Browser_Redist_SqlBrowser_Cpu32&amp;rsquo; failed with exception System.InvalidOperationException: This access control list is not in canonical form and therefore cannot be modified.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Turns out the problem was that I had run &lt;a href=&#34;http://technet.microsoft.com/en-us/sysinternals/bb897418.aspx&#34;&gt;Sysinternals NewSID&lt;/a&gt; on the VPC, as it was based on a diff disk off our standard W2K3 test install and I wanted it not to clash with other test installs. NewSID leaves the registry in a state that causes the SQL installer problems. To get round it you need to run&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been trying to install the Feb CTP of SQL 2008 on a Virtual PC, but kept getting the following error:</p>
<p><em>SQL Server Browser Install for feature &lsquo;SQL_Browser_Redist_SqlBrowser_Cpu32&rsquo; failed with exception System.InvalidOperationException: This access control list is not in canonical form and therefore cannot be modified.</em></p>
<p>Turns out the problem was that I had run <a href="http://technet.microsoft.com/en-us/sysinternals/bb897418.aspx">Sysinternals NewSID</a> on the VPC, as it was based on a diff disk off our standard W2K3 test install and I wanted it not to clash with other test installs. NewSID leaves the registry in a state that causes the SQL installer problems. To get round it you need to run</p>
<p><strong>secedit /configure /db setupsecurity.sdb /cfg &quot;c:\windows\security\templates\setup security.inf&quot; /verbose</strong></p>
<p>as detailed in <a href="http://support.microsoft.com/kb/942517" title="http://support.microsoft.com/kb/942517">http://support.microsoft.com/kb/942517</a>.</p>
<p>Once this is done, the installer can be run without any problems.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A couple of speaking engagements</title>
      <link>https://blog.richardfennell.net/posts/a-couple-of-speaking-engagements/</link>
      <pubDate>Wed, 12 Mar 2008 09:20:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-couple-of-speaking-engagements/</guid>
      <description>&lt;p&gt;I will be speaking at the &lt;a href=&#34;http://imtc.firstport.ie/bio.aspx?sid=128&#34;&gt;Irish Microsoft Technology Conference&lt;/a&gt; in early April on the Business Intelligence features in SQL 2008.&lt;/p&gt;
&lt;p&gt;I also just found out my Continuous Integration session at &lt;a href=&#34;http://developerdayscotland.com/main/Default.aspx&#34;&gt;Developer Day Scotland&lt;/a&gt; has been successful in the vote, thanks to everyone who voted for it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking at the <a href="http://imtc.firstport.ie/bio.aspx?sid=128">Irish Microsoft Technology Conference</a> in early April on the Business Intelligence features in SQL 2008.</p>
<p>I also just found out my Continuous Integration session at <a href="http://developerdayscotland.com/main/Default.aspx">Developer Day Scotland</a> has been successful in the vote, thanks to everyone who voted for it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>CruiseControl &amp;amp; MSTest from Visual Studio 2008</title>
      <link>https://blog.richardfennell.net/posts/cruisecontrol-mstest-from-visual-studio-2008/</link>
      <pubDate>Tue, 11 Mar 2008 20:38:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/cruisecontrol-mstest-from-visual-studio-2008/</guid>
      <description>&lt;p&gt;Ages ago &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/06/14/5255.aspx&#34;&gt;I posted on using MSTest and CruiseControl .NET&lt;/a&gt; with VS2005. As I am presenting tomorrow to the &lt;a href=&#34;http://xpclub.erudine.com/2008/03/march-12th-meeting-but-it-works-on-my.html&#34;&gt;Yorkshire Extreme Programming Club&lt;/a&gt; on CC.Net I thought it a good idea to revisit this subject with VS2008.&lt;/p&gt;
&lt;p&gt;Well, basically nothing has changed; the old ccnet.config I detailed still works. However, I discovered that you no longer really need the block to delete the TestProject.TRX file, as it seems the 2008 MSTest.EXE can overwrite an existing test results file.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Ages ago <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/06/14/5255.aspx">I posted on using MSTest and CruiseControl .NET</a> with VS2005. As I am presenting tomorrow to the <a href="http://xpclub.erudine.com/2008/03/march-12th-meeting-but-it-works-on-my.html">Yorkshire Extreme Programming Club</a> on CC.Net I thought it a good idea to revisit this subject with VS2008.</p>
<p>Well, basically nothing has changed; the old ccnet.config I detailed still works. However, I discovered that you no longer really need the block to delete the TestProject.TRX file, as it seems the 2008 MSTest.EXE can overwrite an existing test results file.</p>
<p>However, we don&rsquo;t get away without any changes. The one area that has changed is the format of the MSTest results file. The MSTestSummary.XSL and MSTestReport.XSL files shipped with CC.Net 1.3, which are used to build the Web Dashboard, just give blank pages. I also checked the 1.4 beta of CC.Net, and it has the same version of the XSL files.</p>
<p>Now a few people have posted on this problem, but up to now there appear to have been no public XSL files for the current V9.x version of MSTest results files. To address this problem Robert Hancock at Black Marble edited the <a href="http://blogs.blackmarble.co.uk/files/samples">MSTestSummary.xsl</a> file, and using his work I managed to sort the <a href="http://blogs.blackmarble.co.uk/files/samples">MSTestReport.xsl</a> file. Both of these files I have posted on this server in a <a href="http://blogs.blackmarble.co.uk/files/samples">single zip</a>. Once these files are copied into the <strong>CruiseControl.Net\webdashboard\xsl</strong> directory the MSTest reports should leap into life. Obviously only after you have set up all the other publishing bits I dealt with in my <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2006/06/14/5255.aspx">previous post</a>.</p>
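<p>For anyone recreating this setup, the wiring is roughly as follows. This is a minimal sketch only: the project name, paths and file names are illustrative assumptions, not taken from my actual ccnet.config. It shows MSTest being run and its TRX file merged into the build log so the dashboard XSL files have something to render.</p>

```xml
<!-- Minimal ccnet.config sketch (illustrative names and paths). -->
<project name="MyProject">
  <tasks>
    <!-- Run the tests; the VS2008 MSTest.exe will overwrite an existing TRX file -->
    <exec>
      <executable>C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe</executable>
      <baseDirectory>C:\Builds\MyProject\bin</baseDirectory>
      <buildArgs>/testcontainer:TestProject.dll /resultsfile:TestProject.trx</buildArgs>
    </exec>
  </tasks>
  <publishers>
    <!-- Merge the TRX results into the build log so the Web Dashboard XSL can show them -->
    <merge>
      <files>
        <file>C:\Builds\MyProject\bin\TestProject.trx</file>
      </files>
    </merge>
    <!-- xmllogger must come after the merge so the merged TRX data is written out -->
    <xmllogger />
  </publishers>
</project>
```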
]]></content:encoded>
    </item>
    <item>
      <title>TFS &#39;Invalid File Handle&#39; when getting files</title>
      <link>https://blog.richardfennell.net/posts/tfs-invalid-file-handle-when-getting-files/</link>
      <pubDate>Tue, 11 Mar 2008 12:35:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-invalid-file-handle-when-getting-files/</guid>
      <description>&lt;p&gt;I was recently working at a client&amp;rsquo;s site where they were using TFS 2008 in a dual server setup. When getting large numbers of files (e.g. a Get Latest for the whole solution of 20+ projects) they were intermittently seeing &amp;lsquo;Invalid file handle&amp;rsquo; errors. However, if they selected a smaller set of files, or just retried, it would often work. We could spot no major pattern other than volume of files.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was recently working at a client&rsquo;s site where they were using TFS 2008 in a dual server setup. When getting large numbers of files (e.g. a Get Latest for the whole solution of 20+ projects) they were intermittently seeing &lsquo;Invalid file handle&rsquo; errors. However, if they selected a smaller set of files, or just retried, it would often work. We could spot no major pattern other than volume of files.</p>
<p>Interestingly remote users, via a relatively slow ADSL based VPN, did not see the problem at all.</p>
<p>Because the VPN users were working fine, we thought the problem might be the Application Tier (AT) being overloaded by the high-speed local LAN; so we tried throttling the IIS IO, but this had no effect.</p>
<p>Now the one different thing about this site was that the AT was running in a virtual environment. At Black Marble we run our AT on Virtual Server with no problem, but at this site they were using <a href="http://www.parallels.com/en/products/virtuozzo/?from=homepage">Parallels&rsquo; Virtuozzo</a>, which is a somewhat different style of product to VMWare or VirtualPC.</p>
<p>The AT was showing no errors (other than the logged file handle error) to suggest a loading problem. However, when Virtuozzo was told to give more resources by default to the AT, the errors were greatly reduced. I don&rsquo;t think this is a complete solution yet, but it is certainly a step in the right direction.</p>
]]></content:encoded>
    </item>
    <item>
      <title>After months on backorder my HTC Cruise arrives, was it worth the wait?</title>
      <link>https://blog.richardfennell.net/posts/after-months-on-backorder-my-htc-cruise-arrives-was-it-worth-the-wait/</link>
      <pubDate>Tue, 11 Mar 2008 12:15:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/after-months-on-backorder-my-htc-cruise-arrives-was-it-worth-the-wait/</guid>
      <description>&lt;p&gt;What seems many months ago I ordered an &lt;a href=&#34;http://www.htc.com/product/03-product_htctouch_cruise.htm&#34;&gt;HTC Cruise&lt;/a&gt; from &lt;a href=&#34;http://www.expansys.com/p.aspx?i=159608&#34;&gt;Expansys&lt;/a&gt;. My old &lt;a href=&#34;http://www.myqtek.com/europe/products/s100.aspx&#34;&gt;QTEK S100&lt;/a&gt; was literally falling apart. The new phone was supposedly on 4-day delivery; well, it took the best part of 4 months to arrive (don&amp;rsquo;t you just hate online stock levels and delivery dates that are just wrong), but was the wait worth it?&lt;/p&gt;
&lt;p&gt;Well, I have been using it for a couple of weeks and in general I would say yes, it was worth the wait. As with any of these phone/PDA devices some features can be compromised; they are never the best phones in the world. If you want a great phone with a long battery life, get something like an old Nokia 6310 from eBay. That said, on first impressions the HTC &lt;a href=&#34;http://www.microsoft.com/windowsmobile/6/default.mspx&#34;&gt;Mobile 6&lt;/a&gt; package with their &lt;a href=&#34;http://www.htctouch.com/&#34;&gt;Touch&lt;/a&gt; interface does seem a good all-rounder.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>What seems many months ago I ordered an <a href="http://www.htc.com/product/03-product_htctouch_cruise.htm">HTC Cruise</a> from <a href="http://www.expansys.com/p.aspx?i=159608">Expansys</a>. My old <a href="http://www.myqtek.com/europe/products/s100.aspx">QTEK S100</a> was literally falling apart. The new phone was supposedly on 4-day delivery; well, it took the best part of 4 months to arrive (don&rsquo;t you just hate online stock levels and delivery dates that are just wrong), but was the wait worth it?</p>
<p>Well, I have been using it for a couple of weeks and in general I would say yes, it was worth the wait. As with any of these phone/PDA devices some features can be compromised; they are never the best phones in the world. If you want a great phone with a long battery life, get something like an old Nokia 6310 from eBay. That said, on first impressions the HTC <a href="http://www.microsoft.com/windowsmobile/6/default.mspx">Mobile 6</a> package with their <a href="http://www.htctouch.com/">Touch</a> interface does seem a good all-rounder.</p>
<p>For me this phone does all I currently need: WiFi, GPS and, most importantly, 3G. Unlike previous Windows Mobile versions, it does seem easy to manage which communication features are on and off using the new Comms Manager. Also, you can now make the phone a 3G USB/Bluetooth modem with a single button press.</p>
<p>I also like the fact my old Jabra Bluetooth headset can now do voice dialling again, a feature that was not present on the S100&rsquo;s Second Edition operating system. Microsoft in their wisdom removed voice dialling as this was a PDA and not a phone!</p>
<p>I do have a niggle, one I have seen with all Microsoft PDAs I have had: if you put the phone on vibrate then back to normal ring, it seems to reset the volume to low/off, so you don&rsquo;t hear it ring. My tip: always check the volume when you come out of vibrate-only mode.</p>
<p>Overall opinion - the HTC Cruise is well worth a look as a general-use PDA, especially now all the mobile providers offer competitive data packages.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Silverlight 2 SDK Beta 1 - Control Samples</title>
      <link>https://blog.richardfennell.net/posts/silverlight-2-sdk-beta-1-control-samples/</link>
      <pubDate>Fri, 07 Mar 2008 16:10:54 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/silverlight-2-sdk-beta-1-control-samples/</guid>
      <description>&lt;p&gt;Interesting to see that the samples of &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?FamilyID=ea93dd89-3af2-4acb-9cf4-bfe01b3f02d4&amp;amp;displaylang=en&#34;&gt;Silverlight controls&lt;/a&gt; have been shipped with associated MSTest projects. I think this is a first for Microsoft.&lt;/p&gt;
&lt;p&gt;Now this is a great way to see the intention behind the way any bit of code was designed. A good test set is of far more use than most documentation, as it truly matches the code features. If you are doing TDD then it cannot get stale.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Interesting to see that the samples of <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=ea93dd89-3af2-4acb-9cf4-bfe01b3f02d4&amp;displaylang=en">Silverlight controls</a> have been shipped with associated MSTest projects. I think this is a first for Microsoft.</p>
<p>Now this is a great way to see the intention behind the way any bit of code was designed. A good test set is of far more use than most documentation, as it truly matches the code features. If you are doing TDD then it cannot get stale.</p>
]]></content:encoded>
    </item>
    <item>
      <title>March 2008 meeting of the Yorkshire Extreme Programming Club</title>
      <link>https://blog.richardfennell.net/posts/march-2008-meeting-of-the-yorkshire-extreme-programming-club/</link>
      <pubDate>Mon, 03 Mar 2008 18:28:11 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/march-2008-meeting-of-the-yorkshire-extreme-programming-club/</guid>
      <description>&lt;p&gt;In a change from the usual venue in Leeds, the next meeting of the &lt;a href=&#34;http://xpclub.erudine.com/&#34;&gt;Extreme Programming club&lt;/a&gt; will be at Black Marble&amp;rsquo;s office in Bradford.&lt;/p&gt;
&lt;p&gt;I will be giving an updated version of my DDD4 presentation on &lt;a href=&#34;http://confluence.public.thoughtworks.org/display/CCNET/Welcome&amp;#43;to&amp;#43;CruiseControl.NET&#34;&gt;Cruise Control .NET&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The fun starts at 7pm on Wednesday the 12th of March. It is free and open to all - see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In a change from the usual venue in Leeds, the next meeting of the <a href="http://xpclub.erudine.com/">Extreme Programming club</a> will be at Black Marble&rsquo;s office in Bradford.</p>
<p>I will be giving an updated version of my DDD4 presentation on <a href="http://confluence.public.thoughtworks.org/display/CCNET/Welcome&#43;to&#43;CruiseControl.NET">Cruise Control .NET</a>.</p>
<p>The fun starts at 7pm on Wednesday the 12th of March. It is free and open to all - see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SQLBits II</title>
      <link>https://blog.richardfennell.net/posts/sqlbits-ii/</link>
      <pubDate>Sun, 02 Mar 2008 22:10:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sqlbits-ii/</guid>
      <description>&lt;p&gt;I was at &lt;a href=&#34;http://www.sqlbits.com/default.aspx&#34;&gt;SQLBits II in Birmingham&lt;/a&gt; yesterday, a great venue and great sessions. I particularly enjoyed &lt;a href=&#34;http://www.simple-talk.com/community/blogs/andras/default.aspx&#34;&gt;András Belokosztolszki&amp;rsquo;s&lt;/a&gt; session on transient data in SQL, giving an insight on when as a developer it is appropriate to use different techniques to manage data.&lt;/p&gt;
&lt;p&gt;My session on Visual Studio for Database Professionals seemed to be well received. However, I did sense some resistance from people (who I assume were DBAs as opposed to developers) who did not like the idea of a world where DB schema control is not done from within SQL Management Studio, but in a revision-controlled offline environment. Maybe some DBAs don&amp;rsquo;t want to be part of the larger developer family on a project?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was at <a href="http://www.sqlbits.com/default.aspx">SQLBits II in Birmingham</a> yesterday, a great venue and great sessions. I particularly enjoyed <a href="http://www.simple-talk.com/community/blogs/andras/default.aspx">András Belokosztolszki&rsquo;s</a> session on transient data in SQL, giving an insight on when as a developer it is appropriate to use different techniques to manage data.</p>
<p>My session on Visual Studio for Database Professionals seemed to be well received. However, I did sense some resistance from people (who I assume were DBAs as opposed to developers) who did not like the idea of a world where DB schema control is not done from within SQL Management Studio, but in a revision-controlled offline environment. Maybe some DBAs don&rsquo;t want to be part of the larger developer family on a project?</p>
<p>I have uploaded <a href="http://www.blackmarble.co.uk/ConferencePapers/SQLBits%20II%20Presentation%20-Development%20Life%20Cycle%20using%20Visual%20Studio%20Team%20Edition.ppt">my slide stack to the Black Marble web site</a>, and a copy will also appear on the SQLBits site soon.</p>
<p>Finally, I must again give my thanks to the organisers of SQLBits for all their work in making this event such a success.</p>
]]></content:encoded>
    </item>
    <item>
      <title>&#39;Aah, VSTO!&#39; ... cooking up an OBA solution</title>
      <link>https://blog.richardfennell.net/posts/aah-vsto-cooking-up-an-oba-solution/</link>
      <pubDate>Sat, 23 Feb 2008 17:01:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/aah-vsto-cooking-up-an-oba-solution/</guid>
      <description>&lt;p&gt;I have been doing some Visual Studio Tools for Office (VSTO) development in Word of late, not exactly a pain free experience.&lt;/p&gt;
&lt;p&gt;First thing to say is that even given all the marketing spiel, VSTO is not that different from VBA in older versions of Office.&lt;/p&gt;
&lt;p&gt;To get going I created a new VSTO 2008 Word 2007 Template project based on an import of our old Word template (to get styles, layout, etc.). I then basically cut and pasted the logic from our old VBA macros into an ActionPane in the new one, and it just worked.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been doing some Visual Studio Tools for Office (VSTO) development in Word of late, not exactly a pain free experience.</p>
<p>First thing to say is that even given all the marketing spiel, VSTO is not that different from VBA in older versions of Office.</p>
<p>To get going I created a new VSTO 2008 Word 2007 Template project based on an import of our old Word template (to get styles, layout, etc.). I then basically cut and pasted the logic from our old VBA macros into an ActionPane in the new one, and it just worked.</p>
<p>Well, I said it just worked; out of habit I had picked a C# project, when I should have chosen VB.NET (as is commonly recommended for VSTO development), as then the code would have been virtually correct. As it was, I had to spend a while adding {} and ; to do the language port.</p>
<p>So why did this work? VSTO is new(ish) and shiny, and VBA is old and VB. However, they are both in effect wrapper APIs over the underlying Office COM Interop layer, so what they can do and how they do it is dependent on Office, in my case Word 2007. OK, you can do different things in Word 2007 than in Word 2.0, but it is still in essence a word processor where you manage content with ranges.</p>
<p>The reason people recommend VB.NET for VSTO development is due to the COM Interop. Visual Basic has always made it far easier to use COM objects, not least due to optional parameters and variant types.</p>
<p>Now, as of yesterday some of these problems are addressed by the release of the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyId=46B6BF86-E35D-4870-B214-4D7B72B02BF9&amp;displaylang=en">VSTO Power Tools</a> and its COM wrapper API, but this API will certainly not address all issues.</p>
<p>Another thing that has not changed over the years is the documentation; the MSDN references are OK, but there is little else, especially when doing integration work with Sharepoint WebServices as I was. I found I had to make heavy use of the debugger to investigate the XMLNode object being returned, to work out what was going on.</p>
<p>In Word development I have commonly found that initially there are things I think I can&rsquo;t do because the object I need does not appear to be exposed by the object model, only to find what I am looking for in a place that is not that obvious to me. However, you still have to use the object in some &lsquo;dark arts&rsquo; style to get the effect you require. A classic example of this is programmatically managing the document cover page from the building block gallery. To get this to work there are many hoops to jump through.</p>
<p>VSTO is powerful, but not as flexible as you would hope, as you have to live inside the design metaphor of the Office application you are hosted in. You have to consider carefully if the functionality you are trying to add is really relevant to the application it will be hosted in.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD Ireland Agenda published</title>
      <link>https://blog.richardfennell.net/posts/ddd-ireland-agenda-published/</link>
      <pubDate>Mon, 18 Feb 2008 20:11:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-ireland-agenda-published/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.dddireland.com/agenda-1.htm&#34;&gt;agenda&lt;/a&gt; has been published for &lt;strong&gt;&lt;a href=&#34;http://www.dddireland.com/&#34;&gt;DeveloperDeveloperDeveloper! Day - Ireland&lt;/a&gt;&lt;/strong&gt;, which is being held on Saturday May 3rd 2008 in &lt;a href=&#34;http://www.gmit.ie/directions-gmit-galway.html&#34;&gt;Galway&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I can&amp;rsquo;t make it to the event, but it does look well worth the trip. Keep an eye on the &lt;a href=&#34;http://www.dddireland.com/rss.xml&#34;&gt;conference web site RSS feed&lt;/a&gt; to see when registration opens.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/DDDLogo_2.gif&#34;&gt;&lt;img alt=&#34;DDDLogo&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/DDDLogo_thumb.gif&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.dddireland.com/agenda-1.htm">agenda</a> has been published for <strong><a href="http://www.dddireland.com/">DeveloperDeveloperDeveloper! Day - Ireland</a></strong>, which is being held on Saturday May 3rd 2008 in <a href="http://www.gmit.ie/directions-gmit-galway.html">Galway</a>.</p>
<p>I can&rsquo;t make it to the event, but it does look well worth the trip. Keep an eye on the <a href="http://www.dddireland.com/rss.xml">conference web site RSS feed</a> to see when registration opens.</p>
<p><a href="/wp-content/uploads/sites/2/historic/DDDLogo_2.gif"><img alt="DDDLogo" loading="lazy" src="/wp-content/uploads/sites/2/historic/DDDLogo_thumb.gif"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Sharepoint is like a shelf that is too short.....</title>
      <link>https://blog.richardfennell.net/posts/sharepoint-is-like-a-shelf-that-is-too-short/</link>
      <pubDate>Fri, 15 Feb 2008 20:52:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sharepoint-is-like-a-shelf-that-is-too-short/</guid>
      <description>&lt;p&gt;&lt;em&gt;You have all these books to put on a shelf and as you put one on at one end another one falls off at the other. You just can&amp;rsquo;t get them all on at the same time.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;What do I mean by this?&lt;/p&gt;
&lt;p&gt;Sharepoint has many features and with these come limitations due to the way features have been implemented. I am repeatedly finding that to use feature A means that you cannot use all of feature B. Historic choices can have a huge impact on what can be done in the future.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><em>You have all these books to put on a shelf and as you put one on at one end another one falls off at the other. You just can&rsquo;t get them all on at the same time.</em></p>
<p>What do I mean by this?</p>
<p>Sharepoint has many features and with these come limitations due to the way features have been implemented. I am repeatedly finding that to use feature A means that you cannot use all of feature B. Historic choices can have a huge impact on what can be done in the future.</p>
<p>A classic example is the choice between AD and Forms authentication. Many WebParts rely on one form of authentication or the other. Few will work with both; and using AD authentication as a back-end to Forms authentication (via LDAP) just raises new issues on top of the existing ones.</p>
<p>This means some careful planning and pilot studies need to be done early in any Sharepoint project to make sure that the set of features you want to use are compatible with each other and you are not painting yourself into a corner for the future.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Scottish Developers</title>
      <link>https://blog.richardfennell.net/posts/scottish-developers/</link>
      <pubDate>Wed, 13 Feb 2008 14:47:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/scottish-developers/</guid>
      <description>&lt;p&gt;Thanks to &lt;a href=&#34;http://blog.colinmackay.net&#34;&gt;Colin Mackay&lt;/a&gt; for organising the Scottish Developers event in Glasgow last night; as promised I have uploaded &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/Scottish%20Developers%20Presentation%20-%20Team%20Foundation%20Server.ppt&#34;&gt;the slides on TFS&lt;/a&gt; that I presented.&lt;/p&gt;
&lt;p&gt;I am already looking forward to seeing everyone again at &lt;a href=&#34;http://developerdayscotland.com/main/Default.aspx&#34;&gt;Developer Day Scotland&lt;/a&gt; which is also being held in Glasgow at the Caledonian University.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Thanks to <a href="http://blog.colinmackay.net">Colin Mackay</a> for organising the Scottish Developers event in Glasgow last night; as promised I have uploaded <a href="http://www.blackmarble.co.uk/ConferencePapers/Scottish%20Developers%20Presentation%20-%20Team%20Foundation%20Server.ppt">the slides on TFS</a> that I presented.</p>
<p>I am already looking forward to seeing everyone again at <a href="http://developerdayscotland.com/main/Default.aspx">Developer Day Scotland</a> which is also being held in Glasgow at the Caledonian University.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Voting opens Developer Day Scotland</title>
      <link>https://blog.richardfennell.net/posts/voting-opens-developer-day-scotland/</link>
      <pubDate>Mon, 11 Feb 2008 21:45:55 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/voting-opens-developer-day-scotland/</guid>
      <description>&lt;p&gt;Vote early vote often&amp;hellip;&amp;hellip;&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://developerdayscotland.com/main/Agenda/tabid/55/Default.aspx&#34;&gt;&lt;img alt=&#34;VoteDDS-small&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/VoteDDS-small_3.png&#34;&gt;&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Vote early vote often&hellip;&hellip;</p>
<p><a href="http://developerdayscotland.com/main/Agenda/tabid/55/Default.aspx"><img alt="VoteDDS-small" loading="lazy" src="/wp-content/uploads/sites/2/historic/VoteDDS-small_3.png"></a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Next meeting of the Extreme Programming Club</title>
      <link>https://blog.richardfennell.net/posts/next-meeting-of-the-extreme-programming-club/</link>
      <pubDate>Mon, 11 Feb 2008 21:39:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-meeting-of-the-extreme-programming-club/</guid>
      <description>&lt;p&gt;The next meeting of the extreme programming club is going to be held on &lt;strong&gt;Thursday the 14th of February 2008 at 7.00pm at the Victoria Hotel, Leeds.&lt;/strong&gt; &lt;a href=&#34;http://local.google.co.uk/maps?f=q&amp;amp;hl=en&amp;amp;geocode=&amp;amp;time=&amp;amp;date=&amp;amp;ttype=&amp;amp;q=victoria&amp;#43;hotel&amp;#43;LS1&amp;amp;ie=UTF8&amp;amp;ll=53.800549,-1.548729&amp;amp;spn=0.009175,0.028539&amp;amp;z=16&amp;amp;iwloc=A&amp;amp;om=1&#34;&gt;Get Directions »&lt;/a&gt;&lt;/p&gt;
&lt;h5 id=&#34;next-meeting-is-happening-on-thursday-14th-of-february&#34;&gt;Next meeting is happening on Thursday 14th of February&lt;/h5&gt;
&lt;p&gt;There are going to be two shorter, or rather more agile, talks:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&amp;ldquo;CRAP hits the fan - Change Risk Analyzer and Predictor&amp;rdquo;&lt;/strong&gt; - the new kid on the block of software metrics and static code analyzers, presented by Daniel Drozdzewski, a Java developer at Erudine. This presentation focuses on software metrics in general, or rather their failure, and proposes a gentle solution supported by examples in Java.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The next meeting of the extreme programming club is going to be held on <strong>Thursday the 14th of February 2008 at 7.00pm at the Victoria Hotel, Leeds.</strong> <a href="http://local.google.co.uk/maps?f=q&amp;hl=en&amp;geocode=&amp;time=&amp;date=&amp;ttype=&amp;q=victoria&#43;hotel&#43;LS1&amp;ie=UTF8&amp;ll=53.800549,-1.548729&amp;spn=0.009175,0.028539&amp;z=16&amp;iwloc=A&amp;om=1">Get Directions »</a></p>
<h5 id="next-meeting-is-happening-on-thursday-14th-of-february">Next meeting is happening on Thursday 14th of February</h5>
<p>There are going to be two shorter, or rather more agile, talks:</p>
<p><strong>&ldquo;CRAP hits the fan - Change Risk Analyzer and Predictor&rdquo;</strong> - the new kid on the block of software metrics and static code analyzers, presented by Daniel Drozdzewski, a Java developer at Erudine. This presentation focuses on software metrics in general, or rather their failure, and proposes a gentle solution supported by examples in Java.</p>
<p><strong>&ldquo;Zero tolerance for bugs&rdquo;</strong> - Ralph Williams, one of the very few agile testing gurus out there, with many years of experience as a test manager in various IT companies, is going to present his view on bugs and promises to support it with plenty of examples and anecdotes from the industry.</p>
]]></content:encoded>
    </item>
    <item>
      <title>New Banner Image</title>
      <link>https://blog.richardfennell.net/posts/new-banner-image/</link>
      <pubDate>Fri, 08 Feb 2008 17:35:35 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/new-banner-image/</guid>
      <description>&lt;p&gt;I felt it was time for a new banner image, my triathlon photo had been up a while. So I have used one of the new cartoons our designer Lauren has done for all the staff at Black Marble.&lt;/p&gt;
&lt;p&gt;Commemorative limited edition collectors plate will soon be available.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I felt it was time for a new banner image, my triathlon photo had been up a while. So I have used one of the new cartoons our designer Lauren has done for all the staff at Black Marble.</p>
<p>Commemorative limited edition collectors plate will soon be available.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Another declarative programming language to try to test</title>
      <link>https://blog.richardfennell.net/posts/another-declarative-programming-language-to-try-to-test/</link>
      <pubDate>Thu, 07 Feb 2008 09:39:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/another-declarative-programming-language-to-try-to-test/</guid>
      <description>&lt;p&gt;At the &lt;a href=&#34;http://www.altnetpedia.com/London%20Alt.Net.UK%202nd%20Feb%202008.ashx&#34;&gt;Alt.NET UK conference&lt;/a&gt; there was a good deal of discussion about tactics to test declarative languages such as XAML in WPF applications. Well it seems we have another one coming out of the &lt;a href=&#34;http://www.microsoft.com/soa/products/oslo.aspx&#34;&gt;Oslo&lt;/a&gt; project - &amp;lsquo;&lt;a href=&#34;http://blogs.zdnet.com/microsoft/?p=1159&#34;&gt;D&lt;/a&gt;&amp;rsquo;.&lt;/p&gt;
&lt;p&gt;Looks like there is no escape, testing of declarative languages is going to be a ripe area to develop tools and techniques.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>At the <a href="http://www.altnetpedia.com/London%20Alt.Net.UK%202nd%20Feb%202008.ashx">Alt.NET UK conference</a> there was a good deal of discussion about tactics to test declarative languages such as XAML in WPF applications. Well it seems we have another one coming out of the <a href="http://www.microsoft.com/soa/products/oslo.aspx">Oslo</a> project - &lsquo;<a href="http://blogs.zdnet.com/microsoft/?p=1159">D</a>&rsquo;.</p>
<p>Looks like there is no escape, testing of declarative languages is going to be a ripe area to develop tools and techniques.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Oh to be a tester....</title>
      <link>https://blog.richardfennell.net/posts/oh-to-be-a-tester/</link>
      <pubDate>Sun, 03 Feb 2008 21:34:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/oh-to-be-a-tester/</guid>
      <description>&lt;p&gt;Whilst at the &lt;a href=&#34;http://www.altnetuk.com/&#34;&gt;Alt.net conference&lt;/a&gt; it was pointed out that I have a different view of the role of a tester in a software development team to many other people. It seems a tester, to many people, is viewed as a person who follows manual test scripts and/or monitors automated systems, they are really part of the QA process not part of development.&lt;/p&gt;
&lt;p&gt;Now to me this is just wrong. I started, a good while ago, in electronics testing and yes we did have people who sat with test gear and checked circuit boards gave the right voltages etc.; but we also had a test development team who built the test harnesses, scripts and tools. It is this second group in my opinion that equates to software testers - they are developers who write code to enable testing. They might do some manual testing but as much as possible this should be automated; we have computers available to us, so make them do the repetitive test work whenever possible.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst at the <a href="http://www.altnetuk.com/">Alt.net conference</a> it was pointed out that I have a different view of the role of a tester in a software development team to many other people. It seems a tester, to many people, is viewed as a person who follows manual test scripts and/or monitors automated systems, they are really part of the QA process not part of development.</p>
<p>Now to me this is just wrong. I started, a good while ago, in electronics testing and yes we did have people who sat with test gear and checked circuit boards gave the right voltages etc.; but we also had a test development team who built the test harnesses, scripts and tools. It is this second group in my opinion that equates to software testers - they are developers who write code to enable testing. They might do some manual testing but as much as possible this should be automated; we have computers available to us, so make them do the repetitive test work whenever possible.</p>
<p>This all takes some explaining when recruiting as many people do see testing as a bit second rate, just something to do until you get a &lsquo;real&rsquo; development job. As you might guess I refute this idea, to me a test role (or should I call it test development role) is where you get to play with the cool tools. If you are doing main line of business development your life is all datasets, standard UI controls and web services. Most line of business work does not thrive on innovation so you are bound to be doing repetitive work.</p>
<p>But the test developers get to use reflection, write development tool extensions, play in edge conditions and innovate using things like generic algorithms and open source tools to test the boundary of what is possible. They get to think outside the box.</p>
<p>Also there is a great chance to show what you can do in the community. I would say community test tools and frameworks are one of the most active areas.</p>
<p>There is also the satisfaction of seeing the whole picture of a project, by the very nature of the build process and integration testing the test developer gets to play in every area.</p>
<p>Is this a view of testing I am alone in holding?</p>
<p>Have I convinced you all to become test developers?</p>
]]></content:encoded>
    </item>
    <item>
      <title>First Alt.Net.UK Conference thoughts</title>
      <link>https://blog.richardfennell.net/posts/first-alt-net-uk-conference-thoughts/</link>
      <pubDate>Sat, 02 Feb 2008 20:19:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/first-alt-net-uk-conference-thoughts/</guid>
      <description>&lt;p&gt;I am sitting on the train heading north. After deciding that trying to stream the England/Wales match from &lt;a href=&#34;http://www.bbc.co.uk/iplayer/&#34;&gt;iPlayer&lt;/a&gt; over the train WiFi was probably not a good idea, especially after just reading the score, I thought I should write about the &lt;a href=&#34;http://www.altnetuk.com/&#34;&gt;Alt.Net UK Conference&lt;/a&gt;; a very interesting experience. It is great to meet so many people who are so enthusiastic about their jobs and learning how to do them better. There were some faces I knew from &lt;a href=&#34;http://www.developerday.co.uk/ddd/default.asp&#34;&gt;DDD&lt;/a&gt; and other community events, but also many new faces. I was surprised by the distance some people had travelled, I met a good few from around Europe, but I think &lt;a href=&#34;http://weblogs.asp.net/rosherove/&#34;&gt;Roy&lt;/a&gt; won the prize with a trip from Israel.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am sitting on the train heading north. After deciding that trying to stream the England/Wales match from <a href="http://www.bbc.co.uk/iplayer/">iPlayer</a> over the train WiFi was probably not a good idea, especially after just reading the score, I thought I should write about the <a href="http://www.altnetuk.com/">Alt.Net UK Conference</a>; a very interesting experience. It is great to meet so many people who are so enthusiastic about their jobs and learning how to do them better. There were some faces I knew from <a href="http://www.developerday.co.uk/ddd/default.asp">DDD</a> and other community events, but also many new faces. I was surprised by the distance some people had travelled, I met a good few from around Europe, but I think <a href="http://weblogs.asp.net/rosherove/">Roy</a> won the prize with a trip from Israel.</p>
<p>One of the ideas of Alt.net, being an <a href="http://www.co-intelligence.org/P-Openspace.html">open spaces conference</a>, is to <em>be</em> <em>surprised</em> and I have certainly come away with things to think about, and a good set of tools to try. Anyone who was here will have heard me bang on about how Sharepoint is a pig to test and deploy; but I may have seen a distant light at the end of the tunnel (maybe an on-coming train I admit) and think I can apply <a href="http://en.wikipedia.org/wiki/Model-view-controller">MVC</a> to SP webparts (and hence <a href="http://en.wikipedia.org/wiki/Inversion_of_control">IOC</a>) which should make my testing life easier. Expect a post on this soon.</p>
<p>Another surprising thing for me was that people were vaguely interested in the <a href="http://www.codeplex.com/guitester">GUITester</a> framework I wrote a while ago that allows you to express GUI functional tests as attributes on WinForm GUI controls, thus trying to address some of the brittleness of GUI testing (a common discussion today: when you refactor code, if you alter a control definition at least the attributes are close by to edit).</p>
<p>Today for me was focused on best practice, like <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/11/07/teched-barcelona-update.aspx">so many events I have been to in the past year</a>. To me this is a great sign of maturity in our industry, people want to talk about how to start projects so they have the best chance to succeed and continue to improve as they go on. However, the question does remain how to engender this attitude in the (below?) average development team who are doing 9 to 5 and do not want to learn because that ended at University.</p>
<p>For more notes on what went on today, have a look at <a href="http://altnetpedia.com/" title="http://altnetpedia.com/">http://altnetpedia.com/</a>, attendees should be adding their notes over the next few days.</p>
<p>Anyway I am reaching Doncaster so I have to change to a non-WiFi enabled local train now, more later.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Registration for SQLBits II</title>
      <link>https://blog.richardfennell.net/posts/registration-for-sqlbits-ii/</link>
      <pubDate>Thu, 31 Jan 2008 23:04:27 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/registration-for-sqlbits-ii/</guid>
      <description>&lt;p&gt;It seems there are still some places left for &lt;a href=&#34;http://www.sqlserverfaq.com/sqlbits/&#34;&gt;SQLBits II&lt;/a&gt; in Birmingham on the 1st of March. If you want to go I suggest you register soon as these community events do fill up fast.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>It seems there are still some places left for <a href="http://www.sqlserverfaq.com/sqlbits/">SQLBits II</a> in Birmingham on the 1st of March. If you want to go I suggest you register soon as these community events do fill up fast.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Presenting at the first meeting of the Peterborough VBug group</title>
      <link>https://blog.richardfennell.net/posts/presenting-at-the-first-meeting-of-the-peterborough-vbug-group/</link>
      <pubDate>Thu, 31 Jan 2008 22:18:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/presenting-at-the-first-meeting-of-the-peterborough-vbug-group/</guid>
      <description>&lt;p&gt;I enjoyed presenting on TFS at the inaugural meeting of the &lt;a href=&#34;http://www.google.co.uk/search?hl=en&amp;amp;sa=X&amp;amp;oi=spell&amp;amp;resnum=0&amp;amp;ct=result&amp;amp;cd=1&amp;amp;q=peterborough&amp;amp;spell=1&#34;&gt;Peterborough&lt;/a&gt; VBug group last night. Thanks to &lt;a href=&#34;http://www.vbug.co.uk/info/Chairman.aspx&#34;&gt;Jyoti Majithia&lt;/a&gt; (VBug regional administrator) and &lt;a href=&#34;http://www.andrewwestgarth.co.uk/Blog&#34;&gt;Andy Westgarth&lt;/a&gt; (VBug chairman) for organising it. Hopefully this will be the start of another thriving user group community.&lt;/p&gt;
&lt;p&gt;If anyone wants a copy of the slide stack, there were no major changes between the one I used last night and the &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/DDD5%20Presentation%20-%20Team%20Foundation%20Server.ppt&#34;&gt;DDD5 one on our main web site&lt;/a&gt;. There is also a &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/DDD6%20Demo%20-%20EScum%20ScreenCast.wmv&#34;&gt;screencast demo of eScrum&lt;/a&gt; in the same archive.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I enjoyed presenting on TFS at the inaugural meeting of the <a href="http://www.google.co.uk/search?hl=en&amp;sa=X&amp;oi=spell&amp;resnum=0&amp;ct=result&amp;cd=1&amp;q=peterborough&amp;spell=1">Peterborough</a> VBug group last night. Thanks to <a href="http://www.vbug.co.uk/info/Chairman.aspx">Jyoti Majithia</a> (VBug regional administrator) and <a href="http://www.andrewwestgarth.co.uk/Blog">Andy Westgarth</a> (VBug chairman) for organising it. Hopefully this will be the start of another thriving user group community.</p>
<p>If anyone wants a copy of the slide stack, there were no major changes between the one I used last night and the <a href="http://www.blackmarble.co.uk/ConferencePapers/DDD5%20Presentation%20-%20Team%20Foundation%20Server.ppt">DDD5 one on our main web site</a>. There is also a <a href="http://www.blackmarble.co.uk/ConferencePapers/DDD6%20Demo%20-%20EScum%20ScreenCast.wmv">screencast demo of eScrum</a> in the same archive.</p>
<p>I was asked a couple of questions that I could not fully answer off the top of my head:</p>
<ul>
<li>If you have a Novell NDS based LAN how can you install TFS? The answer is TFS does not support NDS or LDAP for authentication. Also you cannot do a Dual Server TFS install as this requires an Active Directory. However a Single Server install will work if all the PCs (clients and servers) are in a Windows Workgroup (as well as the NDS domain).</li>
<li>Can I access TFS from inside the Microsoft Expression products or Macromedia products such as Dreamweaver? For Microsoft Expression the answer is strangely no - this seems a big oversight. Users would have to check files out via Team Explorer then load Expression. There is the <a href="http://www.microsoft.com/downloads/details.aspx?FamilyId=87E1FFBD-A484-4C3A-8776-D560AB1E6198&amp;displaylang=en">MSSCCI</a> add-in to allow third party tools to connect to TFS, <a href="http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=738771&amp;SiteID=1">but this does not appear to work</a> with DreamWeaver as it has its own means of connecting to source control such as VSS.</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Fun upgrading from Visual Studio TFS 2008 Beta2 to RTM</title>
      <link>https://blog.richardfennell.net/posts/fun-upgrading-from-visual-studio-tfs-2008-beta2-to-rtm/</link>
      <pubDate>Wed, 23 Jan 2008 21:31:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fun-upgrading-from-visual-studio-tfs-2008-beta2-to-rtm/</guid>
      <description>&lt;p&gt;A while ago I made the classic mistake of installing the Beta2 of TFS on our live servers after seeing posts that it was reliable enough for production use. I had stupidly assumed that there would be an upgrade path to the RTM; there usually is from the last beta, but not this time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TIP: Don&amp;rsquo;t ever do this yourself, only use betas in places where you can throw them away without any issues.&lt;/strong&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>A while ago I made the classic mistake of installing the Beta2 of TFS on our live servers after seeing posts that it was reliable enough for production use. I had stupidly assumed that there would be an upgrade path to the RTM; there usually is from the last beta, but not this time.</p>
<p><strong>TIP: Don&rsquo;t ever do this yourself, only use betas in places where you can throw them away without any issues.</strong></p>
<p>So to fix this problem you have to remove the beta before you can install the RTM. We run a dual server configuration with the application tier on a VPC so I backed up the TFS DBs on our central SQL server, made a copy of the VPC and enabled the undo disk. I then tried to remove the beta 2, and it failed with a message <em>DepCheck indicates Microsoft Visual Studio 2008 Team Foundation Server Beta 2 - ENU is not installed</em>.</p>
<p>I had a search and found a similar reference to the error in <a href="http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2048187&amp;SiteID=1">http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2048187&amp;SiteID=1</a> that suggests running</p>
<p>&ldquo;C:\Program Files\Microsoft Visual Studio 2008 Team Foundation Server\Microsoft Visual Studio 2008 Team Foundation Server Beta 2 - ENU\GetCurrentTfsProperties.exe&rdquo; &ldquo;%TEMP%\TfsProperties.ini&rdquo; &ldquo;Config&rdquo;</p>
<p>This, as expected, reported that it could not find all the properties required.</p>
<p>So I had a problem, I could not de-install the old version to put the new one on. But then I got to thinking - I am trying to get this server back to a clean empty state - why not just create a new one? Just like a disaster recovery situation.</p>
<p>So I&hellip;</p>
<ul>
<li>Built a new server and added it to our domain.</li>
<li>I installed WSS3 manually so that I could set the DB location to our dedicated SQL server.</li>
<li>I installed SQL reporting services and patched it, again pointing it to our SQL server.</li>
<li>I then ran the TFS 2008 installer, giving it the location of our DB server, as if just adding a new application tier.</li>
</ul>
<p>The problems then started. I got a <strong>TF220065 RS WMI RPC Server is unavailable error</strong> - which sounds like a reporting server install problem (which it is, I suppose). When you check the logs it turns out the installer has looked on the DB tier for the URL of the AT reporting services location. The DB still thought I was using the old AT server, so the installer was getting an error that the old server could not be found (because it was off). You can fix this by hand by editing the entry in the <strong>TfsIntegration.tbl_service_interface</strong> table on the DB server. Once this is done the install runs OK until you get <strong>TF53007 &amp; TF50801</strong> errors. I guess there is a manual fix for these too, but the better solution is to use the proper tool for registering the new server name, which fixes the follow-up errors as well. You can run this from the old TFS AT server, or I guess any PC.</p>
<blockquote>
<p><em>TfsAdminUtil activateAT &lt;New AT COMPUTERNAME&gt;</em></p></blockquote>
<p>Once this was run the setup ran to completion.</p>
<p>Now this meant I had a new AT pointing at our old DBs, so I could see all our old team projects, and I could access work items, reports and source control. However, for each team project I could not access the site portals or document repositories, but I expected this as I had recreated the whole of the WSS system with a new content DB. This was not an issue for us as we had chosen to use our main company MOSS 2007 system for documents, not the WSS install within TFS (the main reason for this choice being that for TFS2005 the portal was a WSS2.0 install and we had already moved to SP2007).</p>
<p>If you want to recreate the TFS portal sites go into the WSS Central Admin and create a site with the right name e.g. <a href="http://mysserver/site/project1">http://mysserver/site/project1</a> and an appropriate template. You then have to give the new site appropriate user rights and all will be OK i.e. you can see the portal and documents from within team explorer. Of course you could also restore a backup of your old install if you had important data.</p>
<p>So I was quite pleased with myself by this point, I had worked all this out by myself. I thought it was all working until I tried to create a new Team project, which failed at the SharePoint site creation step of the wizard. It failed irrespective of the template chosen. The error on the wizard dialog was TF30217 but the detailed log showed the real error was</p>
<p>TF30267: Exception: System.Web.Services.Protocols.SoapException: Exception of type &lsquo;Microsoft.SharePoint.SoapServer.SoapServerException&rsquo; was thrown.Event Description: TF30162: Task &ldquo;SharePointPortal&rdquo; from Group &ldquo;Portal&rdquo; failed<br>
Exception Type: Microsoft.TeamFoundation.Client.PcwException<br>
Exception Message: The Project Creation Wizard encountered an error while uploading documents to the Windows SharePoint Services server on [my server]<br>
Exception Details: The Project Creation Wizard encountered a problem while uploading documents to the Windows SharePoint Services server on [my server]. The reason for the failure cannot be determined at this time. Because the operation failed, the wizard was not able to finish</p>
<p>Now this proved to be a nightmare to fix, involving many hours of fiddling and calls to Microsoft support. As with so many problems of this type, the fix was trivial once we spotted it. The problem turned out to be due to alternate access mappings in SharePoint. In effect, when TFS creates a new site it is equivalent to the STSADM command</p>
<blockquote>
<p>stsadm -o createsite -url https://&lt;server&gt;/sites/projectsite -ownerlogin DOMAIN\user -owneremail user@domain.com -sitetemplate _GLOBAL_#1</p></blockquote>
<p>The key issue was that if I used the URL <a href="https://realservername/sites/projectsite">https://realservername/sites/projectsite</a> it worked, but if I used the full alias URL (which I need so the correct SSL wildcard certificate can be used) e.g. <a href="https://tfs.domain.co.uk/sites/projectsite">https://tfs.domain.co.uk/sites/projectsite</a> it failed, even though <strong>tfs</strong> is a cname alias for the host <strong>realservername</strong> on our DNS server. They should be equivalent.</p>
<p>Now I knew SharePoint/IIS needs to know about any extra names you give to servers. I had put an alternate access mapping in SharePoint to cover just this issue (via Central Admin, Operations), but I had made a mistake. I had added the alternate mapping as <a href="http://tfs.domain.co.uk/">http://tfs.domain.co.uk</a> - oops should have been <strong>https</strong>. Once this tiny change was made it leapt into life.</p>
<p>I have to say the errors in the TFS create log were not that helpful, and neither was the output of the new TFS Best Practice Analyzer. In the end the problem was found by creating many sites with STSADM.</p>
<p>I must say thanks to the persistence of Microsoft Developer Support in both India and the USA in getting this last issue sorted.</p>
<p>So now I have a fully working TFS 2008 install, I can start to have a look at the <a href="http://www.codeplex.com/pstfsconnector">TFS to Project Server connector</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SQLBits II (the sql)</title>
      <link>https://blog.richardfennell.net/posts/sqlbits-ii-the-sql/</link>
      <pubDate>Wed, 23 Jan 2008 10:25:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sqlbits-ii-the-sql/</guid>
      <description>&lt;p&gt;Just heard my session &lt;strong&gt;Development Life Cycle using Visual Studio Team Edition for DB Professionals&lt;/strong&gt; has been accepted for &lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;SQLBits II on the 1st of March&lt;/a&gt; in Birmingham.&lt;/p&gt;
&lt;p&gt;Thanks to everyone who voted for it, and I hope to see you there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;http://www.sqlbits.com/images/SQLBIts%20II%20LogoWithDate.png&#34; alt=&#34;SQLBits II logo&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just heard my session <strong>Development Life Cycle using Visual Studio Team Edition for DB Professionals</strong> has been accepted for <a href="http://www.sqlbits.com/">SQLBits II on the 1st of March</a> in Birmingham.</p>
<p>Thanks to everyone who voted for it, and I hope to see you there.</p>
<p><img alt="SQLBits II logo" loading="lazy" src="http://www.sqlbits.com/images/SQLBIts%20II%20LogoWithDate.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>SharePoint, TRACE.WriteLine and DebugView</title>
      <link>https://blog.richardfennell.net/posts/sharepoint-trace-writeline-and-debugview/</link>
      <pubDate>Tue, 22 Jan 2008 13:48:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sharepoint-trace-writeline-and-debugview/</guid>
      <description>&lt;p&gt;I have been debugging some SharePoint 2007 webparts and have had to resort to TRACE output and the &lt;a href=&#34;http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx&#34;&gt;SysInternal DebugView&lt;/a&gt;; it is like ASP pages in the 90s all over again!&lt;/p&gt;
&lt;p&gt;Anyway, I had a weird problem where only alternate TRACE.WriteLine calls were appearing in the viewer. I have not found a reason or a solution as yet - I suspect a buffer flushing issue - but you can work round it once you know it is happening.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been debugging some SharePoint 2007 webparts and have had to resort to TRACE output and the <a href="http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx">SysInternal DebugView</a>; it is like ASP pages in the 90s all over again!</p>
<p>Anyway, I had a weird problem where only alternate TRACE.WriteLine calls were appearing in the viewer. I have not found a reason or a solution as yet - I suspect a buffer flushing issue - but you can work round it once you know it is happening.</p>
<p>Update: <strong>Trace.AutoFlush = true</strong> is the fix - another &lsquo;user too stupid&rsquo; error!</p>
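<p>For completeness, the fix is a one-liner that needs to run before any trace output is written; a minimal C# sketch (not the actual webpart code, just an illustration):</p>
<pre><code>using System.Diagnostics;

// Without AutoFlush the trace listener can buffer output, which is why
// only alternate WriteLine calls were showing up in DebugView.
Trace.AutoFlush = true;
Trace.WriteLine("First message");
Trace.WriteLine("Second message"); // both now appear in DebugView</code></pre>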
]]></content:encoded>
    </item>
    <item>
      <title>Running the TFS Best Practice Analyzer and 2005 Team Explorer</title>
      <link>https://blog.richardfennell.net/posts/running-the-tfs-best-practice-analyzer-and-2005-team-explorer/</link>
      <pubDate>Wed, 16 Jan 2008 13:12:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/running-the-tfs-best-practice-analyzer-and-2005-team-explorer/</guid>
      <description>&lt;p&gt;If you run the &lt;a href=&#34;http://msdn2.microsoft.com/en-us/tfs2008/bb980963.aspx&#34;&gt;2008 Power Tools BPA&lt;/a&gt; against a TFS 2008 install that has the 2005 Team Explorer also installed on it you will get virtually no reports generated. It runs to completion but the reports are empty.&lt;/p&gt;
&lt;p&gt;This is due to some clash between the BPA and the TFS 2005 DLLs (V8.0). Remove the 2005 Team Explorer and the BPA works as expected.&lt;/p&gt;
&lt;p&gt;Now you are probably asking why I would have 2005 Team Explorer on a 2008 install? It is because a number of third party TFS add-ins, such as eScrum, require the 2005 DLLs either to install or to run. In theory the 2005 and 2008 installs are side by side, so this should not be an issue - but it is for the BPA.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you run the <a href="http://msdn2.microsoft.com/en-us/tfs2008/bb980963.aspx">2008 Power Tools BPA</a> against a TFS 2008 install that has the 2005 Team Explorer also installed on it you will get virtually no reports generated. It runs to completion but the reports are empty.</p>
<p>This is due to some clash between the BPA and the TFS 2005 DLLs (V8.0). Remove the 2005 Team Explorer and the BPA works as expected.</p>
<p>Now you are probably asking why I would have 2005 Team Explorer on a 2008 install? It is because a number of third party TFS add-ins, such as eScrum, require the 2005 DLLs either to install or to run. In theory the 2005 and 2008 installs are side by side, so this should not be an issue - but it is for the BPA.</p>
<p>In the case of eScrum the answer is to place the 2005 DLLs in the eScrum web site bin directory (as opposed to the GAC) and all seems to be fine - the BPA works and so does the eScrum web site.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Presenting in Glasgow</title>
      <link>https://blog.richardfennell.net/posts/presenting-in-glasgow/</link>
      <pubDate>Mon, 14 Jan 2008 23:16:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/presenting-in-glasgow/</guid>
      <description>&lt;p&gt;I will be doing a TFS and Continuous Integration session for &lt;a href=&#34;http://www.scottishdevelopers.com/modules/extcal/event.php?event=39&#34;&gt;Scottish Developers on the 12th of February&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be doing a TFS and Continuous Integration session for <a href="http://www.scottishdevelopers.com/modules/extcal/event.php?event=39">Scottish Developers on the 12th of February</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>CCNet WebDashboard getting assembly does not allow partially trusted callers exception</title>
      <link>https://blog.richardfennell.net/posts/ccnet-webdashboard-getting-assembly-does-not-allow-partially-trusted-callers-exception/</link>
      <pubDate>Sun, 13 Jan 2008 22:33:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ccnet-webdashboard-getting-assembly-does-not-allow-partially-trusted-callers-exception/</guid>
      <description>&lt;p&gt;Whilst installing a TFS &amp;amp; CCNet demo system I got an exception&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;System.Security.SecurityException: That assembly does not allow partially trusted callers&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;when I tried to load the CCNet WebDashboard.&lt;/p&gt;
&lt;p&gt;The problem was that the default CCNet installer had created the WebDashboard on the default web site as the virtual directory&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href=&#34;http://localhost/ccnet&#34;&gt;http://localhost/ccnet&lt;/a&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;As this was also a TFS server, the default web site was a WSS3 site. Basically, SharePoint did not like (trust) the CCNet virtual directory&amp;rsquo;s DLLs.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst installing a TFS &amp; CCNet demo system I got an exception</p>
<blockquote>
<p><em>System.Security.SecurityException: That assembly does not allow partially trusted callers</em></p></blockquote>
<p>when I tried to load the CCNet WebDashboard.</p>
<p>The problem was that the default CCNet installer had created the WebDashboard on the default web site as the virtual directory</p>
<blockquote>
<p><a href="http://localhost/ccnet">http://localhost/ccnet</a></p></blockquote>
<p>As this was also a TFS server, the default web site was a WSS3 site. Basically, SharePoint did not like (trust) the CCNet virtual directory&rsquo;s DLLs.</p>
<p>The fix I used was to create a new website and point it at the WebDashboard directory, so it could be accessed without affecting SharePoint e.g.</p>
<blockquote>
<p><a href="http://localhost:81">http://localhost:81</a></p></blockquote>
<p>I could also have edited SharePoint&rsquo;s web.config to trust the CCNet DLLs.</p>
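<p>Had I gone the web.config route instead, the change would have been to the trust element in the SharePoint web application&rsquo;s web.config, which WSS3 sets to a restrictive level by default. A sketch only - raising trust affects the whole site, which is why I preferred the separate-website approach:</p>
<pre><code>&lt;!-- inside &lt;system.web&gt; in the SharePoint web application's web.config --&gt;
&lt;!-- raising the trust level lets the CCNet DLLs load; Full is the bluntest option --&gt;
&lt;trust level="Full" originUrl="" /&gt;</code></pre>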
]]></content:encoded>
    </item>
    <item>
      <title>TF220066 Error installing TFS</title>
      <link>https://blog.richardfennell.net/posts/tf220066-error-installing-tfs/</link>
      <pubDate>Thu, 10 Jan 2008 15:46:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tf220066-error-installing-tfs/</guid>
      <description>&lt;p&gt;I was installing a Dual Tier Team Foundation Server 2008 at a clients today and got a problem that when installing the Application Tier (AT) I entered the name of the Data Tier server (DT) and it said the server could not be found.&lt;/p&gt;
&lt;p&gt;Unlike TFS 2005, 2008 does not require you to do a separate install on the DT to create the empty DBs, so I assumed it was just a connectivity problem. However, none was to be found, so I checked the detailed TFS install error log and saw I had a TF220066 error that also mentioned SQLRS - I guessed at a Reporting Services issue.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I was installing a Dual Tier Team Foundation Server 2008 at a clients today and got a problem that when installing the Application Tier (AT) I entered the name of the Data Tier server (DT) and it said the server could not be found.</p>
<p>Unlike TFS 2005, 2008 does not require you to do a separate install on the DT to create the empty DBs, so I assumed it was just a connectivity problem. However, none was to be found, so I checked the detailed TFS install error log and saw I had a TF220066 error that also mentioned SQLRS - I guessed at a Reporting Services issue.</p>
<p>It turns out, stupidly, I had forgotten to install SQL Reporting Services on the AT - after all, I have previously posted about following the walk-thru for the install!</p>
<p>Once this was installed all was OK; so if you cannot see the DT during the AT install, it may not be the remote connectivity issue the dialog suggests - check the error log for more details.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS WebParts in a 64bit SharePoint environment</title>
      <link>https://blog.richardfennell.net/posts/tfs-webparts-in-a-64bit-sharepoint-environment/</link>
      <pubDate>Fri, 04 Jan 2008 16:20:50 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-webparts-in-a-64bit-sharepoint-environment/</guid>
      <description>&lt;p&gt;In my post on using &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/12/19/tfs-webpart-for-viewing-workitems-in-sharepoint-2007.aspx&#34;&gt;TFS WebParts in Sharepoint&lt;/a&gt; I provided a proof of concept set of source to allow work items to be viewed and edited within 32bit SharePoint 2007. However I hit a problem that our live SharePoint is running 64bit and the TFS API is only available for 32bit, so the WebParts  could not be loaded.&lt;/p&gt;
&lt;p&gt;To get round this problem I have built a version of the WebParts that move all the 32bit TFS API calls into a separate web service. This allows the web service to be hosted on a 32bit box whilst the WebParts are still run within the 64bit SharePoint environment.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>In my post on using <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/12/19/tfs-webpart-for-viewing-workitems-in-sharepoint-2007.aspx">TFS WebParts in Sharepoint</a> I provided a proof of concept set of source to allow work items to be viewed and edited within 32bit SharePoint 2007. However I hit a problem that our live SharePoint is running 64bit and the TFS API is only available for 32bit, so the WebParts  could not be loaded.</p>
<p>To get round this problem I have built a version of the WebParts that move all the 32bit TFS API calls into a separate web service. This allows the web service to be hosted on a 32bit box whilst the WebParts are still run within the 64bit SharePoint environment.</p>
<p>I have used a simple design model: I just moved all the TFS based methods from my previous example into the WebService, and pass all the URLs, authentication details and other parameters each time a WebMethod is called. It does the job of showing how the system can work, but there are many other options depending on how you want to manage the user IDs that SharePoint users authenticate to TFS as; see my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/tags/VSTS/default.aspx">previous posts</a> for a longer discussion of the authentication issues.</p>
<p>In addition to the WebPart settings detailed in the <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/12/19/tfs-webpart-for-viewing-workitems-in-sharepoint-2007.aspx">previous post</a>, there is one additional parameter: the web service URL, e.g.</p>
<blockquote>
<p><a href="http://www.domain.com:8091/TFSWrapperWS.asmx">http://www.domain.com:8091/TFSWrapperWS.asmx</a> </p></blockquote>
<p>This should point to wherever you have installed the web service (using the standard means of publishing web sites). The web service does not have to be on the TFS server, as it only acts as a pass-through. Also, there is no need to edit any details in the web service&rsquo;s web.config.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Next meeting of the Yorkshire Extreme Programming Club</title>
      <link>https://blog.richardfennell.net/posts/next-meeting-of-the-yorkshire-extreme-programming-club/</link>
      <pubDate>Fri, 04 Jan 2008 11:43:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/next-meeting-of-the-yorkshire-extreme-programming-club/</guid>
      <description>&lt;p&gt;The next meeting of the extreme programming club is going to be held on the Thursday the 10th of January 2008 at the &lt;a href=&#34;http://local.google.co.uk/maps?f=q&amp;amp;hl=en&amp;amp;geocode=&amp;amp;time=&amp;amp;date=&amp;amp;ttype=&amp;amp;q=victoria&amp;#43;hotel&amp;#43;LS1&amp;amp;ie=UTF8&amp;amp;ll=53.800549,-1.548729&amp;amp;spn=0.009175,0.028539&amp;amp;z=16&amp;amp;iwloc=A&amp;amp;om=1&#34;&gt;Victoria Hotel&lt;/a&gt;, at 7.00pm, Leeds. I believe the session is on &lt;a href=&#34;http://www.crap4j.org/&#34;&gt;CRAP (Change Risk Analyzer and Predictor) code metrics&lt;/a&gt; but this still has to be confirmed.&lt;/p&gt;
&lt;p&gt;This is your chance to discuss the future for the user group as I mentioned in my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/01/03/future-for-the-yorkshire-xp-club.aspx&#34;&gt;previous post&lt;/a&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The next meeting of the extreme programming club is going to be held on the Thursday the 10th of January 2008 at the <a href="http://local.google.co.uk/maps?f=q&amp;hl=en&amp;geocode=&amp;time=&amp;date=&amp;ttype=&amp;q=victoria&#43;hotel&#43;LS1&amp;ie=UTF8&amp;ll=53.800549,-1.548729&amp;spn=0.009175,0.028539&amp;z=16&amp;iwloc=A&amp;om=1">Victoria Hotel</a>, at 7.00pm, Leeds. I believe the session is on <a href="http://www.crap4j.org/">CRAP (Change Risk Analyzer and Predictor) code metrics</a> but this still has to be confirmed.</p>
<p>This is your chance to discuss the future for the user group as I mentioned in my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2008/01/03/future-for-the-yorkshire-xp-club.aspx">previous post</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Great Scrum Video</title>
      <link>https://blog.richardfennell.net/posts/great-scrum-video/</link>
      <pubDate>Fri, 04 Jan 2008 10:19:52 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/great-scrum-video/</guid>
      <description>&lt;p&gt;I have just had pointed out to me there is a great &lt;a href=&#34;http://video.google.com/videoplay?docid=8795214308797356840&#34;&gt;video about Google&amp;rsquo;s use of Scrum&lt;/a&gt;. It is a presentation done by &lt;a href=&#34;http://www.jeffsutherland.com/scrum&#34;&gt;Jeff Sutherland&lt;/a&gt; one of the founders of Scrum.&lt;/p&gt;
&lt;p&gt;Take a look; it is well worth it.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just had pointed out to me there is a great <a href="http://video.google.com/videoplay?docid=8795214308797356840">video about Google&rsquo;s use of Scrum</a>. It is a presentation done by <a href="http://www.jeffsutherland.com/scrum">Jeff Sutherland</a> one of the founders of Scrum.</p>
<p>Take a look; it is well worth it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Future for the Yorkshire XP Club</title>
      <link>https://blog.richardfennell.net/posts/future-for-the-yorkshire-xp-club/</link>
      <pubDate>Thu, 03 Jan 2008 21:10:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/future-for-the-yorkshire-xp-club/</guid>
      <description>&lt;p&gt;Times are a&amp;rsquo;changing for the &lt;a href=&#34;http://www.extremeprogrammingclub.com/&#34;&gt;Yorkshire Extreme Programming and Agile Methods Club&lt;/a&gt;, our local user group. There are plans to put the organisation on a more formal structure including a organising committee with the key aim to extend the groups appeal. In the past it has really been driven by the good works of the staff of &lt;a href=&#34;http://www.eurdine.com&#34;&gt;Erudine&lt;/a&gt;, thanks to them for all the work thus far.&lt;/p&gt;
&lt;p&gt;To facilitate these changes a new organiser, Daniel Drozdzewski from Erudine, has put up a &lt;a href=&#34;http://freeonlinesurveys.com/rendersurvey.asp?sid=tdutyt4k9ilmttv380228&#34;&gt;survey&lt;/a&gt; to find out what people want, and he is keen to hear ideas via the forum as, he says, they &amp;ldquo;want to bring more interesting speakers and topics, make it more organised and appealing to wider audience.&amp;rdquo;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Times are a&rsquo;changing for the <a href="http://www.extremeprogrammingclub.com/">Yorkshire Extreme Programming and Agile Methods Club</a>, our local user group. There are plans to put the organisation on a more formal structure including a organising committee with the key aim to extend the groups appeal. In the past it has really been driven by the good works of the staff of <a href="http://www.eurdine.com">Erudine</a>, thanks to them for all the work thus far.</p>
<p>To facilitate these changes a new organiser, Daniel Drozdzewski from Erudine, has put up a <a href="http://freeonlinesurveys.com/rendersurvey.asp?sid=tdutyt4k9ilmttv380228">survey</a> to find out what people want, and he is keen to hear ideas via the forum as, he says, they &ldquo;want to bring more interesting speakers and topics, make it more organised and appealing to wider audience.&rdquo;</p>
<p>Make your voice heard!</p>
]]></content:encoded>
    </item>
    <item>
      <title>Getting Team Foundation Server to use a remote 64Bit SharePoint 2007 farm.</title>
      <link>https://blog.richardfennell.net/posts/getting-team-foundation-server-to-use-a-remote-64bit-sharepoint-2007-farm/</link>
      <pubDate>Wed, 02 Jan 2008 15:25:06 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/getting-team-foundation-server-to-use-a-remote-64bit-sharepoint-2007-farm/</guid>
      <description>&lt;p&gt;If you try to install the Team Foundation 2008 SharePoint extensions on a 64bit farm you get a &amp;lsquo;&lt;em&gt;SharePoint must be installed&lt;/em&gt;&amp;rsquo; error; I guess the installer is looking in the wrong directory for something to confirm SharePoint is there.&lt;/p&gt;
&lt;p&gt;However, it seems the MSI is only installing some features that are not 32/64bit specific. So you can try a manual install of these features.&lt;/p&gt;
&lt;p&gt;The notes on &lt;a href=&#34;http://msdn2.microsoft.com/en-us/teamsystem/aa718901.aspx&#34; title=&#34;http://msdn2.microsoft.com/en-us/teamsystem/aa718901.aspx&#34;&gt;http://msdn2.microsoft.com/en-us/teamsystem/aa718901.aspx&lt;/a&gt; give a basic guide, but these were written for the previous version of SharePoint, so you have to alter a few paths - mostly 6.0 to 12.0 in the paths to the Web Server Extensions EXEs - and the templates have moved in Team System to &lt;em&gt;[Program Files]\Microsoft Visual Studio 2008 Team Foundation Server\Tools\Templates&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you try to install the Team Foundation 2008 SharePoint extensions on a 64bit farm you get a &lsquo;<em>SharePoint must be installed</em>&rsquo; error; I guess the installer is looking in the wrong directory for something to confirm SharePoint is there.</p>
<p>However, it seems the MSI is only installing some features that are not 32/64bit specific. So you can try a manual install of these features.</p>
<p>The notes on <a href="http://msdn2.microsoft.com/en-us/teamsystem/aa718901.aspx" title="http://msdn2.microsoft.com/en-us/teamsystem/aa718901.aspx">http://msdn2.microsoft.com/en-us/teamsystem/aa718901.aspx</a> give a basic guide, but these were written for the previous version of SharePoint, so you have to alter a few paths - mostly 6.0 to 12.0 in the paths to the Web Server Extensions EXEs - and the templates have moved in Team System to <em>[Program Files]\Microsoft Visual Studio 2008 Team Foundation Server\Tools\Templates</em></p>
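<p>As an example of the sort of manual step involved, site templates are registered with STSADM from the WSS3 (12.0 hive) BIN directory; the .stp filename below is a placeholder - use whichever files you find in the Templates folder of your TFS install:</p>
<pre><code>REM Run from the WSS3 (12.0 hive) BIN directory.
cd "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN"
REM The .stp filename is a placeholder for the real template files.
stsadm.exe -o addtemplate -filename "C:\Program Files\Microsoft Visual Studio 2008 Team Foundation Server\Tools\Templates\MyTemplate.stp" -title "TFS Project Site"</code></pre>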
<p>This lack of 64bit support is really starting to become a pain!</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Build Agent on Team Foundation Workgroup Edition</title>
      <link>https://blog.richardfennell.net/posts/tfs-build-agent-on-team-foundation-workgroup-edition/</link>
      <pubDate>Wed, 02 Jan 2008 09:27:57 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-build-agent-on-team-foundation-workgroup-edition/</guid>
      <description>&lt;p&gt;If you see &lt;em&gt;&amp;lsquo;TF215085 local build agent not authorized to communicate with team foundation server&amp;rsquo;&lt;/em&gt; when you queue a new build via Visual Studio, you would assume you just have to add the TFSBuild user to the team projects build group as the rest of the message suggests (right click in team explorer for the project and look at the group membership).&lt;/p&gt;
&lt;p&gt;However, it seems this is not enough if you are running the workgroup edition of Team Foundation Server. You also have to make sure the TFSBuild user is in the licensed users group for the server itself (right click the server in team explorer).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you see <em>&lsquo;TF215085 local build agent not authorized to communicate with team foundation server&rsquo;</em> when you queue a new build via Visual Studio, you would assume you just have to add the TFSBuild user to the team projects build group as the rest of the message suggests (right click in team explorer for the project and look at the group membership).</p>
<p>However, it seems this is not enough if you are running the workgroup edition of Team Foundation Server. You also have to make sure the TFSBuild user is in the licensed users group for the server itself (right click the server in team explorer).</p>
<p>Now, this could be an issue if you are pushed for user licenses; I suppose the answer is to use a normal user as the build agent&rsquo;s login user.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS 2008 Media on MSDN</title>
      <link>https://blog.richardfennell.net/posts/tfs-2008-media-on-msdn/</link>
      <pubDate>Thu, 27 Dec 2007 22:05:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-2008-media-on-msdn/</guid>
      <description>&lt;p&gt;Don&amp;rsquo;t make the mistake I did with Team Foundation Server 2008 media on MSDN downloads.&lt;/p&gt;
&lt;p&gt;The MSDN file &lt;em&gt;en_visual_studio_team_system_2008_team_foundation_server_workgroup_x86_x64wow_dvd_X14-29253.iso&lt;/em&gt; is the workgroup edition, as the file name suggests. The problem is that you cannot upgrade it to the full edition. The TFS documentation says that a trial or workgroup edition can be upgraded by entering a valid CD key in Add/Remove Programs maintenance mode; however, this option is greyed out if you install from this media. I checked with Microsoft and there is no way round this for this ISO image.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Don&rsquo;t make the mistake I did with Team Foundation Server 2008 media on MSDN downloads.</p>
<p>The MSDN file <em>en_visual_studio_team_system_2008_team_foundation_server_workgroup_x86_x64wow_dvd_X14-29253.iso</em> is the workgroup edition, as the file name suggests. The problem is that you cannot upgrade it to the full edition. The TFS documentation says that a trial or workgroup edition can be upgraded by entering a valid CD key in Add/Remove Programs maintenance mode; however, this option is greyed out if you install from this media. I checked with Microsoft and there is no way round this for this ISO image.</p>
<p>So if you want to install or upgrade to the TFS 2008 full edition, make sure you start with the right media, else you will downgrade your installation to a 5 user workgroup edition.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS WebPart for viewing workitems in SharePoint 2007</title>
      <link>https://blog.richardfennell.net/posts/tfs-webpart-for-viewing-workitems-in-sharepoint-2007/</link>
      <pubDate>Wed, 19 Dec 2007 22:47:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-webpart-for-viewing-workitems-in-sharepoint-2007/</guid>
      <description>&lt;p&gt;&lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/07/13/tfs-webpart-in-moss2007.aspx&#34;&gt;I have been trying&lt;/a&gt; to get a simple means for our clients to log faults into our TFS system whilst inside our MOSS2007 based customer portal. I had been suffering from two major problems as my previous posts mentioned. However, I now have some solutions or at least workarounds:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Historically all users who connected to the TFS server needed a Team Foundation Client CAL - this issue has been addressed by &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/11/23/interesting-change-to-tfs-licensing.aspx&#34;&gt;changes in licensing&lt;/a&gt; for TFS by Microsoft; basically it is now free to look at work items and add bugs&lt;/li&gt;
&lt;li&gt;The ways the SharePoint and TFS APIs handle user identity (SPUser &amp;amp; ICredentials) do not match and are not interchangeable. There is no way round the user having to re-enter their credentials for each system - so my web part logs into TFS as a user set via its parameters. This is not the same user as the credentials used to authenticate into SharePoint; it is in effect a proxy user for accessing TFS&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So where does this leave us?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p><a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/07/13/tfs-webpart-in-moss2007.aspx">I have been trying</a> to get a simple means for our clients to log faults into our TFS system whilst inside our MOSS2007 based customer portal. I had been suffering from two major problems as my previous posts mentioned. However, I now have some solutions or at least workarounds:</p>
<ul>
<li>Historically all users who connected to the TFS server needed a Team Foundation Client CAL - this issue has been addressed by <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/11/23/interesting-change-to-tfs-licensing.aspx">changes in licensing</a> for TFS by Microsoft; basically it is now free to look at work items and add bugs</li>
<li>The ways the SharePoint and TFS APIs handle user identity (SPUser &amp; ICredentials) do not match and are not interchangeable. There is no way round the user having to re-enter their credentials for each system - so my web part logs into TFS as a user set via its parameters. This is not the same user as the credentials used to authenticate into SharePoint; it is in effect a proxy user for accessing TFS</li>
</ul>
<p>So where does this leave us?</p>
<p><a href="/wp-content/uploads/sites/2/historic/image_2.png"><img alt="image" loading="lazy" src="/wp-content/uploads/sites/2/historic/image_thumb.png"></a></p>
<p>I have posted a <a href="http://blogs.blackmarble.co.uk/files/samples">set of sample code</a> that provides</p>
<ul>
<li>A web part that lists work items</li>
<li>A web part that shows the details of a work item (using the first webpart for all communications to the TFS server)</li>
<li>An ASP.NET test harness</li>
<li>A .WSP and batch file to install the web parts on a SharePoint Server</li>
</ul>
<p>ASP.NET Usage:</p>
<ol>
<li>Load the solution in VS2008 (if you need to use VS2005 you will need to recreate the solution file, and point at the right TFS API DLLs, but everything else should work)</li>
<li>Run the test harness project (note: as we are using webparts it will have to create a local ASPNETDB.MDF file the first time it runs. The DB contains the config for the webparts, so you will see nothing on first loading until you set up the parameters)</li>
<li>In the test page select the edit mode at the top of the page, then edit the list webpart in WebPartZone1, enter the following:<br>
- TFSServerURL – the TFS server e.g. <a href="http://tfs.mydomain.com:8080">http://tfs.mydomain.com:8080</a><br>
- TFSDomain – the domain used to authenticate against e.g. mydomain<br>
- TFSUsername – the user name to connect to the TFS server as, we create a dedicated user for this webpart to login as.<br>
- TFSPassword – the password used to authenticate with (shown in clear text)<br>
- TFSAllowedWorkItemTypes – a comma separated list of work item types to be listed in the control, must match types in the [System.WorkItemType] field in the TFS DB. Depending on the process template in use the types will vary but as a start in most templates there is a ‘bug’ type.<br>
- TFSDefaultProject – the name of the default TFS project to select on loading, can be left blank<br>
- TFSPagingSize – the number of rows to show in the list of work items<br>
- TFSShowOnlyDefaultProject – if this is set only the default project is listed in the available projects – this means a single TFS user, which can see many projects, can be used for different webpages and the project shown locked down with this parameter<br>
- TFSUsePaging – set if the list of workitems should be paged</li>
<li>Once this is all done and saved, you should be able to see a list of projects and workitems in the first webpart.</li>
<li>To wire the two webparts together select the connection mode radio button at the top of the page</li>
<li>On the web part in WebPartZone2 select the connect option</li>
<li>In the connections zone that appears create a new connection to link the two webparts</li>
<li>Once this is done you should see the detail of any given workitem when it is selected from the list. The problem is you see all the fields in the work item (useful for debugging)</li>
<li>Put web page back into edit mode and edit the settings on the details web part<br>
- TFSFieldsToShow – a comma separated list of field names to be shown.<br>
- TFSShowAllField – if checked, the TFSFieldsToShow is ignored</li>
<li>When all the configuration is done you have the option to create new bug workitems and add notes to existing ones.</li>
</ol>
<p>If you want to use the webparts in SharePoint you need to install the feature pack using the .WSP package - I assume anyone doing this will know enough about WSP files and SharePoint to get going.</p>
<p>All this said, it is not as if there are not still problems and quirks:</p>
<ul>
<li>You do need the TFS client on the server hosting the webparts, or at least the referenced DLLs - a bit obvious really, that one.</li>
<li>When trying to connect to TFS you might get an error about not being able to access the TFS cache - use <a href="http://technet.microsoft.com/en-gb/sysinternals/bb896642.aspx">SysInternals Filemon</a> (or maybe the event logs) to check the directory being used, and you will find the problem concerns the user running the hosting process (usually a member of the IIS_WPG group) not having rights to fully access the cache directory. It is also a good idea to delete all the cache files before retrying, as some people report they had to rebuild the cache to clear the error.</li>
<li>An interesting point I discovered which altered the design - though the pair of webparts worked perfectly in an ASP.NET test harness, the connection options, when in SharePoint, were greyed out. It turns out you have to add a second parameter to the consumer declaration, else the default name is used for all webparts, which confuses SharePoint e.g.</li>
</ul>
<pre><code>[System.Web.UI.WebControls.WebParts.ConnectionConsumer("Work Item List Consumer", "wilc")] // the second param is an ID for the connection
public void InitializeProvider(IWorkItemListToDetails provider)
{
    this.workItemListProvider = provider;
}</code></pre>
<p>However, this last problem was complicated by the fact that while in ASP.NET you can have a pair of connections to get bi-directional communication between web parts, in SharePoint you are only allowed a single connection between any two webparts. Hence the current design uses some strange boolean flags and logic to manage callbacks in the pre-render stage. I left the older code in place, commented out, as a sample.</p>
<ul>
<li>And the killer problem for me - you can only run these webparts on a 32-bit SharePoint, as there are no 64-bit TFS DLLs. A major problem for us, as our SharePoint servers are 64-bit. It seems we need to wait for <a href="http://msdn2.microsoft.com/en-us/teamsystem/bb407307.aspx">Rosario</a> before TFS moves to 64-bit; even though 32-bit CTPs are available for Rosario, as yet there is no date for a 64-bit CTP. I also checked, and WOW64 will not help to wrap the 32-bit DLLs on a 64-bit OS. I have confirmed all this with Microsoft support.</li>
</ul>
<p>So what we have here is a <a href="http://blogs.blackmarble.co.uk/files/samples">sample solution for 32-bit environments</a>. I am going to modify this to work for 64-bit by putting all the TFS API bits in a separate web service hosted on a 32-bit server. I will post about this when it is done.</p>
]]></content:encoded>
    </item>
    <item>
      <title>And another possible session at SQLBits II</title>
      <link>https://blog.richardfennell.net/posts/and-another-possible-session-at-sqlbits-ii/</link>
      <pubDate>Wed, 19 Dec 2007 19:01:36 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/and-another-possible-session-at-sqlbits-ii/</guid>
      <description>&lt;p&gt;I also submitted a session for &lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;SQLBits II in Birmingham&lt;/a&gt; on using Visual Studio to manage the life cycle of DB projects. The voting for this community conference I assume opens in the new year as submissions close this weekend.&lt;/p&gt;
&lt;p&gt;It will be interesting to see if this more central location (well central to England not the UK) attracts more attendees than Reading.&lt;/p&gt;
&lt;p&gt;Again vote early and often.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I also submitted a session for <a href="http://www.sqlbits.com/">SQLBits II in Birmingham</a> on using Visual Studio to manage the life cycle of DB projects. The voting for this community conference I assume opens in the new year as submissions close this weekend.</p>
<p>It will be interesting to see if this more central location (well central to England not the UK) attracts more attendees than Reading.</p>
<p>Again vote early and often.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Maybe speaking at Developer Day Scotland</title>
      <link>https://blog.richardfennell.net/posts/maybe-speaking-at-developer-day-scotland/</link>
      <pubDate>Wed, 19 Dec 2007 18:56:15 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/maybe-speaking-at-developer-day-scotland/</guid>
      <description>&lt;p&gt;And I may be speaking at Developer Day Scotland - it is all down to you; like other DDD style events the attendees get a chance to select what they would like to see. So keep an eye on the &lt;a href=&#34;http://developerdayscotland.com/main/Agenda/tabid/55/Default.aspx&#34;&gt;possible agenda&lt;/a&gt; and please vote as soon as submissions close.&lt;/p&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;http://developerdayscotland.com/images/badges/SpeakerBadge1-small.png&#34;&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>And I may be speaking at Developer Day Scotland - it is all down to you; like other DDD style events the attendees get a chance to select what they would like to see. So keep an eye on the <a href="http://developerdayscotland.com/main/Agenda/tabid/55/Default.aspx">possible agenda</a> and please vote as soon as submissions close.</p>
<p><img loading="lazy" src="http://developerdayscotland.com/images/badges/SpeakerBadge1-small.png"></p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at VBUG Peterborough</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-vbug-peterborough/</link>
      <pubDate>Wed, 19 Dec 2007 18:52:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-vbug-peterborough/</guid>
      <description>&lt;p&gt;I will be speaking at what I think is the inaugural meeting of the Peterborough VBUG user group, on Team Foundation Server.&lt;/p&gt;
&lt;p&gt;For more details see &lt;a href=&#34;http://www.vbug.co.uk/Events/January-2008/VBUG-Team-Foundation-Server-with-Richard-Fennell.aspx&#34; title=&#34;http://www.vbug.co.uk/Events/January-2008/VBUG-Team-Foundation-Server-with-Richard-Fennell.aspx&#34;&gt;http://www.vbug.co.uk/Events/January-2008/VBUG-Team-Foundation-Server-with-Richard-Fennell.aspx&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Hope to see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking at what I think is the inaugural meeting of the Peterborough VBUG user group, on Team Foundation Server.</p>
<p>For more details see <a href="http://www.vbug.co.uk/Events/January-2008/VBUG-Team-Foundation-Server-with-Richard-Fennell.aspx" title="http://www.vbug.co.uk/Events/January-2008/VBUG-Team-Foundation-Server-with-Richard-Fennell.aspx">http://www.vbug.co.uk/Events/January-2008/VBUG-Team-Foundation-Server-with-Richard-Fennell.aspx</a></p>
<p>Hope to see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Powershell and SourceSafe</title>
      <link>https://blog.richardfennell.net/posts/powershell-and-sourcesafe/</link>
      <pubDate>Sun, 09 Dec 2007 20:50:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/powershell-and-sourcesafe/</guid>
      <description>&lt;p&gt;I posted &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/12/08/using-powershell-to-remind-users-of-checked-out-files-from-tfs.aspx&#34;&gt;yesterday on using Powershell&lt;/a&gt; to email a TFS user if they had files checked out. Well, we still run a legacy set of SourceSafe databases for old projects that are under maintenance, but not major redevelopment. (Our usual practice is to migrate projects to TFS at major release points).&lt;/p&gt;
&lt;p&gt;Anyway, these SourceSafe repositories are just as likely, if not more so, to have files left checked out as TFS. The following script emails a list of all the checked-out files in a SourceSafe DB.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I posted <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/12/08/using-powershell-to-remind-users-of-checked-out-files-from-tfs.aspx">yesterday on using Powershell</a> to email a TFS user if they had files checked out. Well, we still run a legacy set of SourceSafe databases for old projects that are under maintenance, but not major redevelopment. (Our usual practice is to migrate projects to TFS at major release points).</p>
<p>Anyway, these SourceSafe repositories are just as likely, if not more so, to have files left checked out as TFS. The following script emails a list of all the checked-out files in a SourceSafe DB.</p>
<pre><code># to run this script without signing you need to first run
#    Set-ExecutionPolicy Unrestricted
# (the other option is to sign the script)
# then run it using
#    .\VSSstatus.ps1

function CheckOutVSSFileForUser(
    [string]$ssdir,
    [string]$to,
    [string]$from,
    [string]$server )
{
    # get the open file list
    [char]10 + "Checking checked out files in " + $ssdir

    # set the environment variable; without this you cannot access the DB
    $env:ssdir = $ssdir

    # we assume the logged-in user has rights, as the -Yuid,pwd ss.exe
    # parameter does not work due to the comma
    # (could use a named user that takes no password as another option;
    # the -U option can be used to limit the users listed)
    $filelist = &amp;"C:\Program Files\Microsoft Visual Studio\VSS\win32\ss.exe" status $/ -R

    # we have the results as an array of rows, so insert some line feeds
    foreach ($s in $filelist) { $emailbody = $emailbody + [char]10 + $s }
    # note the strange concatenation for the email to field, not a + as I suspected!
    $title = "Files currently checked out in " + $ssdir
    SendEmail $to $from $server $title $emailbody
}

function SendEmail(
    [string]$to,
    [string]$from,
    [string]$server,
    [string]$title,
    [string]$body )
{
    # send the email
    $SmtpClient = new-object System.Net.Mail.SmtpClient
    $SmtpClient.Host = $server
    $SmtpClient.Send($from, $to, $title, $body)
    "Email sent to " + $to
}

# the main body
$emailServer = "mail.domain.com"
$from = "vss@domain.com"
$to = "admin@domain.com"
$ssdir = ("\\server\sourcesafe1",
          "\\server\sourcesafe2",
          "\\server\sourcesafe3")

foreach ($s in $ssdir) {
    CheckOutVSSFileForUser $s $to $from $emailServer
}</code></pre>
<p><strong>Update 10-Oct-2008</strong>: Corey Furman has extended this script on his blog <a href="http://codeslabs.wordpress.com/2008/10/09/who-has-what-checked-out">http://codeslabs.wordpress.com/2008/10/09/who-has-what-checked-out</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Alt.Net.UK</title>
      <link>https://blog.richardfennell.net/posts/alt-net-uk/</link>
      <pubDate>Sun, 09 Dec 2007 20:35:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/alt-net-uk/</guid>
      <description>&lt;p&gt;I am far from the first to post about the &lt;a href=&#34;http://altnetuk.com/&#34; title=&#34;http://www.altnetuk.com/&#34;&gt;Alt.Net.UK&lt;/a&gt; conference next year. I think this is a great idea; I have posted a few times on how I find the most useful conferences are the ones about best practice. New technology is great, we all like new toys, but it is the engineering practices we follow day to day that have the most effect on the quality of your work, not the IDE we use.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am far from the first to post about the <a href="http://altnetuk.com/" title="http://www.altnetuk.com/">Alt.Net.UK</a> conference next year. I think this is a great idea; I have posted a few times on how I find the most useful conferences are the ones about best practice. New technology is great, we all like new toys, but it is the engineering practices we follow day to day that have the most effect on the quality of your work, not the IDE we use.</p>
<p>This is an <a href="http://www.citconf.com/openspace.php">open spaces event</a>; I have only been to one of these before, but it was certainly an interesting way to work. Hopefully this should enhance the &lsquo;best practice&rsquo; nature of the conference. I am looking forward to it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Using Powershell to remind users of checked out files from TFS</title>
      <link>https://blog.richardfennell.net/posts/using-powershell-to-remind-users-of-checked-out-files-from-tfs/</link>
      <pubDate>Sat, 08 Dec 2007 16:19:37 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/using-powershell-to-remind-users-of-checked-out-files-from-tfs/</guid>
      <description>&lt;p&gt;With any source control system it is possible to leave files checked out. This is especially true if your IDE does the checking out behind the scenes. This is made worse still by the fact you can have a number of workspaces on the same PC in TFS. It is too easy to forget.&lt;/p&gt;
&lt;p&gt;It is therefore a good idea to check from time to time that the files you have checked out are the ones you think you have. There is nothing worse than trying to work on a project to find a key file is checked out or locked to another user or PC.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>With any source control system it is possible to leave files checked out. This is especially true if your IDE does the checking out behind the scenes. This is made worse still by the fact you can have a number of workspaces on the same PC in TFS. It is too easy to forget.</p>
<p>It is therefore a good idea to check from time to time that the files you have checked out are the ones you think you have. There is nothing worse than trying to work on a project to find a key file is checked out or locked to another user or PC.</p>
<p>To this end I have written the following <a href="http://www.microsoft.com/windowsserver2003/technologies/management/powershell/download.mspx">Powershell</a> script to check for the files checked out by a team of developers. In this version you have to list the users by name, but I am sure it could be extended to pick up users from an AD or TFS group.</p>
<pre><code># To run this script without signing you need to first run
#    Set-ExecutionPolicy Unrestricted
# If you want to run it from a timer you will need to sign it
# Then run it using
#    .\TFstatus.ps1

function CheckOutTFSFileForUser(
    [string]$user,
    [string]$domain,
    [string]$server,
    [string]$from )
{
    # get the open file list,
    # we put a newline at the start of the line,
    # used an ASCII code as `n did not seem to work
    [char]10 + "Checking checked out files for " + $user
    $filelist = &amp;"C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\tf.exe" status /user:$user /s:https://vsts.domain.com:8443

    # we have the results as an array of rows,
    # so insert some line feeds
    foreach ($s in $filelist) { $emailbody = $emailbody + [char]10 + $s }
    # note the strange concatenation for the email to field, not a + as I suspected, being new to Powershell
    $title = "Files currently checked out to " + $user + " in TFS"
    if ($user -eq "*")
    {
        # if they have asked for all users, send the email to the admin/from account
        $to = $from
    } else
    {
        $to = $user + $domain
    }
    SendEmail $to $from $server $title $emailbody
}

function SendEmail(
    [string]$to,
    [string]$from,
    [string]$server,
    [string]$title,
    [string]$body )
{
    # send the email
    $SmtpClient = new-object System.Net.Mail.SmtpClient
    $SmtpClient.Host = $server
    $SMTPClient.Send($from, $to, $title, $body)
    "Email sent to " + $to
}

# the main body
$domain = "@domain.com"
$emailServer = "mail.domain.com"
$from = "admin@domain.com"

# the list of users to check, the * means all
$users = ("*", "anne", "bill", "chris")
# loop through the list of users
foreach ($u in $users) { CheckOutTFSFileForUser $u $domain $emailServer $from }</code></pre>
]]></content:encoded>
    </item>
    <item>
      <title>Last night I met a spaceman</title>
      <link>https://blog.richardfennell.net/posts/last-night-i-met-a-spaceman/</link>
      <pubDate>Thu, 06 Dec 2007 09:21:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/last-night-i-met-a-spaceman/</guid>
      <description>&lt;p&gt;Last night I went to a lecture by &lt;a href=&#34;http://www.bradfordcollege.ac.uk/college/facilities/ycc2006/promos/yccpromo1.htm&#34;&gt;Dr Alexander Martynov and Colonel Alexander Volkov&lt;/a&gt;, organised by &lt;a href=&#34;http://www.spaceconnections.net/&#34;&gt;Space Connections&lt;/a&gt;, on the Russian space efforts in both the Soviet and current eras.&lt;/p&gt;
&lt;p&gt;The thing that struck me was the often-spoken-of difference between the US/NASA technology-based approach and the Russian &amp;lsquo;simple first&amp;rsquo; philosophy, e.g. the cosmonaut should be able to fix it themselves with the tools to hand, a philosophy the &lt;a href=&#34;http://en.wikipedia.org/wiki/Mir&#34;&gt;Mir space station&lt;/a&gt; repeatedly demonstrated.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Last night I went to a lecture by <a href="http://www.bradfordcollege.ac.uk/college/facilities/ycc2006/promos/yccpromo1.htm">Dr Alexander Martynov and Colonel Alexander Volkov</a>, organised by <a href="http://www.spaceconnections.net/">Space Connections</a>, on the Russian space efforts in both the Soviet and current eras.</p>
<p>The thing that struck me was the often-spoken-of difference between the US/NASA technology-based approach and the Russian &lsquo;simple first&rsquo; philosophy, e.g. the cosmonaut should be able to fix it themselves with the tools to hand, a philosophy the <a href="http://en.wikipedia.org/wiki/Mir">Mir space station</a> repeatedly demonstrated.</p>
<p>This can also be seen in the supposed story of the huge cost that NASA incurred designing a <a href="http://en.wikipedia.org/wiki/Space_Pen">space pen</a>, while the Russians just took a pencil. Though I have heard this is an urban myth, as free-floating graphite from a pencil is a problem for electronics in zero-g, so both sides used <a href="http://en.wikipedia.org/wiki/Space_Pen">grease pencils</a>.</p>
<p>This said, the proposed Russian Mars mission they discussed (interestingly, I cannot seem to find any pages to link to here) is based on an electric engine, very clean (no nuclear) and state of the art, like the <a href="http://www.esa.int/SPECIALS/SMART-1/index.html">ESA Smart1</a>. However, when you look deeper it is still the idea of very reliable, simple, replicated systems designed for the long haul, the proposed &lsquo;Mars station&rsquo; making up to seven Earth-to-Mars round trips over a fifteen-year period.</p>
<p>I have to say this all appeals to my luddite side; technology is great in its place, but I do like the simple &lsquo;pencil-like&rsquo; solution if possible - the simplest thing that will do the job. Now that is a very <a href="http://www.agilealliance.org/">Agile</a> way of thinking, isn&rsquo;t it - if you can say such a thing about a space program.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Intellisense not working in Visual Studio 2008</title>
      <link>https://blog.richardfennell.net/posts/intellisense-not-working-in-visual-studio-2008/</link>
      <pubDate>Sat, 01 Dec 2007 22:49:30 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/intellisense-not-working-in-visual-studio-2008/</guid>
      <description>&lt;p&gt;Since I upgraded my VS2008 Beta2 to the RTM, the intellisense has not been working. I have seen a few posts about this, some suggesting you need to reset the configuration by running&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;devenv.exe /safemode&lt;/em&gt;&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;(see &lt;a href=&#34;http://msdn2.microsoft.com/en-us/library/ms241278%28VS.80%29.aspx&#34;&gt;http://msdn2.microsoft.com/en-us/library/ms241278(VS.80).aspx&lt;/a&gt;)&lt;/p&gt;
&lt;p&gt;but this did not work for me.&lt;/p&gt;
&lt;p&gt;So I had a poke about in &lt;strong&gt;Tools|Options&lt;/strong&gt; and found that on the &lt;strong&gt;Text Editor|All Languages&lt;/strong&gt; page the three checkboxes for &lt;strong&gt;Statement Completion&lt;/strong&gt; were showing neither empty nor checked but a fully coloured box - which usually means an unknown setting. So I set these all to checked (a tick) and my Intellisense started working.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since I upgraded my VS2008 Beta2 to the RTM, the intellisense has not been working. I have seen a few posts about this, some suggesting you need to reset the configuration by running</p>
<blockquote>
<p><em>devenv.exe /safemode</em></p></blockquote>
<p>(see <a href="http://msdn2.microsoft.com/en-us/library/ms241278%28VS.80%29.aspx">http://msdn2.microsoft.com/en-us/library/ms241278(VS.80).aspx</a>)</p>
<p>but this did not work for me.</p>
<p>So I had a poke about in <strong>Tools|Options</strong> and found that on the <strong>Text Editor|All Languages</strong> page the three checkboxes for <strong>Statement Completion</strong> were showing neither empty nor checked but a fully coloured box - which usually means an unknown setting. So I set these all to checked (a tick) and my Intellisense started working.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD6 demo - better late than never</title>
      <link>https://blog.richardfennell.net/posts/ddd6-demo-better-late-than-never/</link>
      <pubDate>Wed, 28 Nov 2007 19:29:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd6-demo-better-late-than-never/</guid>
      <description>&lt;p&gt;For those of you who attended my DDD6 session on Scrum you will remember that I had to cut my demo of eScrum short due to taking too many questions as I went along.&lt;/p&gt;
&lt;p&gt;So I have recorded &lt;a href=&#34;http://www.blackmarble.co.uk/ConferencePapers/DDD6%20Demo%20-%20EScum%20ScreenCast.wmv&#34;&gt;what I intended to show as a screencast&lt;/a&gt;, enjoy.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For those of you who attended my DDD6 session on Scrum you will remember that I had to cut my demo of eScrum short due to taking too many questions as I went along.</p>
<p>So I have recorded <a href="http://www.blackmarble.co.uk/ConferencePapers/DDD6%20Demo%20-%20EScum%20ScreenCast.wmv">what I intended to show as a screencast</a>, enjoy.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Virtual Server and MAC addresses</title>
      <link>https://blog.richardfennell.net/posts/virtual-server-and-mac-addresses/</link>
      <pubDate>Tue, 27 Nov 2007 12:18:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/virtual-server-and-mac-addresses/</guid>
      <description>&lt;p&gt;Today I had to do some work on an old VB6 system, a development environment I do not have on my Vista laptop.&lt;/p&gt;
&lt;p&gt;So I copied a Virtual PC image I had with most of the tools I needed and ran it on one of our Virtual Servers. As this VPC needed to run at the same time as the VPC I copied it from, I ran &lt;a href=&#34;http://www.microsoft.com/technet/sysinternals/Utilities/NewSid.mspx&#34;&gt;NewSid&lt;/a&gt; to change the SID and the PC names.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today I had to do some work on an old VB6 system, a development environment I do not have on my Vista laptop.</p>
<p>So I copied a Virtual PC image I had with most of the tools I needed and ran it on one of our Virtual Servers. As this VPC needed to run at the same time as the VPC I copied it from, I ran <a href="http://www.microsoft.com/technet/sysinternals/Utilities/NewSid.mspx">NewSid</a> to change the SID and the PC names.</p>
<p>All seemed good until I tried to use the network; both within our LAN and to the Internet it was very intermittent, but I saw no errors.</p>
<p>To cut a long story short the problem was I had two VPC images with the same MAC address. Once I stopped the new VPC, changed the MAC address and restarted it all was fine.</p>
<p>So the technical tip is - if you copy a VPC image you need to run NewSID and manually alter the MAC address to avoid network clashes.</p>
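As an aside, the replacement MAC only needs to differ from the original image's address while staying plausible for a virtual machine. A quick illustrative sketch (in Python for brevity, not something from the original setup; the 00-03-FF prefix is the Microsoft OUI that Virtual PC/Virtual Server generated addresses use) is to keep the vendor prefix and randomise the last three octets:

```python
import random

def new_vpc_mac(seed=None):
    """Generate a candidate MAC for a copied Virtual PC image.

    Keeps the 00-03-FF Microsoft OUI in the first three octets and
    randomises the last three, which is enough to avoid clashing
    with the image it was copied from.
    """
    rnd = random.Random(seed)
    tail = [rnd.randint(0, 255) for _ in range(3)]
    return "00-03-FF-" + "-".join("%02X" % o for o in tail)

print(new_vpc_mac())  # e.g. 00-03-FF-1A-2B-3C
```

You would then paste the generated address into the virtual machine's .vmc network settings before restarting it.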
]]></content:encoded>
    </item>
    <item>
      <title>The day after DDD6</title>
      <link>https://blog.richardfennell.net/posts/the-day-after-ddd6/</link>
      <pubDate>Sun, 25 Nov 2007 21:26:08 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/the-day-after-ddd6/</guid>
      <description>&lt;p&gt;Got back from DDD6 late last night after our 5am start. After what seemed a short night&amp;rsquo;s sleep I got up to do the &lt;a href=&#34;http://www.helptheaged.org.uk/en-gb/HowYouCanHelp/Events/Running/LeedsAbbeyDash/default.htm&#34;&gt;Abbey Dash&lt;/a&gt; 10K in Leeds this morning with 6000 other runners, posting a 47 minute time, which I suppose is OK given the complete lack of training of late due to conferences, and the fact I was hampered by tripping over some wire left in the road near the start and cutting my knee open!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Got back from DDD6 late last night after our 5am start. After what seemed a short night&rsquo;s sleep I got up to do the <a href="http://www.helptheaged.org.uk/en-gb/HowYouCanHelp/Events/Running/LeedsAbbeyDash/default.htm">Abbey Dash</a> 10K in Leeds this morning with 6000 other runners, posting a 47 minute time, which I suppose is OK given the complete lack of training of late due to conferences, and the fact I was hampered by tripping over some wire left in the road near the start and cutting my knee open!</p>
<p>This morning&rsquo;s exertions gave me some time to reflect on the previous day&rsquo;s events. Firstly my session on Scrum: it seemed to go OK, other than that I overran a little and had to rush through the demo of eScrum. You can get the <a href="http://www.blackmarble.co.uk/ConferencePapers/DDD6%20Presentation%20-%20An%20Introduction%20to%20Scrum.ppt">slides from the Black Marble web site</a> (and soon on the DDD site I guess). I have added some screen shots and notes from the shortened demo. I was asked a couple of questions that I said I would post information on:</p>
<ul>
<li>The book on software craftsmanship I could not remember the title of was <a href="http://www.amazon.co.uk/Software-Craftsmanship-Imperative-Pete-McBreen/dp/0201733862/ref=pd_bbs_sr_1?ie=UTF8&amp;s=books&amp;qid=1196022758&amp;sr=8-1">&lsquo;Software Craftsmanship: The New Imperative&rsquo; by Pete McBreen</a>. I also wrote a <a href="https://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2005/12/10/232.aspx">post on the subject</a> a while ago</li>
<li>There are formal qualifications in Scrum for both the Scrum Master and Product Owner. I did my Certified Scrum Master course with <a href="http://www.conchango.com/Web/Public/Content/Events/ThinkTank.aspx">Conchango</a>, the trainer being <a href="http://www.mountaingoatsoftware.com/">Mike Cohn</a>, which I can heartily recommend.</li>
</ul>
<p>Unusually, I actually managed to get to the three other sessions at this DDD, a rarity when I am speaking. They were all excellent:</p>
<ul>
<li><strong>My favourite Patterns</strong> <em>with</em> <strong><a href="http://www.garyshort.org/">Gary Short</a> -</strong> the clearest session on patterns I have seen.</li>
<li><strong>Why do I need an Inversion of Control Container?</strong> with <strong><a href="http://mikehadlow.blogspot.com/">Mike Hadlow</a> -</strong> A nice follow up to the patterns session giving a great real world way to take advantage of Inversion of Control using the <a href="http://www.castleproject.org/container/index.html">Castle Windsor Container</a> </li>
<li><strong>Testing Your Applications With MbUnit Gallio</strong> with <strong><a href="http://blog.benhall.me.uk">Ben Hall</a>  -</strong> I have been using an unholy mixture of nUnit and MSTest, after this session I have to take a long hard look at MbUnit.</li>
</ul>
<p>Maybe it is the trend of 2007 or a sign the industry is maturing that all three of the conferences I have been to recently have focused on best practice. I consider this a really good sign.</p>
<p>Anyway another great DDD, thanks to the committee for all their work putting it on. Looking forward to the growing franchise in 2008.</p>
<p>Technorati Tags: <a href="http://technorati.com/tags/ddd6">ddd6</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Interesting change to TFS licensing</title>
      <link>https://blog.richardfennell.net/posts/interesting-change-to-tfs-licensing/</link>
      <pubDate>Fri, 23 Nov 2007 21:16:16 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/interesting-change-to-tfs-licensing/</guid>
      <description>&lt;p&gt;Just seen a post on &lt;a href=&#34;http://blogs.msdn.com/bharry/archive/2007/11/23/tfs-licensing-change-for-tfs-2008.aspx&#34;&gt;Brian Harry&amp;rsquo;s blog&lt;/a&gt; that the license has been changed for TFS. You no longer need a CAL for all users who connect to the TFS server: a special class of user has been created, those who can create work items but do little else, and you now have an unlimited number of these as standard.&lt;/p&gt;
&lt;p&gt;Why is this good? It means you can have anyone in a company connect to the TFS server to log bugs or change requests. Previously to do this meant buying a lot of CALs, or very naughtily ignoring the license, so this is a good, sensible move.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just seen a post on <a href="http://blogs.msdn.com/bharry/archive/2007/11/23/tfs-licensing-change-for-tfs-2008.aspx">Brian Harry&rsquo;s blog</a> that the license has been changed for TFS. You no longer need a CAL for all users who connect to the TFS server: a special class of user has been created, those who can create work items but do little else, and you now have an unlimited number of these as standard.</p>
<p>Why is this good? It means you can have anyone in a company connect to the TFS server to log bugs or change requests. Previously to do this meant buying a lot of CALs, or very naughtily ignoring the license, so this is a good, sensible move.</p>
]]></content:encoded>
    </item>
    <item>
      <title>System Security</title>
      <link>https://blog.richardfennell.net/posts/system-security/</link>
      <pubDate>Fri, 23 Nov 2007 11:04:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/system-security/</guid>
      <description>&lt;p&gt;While I was presenting yesterday at the second of &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events&#34;&gt;Black Marble&amp;rsquo;s events&lt;/a&gt; on Windows 2008 to a group of IT professionals, I suggested that they look at &lt;a href=&#34;http://www.amazon.co.uk/Writing-Secure-Code-M-Howard/dp/0735615888/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1195815272&amp;amp;sr=1-1&#34; title=&#34;http://www.amazon.co.uk/Writing-Secure-Code-M-Howard/dp/0735615888/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1195815272&amp;amp;sr=1-1&#34;&gt;&amp;lsquo;Writing Secure Code&amp;rsquo; by Michael Howard and David LeBlanc&lt;/a&gt; to get a good view of security in depth and risk analysis. On second thoughts, this book might be a bit too developer focused. I think Michael Howard&amp;rsquo;s new book with Steve Lipner &lt;a href=&#34;http://www.amazon.com/Security-Development-Lifecycle-Michael-Howard/dp/0735622140&#34; title=&#34;http://www.amazon.com/Security-Development-Lifecycle-Michael-Howard/dp/0735622140&#34;&gt;&amp;ldquo;The Security Development Lifecycle&amp;rdquo;&lt;/a&gt; might be a more appropriate read (though it does not seem to be available on Amazon UK yet, should be there soon).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>While I was presenting yesterday at the second of <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events">Black Marble&rsquo;s events</a> on Windows 2008 to a group of IT professionals, I suggested that they look at <a href="http://www.amazon.co.uk/Writing-Secure-Code-M-Howard/dp/0735615888/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1195815272&amp;sr=1-1" title="http://www.amazon.co.uk/Writing-Secure-Code-M-Howard/dp/0735615888/ref=sr_1_1?ie=UTF8&amp;s=books&amp;qid=1195815272&amp;sr=1-1">&lsquo;Writing Secure Code&rsquo; by Michael Howard and David LeBlanc</a> to get a good view of security in depth and risk analysis. On second thoughts, this book might be a bit too developer focused. I think Michael Howard&rsquo;s new book with Steve Lipner <a href="http://www.amazon.com/Security-Development-Lifecycle-Michael-Howard/dp/0735622140" title="http://www.amazon.com/Security-Development-Lifecycle-Michael-Howard/dp/0735622140">&ldquo;The Security Development Lifecycle&rdquo;</a> might be a more appropriate read (though it does not seem to be available on Amazon UK yet, should be there soon).</p>
]]></content:encoded>
    </item>
    <item>
      <title>Fun with a DDD6 demo</title>
      <link>https://blog.richardfennell.net/posts/fun-with-a-ddd6-demo/</link>
      <pubDate>Sat, 17 Nov 2007 22:45:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/fun-with-a-ddd6-demo/</guid>
      <description>&lt;p&gt;I have been working on my &lt;a href=&#34;http://www.developerday.co.uk/ddd/agendaddd6lineup.asp&#34;&gt;DDD6&lt;/a&gt; demo; I intend to show &lt;a href=&#34;http://www.microsoft.com/downloads/details.aspx?FamilyID=55A4BDE6-10A7-4C41-9938-F388C1ED15E9&amp;amp;displaylang=en&#34;&gt;eScrum&lt;/a&gt; at the end of my session on Scrum. I thought I would use the VPC I had from DDD5; this was based on the TFS Orcas Beta1 and had all the tools I wanted configured. To get some more realistic data in the reports I wanted to leave the TFS server running for a week and on a daily basis update the work items as if the project was progressing.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been working on my <a href="http://www.developerday.co.uk/ddd/agendaddd6lineup.asp">DDD6</a> demo; I intend to show <a href="http://www.microsoft.com/downloads/details.aspx?FamilyID=55A4BDE6-10A7-4C41-9938-F388C1ED15E9&amp;displaylang=en">eScrum</a> at the end of my session on Scrum. I thought I would use the VPC I had from DDD5; this was based on the TFS Orcas Beta1 and had all the tools I wanted configured. To get some more realistic data in the reports I wanted to leave the TFS server running for a week and on a daily basis update the work items as if the project was progressing.</p>
<p>The problem was the underlying operating system evaluation license expired at the start of November. So I tried:</p>
<ul>
<li>an in-place upgrade from a Windows 2K3R2 disk with a valid key; this seemed to work, but when I tried to run any ASP.NET it failed - basically all the rights seemed to be lost</li>
<li>forcing the system to ask for a new key using the <a href="http://support.microsoft.com/kb/328874">KB328874</a> procedure - this does not work on Windows 2003, just XP it seems, though some web sites claim it does</li>
<li>a SYSPREP on the server; this allowed me to enter a new key, but after 1 hour I still got the evaluation period expired message - it seems you have to use new media to remove the evaluation time bomb</li>
</ul>
<p>So I gave up on that and tried the Visual Studio 2008 TFS Beta2 VPC download. Now installing eScrum on this is not really supported. You also have to install the <a href="http://www.codeplex.com/CodePlex/Wiki/View.aspx?title=Obtaining%20the%20Team%20Explorer%20Client">2005 Team Foundation Client</a> and some Ajax bits, as well as manually replace the SharePoint template using the one provided by <a href="http://www.sharepointblogs.com/johnwpowell/archive/2007/09/29/how-to-install-microsoft-escrum-1-0-process-template-on-tfs-2008-beta-2-quot-orcas-quot.aspx">John Powell</a>. All seemed good until I tried to add a product backlog item to the sprint on the sprint details page in eScrum, where I got loads of JavaScript errors - my guess is something was not registered right. Interestingly my &lsquo;live&rsquo; eScrum, which has been in-place upgraded to Beta2, seems to work OK. Basically I gave up on this VPC; I will wait for a release of eScrum that supports VS2008 for new installs.</p>
<p>So this left me back at a fully working Beta1 VPC that was on an expired OS, or the option to install a completely new system using VS2005 with &lsquo;real&rsquo; licenses, from scratch.</p>
<p>But then I thought about what &lsquo;expired&rsquo; means for a Windows 2003 evaluation install; it means after 1hr you get a message that the server has expired, and an hour later it restarts. Now this is not much use for any real application, but it does all I needed, so I could have saved a good deal of messing around over the past couple of days!</p>
<p>I was still a little worried that the bi-hourly resets might mean the timed jobs that keep the TFS data warehouse up to date would not run, so I also installed the tool by <a href="http://blogs.msdn.com/ericlee/archive/2006/07/09/660993.aspx">Eric Lee</a> to allow the update process to be triggered whenever I needed it.</p>
<p>So hopefully I now have a system I can build a reasonable demo data set on over days up to DDD6.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Updating an Ajax Application</title>
      <link>https://blog.richardfennell.net/posts/updating-an-ajax-application/</link>
      <pubDate>Mon, 12 Nov 2007 21:17:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/updating-an-ajax-application/</guid>
      <description>&lt;p&gt;Our home grown work tracking system has been through many versions over the years. The current incarnation was using a pre-release version of the Microsoft AJAX extensions. Now this caused a problem when we moved the ASP.NET application to a newly rebuilt IIS server with the 1.0 release version of AJAX.&lt;/p&gt;
&lt;p&gt;We were getting errors that the correct System.Web.Extensions DLL could not be found, as you would expect. I rebuilt the application using Visual Studio 2005 with the AJAX 1.0 Release installed, and published this build. I tried to load the application and the browser went into a tight loop, not what I expected. I checked the server event log and found the issue was that in the published web.config file there was a reference to a Crystal Reports DLL (which we have not used for years). Once I removed this reference from the web.config the site worked perfectly.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Our home grown work tracking system has been through many versions over the years. The current incarnation was using a pre-release version of the Microsoft AJAX extensions. Now this caused a problem when we moved the ASP.NET application to a newly rebuilt IIS server with the 1.0 release version of AJAX.</p>
<p>We were getting errors that the correct System.Web.Extensions DLL could not be found, as you would expect. I rebuilt the application using Visual Studio 2005 with the AJAX 1.0 Release installed, and published this build. I tried to load the application and the browser went into a tight loop, not what I expected. I checked the server event log and found the issue was that in the published web.config file there was a reference to a Crystal Reports DLL (which we have not used for years). Once I removed this reference from the web.config the site worked perfectly.</p>
<p>So the tip: look out for old, long-forgotten assembly declarations in ASP.NET applications. If you change a technology (as we did from Crystal Reports to Microsoft Reporting Services) make sure you remove the references, even if the old DLLs are still on your servers. They will tend to bite you on an upgrade.</p>
<p><strong>Addendum</strong> (written a couple of days later)</p>
<p>On reading this you might have thought &lsquo;he is not using a very good code release model&rsquo;, but I actually have been using the Visual Studio Publish tool. I publish to a local directory then upload these files via FTP to the remote hosting site, replacing all files on a software publish. So in theory if I have removed a reference to an old DLL in Visual Studio then it should also be removed in the re-published site.</p>
<p>Now this appears not to be the case: I can find no reference to Crystal Reports in my solution, but on each publish it reappears. How strange; I will post again if I find out why.</p>
<p><strong>Addendum 2</strong></p>
<p>Found it, a bit of a &lsquo;user too stupid&rsquo; error really! The problem was not a stray reference but an extra &lt;httpHandlers&gt; entry. This goes some way to explaining why we did not get a &lsquo;could not load assembly&rsquo; error. As the page could not load the correct httpHandler, it tried to go to the error page, which in turn could not load the handler, etc. etc. etc.</p>
<p>Again it shows you cannot always trust an IDE to clean up a project&rsquo;s files if you change technology.</p>
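<p>For illustration, the sort of stale registration to hunt for looks something like this (a hypothetical web.config fragment; the exact Crystal Reports handler entry varies by version, so treat the names here as an example rather than the real thing):</p>

```xml
<!-- Hypothetical web.config fragment: a left-over handler registration.
     If CrystalDecisions.Web no longer ships with the site, every request
     routed through this handler fails to load the assembly - including the
     error page itself, which is how a tight redirect loop can appear. -->
<configuration>
  <system.web>
    <httpHandlers>
      <add verb="GET" path="CrystalImageHandler.aspx"
           type="CrystalDecisions.Web.CrystalImageHandler, CrystalDecisions.Web" />
    </httpHandlers>
  </system.web>
</configuration>
```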
]]></content:encoded>
    </item>
    <item>
      <title>When you think it cannot get worse...</title>
      <link>https://blog.richardfennell.net/posts/when-you-think-it-cannot-get-worse/</link>
      <pubDate>Sat, 10 Nov 2007 11:35:48 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/when-you-think-it-cannot-get-worse/</guid>
      <description>&lt;p&gt;&amp;hellip; you end up presenting at TechEd.&lt;/p&gt;
&lt;p&gt;Yesterday was fun (of a sort): I ended up doing the demo section of the &lt;a href=&#34;http://msdn2.microsoft.com/en-us/library/bb931189.aspx&#34;&gt;ESB Guidance&lt;/a&gt; session at TechEd. This session was scheduled to be done by &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/boss/archive/2007/10/11/teched-2007-developer-here-i-come.aspx&#34;&gt;Robert Hogg (Black Marble) and Ewan Fairweather (Microsoft)&lt;/a&gt; but Ewan had to fly home early unexpectedly on Friday morning, leaving somewhat of a gap.&lt;/p&gt;
&lt;p&gt;So you say, &amp;lsquo;that is not too bad, you just step in and do the prepared and scripted demo&amp;rsquo;. Well in a perfect world you would, but about 6 hours before our session was on, the formal release of ESB 1.0 was posted to the MSDN site. As our demo was based on a CTP build, and as we knew the final release was somewhat different, we thought it only right to at least show the new documentation and file structures. So a hectic few hours were had by all.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>&hellip; you end up presenting at TechEd.</p>
<p>Yesterday was fun (of a sort): I ended up doing the demo section of the <a href="http://msdn2.microsoft.com/en-us/library/bb931189.aspx">ESB Guidance</a> session at TechEd. This session was scheduled to be done by <a href="http://blogs.blackmarble.co.uk/blogs/boss/archive/2007/10/11/teched-2007-developer-here-i-come.aspx">Robert Hogg (Black Marble) and Ewan Fairweather (Microsoft)</a> but Ewan had to fly home early unexpectedly on Friday morning, leaving somewhat of a gap.</p>
<p>So you say, &lsquo;that is not too bad, you just step in and do the prepared and scripted demo&rsquo;. Well in a perfect world you would, but about 6 hours before our session was on, the formal release of ESB 1.0 was posted to the MSDN site. As our demo was based on a CTP build, and as we knew the final release was somewhat different, we thought it only right to at least show the new documentation and file structures. So a hectic few hours were had by all.</p>
<p>I hope anyone who attended the session got what they wanted out of it. I had to leave for my flight soon after the session so have heard no feedback other than from the people who came to chat at the end, who seemed happy. I am sure Robert will know more when he gets back; as his flight was later he had time to return to the Olympian heights of the speaker lounge to once more feast upon unicorn steaks and ambrosia (well, get coffee in a proper cup not a paper one at least).</p>
<p>As I made the mistake of not changing my TechEd speaker&rsquo;s shirt before the flight home, not realizing the shirt made me look like a member of EasyJet staff, my outstanding question is - is there any future event in my life where the lovely grey/blue with orange trim TechEd speaker&rsquo;s shirt is appropriate wear?</p>
]]></content:encoded>
    </item>
    <item>
      <title>TechEd Barcelona - update</title>
      <link>https://blog.richardfennell.net/posts/teched-barcelona-update/</link>
      <pubDate>Wed, 07 Nov 2007 10:04:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/teched-barcelona-update/</guid>
      <description>&lt;p&gt;I have not been blogging much from here, have I? It is not that the sessions are uninteresting, but no single item has given me a huge urge to write.&lt;/p&gt;
&lt;p&gt;As I said in my last post I think this is a conference of best practice ideas and as such you tend to pick up a useful nugget here and there which you store away for future use. This is particularly relevant as at present I am reviewing our engineering process to improve our software development life cycle.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have not been blogging much from here, have I? It is not that the sessions are uninteresting, but no single item has given me a huge urge to write.</p>
<p>As I said in my last post I think this is a conference of best practice ideas and as such you tend to pick up a useful nugget here and there which you store away for future use. This is particularly relevant as at present I am reviewing our engineering process to improve our software development life cycle.</p>
<p>Like many companies we use a variety of tools beyond Visual Studio such as <a href="http://cruisecontrol.sourceforge.net/">CruiseControl</a>, <a href="http://www.nunit.org/">nUnit</a> and our own home grown work tracking system. I have to consider when it is advantageous to swap these for the new features in Visual Studio Team System 2008. Being pragmatic, this is always going to be a slow migration; there is little point investing time in moving an old project that is barely still under maintenance to a new system. In fact I have chosen to only move our active projects from our old SourceSafe based system to TFS at a major release point, e.g. V1 to V2, snapshotting it at this point and not bothering to bring over all the change history.</p>
<p>The tools around the edge are another question. If you, as we do, have an investment in nUnit and CruiseControl for projects, is there any good case to rework everything to MSTEST and TFS Build? In the long term I think the answer is yes, to get a unified end-to-end solution, but it is hard to justify the time to do an &lsquo;instant&rsquo; swap over, so again it will be a slow move. Especially when you can use a combined system, e.g. keep old tests in nUnit and write new ones in MSTEST, pulling it all together with CruiseControl, which can happily access the <a href="http://www.codeplex.com/TFSCCNetPlugin/Wiki/View.aspx">TFS SCC</a>, build using MSBUILD and run all the testing frameworks.</p>
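<p>As a sketch of what such a combined system can look like, here is a hypothetical ccnet.config fragment; the element name for the TFS source control plugin, the project names and all the paths are assumptions, so check them against the documentation of the plugin and tools you actually have installed:</p>

```xml
<!-- Hypothetical CruiseControl.NET project: source from TFS, build with
     MSBuild, then run both the legacy nUnit suite and the new MSTest suite. -->
<cruisecontrol>
  <project name="MixedTestingDemo">
    <!-- source control via the CodePlex TFS plugin; type name assumed -->
    <sourcecontrol type="vsts">
      <server>http://tfsserver:8080</server>
      <project>$/MixedTestingDemo</project>
    </sourcecontrol>
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <projectFile>MixedTestingDemo.sln</projectFile>
        <targets>Build</targets>
      </msbuild>
      <!-- legacy tests stay in nUnit -->
      <exec>
        <executable>C:\Program Files\NUnit\bin\nunit-console.exe</executable>
        <buildArgs>LegacyTests.dll /xml=legacy-results.xml</buildArgs>
      </exec>
      <!-- new tests written with MSTest -->
      <exec>
        <executable>C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\mstest.exe</executable>
        <buildArgs>/testcontainer:NewTests.dll /resultsfile:new-results.trx</buildArgs>
      </exec>
    </tasks>
  </project>
</cruisecontrol>
```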
<p>Anyway, <a href="http://weblogs.asp.net/rosherove/">Roy Osherove</a> is tuning up his guitar for a session on testing, so time to go&hellip;</p>
]]></content:encoded>
    </item>
    <item>
      <title>No sizzle and not much sausage</title>
      <link>https://blog.richardfennell.net/posts/no-sizzle-and-not-much-sausage/</link>
      <pubDate>Mon, 05 Nov 2007 14:59:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/no-sizzle-and-not-much-sausage/</guid>
      <description>&lt;p&gt;The keynote at TechEd was as expected: we all knew about the impending release of VS2008, and there is still no fixed date (so no sausage there), and nothing was announced product-wise that was not already in the blogosphere (so no real sizzle).&lt;/p&gt;
&lt;p&gt;I think this is going to be a conference on delivering on last year&amp;rsquo;s promises; how to get the best from the tools and technology announced last year, now that they are really production ready and the early adopters have had a year to play with them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The keynote at TechEd was as expected: we all knew about the impending release of VS2008, and there is still no fixed date (so no sausage there), and nothing was announced product-wise that was not already in the blogosphere (so no real sizzle).</p>
<p>I think this is going to be a conference on delivering on last year&rsquo;s promises; how to get the best from the tools and technology announced last year, now that they are really production ready and the early adopters have had a year to play with them.</p>
<p>My suggestion to all attendees - check out the sessions with real-world experience. Due to the CTP and beta programs, there are some real experts out there on these products, and many of them are presenting here (or are in the audience at the numerous interactive sessions).</p>
]]></content:encoded>
    </item>
    <item>
      <title>How to create a community</title>
      <link>https://blog.richardfennell.net/posts/how-to-create-a-community/</link>
      <pubDate>Mon, 05 Nov 2007 13:00:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/how-to-create-a-community/</guid>
      <description>&lt;p&gt;Just come out of an interesting set of round table events for &amp;lsquo;community influencers&amp;rsquo; at TechEd. These were attended by people who are active in both the online and face-to-face communities from all round Europe (and Australia - such is the reach of TechEd Europe!).&lt;/p&gt;
&lt;p&gt;In the sessions I went to the general discussion was on the point I &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/10/18/a-long-day-of-seminars.aspx&#34;&gt;posted&lt;/a&gt; about a few weeks ago, and that had been a running conversation on a number of UK blogs. It was refreshing (or sad?) to find that the problems we have seen at home over attendance are the same around Europe:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just come out of an interesting set of round table events for &lsquo;community influencers&rsquo; at TechEd. These were attended by people who are active in both the online and face-to-face communities from all round Europe (and Australia - such is the reach of TechEd Europe!).</p>
<p>In the sessions I went to the general discussion was on the point I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/10/18/a-long-day-of-seminars.aspx">posted</a> about a few weeks ago, and that had been a running conversation on a number of UK blogs. It was refreshing (or sad?) to find that the problems we have seen at home over attendance are the same around Europe:</p>
<ul>
<li>It is hard to get people to attend events in the evening</li>
<li>It is hard to convert attendees to active community members</li>
<li>A very small percentage of people who view online forums contribute.</li>
</ul>
<p>As you would expect there is no single answer, and for most ideas there was someone to say &lsquo;we tried that and it did not work for us&rsquo;. However, it did come out that things that fail for one group work for others - there is no silver bullet. So try anything and everything to get people engaged.</p>
<p>In general it was felt &lsquo;marketing presentations&rsquo; do not draw people in, and neither do events that cover what can be found on-line. Most people agreed that events, maybe in a panel or round table format, that provide real world experience or &lsquo;war stories&rsquo; as I call them are often the ones that get the most interest. Of course it helps if the speaker presents in an engaging style, but this matters less if you can get the whole room involved.</p>
<p>From my experience some of the most interesting community events I have been to are technology agnostic and focus on general development and project management issues, notably in a group workshop style, such as those at the <a href="http://en.wikipedia.org/wiki/Graffiti_%28Palm_OS%29">Extreme Programming club</a>; but even with this interesting content this group has struggled for numbers. As I said before, the fact that I like technology agnostic groups (as a .NET developer I know there is much I can learn from Java developers, and vice-versa) does not mean that this is right for all.</p>
<p>There was an underlying discussion of how many people in the industry were looking to the community as a means to professional development, as opposed to IT being just a job that ends at 5pm. Moving the latter group into the former is hard - can you engage people who have lost the &lsquo;joy for their career&rsquo;?</p>
<p>I am sure this pre conference event will generate some online activity, keep an eye out for it.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Waiting to watch paint start to dry</title>
      <link>https://blog.richardfennell.net/posts/waiting-to-watch-paint-start-to-dry/</link>
      <pubDate>Mon, 05 Nov 2007 12:45:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/waiting-to-watch-paint-start-to-dry/</guid>
      <description>&lt;p&gt;I am now sitting in the keynote waiting for the session to start - they have graffiti artists on the stage - I wonder if it is a homage to that great &lt;a href=&#34;http://en.wikipedia.org/wiki/Graffiti_%28Palm_OS%29&#34;&gt;Palm Pilot handwriting text entry language&lt;/a&gt;?&lt;/p&gt;
&lt;p&gt;Or is it my chance to say that &amp;lsquo;at this keynote I could actually watch paint dry&amp;rsquo;?&lt;/p&gt;
&lt;p&gt;Oh&amp;hellip; the paint fumes are starting to get to me&amp;hellip;&amp;hellip;&amp;hellip;&amp;hellip;&amp;hellip;&amp;hellip;&amp;hellip;&amp;hellip;&amp;hellip;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am now sitting in the keynote waiting for the session to start - they have graffiti artists on the stage - I wonder if it is a homage to that great <a href="http://en.wikipedia.org/wiki/Graffiti_%28Palm_OS%29">Palm Pilot handwriting text entry language</a>?</p>
<p>Or is it my chance to say that &lsquo;at this keynote I could actually watch paint dry&rsquo;?</p>
<p>Oh&hellip; the paint fumes are starting to get to me&hellip;&hellip;&hellip;&hellip;&hellip;&hellip;&hellip;&hellip;&hellip;.</p>
]]></content:encoded>
    </item>
    <item>
      <title>If it is Monday it must be Barcelona</title>
      <link>https://blog.richardfennell.net/posts/if-it-is-monday-it-must-be-barcelona/</link>
      <pubDate>Mon, 05 Nov 2007 09:09:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/if-it-is-monday-it-must-be-barcelona/</guid>
      <description>&lt;p&gt;What an awful journey I have had since my last post. The trip back from the USA was fine; the problems started getting from the UK to Spain. Basically fog at Liverpool stopped all flights, so we had to change airline and airport to get here in time for the TechEd conference. Should have been here by noon on Sunday and actually got here nearer 9pm, so no sight-seeing for me.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>What an awful journey I have had since my last post. The trip back from the USA was fine; the problems started getting from the UK to Spain. Basically fog at Liverpool stopped all flights, so we had to change airline and airport to get here in time for the TechEd conference. Should have been here by noon on Sunday and actually got here nearer 9pm, so no sight-seeing for me.</p>
<p>Anyway registered and at the conference venue now, so let it begin&hellip;..</p>
]]></content:encoded>
    </item>
    <item>
      <title>Last day at SOA - day 4</title>
      <link>https://blog.richardfennell.net/posts/last-day-at-soa-day-4/</link>
      <pubDate>Fri, 02 Nov 2007 22:10:17 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/last-day-at-soa-day-4/</guid>
      <description>&lt;p&gt;Seems quiet here today at the SOA conference, fewer people about. I wonder how many have sneaked off early for flights? It does seem to be mostly Europeans left, judging by the languages I have heard.&lt;/p&gt;
&lt;p&gt;This does not mean parking was easy this morning, as in the same conference center Bill Gates and Bill Clinton are speaking at an MSPAC event. Now a bit of google&amp;rsquo;ing shows MSPAC is either:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Seems quiet here today at the SOA conference, fewer people about. I wonder how many have sneaked off early for flights? It does seem to be mostly Europeans left, judging by the languages I have heard.</p>
<p>This does not mean parking was easy this morning, as in the same conference center Bill Gates and Bill Clinton are speaking at an MSPAC event. Now a bit of google&rsquo;ing shows MSPAC is either:</p>
<ol>
<li><a href="http://www.microsoft.com/about/corporatecitizenship/citizenship/businesspractices/politicaldonations.mspx">Microsoft Political Action Committee</a> </li>
<li>Resources for <strong>Ms. Pac</strong>-Man (<a href="http://www.mspac.com">www.mspac.com</a>)</li>
<li><a href="http://ctmaple.org/whatis.htm">Maple Syrup Producers Association of Connecticut</a></li>
</ol>
<p>I really hope it is one of the last two, but I doubt it.</p>
<p>I always think the best things you can bring away from conferences such as this are best practices and gotchas. To that end I saw a great session by <a href="http://www.biztalkgurus.com/blogs/biztalk/">Stephen Thomas</a> on best practice with orchestrations; you can find his samples and slides on <a href="http://www.biztalkgurus.com/soa2007/">www.biztalkgurus.com</a></p>
<p>So that&rsquo;s it for the SOA conference, off to the airport now, about 12 hours at home in the UK then off to TechEd in Barcelona. Oh what a jet set life I lead.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Channel 9 Interview with Robby Ingebretsen</title>
      <link>https://blog.richardfennell.net/posts/channel-9-interview-with-robby-ingebretsen/</link>
      <pubDate>Fri, 02 Nov 2007 16:23:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/channel-9-interview-with-robby-ingebretsen/</guid>
      <description>&lt;p&gt;When at &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/09/12/mix-07-day-2.aspx&#34;&gt;Mix07 I posted about Robby Ingebretsen&amp;rsquo;s excellent session&lt;/a&gt; on the relationship between developers and designers in the WPF/Silverlight world.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;http://channel9.msdn.com/ShowPost.aspx?PostID=353035#353035&#34;&gt;Channel9 has just posted an interview&lt;/a&gt; with him on the same subject, well worth looking at if this is the type of project you are involved in.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>When at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/09/12/mix-07-day-2.aspx">Mix07 I posted about Robby Ingebretsen&rsquo;s excellent session</a> on the relationship between developers and designers in the WPF/Silverlight world.</p>
<p><a href="http://channel9.msdn.com/ShowPost.aspx?PostID=353035#353035">Channel9 has just posted an interview</a> with him on the same subject, well worth looking at if this is the type of project you are involved in.</p>
]]></content:encoded>
    </item>
    <item>
      <title>A trip out to a Seattle Landmark</title>
      <link>https://blog.richardfennell.net/posts/a-trip-out-to-a-seattle-landmark/</link>
      <pubDate>Fri, 02 Nov 2007 05:46:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-trip-out-to-a-seattle-landmark/</guid>
      <description>&lt;p&gt;There is more to Seattle than the &lt;a href=&#34;http://www.spaceneedle.com/&#34;&gt;Space Needle&lt;/a&gt; and flying fish at &lt;a href=&#34;http://www.pikeplacefish.com/&#34;&gt;Pike Place Market&lt;/a&gt;. At the end of the SOA conference today I persuaded Robert to embark on a tourist adventure, to brave the Seattle traffic and go to &lt;a href=&#34;http://www.mcphee.com/&#34;&gt;Archie McPhee&lt;/a&gt; &amp;ldquo;&lt;em&gt;Outfitters of Popular Culture&lt;/em&gt;&amp;rdquo;. It should have been a 20 minute drive&amp;hellip; an hour and a half of nose-to-tail traffic later we arrived.&lt;/p&gt;
&lt;p&gt;Now I had come across Archie McPhee years ago when my ex wife had been on the &lt;a href=&#34;http://www.clarionwest.org/website/&#34;&gt;Clarion West SF writers camp&lt;/a&gt;, and she came back with the strangest plastic frog in a white fur rabbit suit; I always wonder what the person who made it in China thought of the west.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>There is more to Seattle than the <a href="http://www.spaceneedle.com/">Space Needle</a> and flying fish at <a href="http://www.pikeplacefish.com/">Pike Place Market</a>. At the end of the SOA conference today I persuaded Robert to embark on a tourist adventure, to brave the Seattle traffic and go to <a href="http://www.mcphee.com/">Archie McPhee</a> &ldquo;<em>Outfitters of Popular Culture</em>&rdquo;. It should have been a 20 minute drive&hellip; an hour and a half of nose-to-tail traffic later we arrived.</p>
<p>Now I had come across Archie McPhee years ago when my ex wife had been on the <a href="http://www.clarionwest.org/website/">Clarion West SF writers camp</a>, and she came back with the strangest plastic frog in a white fur rabbit suit; I always wonder what the person who made it in China thought of the west.</p>
<p>The <a href="http://www.mcphee.com/">Archie McPhee</a> web site is just full of strange and wondrous things such as the Horrified B-Movie Victims</p>
<p><a href="http://www.mcphee.com/items/11642.html"><img loading="lazy" src="http://www.mcphee.com/pixlarge/11642.jpg"></a></p>
<p>and the Sky Diving Sigmund Freud</p>
<p><a href="http://www.mcphee.com/items/11762.html"><img loading="lazy" src="http://www.mcphee.com/pixlarge/11762.jpg"></a></p>
<p>to name but two, so I had made a mental note to pop by if in the area.</p>
<p>Ballard, where they are located, is different to Bellevue, where we have been staying: less empty of people and not so bland, it has some character. When you get to McPhee&rsquo;s there are now two shops next door to each other, both full of stuff. The problem is that what seems a great thingamabob on the web can, close up, just look a bit cheap and plasticky (this is mostly because they are, by their nature, a bit cheap and plasticky).</p>
<p>So Robert and myself did not find anything major that we felt would add to the geek office theme for the Black Marble office, though the giant plastic tulips would have been a candidate had they been practical to move.</p>
<p>So now I can say I have &lsquo;experienced the landmark of Seattle&rsquo;.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Thoughts from the SOA conference - day 3</title>
      <link>https://blog.richardfennell.net/posts/thoughts-from-the-soa-conference-day-3/</link>
      <pubDate>Thu, 01 Nov 2007 23:55:03 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/thoughts-from-the-soa-conference-day-3/</guid>
      <description>&lt;p&gt;As with all conferences you tend to flag part way through: you start to think of the flight home and not having to sit in yet another session (no matter how interesting it sounds on paper). It starts to seem all sessions are either in a cold draft or a tropical atmosphere. At this point I have to say I am not looking forward to another conference next week at TechEd Barcelona; I could do with a holiday! Now I am sure some will say a conference is a holiday, but I rarely find them so; holidays do not involve PowerPoint (with maybe the exception of Triathlon training camps, but many people would say they are not holidays either).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As with all conferences you tend to flag part way through: you start to think of the flight home and not having to sit in yet another session (no matter how interesting it sounds on paper). It starts to seem all sessions are either in a cold draft or a tropical atmosphere. At this point I have to say I am not looking forward to another conference next week at TechEd Barcelona; I could do with a holiday! Now I am sure some will say a conference is a holiday, but I rarely find them so; holidays do not involve PowerPoint (with maybe the exception of Triathlon training camps, but many people would say they are not holidays either).</p>
<p>Today&rsquo;s sessions have given me much to think about in a diverse set of areas, but they could all be described as bringing robustness to SOA. Now this is often lumped under the term <a href="http://en.wikipedia.org/wiki/SOA_Governance">SOA Governance</a>, a hot topic at this event.</p>
<p>Arguably the most important session I saw for future work I will be doing was that by <a href="http://blogs.msdn.com/martywaz/">Marty Waznicky</a> on <a href="http://www.microsoft.com/biztalk/solutions/soa/esb.mspx">Microsoft ESB Guidance</a>, which provides patterns and practices style information and samples for best practice use of BizTalk. It in effect provides dashboards and exception catching tools (and much more) that deliver SOA Governance for BizTalk and potentially any other SOA implementation. The guidance pack he (bravely) used in the session was one that was only built this morning (and should be the release version), so you can expect to see it on MSDN next week. My only complaint over this session was the volume of information he tried to cram into 1hr; at least 2hrs were needed. It must have been one of the fastest sessions I have seen at any conference - I felt dazed by about half way through and am sure I missed stuff due to the pace.</p>
<p>If you want to know more about ESB and are going to TechEd Developer in Barcelona next week, why not look in on the session being done by <a href="http://blogs.blackmarble.co.uk/blogs/boss/archive/2007/10/11/teched-2007-developer-here-i-come.aspx">Robert Hogg (Black Marble) and Ewan Fairweather (Microsoft)</a>.</p>
<p>Running a very close second to the ESB session was the session on the <a href="http://msdn2.microsoft.com/en-us/library/aa480534.aspx">Web Service Software Factory</a> by <a href="http://blogs.msdn.com/donsmith/archive/2007/01/09/come-and-get-yer-service-factory.aspx">Don Smith</a>. Again this was a session using software built today; it is the V3 version of the patterns and practices guidance pack. It is tools like this that allow developers to build robust, supportable web service applications in VS2005 (expect to see the VS2008 version early next year). I would really advise any web service developers and SOA architects to look at this new and much improved release of the <a href="http://msdn2.microsoft.com/en-gb/practices/default.aspx">PnP</a> when it appears in the next few days.</p>
<p>A couple of general points to end this post on:</p>
<ol>
<li>If this is a business automation conference, why is the session feedback on paper and not in online forms?</li>
<li>Is there some problem with the attendees of this event which means they cannot find the mute/vibrate setting for their phones and laptops? Most sessions seem to have devices going off, but nobody here seems to do the usual British practice of silent tutting and a hard stare of disapproval. Is this an American thing (acceptance of devices going off), or are we all just getting blind to it in this connected broadband age?</li>
</ol>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft SOA Conference - day 2</title>
      <link>https://blog.richardfennell.net/posts/microsoft-soa-conference-day-2/</link>
      <pubDate>Thu, 01 Nov 2007 02:51:40 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-soa-conference-day-2/</guid>
      <description>&lt;p&gt;I have focused on the more developer-oriented end of SOA today. In the morning there were excellent sessions by &lt;a href=&#34;http://www.pluralsight.com/blogs/aaron/&#34;&gt;Aaron Skonnard&lt;/a&gt; and &lt;a href=&#34;http://www.pluralsight.com/blogs/matt/default.aspx&#34;&gt;Matt Milner&lt;/a&gt; on using WCF in BizTalk and best WF practices respectively; both provided an interesting set of gotchas to look out for. Check their blogs if this is an area you work in.&lt;/p&gt;
&lt;p&gt;In the afternoon I went to a session on the &lt;a href=&#34;http://servicesengine&#34;&gt;Microsoft Managed Service Engine&lt;/a&gt; (MSE), a set of tools that allows versioning of services using, in effect, a WCF-based proxy broker. Not a solution for every site, but in an ESB SOA world it could really save the day if you want an Agile development model, where you have to change WSDL contracts as a system evolves. My only worry would be how far it can go with the XML transforms to keep old clients working with new contracts. You still need a deprecation model, which might be an issue - but even if this is the case it is a potentially very useful tool I am sure I will be revisiting.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have focused on the more developer-oriented end of SOA today. In the morning there were excellent sessions by <a href="http://www.pluralsight.com/blogs/aaron/">Aaron Skonnard</a> and <a href="http://www.pluralsight.com/blogs/matt/default.aspx">Matt Milner</a> on using WCF in BizTalk and best WF practices respectively; both provided an interesting set of gotchas to look out for. Check their blogs if this is an area you work in.</p>
<p>In the afternoon I went to a session on the <a href="http://servicesengine">Microsoft Managed Service Engine</a> (MSE), a set of tools that allows versioning of services using, in effect, a WCF-based proxy broker. Not a solution for every site, but in an ESB SOA world it could really save the day if you want an Agile development model, where you have to change WSDL contracts as a system evolves. My only worry would be how far it can go with the XML transforms to keep old clients working with new contracts. You still need a deprecation model, which might be an issue - but even if this is the case it is a potentially very useful tool I am sure I will be revisiting.</p>
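<p>As a thought experiment on the transform idea, here is a minimal sketch of a contract-versioning shim. To be clear, this has nothing to do with the real MSE internals: the element names, the v1/v2 contract change, and the functions are all invented for illustration, and it is written in Python rather than as WCF configuration just to keep it short.</p>

```python
# Hypothetical sketch of the proxy-broker idea: old clients keep sending a
# v1 message, and a shim rewrites it into the v2 contract before it reaches
# the current service. All names here are invented for illustration.
import xml.etree.ElementTree as ET

V1_REQUEST = "<GetCustomer><Id>42</Id></GetCustomer>"

def upgrade_v1_to_v2(xml_text: str) -> str:
    """Rewrite a v1 request into the (invented) v2 contract: the root and
    <Id> elements were renamed, and a new mandatory <Detail> field gets a
    default value supplied by the shim."""
    v1 = ET.fromstring(xml_text)
    v2 = ET.Element("GetCustomerRequest")          # v2 renamed the root
    ET.SubElement(v2, "CustomerId").text = v1.findtext("Id")
    ET.SubElement(v2, "Detail").text = "summary"   # new field, defaulted
    return ET.tostring(v2, encoding="unicode")

def current_service(xml_text: str) -> str:
    """Stand-in for the real service, which only understands v2 requests."""
    req = ET.fromstring(xml_text)
    assert req.tag == "GetCustomerRequest"
    return f"<Customer id='{req.findtext('CustomerId')}'/>"

# The old client still works, unchanged, against the new contract.
print(current_service(upgrade_v1_to_v2(V1_REQUEST)))
```

<p>A rename and a defaulted field are the easy cases; the worry noted above is how far such transforms can go, since they cannot conjure up information the old contract never carried - hence the continuing need for a deprecation model.</p>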
<p>Next I went to a session on testing BizTalk, all based around the <a href="http://www.codeplex.com/bizunit">BizUnit</a> tool. Now this looks interesting, though the definition language looks a bit nasty (all XML). I think putting the &lsquo;<em>unit</em>&rsquo; term in the name is stretching a point if we define a unit test as being atomic; by its nature any BizTalk test tends towards an integration test. This said, it is still a potentially vital tool for any BizTalk project. I think <a href="http://blogs.msdn.com/darrenj/">Darren Jefford</a> may be repeating the session at TechEd in Barcelona next week; if testing is your thing go and see it, or see the write-up on his blog or <a href="http://www.amazon.co.uk/Professional-BizTalk-Server-Darren-Jefford/dp/0470046422">book</a>.</p>
<p>Finally I went to the session on <a href="http://biztalk.net/Default.aspx">BizTalk Services</a>, which gave some more detail on the stuff announced at the keynote. This has the potential to be very big, providing a unified inter-domain message routing service; you can envisage a world where IM such as Windows Messenger routes via such a service, let alone more major B2B services. Calling it BizTalk Services is typically confusing Microsoft naming, as it is not, as the name implies, a hosted version of any part of BizTalk Server! It looks like <a href="http://cardspace.netfx3.com/">CardSpace</a> will also figure highly in this world for federated security, though this does not answer questions over passing private business data across international borders and third-party servers. Probably the most telling part of this session was the Q&amp;A, in that at present there is no defined plan for a revenue or SLA model. This is still in its early CTP days - but it is open to the public and Microsoft seem keen for feedback, so <a href="http://biztalk.net/Default.aspx">have a look</a>.</p>
<p>A good day, but I think most of today&rsquo;s sessions suffered from being put into too short a slot; as I said yesterday, I think it would be a good idea to have 15 minutes longer on each session and 15 minutes less on each break. From comments from a number of speakers it seems that their presentations were written for longer slots. I am not sure if this is because other conferences have longer slots or because the material is normally presented in a classroom setting.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Buzzword bingo 2</title>
      <link>https://blog.richardfennell.net/posts/buzzword-bingo-2/</link>
      <pubDate>Thu, 01 Nov 2007 01:55:04 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/buzzword-bingo-2/</guid>
      <description>&lt;p&gt;Not a great day for new buzzwords to me at the SOA conference today, so just the one:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;POX - &lt;a href=&#34;http://en.wikipedia.org/wiki/Plain_Old_XML&#34;&gt;Plain Old XML&lt;/a&gt;, as opposed to XML data wrapped in extra metadata (such as a SOAP envelope)&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>Not a great day for new buzzwords to me at the SOA conference today, so just the one:</p>
<ul>
<li>POX - <a href="http://en.wikipedia.org/wiki/Plain_Old_XML">Plain Old XML</a>, as opposed to XML data wrapped in extra metadata (such as a SOAP envelope)</li>
</ul>
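<p>For illustration, here is the same payload first as POX and then carrying the extra metadata of a standard SOAP 1.1 envelope (the payload element itself is invented):</p>

```xml
<!-- POX: just the payload, typically sent over plain HTTP -->
<stockQuote symbol="MSFT"/>

<!-- The same payload wrapped in a SOAP 1.1 envelope -->
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <stockQuote symbol="MSFT"/>
  </soap:Body>
</soap:Envelope>
```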
]]></content:encoded>
    </item>
    <item>
      <title>Where are all the people?</title>
      <link>https://blog.richardfennell.net/posts/where-are-all-the-people/</link>
      <pubDate>Wed, 31 Oct 2007 16:44:29 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-are-all-the-people/</guid>
      <description>&lt;p&gt;Since arriving in the Bellevue/Redmond area I have been struck by the lack of people. Wherever I have gone it seems like the place was built for at least twice the number that are present, whether it be the shopping centers or restaurants. I wondered whether it was because:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;the others are soon to arrive&lt;/li&gt;
&lt;li&gt;half the people left&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now, after traveling in to the conference for a couple of days and watching the local news, I have the answer: it is option 1 - they are all stuck in traffic.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Since arriving in the Bellevue/Redmond area I have been struck by the lack of people. Wherever I have gone it seems like the place was built for at least twice the number that are present, whether it be the shopping centers or restaurants. I wondered whether it was because:</p>
<ol>
<li>the others are soon to arrive</li>
<li>half the people left</li>
</ol>
<p>Now, after traveling in to the conference for a couple of days and watching the local news, I have the answer: it is option 1 - they are all stuck in traffic.</p>
<p>The traffic between the cities in the Puget Sound area is bad, more a parking lot than a road system. I am glad our hotel is just a couple of blocks away from the Microsoft Campus; 5 minutes irrespective of the time of day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft SOA Conference - end of day 1</title>
      <link>https://blog.richardfennell.net/posts/microsoft-soa-conference-end-of-day-1/</link>
      <pubDate>Wed, 31 Oct 2007 02:51:53 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-soa-conference-end-of-day-1/</guid>
      <description>&lt;p&gt;So my thoughts at the end of the day&amp;hellip;&lt;/p&gt;
&lt;p&gt;Certainly a useful day, but the conference seems a little slow. The breaks seem long, the sessions short, and the breakout sessions finished quite early in the day. Maybe I am just used to the crammed-in format of TechEd, which goes on late into the evening.&lt;/p&gt;
&lt;p&gt;On the plus side this format does give a good chance to chat to other attendees, who seem very chatty and are from a wide variety of locations across the world; though nearly all white and male, an even less diverse group than at &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/09/12/end-of-the-first-day-mix07-uk.aspx&#34;&gt;Mix UK&lt;/a&gt;! What does this say about the IT industry or, maybe more to the point, about who gets to go to conferences?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>So my thoughts at the end of the day&hellip;</p>
<p>Certainly a useful day, but the conference seems a little slow. The breaks seem long, the sessions short, and the breakout sessions finished quite early in the day. Maybe I am just used to the crammed-in format of TechEd, which goes on late into the evening.</p>
<p>On the plus side this format does give a good chance to chat to other attendees, who seem very chatty and are from a wide variety of locations across the world; though nearly all white and male, an even less diverse group than at <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/09/12/end-of-the-first-day-mix07-uk.aspx">Mix UK</a>! What does this say about the IT industry or, maybe more to the point, about who gets to go to conferences?</p>
<p>I did see a great session this afternoon from <a href="http://www.davidchappell.com/blog">David Chappell</a> comparing .NET to J2EE. I think I saw in effect the equivalent of the session he did at JavaOne in 2000, back when Black Marble was a Java house. The key point remains the same - the choice of underlying platform, once made, is very hard to change, even within the J2EE family of vendors. All platforms have good and bad features, so a company has to make a bet on which platform will meet their needs now and allow them to develop in the future. I personally can&rsquo;t get away from the view that a single vendor solution (i.e. .NET) allows better internal consistency and ease of development but still allows external interoperability via <a href="http://www.ws-i.org/">ws*</a> standards (or the ISB in the future). But I would say that, wouldn&rsquo;t I, as I work for a Microsoft Gold Partner!</p>
<p>In the Q&amp;A session at the end of the day it was interesting to hear the Microsoft take on companies using the cloud as the ISB, which as I <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/10/30/microsoft-soa-conference-day-1.aspx">posted this morning</a> was my concern. Their view is to think of Exchange as the model: Microsoft could host it, other third parties can host it, or a corporate may host their own. In many cases it will only be a routing service and so will not actually be storing data (which could be encrypted anyway). This should go a long way to aiding acceptance.</p>
<p>But enough for today, off to the ask the experts reception now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Mission control we have a problem</title>
      <link>https://blog.richardfennell.net/posts/mission-control-we-have-a-problem/</link>
      <pubDate>Tue, 30 Oct 2007 23:13:47 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mission-control-we-have-a-problem/</guid>
      <description>&lt;p&gt;Sitting here at the back of the Kodiak room at the Microsoft Conference Center in Redmond is somewhat similar to the &lt;a href=&#34;http://www.jpl.nasa.gov/index.cfm&#34;&gt;control room at JPL&lt;/a&gt; which I went to on my last trip to the USA.&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/soa1_2.jpg&#34;&gt;&lt;img alt=&#34;soa1&#34; loading=&#34;lazy&#34; src=&#34;https://blog.richardfennell.net/wp-content/uploads/sites/2/historic/soa1_thumb.jpg&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;All I can see is a sea of laptops and heads just visible over the rows of desks.&lt;/p&gt;
&lt;p&gt;It is nice to have a desk and easy access to power and Internet - all conferences should be this way.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Sitting here at the back of the Kodiak room at the Microsoft Conference Center in Redmond is somewhat similar to the <a href="http://www.jpl.nasa.gov/index.cfm">control room at JPL</a> which I went to on my last trip to the USA.</p>
<p><a href="/wp-content/uploads/sites/2/historic/soa1_2.jpg"><img alt="soa1" loading="lazy" src="/wp-content/uploads/sites/2/historic/soa1_thumb.jpg"></a></p>
<p>All I can see is a sea of laptops and heads just visible over the rows of desks.</p>
<p>It is nice to have a desk and easy access to power and Internet - all conferences should be this way.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SOA Buzzword Bingo Part 1</title>
      <link>https://blog.richardfennell.net/posts/soa-buzzword-bingo-part-1/</link>
      <pubDate>Tue, 30 Oct 2007 20:46:45 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/soa-buzzword-bingo-part-1/</guid>
      <description>&lt;p&gt;SOA being a conference on business process is a great place to learn new words for the game of &lt;a href=&#34;http://en.wikipedia.org/wiki/Buzzword_bingo&#34;&gt;buzzword bingo&lt;/a&gt;,  new ones to me thus far are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Onboarding&lt;/em&gt; - to hire staff&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Toast&lt;/em&gt; - information provided via a gadget on the desktop&lt;/li&gt;
&lt;/ul&gt;</description>
      <content:encoded><![CDATA[<p>SOA being a conference on business process is a great place to learn new words for the game of <a href="http://en.wikipedia.org/wiki/Buzzword_bingo">buzzword bingo</a>,  new ones to me thus far are:</p>
<ul>
<li><em>Onboarding</em> - to hire staff</li>
<li><em>Toast</em> - information provided via a gadget on the desktop</li>
</ul>
]]></content:encoded>
    </item>
    <item>
      <title>Microsoft SOA Conference Day 1</title>
      <link>https://blog.richardfennell.net/posts/microsoft-soa-conference-day-1/</link>
      <pubDate>Tue, 30 Oct 2007 20:32:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/microsoft-soa-conference-day-1/</guid>
      <description>&lt;p&gt;After a couple of days in Washington state my body has caught up to a manageable time zone (somewhere east of Denver I think, but that is close enough), just in time for the start of the &lt;a href=&#34;http://www.mssoaandbpconference.com/&#34;&gt;Microsoft SOA 2007 conference&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The keynote for me highlighted that Microsoft see the future in the cloud, the &lt;a href=&#34;http://www.biztalk.net/Overview.aspx&#34;&gt;Internet Service Bus&lt;/a&gt; (ISB) as opposed to silo&amp;rsquo;d Enterprise Service Buses (ESB). Now this assumes customers have gone down the &lt;a href=&#34;http://en.wikipedia.org/wiki/Service-oriented_architecture&#34;&gt;SOA&lt;/a&gt; route already, which I would say is not the case in the SME market I work in. I still see many monolithic legacy applications where migration to SOA has not been considered yet.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>After a couple of days in Washington state my body has caught up to a manageable time zone (somewhere east of Denver I think, but that is close enough), just in time for the start of the <a href="http://www.mssoaandbpconference.com/">Microsoft SOA 2007 conference</a>.</p>
<p>The keynote for me highlighted that Microsoft see the future in the cloud, the <a href="http://www.biztalk.net/Overview.aspx">Internet Service Bus</a> (ISB) as opposed to silo&rsquo;d Enterprise Service Buses (ESB). Now this assumes customers have gone down the <a href="http://en.wikipedia.org/wiki/Service-oriented_architecture">SOA</a> route already, which I would say is not the case in the SME market I work in. I still see many monolithic legacy applications where migration to SOA has not been considered yet.</p>
<p>Anyway that aside, I see storage in the cloud being the big question - yes Microsoft are throwing services out there such as:</p>
<ul>
<li><a href="http://www.microsoft.com/downloads/details.aspx?FamilyId=1B6F85BC-8933-4D0E-A639-934EF85ADCE1&amp;displaylang=en">Astoria</a> - the SDK for cloud data storage</li>
<li><a href="http://silverlight.live.com/">Silverlight Media Stream Service</a> - free media streaming service</li>
</ul>
<p>Time will tell what the uptake of such services will be, and I think issues of trust will be a major factor - will you trust Microsoft, or Google, or Yahoo to store your personal and/or corporate data? And what about the cost? At present these services are free; I expect them to remain free for the home user, but corporate users will expect better defined SLAs and these will no doubt cost. Is this another outing for the <a href="http://en.wikipedia.org/wiki/Application_service_provider">Application Service Provider</a> style of business model?</p>
<p>To get the seamless application integration we see in demos, we will also need better uptake of federated security models, else you will spend all your time entering passwords. Now this can be mitigated by making the connections behind the scenes (within the ISB); however, most of the demos seem to use the desktop client as the central point to access these various ISB services, which is sensible as this is the point at which the data is needed.</p>
<p>In the keynote <a href="http://www.microsoft.com/presspass/press/2007/oct07/10-30OsloPR.mspx?rss_fdn=Press%20Releases">Oslo</a>, the next wave of BizTalk and related products, was announced. This will address some of these cross-silo connectivity issues, specifically by providing BizTalk Services, a Microsoft-hosted set of routing and workflow services using BizTalk V6 (vNext) that will allow easier firewall traversal between corporate silos, but even <a href="http://www.networkworld.com/weblogs/nos/010482.html">CTP</a>s of these products are a way off.</p>
<p>Anyway, enough for the morning session; off to lunch.</p>
<p>ps. Just done a sound bite for <a href="http://channel9.msdn.com/shows/ARCast_with_Ron_Jacobs">Ron Jacobs of Channel 9</a> on Oslo - I wonder if it makes the cut?</p>
]]></content:encoded>
    </item>
    <item>
      <title>Vista high CPU on startup</title>
      <link>https://blog.richardfennell.net/posts/vista-high-cpu-on-startup/</link>
      <pubDate>Tue, 30 Oct 2007 17:12:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/vista-high-cpu-on-startup/</guid>
      <description>&lt;p&gt;For a while I have found that when I switch on my Acer Core2 Duo Vista laptop, both cores sometimes go to 85%+ utilization, so the PC is slooooow. Usually after a few reboots it seems to clear. This can happen after both a resume from hibernate and a complete restart; there was no obvious pattern. Task Manager only says that the load is due to the &lt;strong&gt;NT Kernel&lt;/strong&gt; process - so not much help there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>For a while I have found that when I switch on my Acer Core2 Duo Vista laptop, both cores sometimes go to 85%+ utilization, so the PC is slooooow. Usually after a few reboots it seems to clear. This can happen after both a resume from hibernate and a complete restart; there was no obvious pattern. Task Manager only says that the load is due to the <strong>NT Kernel</strong> process - so not much help there.</p>
<p>After a bit of googling I found other people reporting the problem after installation of <a href="http://www.microsoft.com/downloads/details.aspx?familyid=18b1d59d-f4d8-4213-8d17-2f6dde7d7aac&amp;displaylang=en">Microsoft Network Monitor 3.1</a>, which I installed months ago. Interestingly, I have only seen the problem recently. However, as soon as I went into the network protocol stack and disabled the network monitor protocol on my WiFi card the problems went away.</p>
<p>My guess is some strange race condition caused by a recent service patch; this is not a major problem as you can switch the protocol on and off as required without a reboot.</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD6</title>
      <link>https://blog.richardfennell.net/posts/ddd6/</link>
      <pubDate>Fri, 26 Oct 2007 17:42:01 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd6/</guid>
      <description>&lt;p&gt;I have been on holiday for a week and I see that registration for DDD6 has opened and closed - good job my session was voted onto the agenda else I would not be going.&lt;/p&gt;
&lt;p&gt;Thanks to anyone who voted for my session, see you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have been on holiday for a week and I see that registration for DDD6 has opened and closed - good job my session was voted onto the agenda else I would not be going.</p>
<p>Thanks to anyone who voted for my session, see you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Bradford the &#39;greenest city&#39;</title>
      <link>https://blog.richardfennell.net/posts/bradford-the-greenest-city/</link>
      <pubDate>Sat, 20 Oct 2007 20:38:31 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/bradford-the-greenest-city/</guid>
      <description>&lt;p&gt;I see Bradford, where the Black Marble offices are, has been ranked the &lt;a href=&#34;http://news.bbc.co.uk/1/hi/uk/7054310.stm&#34;&gt;greenest city in the UK&lt;/a&gt;. Strange that at the &lt;a href=&#34;http://www.velocitybradford.com/&#34;&gt;technology centre&lt;/a&gt; we are in there is no recycling at all, not for paper, cans or plastic.&lt;/p&gt;
&lt;p&gt;We have asked if such a service can be laid on (developers drink a lot of cans of Coke), but were told that recycling is not a service offered as part of the council&amp;rsquo;s commercial refuse collection service.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I see Bradford, where the Black Marble offices are, has been ranked the <a href="http://news.bbc.co.uk/1/hi/uk/7054310.stm">greenest city in the UK</a>. Strange that at the <a href="http://www.velocitybradford.com/">technology centre</a> we are in there is no recycling at all, not for paper, cans or plastic.</p>
<p>We have asked if such a service can be laid on (developers drink a lot of cans of Coke), but were told that recycling is not a service offered as part of the council&rsquo;s commercial refuse collection service.</p>
<p>If Bradford is the best what are the rest like?</p>
]]></content:encoded>
    </item>
    <item>
      <title>A long day of seminars</title>
      <link>https://blog.richardfennell.net/posts/a-long-day-of-seminars/</link>
      <pubDate>Thu, 18 Oct 2007 21:50:39 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/a-long-day-of-seminars/</guid>
      <description>&lt;p&gt;Just got home from a long day of Black Marble hosted presentations. This morning I was presenting on SQL 2008, which seemed to go well. This afternoon it was a very busy MSDN session by &lt;a href=&#34;http://www.martinparry.com/&#34;&gt;Martin Parry&lt;/a&gt; and &lt;a href=&#34;http://www.danielmoth.com/Blog/&#34;&gt;Daniel Moth&lt;/a&gt; from the Microsoft DPE team on VS2008 and .NET 3.5. Finally, this evening there was an interesting community event on the .NET Micro Framework by &lt;a href=&#34;http://www.robmiles.com/&#34;&gt;Rob Miles&lt;/a&gt; - the same session he is doing at TechEd in Barcelona, worth getting along to if you are there; if not, try the &lt;a href=&#34;http://www.amazon.com/Embedded-Programming-Microsoft-Micro-Framework/dp/0735623651/ref=sr_11_1/104-5509811-3112749?ie=UTF8&amp;amp;qid=1177055927&amp;amp;sr=11-1&#34;&gt;book&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Just got home from a long day of Black Marble hosted presentations. This morning I was presenting on SQL 2008, which seemed to go well. This afternoon it was a very busy MSDN session by <a href="http://www.martinparry.com/">Martin Parry</a> and <a href="http://www.danielmoth.com/Blog/">Daniel Moth</a> from the Microsoft DPE team on VS2008 and .NET 3.5. Finally, this evening there was an interesting community event on the .NET Micro Framework by <a href="http://www.robmiles.com/">Rob Miles</a> - the same session he is doing at TechEd in Barcelona, worth getting along to if you are there; if not, try the <a href="http://www.amazon.com/Embedded-Programming-Microsoft-Micro-Framework/dp/0735623651/ref=sr_11_1/104-5509811-3112749?ie=UTF8&amp;qid=1177055927&amp;sr=11-1">book</a>.</p>
<p>Considering how many developers turned up for the MSDN event, I was surprised how relatively few decided to stay for the evening event, even though it only meant a short wait between sessions and food was laid on. The numbers attending community/evening events seem to be a constant issue for events in Leeds and Bradford. I am often surprised by the low numbers I see at evening events we host (compared to day events), but I see the same at <a href="http://www.westyorkshire.bcs.org/">BCS</a> and <a href="http://www.extremeprogrammingclub.com">Yorkshire Extreme Programming Club</a> events, even with the high quality speakers we are seeing at all three venues.</p>
<p>I was talking on this very subject to David Turner, the chair(?) of the <a href="http://www.extremeprogrammingclub.com">Yorkshire Extreme Programming Club</a>; we were both wondering what can be done to get people enthused and networking locally with their peers. I am interested to hear other people&rsquo;s views - what more does the developer on the street (or, more correctly, at the PC) need to get them out to events? Is what we see in Yorkshire the same in other places? I have seen good crowds at user groups I have presented at or attended around the country, but maybe I have just gone on good days.</p>
<p>Attendance at local versus national events is one area that <a href="http://www.garyshort.org/?p=741">Gary Short touched upon in his recent blog post</a>. Do people just want to turn out for big national events? Or does it seem that way because having a national catchment of attendees means any event fills up fast?</p>
<p>Any comments ?</p>
]]></content:encoded>
    </item>
    <item>
      <title>SQLBit Podcast</title>
      <link>https://blog.richardfennell.net/posts/sqlbit-podcast/</link>
      <pubDate>Sat, 13 Oct 2007 22:15:33 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sqlbit-podcast/</guid>
      <description>&lt;p&gt;Craig Murphy has just &lt;a href=&#34;http://www.craigmurphy.com/blog/?p=726&#34;&gt;posted a podcast&lt;/a&gt; we did at SQLBits last weekend.&lt;/p&gt;
&lt;p&gt;Enjoy&amp;hellip;&amp;hellip;&amp;hellip;&amp;hellip;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Craig Murphy has just <a href="http://www.craigmurphy.com/blog/?p=726">posted a podcast</a> we did at SQLBits last weekend.</p>
<p>Enjoy&hellip;&hellip;&hellip;&hellip;</p>
]]></content:encoded>
    </item>
    <item>
      <title>DDD news</title>
      <link>https://blog.richardfennell.net/posts/ddd-news/</link>
      <pubDate>Fri, 12 Oct 2007 13:23:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/ddd-news/</guid>
      <description>&lt;p&gt;The &lt;a href=&#34;http://www.developerday.co.uk/ddd/agendaddd6.asp&#34;&gt;voting for DDD6&lt;/a&gt; is now open - as I always say &lt;em&gt;vote early vote often.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I hope my submission of a proposal for a session on Scrum is of interest. I know I have had feedback on my previous sessions that people want to know more on this area.&lt;/p&gt;
&lt;p&gt;Also saw today on &lt;a href=&#34;http://blog.colinmackay.net/archive/2007/10/12/515.aspx&#34;&gt;Colin Angus Mackay&amp;rsquo;s blog that &lt;del&gt;DDD7&lt;/del&gt; DDD Scotland will be in May 2008 at Glasgow Caledonian University&lt;/a&gt;. Well, another 3+ hour drive but in a different direction! It will be interesting to see if the change of location alters attendance.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>The <a href="http://www.developerday.co.uk/ddd/agendaddd6.asp">voting for DDD6</a> is now open - as I always say <em>vote early vote often.</em></p>
<p>I hope my submission of a proposal for a session on Scrum is of interest. I know I have had feedback on my previous sessions that people want to know more on this area.</p>
<p>Also saw today on <a href="http://blog.colinmackay.net/archive/2007/10/12/515.aspx">Colin Angus Mackay&rsquo;s blog that <del>DDD7</del> DDD Scotland will be in May 2008 at Glasgow Caledonian University</a>. Well, another 3+ hour drive but in a different direction! It will be interesting to see if the change of location alters attendance.</p>
<p><strong>Edit note:</strong> I have just edited this post to change DDD7 to DDD Scotland as <a href="http://www.craigmurphy.com/content/">Craig Murphy</a> informs me that&hellip;.</p>
<p>&ldquo;<em>It&rsquo;s not actually DDD7, it&rsquo;s DDD Scotland! DDD7 will be in TVP with other flavours of DDD being held elsewhere in England/Ireland (both are in the discussion stages)</em>&rdquo;</p>
<p>This was completely my mistake as Colin had the correct name on his blog. I am happy to clear that one up, especially as it means we are likely to see more DDD and SQLBit like events around the country.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Agile Workshop by Clarke Ching</title>
      <link>https://blog.richardfennell.net/posts/agile-workshop-by-clarke-ching/</link>
      <pubDate>Fri, 12 Oct 2007 08:18:05 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/agile-workshop-by-clarke-ching/</guid>
      <description>&lt;p&gt;Last night I went to an excellent free meeting at the &lt;a href=&#34;http://www.extremeprogrammingclub.com/&#34;&gt;Yorkshire Extreme Programming Club&lt;/a&gt; given by &lt;a href=&#34;http://www.clarkeching.com/&#34;&gt;Clarke Ching&lt;/a&gt; entitled:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Programmers may be from Mars, Customers may be from Venus &amp;hellip;&lt;br&gt;
but why does everyone think that Project Managers are from Uranus?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;From the title I was unsure of the content, other than it would be about project management. It turned out to be a workshop on using formal conflict resolution techniques to dig into problems in projects to find solutions and to expose underlying patterns - very interesting. This is a set of techniques I had come across (I did go to Bradford University, which has a well-known &lt;a href=&#34;http://www.brad.ac.uk/acad/peace/&#34;&gt;Peace Studies&lt;/a&gt; department that studies just this area) but had not considered their use in project management.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Last night I went to an excellent free meeting at the <a href="http://www.extremeprogrammingclub.com/">Yorkshire Extreme programming Club</a> given by <a href="http://www.clarkeching.com/">Clarke Ching</a> entitled:</p>
<p><em>Programmers may be from Mars, Customers may be from Venus &hellip;<br>
but why does everyone think that Project Managers are from Uranus?</em></p>
<p>From the title I was unsure of the content, other than it would be about project management. It turned out to be a workshop on using formal conflict resolution techniques to dig into problems in projects to find solutions and to expose underlying patterns - very interesting. This is a set of techniques I had come across (I did go to Bradford University, which has a well-known <a href="http://www.brad.ac.uk/acad/peace/">Peace Studies</a> department that studies just this area) but had not considered their use in project management.</p>
<p>My only concern is that, like all techniques, unless it is used in good faith by all it could be just another tool to allow people to prove whatever they intended to prove. However, is this not exactly the type of problem that this technique is intended to avoid (maybe it has to be applied recursively)?</p>
<p>Certainly warrants a further look as another tool in the project management arsenal.</p>
]]></content:encoded>
    </item>
    <item>
      <title>I think I got away with it!</title>
      <link>https://blog.richardfennell.net/posts/i-think-i-got-away-with-it/</link>
      <pubDate>Sat, 06 Oct 2007 21:23:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/i-think-i-got-away-with-it/</guid>
      <description>&lt;p&gt;I have just got back from &lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;SQLBits&lt;/a&gt; after giving my session on &lt;em&gt;Unit testing in SQL.&lt;/em&gt; As I said at the start of my session, at developer events I often feel a bit of an outsider when I say I am talking on testing as opposed to development; today it was doubly so when surrounded by so many DBAs. I have to say some of the sessions I attended seemed to be from a completely different world to my day to day work. Hey, but that is what makes IT interesting, is it not?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just got back from <a href="http://www.sqlbits.com/">SQLBits</a> after giving my session on <em>Unit testing in SQL.</em> As I said at the start of my session, at developer events I often feel a bit of an outsider when I say I am talking on testing as opposed to development; today it was doubly so when surrounded by so many DBAs. I have to say some of the sessions I attended seemed to be from a completely different world to my day to day work. Hey, but that is what makes IT interesting, is it not?</p>
<p>I think the event went really well and I would like to thank the organizing committee for all their work getting the event going and thanks to everyone who filled my session to bursting. I was surprised how much interest it generated and how well it seemed to be received.</p>
<p>For those who are interested, my slides can be found at  <a href="http://www.blackmarble.co.uk/ConferencePapers/SQLBits%20-%20Unit%20Testing%20in%20SQL.ppt" title="SQLBits - Unit Testing in SQL">http://www.blackmarble.co.uk/ConferencePapers/SQLBits - Unit testing in  SQL.ppt</a>.</p>
<p>The one point that was left unanswered from my session was that of deployment of changes to a DB from DataDude. The answer is that the build option in DataDude generates a change SQL script (in the SQL sub directory under the project folder). The deploy option runs this script against the local copy of the DB on the development PC as defined in the project properties. The point I forgot during the session was that this working development copy of the DB is not the same as the one I imported schema from, hence we could not see the changes in the original DB because this was not the DB the script was run on. In most cases you use this generated script to update a live database via a manual release or separate automated process. Hope this clears up any confusion.</p>
<p>I look forward to hearing everyone&rsquo;s feedback on the event.</p>
]]></content:encoded>
    </item>
    <item>
      <title>JavaScript errors with eScrum</title>
      <link>https://blog.richardfennell.net/posts/javascript-errors-with-escrum/</link>
      <pubDate>Thu, 04 Oct 2007 17:59:21 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/javascript-errors-with-escrum/</guid>
      <description>&lt;p&gt;Whilst sorting out our eScrum installation I kept getting a variety of JavaScript errors when selecting a project using the dialog on the top right of the main web form. I was also unable to create a new product via the web site.&lt;/p&gt;
&lt;p&gt;It turned out to be two problems:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Templates&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Some of the projects I had registered to be used via the eScrum web site were not using the eScrum template; they were using the Conchango Scrum one.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Whilst sorting out our eScrum installation I kept getting a variety of JavaScript errors when selecting a project using the dialog on the top right of the main web form. I was also unable to create a new product via the web site.</p>
<p>It turned out to be two problems:</p>
<p><strong>Templates</strong></p>
<p>Some of the projects I had registered to be used via the eScrum web site were not using the eScrum template; they were using the Conchango Scrum one.</p>
<p>Now there is no easy way to change a template once a project is created, but I was not that worried about the SharePoint guidance pages, so all that really mattered was that I had the right work item types and reports. It was the lack of these work item types that was causing the JavaScript errors (data binds were failing when trying to display the results of SQL queries).</p>
<p>So the fix was to follow the method at the end of <a href="http://msdn2.microsoft.com/en-us/library/ms194914%28vs.80%29.aspx" title="http://msdn2.microsoft.com/en-us/library/ms194914(vs.80).aspx">http://msdn2.microsoft.com/en-us/library/ms194914(vs.80).aspx</a>, i.e. export the work item types and then import them. Once this was done I could select the updated project and got no errors.</p>
<p>The only downside of this method is that the project now contains both the old and the new style of work items, but I can live with that.</p>
<p>You also have to import all the Reporting Services reports; if you don&rsquo;t, eScrum cannot show you any reports. You do this via <a href="http://localhost/reports">http://localhost/reports</a>, then add the reports from the WIT export you created. Remember to add spaces to the names of the reports, e.g. <em>SprintBurndown.xml</em> becomes <em>Sprint Burndown</em>. I also had to then edit each report to point it at my global data sources (Reports and OLAP) for TFS in Reporting Services.</p>
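<p>As an aside, the renaming rule is purely mechanical (insert a space before each interior capital letter), so if you have a lot of reports a throwaway script can generate the display names for you. The sketch below is my own illustration of that rule, not part of eScrum, and the function name is made up:</p>

```python
import re

def report_display_name(filename: str) -> str:
    """Turn a report file name like 'SprintBurndown.xml' into the
    spaced display name 'Sprint Burndown'."""
    stem = filename.removesuffix(".xml")  # needs Python 3.9+
    # Insert a space wherever a lowercase letter is followed by a capital
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", stem)

print(report_display_name("SprintBurndown.xml"))  # Sprint Burndown
```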
<p><strong>Product Owners</strong></p>
<p>When creating a product in eScrum you have to assign the product owner (and you can also set the team members). The combo for the product owner is filled using the contributors - not the administrators, so make sure the users are in the right group.</p>
<p>If, when you add a user to a group in TFS, it does not show up in the eScrum product owner dialog, do an AppPool recycle on the eScrum site to clear the cache.</p>
<p>NB The way we manage users is to add an Active Directory group to each TFS group then manage user rights to projects in the AD.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS Update to 2008 Beta2 and eScrum Admin</title>
      <link>https://blog.richardfennell.net/posts/tfs-update-to-2008-beta2-and-escrum-admin/</link>
      <pubDate>Tue, 02 Oct 2007 13:33:49 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-update-to-2008-beta2-and-escrum-admin/</guid>
      <description>&lt;p&gt;As I posted before, I have been updating our TFS server to 2008 Beta2; today it was the WSS from 2.0 to 3.0. Well, I followed the &lt;a href=&#34;http://blogs.msdn.com/sudhir/archive/2007/05/31/upgrade-2005-with-wss2-0-to-orcas-wss3-0.aspx&#34;&gt;process on Sudhir Hasbe&amp;rsquo;s blog&lt;/a&gt; and it just worked, so now we have WSS 3.0 running on the TFS application tier. The final step will be to move the SharePoint sites to our main MOSS 2007 farm, but that is for another day.&lt;/p&gt;
&lt;p&gt;As part of today&amp;rsquo;s upgrade I was working on process templates. We use eScrum, but I had noticed our eScrum web site was not showing all the projects using that template. I had completely forgotten that you have to manually add Team Projects to the eScrum web site. This is done using a URL like:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As I posted before, I have been updating our TFS server to 2008 Beta2; today it was the WSS from 2.0 to 3.0. Well, I followed the <a href="http://blogs.msdn.com/sudhir/archive/2007/05/31/upgrade-2005-with-wss2-0-to-orcas-wss3-0.aspx">process on Sudhir Hasbe&rsquo;s blog</a> and it just worked, so now we have WSS 3.0 running on the TFS application tier. The final step will be to move the SharePoint sites to our main MOSS 2007 farm, but that is for another day.</p>
<p>As part of today&rsquo;s upgrade I was working on process templates. We use eScrum, but I had noticed our eScrum web site was not showing all the projects using that template. I had completely forgotten that you have to manually add Team Projects to the eScrum web site. This is done using a URL like:</p>
<blockquote>
<p><a href="http://mysite/escrum/admin.aspx">http://mysite/escrum/admin.aspx</a></p></blockquote>
<p>The main reason I forgot this is that the admin page is not shown on any menu! A nice bit of <a href="http://en.wikipedia.org/wiki/Security_through_obscurity">security by obscurity</a>. Anyway, using this admin form you can add projects to the eScrum web site. The problem is that the UI does no validation other than checking you typed a string; you have to know and type in the project names correctly. Once the entry is made it is stored in:</p>
<blockquote>
<p>[program files]\Microsoft Visual Studio 2005 Team Foundation Server\Web Services\Escrum\RegisteredGroups.XML</p></blockquote>
<p>It is useful to know the location so you can remove anything you typed in by mistake.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Upgrading from TFS2005 to TFS 2008 &#39;Orcas&#39; Beta 2</title>
      <link>https://blog.richardfennell.net/posts/upgrading-from-tfs2005-to-tfs-2008-orcas-beta-2/</link>
      <pubDate>Fri, 28 Sep 2007 18:59:00 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/upgrading-from-tfs2005-to-tfs-2008-orcas-beta-2/</guid>
      <description>&lt;p&gt;Today I decided to bite the bullet and upgrade our &amp;lsquo;live&amp;rsquo; TFS installation to 2008 Beta2, now that there is support from Microsoft. The only reason I have delayed this long has been that we have been involved in the delivery of a big project and I did not want to take the TFS server down for any reason.&lt;/p&gt;
&lt;p&gt;Our TFS installation is dual server, the data tier (DT) running on our central SQL2005 64bit server and the application tier (AT) on a virtual server as a VPC. I have kept the AT virtual as it is easy to back up.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today I decided to bite the bullet and upgrade our &lsquo;live&rsquo; TFS installation to 2008 Beta2, now that there is support from Microsoft. The only reason I have delayed this long has been that we have been involved in the delivery of a big project and I did not want to take the TFS server down for any reason.</p>
<p>Our TFS installation is dual server, the data tier (DT) running on our central SQL2005 64bit server and the application tier (AT) on a virtual server as a VPC. I have kept the AT virtual as it is easy to back up.</p>
<p>The first thing of note that surprised me is that you do the upgrade only from the AT. Before I read the installation notes I had assumed you would update the DT and then the AT, like the original installation.</p>
<p>Anyway, I ran <strong>setup.exe</strong> on the AT. It found a few warnings, but they were all down to the fact I was running on a VPC (warnings over CPU performance, memory and disk size), so I continued with the upgrade, gave it some user IDs and pressed start, and it progressed OK, first installing .NET 3.5 and then TFS 2008.</p>
<p>During the TFS 2008 upgrade I got two errors that stopped the setup with a &lsquo;retry or cancel&rsquo; option. In both cases I managed to fix the issues and a retry worked. This is what I had to do to fix the problems:</p>
<ul>
<li>Error <strong>29109</strong> Team Foundation Report Server Configuration - this turned out to be just a timeout. On the AT I opened the URL <a href="http://localhost/reports">http://localhost/reports</a> in a browser (this took a long time for some reason) but I did eventually get the usual Reporting Services page. I then retried the step in the upgrade tool and it continued fine.</li>
<li>Error <strong>28925</strong> TFServerStatusValidator - Basically this means the setup program cannot access <a href="http://localhost:8080/services/v1.0/ServerStatus.asmx">http://localhost:8080/services/v1.0/ServerStatus.asmx</a> on the AT. If you manually open the URL you get a generic &lsquo;page not found&rsquo; error. On looking at the event log I saw that an ISAPI filter, <strong>authenticationfilter.dll</strong>, for the TFS web site could not be loaded. Interestingly, the path it was trying to load from was for TFS 2005, and of course this file no longer existed. I then remembered that we had installed this ISAPI filter when we had TFS 1.0 and were trying to get HTTPS connectivity working, so I just deleted the ISAPI filter entry. I then checked I could open the URL in a browser, retried that step of the setup, and it continued OK.</li>
</ul>
<p>When all this was done the server was rebooted, and the upgrade was complete.</p>
<p>The next step will be to upgrade the AT from WSS 2.0 to WSS 3.0, then move all the SharePoint bits to our central MOSS 2007 server, thus making the AT little more than a web server.</p>
]]></content:encoded>
    </item>
    <item>
      <title>GUITester and CodePlex</title>
      <link>https://blog.richardfennell.net/posts/guitester-and-codeplex/</link>
      <pubDate>Mon, 24 Sep 2007 13:30:46 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/guitester-and-codeplex/</guid>
      <description>&lt;p&gt;Due to popular demand, well one person, I have uploaded my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2005/03/23/187.aspx&#34;&gt;GUITester system I presented about at DDD3&lt;/a&gt; to CodePlex. It can be found at &lt;a href=&#34;http://www.codeplex.com/guitester&#34; title=&#34;http://www.codeplex.com/guitester&#34;&gt;http://www.codeplex.com/guitester&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;If you are interested in moving the project forward let me know.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Due to popular demand, well one person, I have uploaded my <a href="http://blogs.blackmarble.co.uk/blogs/bm-bloggers/archive/2005/03/23/187.aspx">GUITester system I presented about at DDD3</a> to CodePlex. It can be found at <a href="http://www.codeplex.com/guitester" title="http://www.codeplex.com/guitester">http://www.codeplex.com/guitester</a>.</p>
<p>If you are interested in moving the project forward let me know.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Disaster Recovery in TFS</title>
      <link>https://blog.richardfennell.net/posts/disaster-recovery-in-tfs/</link>
      <pubDate>Thu, 20 Sep 2007 17:42:28 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/disaster-recovery-in-tfs/</guid>
      <description>&lt;p&gt;If you have to support a TFS install, there have been some excellent posts on disaster recovery on &lt;a href=&#34;http://blogs.msdn.com/sudhir/&#34;&gt;Sudhir Hasbe&amp;rsquo;s&lt;/a&gt; blog; he is a PM on the TFS team.&lt;/p&gt;
&lt;p&gt;Well worth a read.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>If you have to support a TFS install, there have been some excellent posts on disaster recovery on <a href="http://blogs.msdn.com/sudhir/">Sudhir Hasbe&rsquo;s</a> blog; he is a PM on the TFS team.</p>
<p>Well worth a read.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Speaking at NxtGenUG Birmingham</title>
      <link>https://blog.richardfennell.net/posts/speaking-at-nxtgenug-birmingham/</link>
      <pubDate>Thu, 20 Sep 2007 17:37:51 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/speaking-at-nxtgenug-birmingham/</guid>
      <description>&lt;p&gt;I will be speaking at the &lt;a href=&#34;http://www.nxtgenug.net/EventList.aspx?Un=1&#34;&gt;NxtGen usergroup Birmingham&lt;/a&gt; branch on the 17th December about, you guessed it, TFS.&lt;/p&gt;
&lt;p&gt;Look forward to seeing you there.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I will be speaking at the <a href="http://www.nxtgenug.net/EventList.aspx?Un=1">NxtGen usergroup Birmingham</a> branch on the 17th December about, you guessed it, TFS.</p>
<p>Look forward to seeing you there.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Last nights Bristol .NET Developers Network usergroup meeting.</title>
      <link>https://blog.richardfennell.net/posts/last-nights-bristol-net-developers-network-usergroup-meeting/</link>
      <pubDate>Thu, 20 Sep 2007 17:33:12 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/last-nights-bristol-net-developers-network-usergroup-meeting/</guid>
      <description>&lt;p&gt;As I sit on the train traveling north to our &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events&#34;&gt;Black Marble hosted MSDN event&lt;/a&gt; I must write to say how much I enjoyed presenting at last night&amp;rsquo;s &lt;a href=&#34;http://www.dotnetdevnet.com/&#34;&gt;.NET Developers Network&lt;/a&gt; usergroup meeting; thanks to &lt;a href=&#34;http://www.guysmithferrier.com/&#34;&gt;Guy Smith-Ferrier&lt;/a&gt; and the whole group for getting it organized.&lt;/p&gt;
&lt;p&gt;I think I managed to cover all the questions raised about TFS during the meeting. I have been through my slides to add notes to clarify any points raised. The updated set of the slide stack can be downloaded from &lt;a href=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;amp;subsection=Conference%20Papers&#34; title=&#34;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;amp;subsection=Conference%20Papers&#34;&gt;http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;amp;subsection=Conference%20Papers&lt;/a&gt;.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>As I sit on the train traveling north to our <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Events">Black Marble hosted MSDN event</a> I must write to say how much I enjoyed presenting at last night&rsquo;s <a href="http://www.dotnetdevnet.com/">.NET Developers Network</a> usergroup meeting; thanks to <a href="http://www.guysmithferrier.com/">Guy Smith-Ferrier</a> and the whole group for getting it organized.</p>
<p>I think I managed to cover all the questions raised about TFS during the meeting. I have been through my slides to add notes to clarify any points raised. The updated slide deck can be downloaded from <a href="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;subsection=Conference%20Papers" title="http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;subsection=Conference%20Papers">http://www.blackmarble.co.uk/SectionDisplay.aspx?name=Publications&amp;subsection=Conference%20Papers</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TV Mashup</title>
      <link>https://blog.richardfennell.net/posts/tv-mashup/</link>
      <pubDate>Sat, 15 Sep 2007 17:39:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tv-mashup/</guid>
      <description>&lt;p&gt;I have had a week of Web 2.0 &lt;a href=&#34;http://en.wikipedia.org/wiki/Mashup_%28web_application_hybrid%29&#34;&gt;mashup&lt;/a&gt; sites at the Mix07 London event, so I was interested in something on TV last night whilst watching England&amp;rsquo;s appalling performance against South Africa in the rugby.&lt;/p&gt;
&lt;p&gt;eBay ran adverts with a virtually real time feed (20 second delay they claimed) from a live auction. Now I have never seen this before, maybe I just don&amp;rsquo;t watch ITV enough, so this is the first Internet/broadcast TV mashup I have seen. I wonder how much the sale prices of the lucky items featured in the adverts are affected?&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have had a week of Web 2.0 <a href="http://en.wikipedia.org/wiki/Mashup_%28web_application_hybrid%29">mashup</a> sites at the Mix07 London event, so I was interested in something on TV last night whilst watching England&rsquo;s appalling performance against South Africa in the rugby.</p>
<p>eBay ran adverts with a virtually real time feed (20 second delay they claimed) from a live auction. Now I have never seen this before, maybe I just don&rsquo;t watch ITV enough, so this is the first Internet/broadcast TV mashup I have seen. I wonder how much the sale prices of the lucky items featured in the adverts are affected?</p>
<p>Back on the Rugby World Cup front - I have still seen nothing to change my opinion that the semi-finals will be all Southern hemisphere - Australia, New Zealand, South Africa and Argentina - yes, Argentina.</p>
]]></content:encoded>
    </item>
    <item>
      <title>SQLBits</title>
      <link>https://blog.richardfennell.net/posts/sqlbits/</link>
      <pubDate>Fri, 14 Sep 2007 11:55:14 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/sqlbits/</guid>
      <description>&lt;p&gt;I have just heard that my proposed session for the &lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;SQLBits conference&lt;/a&gt; has been accepted. I will be talking on &lt;strong&gt;Unit Testing in SQL Server&lt;/strong&gt;; my proposal was:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Test driven development is one of the current hot topics in software development, but how far can these principles be applied in the world of SQL? In this session I will look at the principles of TDD and other testing options using both freeware tools and Microsoft’s Visual Studio Datadude&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I have just heard that my proposed session for the <a href="http://www.sqlbits.com/">SQLBits conference</a> has been accepted. I will be talking on <strong>Unit Testing in SQL Server</strong>; my proposal was:</p>
<p><em>Test driven development is one of the current hot topics in software development, but how far can these principles be applied in the world of SQL? In this session I will look at the principles of TDD and other testing options using both freeware tools and Microsoft’s Visual Studio Datadude</em></p>
<p>So if you are interested, I think there are still places available; register at the <a href="http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032349295&amp;Culture=en-GB">Microsoft Events site</a>. Looks like it will be an interesting day.</p>
]]></content:encoded>
    </item>
    <item>
      <title>MIX 07 Day 2</title>
      <link>https://blog.richardfennell.net/posts/mix-07-day-2/</link>
      <pubDate>Wed, 12 Sep 2007 16:32:58 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/mix-07-day-2/</guid>
      <description>&lt;p&gt;See, I was right: in my &lt;a href=&#34;http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/09/12/end-of-the-first-day-mix07-uk.aspx&#34;&gt;post yesterday&lt;/a&gt; I said the key role missing in most WPF projects was the &amp;lsquo;designer who can cut code&amp;rsquo; or &amp;lsquo;coder with a design eye&amp;rsquo;, and the session &lt;strong&gt;&lt;em&gt;Silverlight, WPF, Expression design projects - where do we get started&lt;/em&gt;&lt;/strong&gt; today was on just this subject. &lt;strong&gt;Paul Dawson&lt;/strong&gt; and &lt;strong&gt;Robby Ingebretsen&lt;/strong&gt; discussed the need for &amp;lsquo;producers&amp;rsquo; who take on this bridging role, with tips on where to find them.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>See, I was right: in my <a href="http://blogs.blackmarble.co.uk/blogs/rfennell/archive/2007/09/12/end-of-the-first-day-mix07-uk.aspx">post yesterday</a> I said the key role missing in most WPF projects was the &lsquo;designer who can cut code&rsquo; or &lsquo;coder with a design eye&rsquo;, and the session <strong><em>Silverlight, WPF, Expression design projects - where do we get started</em></strong> today was on just this subject. <strong>Paul Dawson</strong> and <strong>Robby Ingebretsen</strong> discussed the need for &lsquo;producers&rsquo; who take on this bridging role, with tips on where to find them.</p>
<p>Also went to the <a href="http://www.iunknown.com/2007/04/introducing_iro.html">IronRuby</a> session. Now, this is not going to be anything you can use for business any time soon, but it will, in the fullness of time, provide a very interesting way to build domain specific languages. It is sessions like this that I feel have given this conference a bit of a PDC feel of &lsquo;<em>look, soon you will be able to do this</em>&rsquo; as opposed to the TechEd feel of &lsquo;<em>wow, I can do this now</em>&rsquo;.</p>
<p>For me, today does seem to have been a bit <a href="http://blogs.msdn.com/hugunin/archive/2007/04/30/a-dynamic-language-runtime-dlr.aspx">Dynamic Language Runtime (DLR)</a> focused. It is easy to get bogged down in the question of when to use Ruby or Python, but I think the key here is how easy it is to provide domain specific scripting languages within .NET for scripting line of business applications.</p>
<p>As to the most interesting sneak peek, it was another PDC-like session, the one by <a href="http://research.microsoft.com/~simonpj/papers/stm/">Simon Peyton-Jones of Microsoft Research on transactional memory</a> - this will be as life changing as virtual memory has been.</p>
<p>So it is the end of MIX07, I&rsquo;m staying in London for the <a href="http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032348074&amp;Culture=en-GB">Office Business Applications Architect Forum</a> tomorrow. I wonder how many other people are going to both events?</p>
]]></content:encoded>
    </item>
    <item>
      <title>End of the first day @ Mix07 UK</title>
      <link>https://blog.richardfennell.net/posts/end-of-the-first-day-mix07-uk/</link>
      <pubDate>Wed, 12 Sep 2007 10:12:32 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/end-of-the-first-day-mix07-uk/</guid>
      <description>&lt;p&gt;So we reach the end of the first day of Mix07 London; what are my thoughts? Well, the conference, as conferences go, is well organized and I have no complaints over the quality of the sessions or presenters.&lt;/p&gt;
&lt;p&gt;Has it changed how I think about Silverlight? Well, I have realised that 1.0 is a very different product to 1.1. Make no mistake, this conference is about Silverlight 1.1, and the 1.1 Alpha release is missing a lot of functionality at present. As a shipping product 1.1 looks a long way off, at least a year (which, it seems, is forever in the world of Web 2.0).&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>So we reach the end of the first day of Mix07 London; what are my thoughts? Well, the conference, as conferences go, is well organized and I have no complaints over the quality of the sessions or presenters.</p>
<p>Has it changed how I think about Silverlight? Well, I have realised that 1.0 is a very different product to 1.1. Make no mistake, this conference is about Silverlight 1.1, and the 1.1 Alpha release is missing a lot of functionality at present. As a shipping product 1.1 looks a long way off, at least a year (which, it seems, is forever in the world of Web 2.0).</p>
<p>As usual with this type of event, the most interesting stuff tends to be in the sessions you don&rsquo;t expect. As a developer I had focused on Scott Guthrie&rsquo;s sessions, which were good, but fundamentally a walk through the Silverlight API. So after lunch I fancied a change and went to the design track session <strong><em>ZAP!, WHAM!, KAPOW! - Killer digital reading experience in the 21st century.</em></strong> This was about producing a digital comic and gave some nice detail on the pain points in WPF/XAML application development. The main tips were to get the data binding right and to create sensible reusable components; this might sound a bit obvious to a .NET developer, but this was the design track!</p>
<p>However, one of the speakers, <a href="http://blog.identitymine.com/blogs/robby_ingebretsen/">Robby Ingebretsen</a>, talked about the way WPF had allowed left and right brain people (coders and designers, is that the right way round?) to work together to create killer applications. I do worry, though, that while tools and APIs are good, you also need people who can single-handedly bridge this gap, i.e. coders who have a design eye, or designers who can cut code. The history of CSS/HTML web design has shown that this is a rare type of person. I think this is going to be the new resourcing pinch point for projects.</p>
<p>It was interesting that in this design track session a quick show of hands had the audience at about 1/3 designers and 2/3 coders. What does this say about interest in the WPF/Silverlight area from the design side of this industry? Oh, and by the way, nearly all the attendees were white and male, like most technical conferences I have attended. I had expected a more design orientated conference to have a different gender mix. As an industry we do not seem to be reaching out to a more diverse pool of employees.</p>
<p>So will Silverlight <a href="http://scobleizer.com/2007/05/01/microsoft-rebooted-the-web-yesterday/">&lsquo;reboot the web&rsquo; as Robert Scoble</a> said? It will certainly change it; mashup applications look to be the way forward, with custom clients making &lsquo;appropriate&rsquo; use of web service based data. This also helps to address the key concern, accessibility, where a single back-end system can have many clients built to target different user groups&rsquo; requirements, such as visually impaired users requiring screen reader functionality. A single client does not have to meet the needs of all users. Hopefully Silverlight and other Web 2.0 technologies make the creation of multiple clients potentially affordable. We have yet to see if it will be socially acceptable.</p>
<p>However, will we see rich applications written in Silverlight running inside browsers? I was at a Macromedia launch event some years ago, for some version of Flash, and they were then hailing the imminent arrival of rich browser based applications with partial post back. It all looked great, but this has not been how Flash has tended to be used. I think the difference now is that we have a more mature SOA model behind the scenes and Silverlight can leverage the power of .NET. These could be the key factors that move Silverlight from a tool providing some design punch on a web page to being the core of the application functionality.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Where is the beer and sandwiches?</title>
      <link>https://blog.richardfennell.net/posts/where-is-the-beer-and-sandwiches/</link>
      <pubDate>Tue, 11 Sep 2007 12:41:43 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/where-is-the-beer-and-sandwiches/</guid>
      <description>&lt;p&gt;Today and tomorrow I am at the &lt;a href=&#34;http://www.microsoft.com/uk/mix07/&#34;&gt;Mix07&lt;/a&gt; conference in London, which is being held at the &lt;a href=&#34;http://www.congresscentre.co.uk/&#34;&gt;Congress Centre&lt;/a&gt;, a building that I knew as the Trade Union Congress building. Given this location I expected a Harold Wilson &amp;lsquo;beer and sandwiches&amp;rsquo; style lunch, but no, it was small food (very nice tapas style dishes), and as we all know - if it is small and it is food, it is trendy. You know you are at a design orientated event when even the food is trendy!&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>Today and tomorrow I am at the <a href="http://www.microsoft.com/uk/mix07/">Mix07</a> conference in London, which is being held at the <a href="http://www.congresscentre.co.uk/">Congress Centre</a>, a building that I knew as the Trade Union Congress building. Given this location I expected a Harold Wilson &lsquo;beer and sandwiches&rsquo; style lunch, but no, it was small food (very nice tapas style dishes), and as we all know - if it is small and it is food, it is trendy. You know you are at a design orientated event when even the food is trendy!</p>
<p>But on to more major things: what of the content? It cannot all be designer fluff!</p>
<p>The most impressive thing thus far was the tax form application in the keynote; this showed great integrated use of technologies around an <a href="http://www.microsoft.com/whdc/xps/default.mspx">XPS</a> document. This alone has certainly got me thinking about interesting strategies for historic problems I have had in long-running projects.</p>
<p>But time is pressing (and battery is low) so back to the second of Scott Guthrie&rsquo;s sessions now.</p>
]]></content:encoded>
    </item>
    <item>
      <title>TFS and CruiseControl</title>
      <link>https://blog.richardfennell.net/posts/tfs-and-cruisecontrol/</link>
      <pubDate>Sun, 09 Sep 2007 20:23:10 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/tfs-and-cruisecontrol/</guid>
      <description>&lt;p&gt;I am speaking at the &lt;a href=&#34;http://www.dotnetdevnet.com/Home/tabid/36/Default.aspx&#34;&gt;.NET User group in Bristol&lt;/a&gt; next week and have been putting some final touches to my demos.&lt;/p&gt;
&lt;p&gt;One thing I will be talking about is using &lt;a href=&#34;http://www.codeplex.com/Wiki/View.aspx?ProjectName=TFSCCNetPlugin&#34;&gt;CruiseControl with TFS&lt;/a&gt;. Whilst getting this running on my demo VPC I hit a problem. I did a default install of CruiseControl .Net 1.3, this was on top of a Visual Studio TFS Orcas VPC I had built for &lt;a href=&#34;http://www.developerday.co.uk/ddd/slides.asp&#34;&gt;DDD5&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;When I tried to load the CCNet WebDashBoard (installed onto the default web site) I got an error:&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am speaking at the <a href="http://www.dotnetdevnet.com/Home/tabid/36/Default.aspx">.NET User group in Bristol</a> next week and have been putting some final touches to my demos.</p>
<p>One thing I will be talking about is using <a href="http://www.codeplex.com/Wiki/View.aspx?ProjectName=TFSCCNetPlugin">CruiseControl with TFS</a>. Whilst getting this running on my demo VPC I hit a problem. I did a default install of CruiseControl .Net 1.3, this was on top of a Visual Studio TFS Orcas VPC I had built for <a href="http://www.developerday.co.uk/ddd/slides.asp">DDD5</a>.</p>
<p>When I tried to load the CCNet WebDashBoard (installed onto the default web site) I got an error:</p>
<p><em>The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application&rsquo;s trust level in the configuration file.</em></p>
<p>After a bit of digging I found reports that this error can be seen if WSS 2.0 has previously been installed on the server, which it had been, as WSS is a requirement for TFS (until 2008 Beta2, which uses WSS 3.0). I did not, therefore, have the option of removing it to get round the problem.</p>
<p>So I took another route to fix it: I created a new web site on another port and pointed this at the WebDashBoard, and this worked fine - a perfectly good workaround for the problem.</p>
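<p>As an aside, the error message itself suggests another possible route: raising the trust level in the application&rsquo;s configuration file. A minimal sketch of what that might look like in the WebDashBoard&rsquo;s web.config is below (hedged: whether granting Full trust is acceptable depends entirely on your security policy, and the new-site-on-another-port workaround avoids the question altogether):</p>
<pre><code>&lt;!-- WebDashBoard web.config (sketch): override the restricted ASP.NET
     trust level applied to the default web site, which is what the
     error message is complaining about. Use only if Full trust is
     acceptable in your environment. --&gt;
&lt;configuration&gt;
  &lt;system.web&gt;
    &lt;trust level="Full" /&gt;
  &lt;/system.web&gt;
&lt;/configuration&gt;
</code></pre>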
]]></content:encoded>
    </item>
    <item>
      <title>It&#39;s conference season</title>
      <link>https://blog.richardfennell.net/posts/its-conference-season/</link>
      <pubDate>Fri, 07 Sep 2007 11:18:23 +0000</pubDate>
      <guid>https://blog.richardfennell.net/posts/its-conference-season/</guid>
      <description>&lt;p&gt;I am off to loads of conferences and events in the next few weeks; you wait all year for one, then they all come together&amp;hellip;&lt;/p&gt;
&lt;p&gt;Next week&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.microsoft.com/uk/mix07/&#34;&gt;Mix07&lt;/a&gt; - London&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032348074&amp;amp;Culture=en-GB&#34;&gt;Office Business Applications Architect Forum&lt;/a&gt; - London&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This month&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=MSDN:%20Introduction%20to%20.NET%20Framework%20V3.0%20%28including%20sneak%20preview%20of%20v3.5%20changes%29%20%28Martin%20Parry%20and%20Mike%20Taulty%29&#34;&gt;MSDN: Introduction to .NET Framework V3.0&lt;/a&gt; - Leeds (hosted by Black Marble)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.mssoaandbpconference.com/&#34;&gt;Microsoft SOA and Business Process Conference 2007&lt;/a&gt; - Seattle&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Next month&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&#34;http://www.mseventseurope.com/TechEd/07/Developers/Pages/Default.aspx&#34;&gt;TechEd Developer 2007&lt;/a&gt; - Barcelona&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.blackmarble.co.uk/events.aspx?event=MSDN:%20What%e2%80%99s%20new%20in%20Visual%20Studio%202008%20and%20.NET%20Fx%203.5%20for%20the%20Web%20Developer%20%28Martin%20Parry%20and%20Daniel%20Moth%29&#34;&gt;MSDN: What’s new in Visual Studio 2008 and .NET Fx 3.5 for the Web Developer&lt;/a&gt; - Leeds (hosted by Black Marble)&lt;/li&gt;
&lt;li&gt;&lt;a href=&#34;http://www.sqlbits.com/&#34;&gt;SQLBits&lt;/a&gt; - Reading&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I aim to blog from all of them, WiFi and batteries allowing.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I am off to loads of conferences and events in the next few weeks; you wait all year for one, then they all come together&hellip;</p>
<p>Next week</p>
<ul>
<li><a href="http://www.microsoft.com/uk/mix07/">Mix07</a> - London</li>
<li><a href="http://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032348074&amp;Culture=en-GB">Office Business Applications Architect Forum</a> - London</li>
</ul>
<p>This month</p>
<ul>
<li><a href="http://www.blackmarble.co.uk/events.aspx?event=MSDN:%20Introduction%20to%20.NET%20Framework%20V3.0%20%28including%20sneak%20preview%20of%20v3.5%20changes%29%20%28Martin%20Parry%20and%20Mike%20Taulty%29">MSDN: Introduction to .NET Framework V3.0</a> - Leeds (hosted by Black Marble)</li>
<li><a href="http://www.mssoaandbpconference.com/">Microsoft SOA and Business Process Conference 2007</a> - Seattle</li>
</ul>
<p>Next month</p>
<ul>
<li><a href="http://www.mseventseurope.com/TechEd/07/Developers/Pages/Default.aspx">TechEd Developer 2007</a> - Barcelona</li>
<li><a href="http://www.blackmarble.co.uk/events.aspx?event=MSDN:%20What%e2%80%99s%20