The Issue
Whilst refreshing an end-to-end DevOps demo, one I use for both Azure DevOps and GitHub, I hit a problem. The new Playwright UX tests, which were replacing the old Selenium ones, were failing on the GitHub-hosted runner.
The strange thing was that the same tests worked perfectly on:
- My local development machine
- The Azure DevOps hosted runner
- And strangest of all, a GitHub self-hosted runner
The Solution
Adding some logging to the tests showed that the actual issue was that, on the GitHub-hosted runner, the code counting the rows in an HTML table was always returning 0.
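The logging itself was nothing sophisticated; something along these lines (illustrative only, using NUnit's TestContext for output) is enough to surface the count in the runner's test logs.
// Temporary diagnostics: write the row count to the test output
int rowCount = await Page.Locator(".dataTable").Locator("tr").CountAsync();
TestContext.WriteLine($"dataTable row count: {rowCount}");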
I went down a few dead ends, looking at permissions and tooling versions, but the cause turned out to be simple: the Playwright tests were running too fast on the GitHub-hosted runner.
So, at its simplest, just adding a wait before my table locator fixed the issue.
// Crude fix: pause for five seconds so the page has time to render
await Page.WaitForTimeoutAsync(5000);
int rowCount = await Page.Locator(".dataTable").Locator("tr").CountAsync();
Spraying fixed waits across the test codebase is not a great solution, but it worked and gave me a starting point for refactoring the tests to be more robust.
A better solution was to explicitly wait for the table to be visible before counting the rows.
// Use Playwright's auto-retrying assertion to wait for the table to be visible
await Expect(Page.Locator(".dataTable")).ToBeVisibleAsync();
int rowCount = await Page.Locator(".dataTable").Locator("tr").CountAsync();
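To show the pattern in context, here is a minimal, self-contained sketch of how it might sit inside a Playwright NUnit test. The URL, test name, and final assertion are illustrative placeholders rather than the actual demo code; only the locator and the wait-then-count pattern come from the snippets above.
using System.Threading.Tasks;
using Microsoft.Playwright.NUnit;
using NUnit.Framework;

public class DataTableTests : PageTest
{
    [Test]
    public async Task TableShouldContainRows()
    {
        // Hypothetical demo URL; substitute the page under test
        await Page.GotoAsync("https://example.com/data");

        // Playwright retries this assertion until the table renders or the timeout expires
        await Expect(Page.Locator(".dataTable")).ToBeVisibleAsync();

        // Only count the rows once the table is known to be visible
        int rowCount = await Page.Locator(".dataTable").Locator("tr").CountAsync();
        Assert.That(rowCount, Is.GreaterThan(0));
    }
}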
The Key Takeaway
I think the key point here is not the quality of my UX tests, but that this issue only showed up on GitHub-hosted runners, even though they are built from the same image as the Azure DevOps ones and use effectively the same agent software.
The only difference is that Azure DevOps and GitHub hosted runners are provisioned with different virtual hardware specifications:
- Azure DevOps: 2 vCPUs, 7 GB RAM
- GitHub: 4 vCPUs, 16 GB RAM
As I found, this difference can matter for more than just build speed.