Automation is a big theme of ThreadFix’s QA strategy, and almost nowhere is that more apparent than in our workflow for SCM (“source code management” for the purposes of this post).
ThreadFix has three repositories that together make up the application: our public GitHub repository and our two private repositories for code specific to ThreadFix Enterprise and the ThreadFix ScanAgent functionality.
The key branches are the developer branches, with the QA branches acting as their test-focused complements. The QA branches include our test suites, QA deploy scripts, test-specific REST calls, and other utilities that would not be appropriate to include in a final release. Beyond these additions, it is imperative that a developer's commits (with at least some degree of assurance about baseline behavior) be synced into the corresponding QA branch, and that any severely breaking code be caught and flagged early. These are the requirements that automating SCM addresses.
Polling the Repo
Luckily, the initial kickoff of our SCM workflow is painless thanks to our Jenkins setup. One of the options available as a build trigger is to poll the SCM for changes.
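For reference, the trigger's Schedule field takes Jenkins cron syntax. A sketch of what that might look like (the 15-minute interval is an assumption about our setup, and `H` tells Jenkins to spread polling load across jobs rather than fire them all at once):

```
# Build Triggers → Poll SCM → Schedule
H/15 * * * *
```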
The job being triggered pulls the latest code from the dev branches for each repo, and then builds the application and runs its unit tests.
Do or Die, Pass or Fail
Here’s where the workflow splits. If the unit tests pass, meaning the code is deemed stable enough to perform its basic functionality, then the subsequent merge jobs are kicked off. If the build fails, an email showing which module failed is sent to ThreadFix’s technical lead, support developer, and QA team. This, too, is easy to set up in Jenkins (it is advisable to check both of these boxes, as an outright build failure will not be reported by the first option alone):
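The branching logic amounts to a simple gate. In practice, Jenkins' own downstream-job triggers and email notifications handle this, so the sketch below is purely illustrative: the helper functions are stand-ins, not real commands.

```shell
#!/bin/sh
# Hedged sketch of the pass-or-fail gate; the three helpers below are
# hypothetical stand-ins for the real Jenkins-managed steps.
run_unit_tests() { true; }  # stand-in for the actual build-and-test step
trigger_merge_jobs() { echo "kicking off merge jobs"; }
notify_failure() { echo "emailing tech lead, support developer, and QA"; }

if run_unit_tests; then
  trigger_merge_jobs
else
  notify_failure
fi
```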
(Don’t Fear) The Repo
Assuming the unit tests passed, the three sequential merge jobs are triggered. These bring the QA branches in line with their developer counterparts. To wrap up, we run our unit tests against both the Community and Enterprise versions of our new QA code. This gives us some assurance that the merge process left us with code that still meets our original unit test expectations. If not, we’ll hear about it through email (or from Mac).
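At its core, each merge job is just a git merge of a dev branch into its QA counterpart. A minimal, self-contained sketch of that step, using a throwaway local repository and hypothetical branch and file names (ThreadFix's actual jobs run one of these per repository):

```shell
#!/bin/sh
# Demo of one merge job: bring a QA branch in line with its dev counterpart.
set -e

# Setup: a throwaway repo with a dev branch and a qa branch.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email "qa@example.com"
git config user.name "QA Bot"

echo "core code" > app.txt
git add app.txt
git commit -qm "initial commit"
git branch dev
git checkout -qb qa
echo "test suite" > tests.txt          # QA-only additions
git add tests.txt
git commit -qm "add QA utilities"

# A developer commits on dev...
git checkout -q dev
echo "new feature" >> app.txt
git commit -qam "dev: new feature"

# ...and the merge job syncs qa with dev, keeping the QA-only files.
git checkout -q qa
git merge -q --no-edit dev
echo "qa branch now has the dev change plus the QA utilities"
```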
In conclusion, this process lets us maintain some level of assurance about the quality of our branches’ code while staying up to date on unit test status and build viability. If part of the process fails, we are notified, and once the issue has been addressed, the process kicks off again on its own. If it takes a few tries, the emails help keep the issue at the forefront of our minds and keep us honest.
On the other side of this process, we have an automated system for pulling, deploying, and integration-testing ThreadFix, but that’s a matter for another post.
Hopefully this sparks some ideas for automating other aspects of your own project’s QA environment.