There are two challenges to getting into TDD:
- Why should I test upfront when I know it will fail (there's a massive aversion to failure in my part of the world)?
- Setting up the whole thing.
I made peace with the first challenge using a very large monitor and a split screen, writing code and tests in parallel, deviating from the 'pure teachings' for the comfort of my workflow.
The second part is trickier; there are so many moving parts. This post documents some of the insights.
TDD is built around the idea that you write your test first and then only write enough code to make that test pass. Then you write another failing test and start the cycle over.
- In JavaScript, npm gives you a default `test` script in your package.json that you can run any time. For connoisseurs there are tools like WallabyJS or the VSCode Mocha Sidebar that run your tests as you type and/or save. The tricky part is: which testing libraries (more on that below) to use?
- In Java, Maven runs your tests in the default `test` phase, and JUnit is the gold standard for tests. For automated continuous IDE testing there is Infinitest.
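On the npm side, the wiring is a few lines in package.json (the Mocha runner and the version range here are just one common choice, not a requirement):

```json
{
  "scripts": {
    "test": "mocha"
  },
  "devDependencies": {
    "mocha": "^5.0.0"
  }
}
```

With that in place, `npm test` works the same locally and in any CI pipeline.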
Automated testing after a commit to GitHub, GitLab or Bitbucket happens once you configure a pipeline as a hook into the repository and the pipeline can pick up the tests you specified. Luckily, your Maven and npm scripts will most likely work as a starting point.
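As a concrete example, a minimal `.travis.yml` is all Travis needs to run the npm test script on every push (the Node version is just an example):

```yaml
language: node_js
node_js:
  - "8"
# Travis runs `npm install` and `npm test` by default for node_js
# projects, so no explicit install or script section is needed here.
```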
The bigger challenge is the orchestration of various services like static analysis, dependency management and reporting (and good luck if your infra guys claim they could set up and run everything in-house).
Some of the options available:
- Repository: GitHub, GitLab or Bitbucket
- Pipeline: Heroku Flow, Bitbucket Pipelines, Travis CI, Jenkins, CodeShip, CircleCI and more
- Testing and reporting services: CodeClimate, Sauce Labs, Codacy, Coveralls, Snyk (for vulnerabilities), Greenkeeper (for dependency management) and many more - some run extra tests, some report on tests that ran in your pipeline