Part 2: Tests & Reporting
By the end of the last post, we had built an instrumenter. It takes our single-file source program and modifies it to add counters. These counters let us measure statement coverage when our tests run. The next step is to make our tests use our instrumented source code.
Let us assume that a single file comprises the entire test-suite for our source program. Our goal now is to measure its code coverage.
Integrating a code coverage tool directly with a test framework makes it really easy for developers to start measuring coverage in their tests.
To be able to measure code coverage without modifying the tests, there are two things that we need from the framework:
- Allow for instrumentation of the source code imported by the tests.
- Allow for the collected coverage information to be shipped elsewhere.
Mocha provides both.
Let us first try to run our test-suite:
```
» mocha --reporter spec basic

  test
    ✓ must work as expected when the input is right
    ✓ must throw when percentages don't add up to 1
    ✓ must throw when the amount is negative

  3 passing (13ms)
```
Things look good. The next step is to make it use our instrumented source.
A mocha compiler plugin allows for pre-processing source files imported in tests. We define such a plugin below:
In this plugin, we explicitly force the source code of our program through our instrumenter and let all other files pass through untouched.
Now, let us run the tests again after asking mocha to use our compiler:
```
» mocha --reporter spec --compilers js:bin/compiler.js basic

  test
    ✓ must work as expected when the input is right
    ✓ must throw when percentages don't add up to 1
    ✓ must throw when the amount is negative

  3 passing (13ms)
```
No difference! This shouldn’t surprise us as we know that the instrumentation process does not (and should not) affect the behavior of the source program. However, the counters should have silently done their work under the hood. So, the next step is to stash the collected coverage information somewhere.
A mocha reporter plugin is typically used to visualize the test results; for our previous runs, we used the spec reporter. While the visualization function of the reporter is of little concern to us, it does provide a very nifty hook that we need: the end event, which is fired after all tests have run.
In this plugin, we do two things:
- We inherit from the Spec reporter. This will ensure that we retain the default behavior of a typical test reporter without re-implementing any of the logic ourselves.
- When the end event is fired, i.e., after all tests have run, we read the collected coverage information from the program (remember the __coverage__ global variable that our instrumenter added?) and save it to disk as a trace file in a machine-readable format (which we will cover in the next section).
Now, let us run the tests again after asking mocha to use our reporter:
```
» mocha --reporter reporter.js --compilers js:bin/compiler.js basic

  test
    ✓ must work as expected when the input is right
    ✓ must throw when percentages don't add up to 1
    ✓ must throw when the amount is negative

  3 passing (13ms)

» ls -1 lcov.info
lcov.info
```
Note that the test results are displayed exactly as before, so our reporter mimics the behavior of the previous one. More importantly, we now have a file, lcov.info, with the coverage data ready for analysis.
The final step is to summarize and visualize the coverage data. Code coverage tools generally ship with their own reporters. Instead of writing our own, we are going to let LCOV’s genhtml tool handle this for us. The trace file from the previous section is in a format that is understood by genhtml.
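For reference, a minimal trace for a single file might look like the following (the file name, line numbers, and counts are illustrative): SF names the source file, each DA:&lt;line&gt;,&lt;hits&gt; record gives the execution count for one line, and LF/LH give the number of lines found and lines hit:

```
SF:split.js
DA:1,3
DA:2,0
DA:3,3
DA:4,3
LF:4
LH:3
end_of_record
```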
```
» genhtml -o html/ ./lcov.info

Overall coverage rate:
  lines......: 92.9% (13 of 14 lines)
  functions..: no data found

» open html/index.html
```
In this blog post series, I have tried to lay down the basic skeleton of a simple, functional code coverage tool. I hope that you found it useful.