Bridging Internal and External Quality with Sonar

by Olivier Gaudin

    A few weeks ago, Evgeny described how Sonar can be used with its JaCoCo plugin to measure code coverage by Integration Tests. By adding this new feature to Sonar, Evgeny has actually done more than close the most voted issue in Jira at the time: he has taken a first step towards closing some of the gaps that exist in the world of Software Quality.

    Indeed, the domain of Software Quality is currently divided into two worlds that have little intersection:

    On one side, the world of external quality, whose main objective is to make sure that the software behaves as expected. This encompasses Integration Tests, User Acceptance Tests, Non-Regression Tests, Performance Tests… and consists mainly of interacting with the software, observing its behavior, and making sure it matches the functional specification and does not regress. Those interactions may happen manually or through one of the numerous tools that exist on the market. This is commonly described as a black box approach and serves the purpose of building the right software. The return on external quality investment is immediate.

    On the other side, the world of internal quality, whose main objective is to measure how well the software has been built. This means internal inspection of the source code with static and dynamic (unit tests) analysis tools in order to review how the software performs against a set of pre-defined technical requirements. This is a white box approach aiming to make sure that we are building the software right. You can use Sonar to measure internal quality, i.e. to measure the technical debt according to the 7 deadly sins of the developer. Each sin committed generates technical debt that brings risk to the software and / or makes it more difficult to maintain over time. The real return on investment of assessing internal quality is medium to long term.

    To fully embrace the Continuous Delivery approach, external and internal quality must be continuously assessed and therefore fully automated. With Sonar, it is already possible to integrate the assessment of internal quality into the Continuous Integration process. But building some bridges between the internal and external quality worlds can provide even more insights on your application:

    How much of my source code is covered by functional tests?
    This use case is pretty straightforward based on Evgeny's post. The only thing missing is a new metric that consolidates Integration Tests (ITs) and Unit Tests (UTs). As soon as this is added, you will be able to classify every line of code into one of the following four categories:

    • I can sleep well, this line is covered by ITs and UTs
    • I should be careful when changing this line. Though I will know immediately if I have broken its contract (UTs), I cannot be sure that there will be no regressions in the application (missing ITs)
    • or vice versa
    • I am playing Russian roulette if I change this line
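    The classification above can be sketched in a few lines of code. This is only an illustration, not anything Sonar provides: the function name and the per-line coverage sets are assumptions, standing in for what would in practice come out of merged UT and IT coverage reports such as those JaCoCo produces.

```python
# Hypothetical sketch: classify a line of code by its unit-test (UT) and
# integration-test (IT) coverage, mirroring the four categories above.

def classify_line(line, ut_covered, it_covered):
    """Return a risk category for a line, given UT and IT coverage sets."""
    by_ut = line in ut_covered
    by_it = line in it_covered
    if by_ut and by_it:
        return "sleep well"        # covered by both UTs and ITs
    if by_ut:
        return "contract only"     # UTs catch breakage; regressions may slip by
    if by_it:
        return "regression only"   # ITs catch regressions; contract unchecked
    return "russian roulette"      # no safety net at all

# Example with made-up line numbers:
ut_covered = {1, 2, 3}
it_covered = {2, 3, 4}
print(classify_line(2, ut_covered, it_covered))  # -> sleep well
print(classify_line(5, ut_covered, it_covered))  # -> russian roulette
```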

    Is the newly developed source code covered by associated ITs?
    It is important to know how much coverage you have at a given point in time, but it is even more important to make sure that when you change or add lines of code, you cover them with appropriate tests. This is what ensures that you are investing for the long term. So basically, what you will want to review at the end of each sprint is whether you have appropriate coverage (UTs and ITs) on those lines.
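    Computing coverage on new code only is conceptually simple, as the hypothetical sketch below shows. The inputs are assumptions: a diff would give the set of lines touched during the sprint, and a merged UT + IT report would give the set of covered lines.

```python
# Hypothetical sketch: coverage restricted to lines changed in a sprint.

def new_code_coverage(changed_lines, covered_lines):
    """Percentage of changed lines that are covered by tests."""
    if not changed_lines:
        return 100.0  # nothing changed, nothing left uncovered
    hit = changed_lines & covered_lines
    return 100.0 * len(hit) / len(changed_lines)

changed = {10, 11, 12, 13}   # lines touched this sprint (made-up)
covered = {11, 12, 40, 41}   # lines hit by UTs or ITs (made-up)
print(new_code_coverage(changed, covered))  # -> 50.0
```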

    Is all the code that I deploy to production really used?
    I hear that 65% of the functionality in production is never used. Is that the case with my software? And do I have code that is not attached to any available functionality?
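    Answering this question amounts to comparing what is deployed with what actually runs. A minimal sketch, assuming we could obtain the set of deployed methods from the build and the set of invoked methods from production profiling or logs; all the names here are illustrative, not a real data source:

```python
# Hypothetical sketch: find code shipped to production that is never called.

def unused_code(deployed_methods, invoked_methods):
    """Methods present in the deployed artifact but never invoked."""
    return sorted(deployed_methods - invoked_methods)

deployed = {"Invoice.create", "Invoice.cancel", "Report.export"}
invoked = {"Invoice.create"}
print(unused_code(deployed, invoked))  # -> ['Invoice.cancel', 'Report.export']
```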

    What am I really going to impact when changing this line?
    It is often difficult to know which other components you are going to impact when making certain changes. It would really help to have some kind of cartography showing the impact your changes will have on the overall software.
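    Such a cartography boils down to walking a dependency graph. The sketch below assumes a reverse dependency graph (each component mapped to the components that depend on it) and lists everything transitively impacted by a change; the graph itself is a made-up example.

```python
from collections import deque

# Hypothetical sketch of impact analysis over a reverse dependency graph.

def impacted(component, reverse_deps):
    """All components that transitively depend on the changed component."""
    seen, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for dependant in reverse_deps.get(current, []):
            if dependant not in seen:
                seen.add(dependant)
                queue.append(dependant)
    return sorted(seen)

reverse_deps = {
    "persistence": ["service"],       # "service" depends on "persistence"
    "service": ["web", "batch"],      # "web" and "batch" depend on "service"
}
print(impacted("persistence", reverse_deps))  # -> ['batch', 'service', 'web']
```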

    Obviously, this list of questions is not exhaustive, but I believe it is a pretty good start that will help to:

    • Reduce existing source code and focus on what is really being used
    • Have the big picture when assessing the risk associated with source code changes
    • Have a full functional cartography of the software showing what is being used, at what frequency, and how well it is tested, in order to get the highest ROI

    Sonar does not yet provide support for all those use cases, and I believe this is definitely a subject that will keep us busy in 2011. While some people work to assure traceability between all kinds of technical documentation, we prefer investing to ensure continuous and automated traceability between executable specifications and source code.