Measuring code quality

In about six weeks I’ll attend the second edition of I T.A.K.E. Unconference. “Preparing” for it, I decided to take a look at the videos for some of the talks that I missed last year. Today, I watched The Good, the Bad and the Ugly of Dealing with Smelly Code by Radu Marinescu.

Watching it has reinforced my belief that we can (and should) try to be more specific about measuring code quality. I quite often meet teams that only track the number of reported production bugs. This is a good metric, and one I recommend, but it shouldn’t be the only thing you measure. Why? For at least a few reasons:

  • It’s a lagging indicator. That means you can’t take action until after an unfortunate event has happened.
  • It doesn’t give much indication of where to focus your efforts. Sometimes it’s difficult to reason about which module(s) have low quality from the bugs alone, because there’s a chain of errors that led to each bug.
  • It tends to lead to arguments between coders and testers about whose fault it is that the bugs weren’t caught.

So, what is Radu suggesting that we do? Well, he argues that we should augment our toolset by adding applications which measure and visualize quality at the class and module level. This way, we can control quality.

A popular tool for code analysis is SonarQube, which supports 20 languages, but there’s a Wikipedia page that lists tens of others. Radu himself has been part of a team which develops two products for the same purpose: inFusion and inCode. You should check them out. They not only provide standard metrics like lines of code per class or cyclomatic complexity, but they also have heuristic engines which can detect code smells like God Class or Feature Envy. If you use inFusion or inCode, you can also take advantage of another tool called CodeCity, which represents your code graphically, just like a city with buildings. The image above is actually generated using this tool.
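To give a feel for what these tools compute, cyclomatic complexity is essentially one plus the number of decision points in a piece of code. Here’s a minimal, deliberately simplified Python sketch using the standard library’s `ast` module; it is not how SonarQube or inFusion actually implement the metric (real tools handle boolean operators, comprehensions and other constructs more carefully):

```python
import ast
import textwrap

# Node types that introduce a branch in the control flow.
# A real tool would treat `and`/`or` chains and comprehensions
# with more nuance; this is a rough approximation.
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(textwrap.dedent(source))
    return 1 + sum(isinstance(node, _BRANCH_NODES) for node in ast.walk(tree))

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for _ in range(n):
        if n % 2 == 0:
            return "even"
    return "odd"
"""

print(cyclomatic_complexity(sample))  # 1 + if + for + if = 4
```

Even a toy implementation like this makes it obvious why a function with a score of 20 is harder to test than one with a score of 4: every decision point multiplies the paths you need to cover.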

Some more advice I took from Radu’s talk:

  • Often the code is the only documentation we have. Technical specs don’t get written, or they are written upfront, so by the time we finish the project they are out of date. As such, to ensure maintainability it makes a lot of sense to invest time in cleaning up our code.
  • Use multiple metrics to make decisions. A doctor correlates the various results from your blood analysis to understand if you’re healthy. Likewise, we shouldn’t focus on a single metric, like cyclomatic complexity or fan-out, when trying to decide where to intervene.
  • Metrics will only point you in the right direction. They are like a rough map. To really understand what’s going on, you’ll need to dig deeper. Gather context by going down to the code level.
  • Whenever possible, use tools that have visualization features. “A picture is worth a thousand words”, they say. Our brains have evolved for pattern matching, that’s why we can instantly understand what a chart like the one above is telling us.
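The "multiple metrics" advice can be made concrete. Below is a hypothetical sketch of combining several per-module measurements into a single ranking; the module names and numbers are invented for illustration, and in practice the raw data would come from a tool like SonarQube or inCode:

```python
# Hypothetical measurements per module (invented numbers).
modules = [
    {"name": "billing", "complexity": 48, "fan_out": 12, "loc": 900},
    {"name": "reports", "complexity": 15, "fan_out": 30, "loc": 400},
    {"name": "auth",    "complexity": 60, "fan_out": 4,  "loc": 250},
]

METRICS = ("complexity", "fan_out", "loc")

# Normalize each metric against the worst offender, so that no single
# metric dominates just because it lives on a bigger numeric scale.
maxima = {k: max(m[k] for m in modules) for k in METRICS}

def risk_score(module: dict) -> float:
    """Average of the normalized metrics: 1.0 = worst on every axis."""
    return sum(module[k] / maxima[k] for k in METRICS) / len(METRICS)

# Modules most in need of attention come first.
ranked = sorted(modules, key=risk_score, reverse=True)
for m in ranked:
    print(m["name"], round(risk_score(m), 2))
```

Notice how the ranking differs from what any single metric would suggest: `auth` has the highest complexity, yet `billing` tops the combined list because it scores poorly on several axes at once, which is exactly the "correlate the blood results" point above.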

What about you? How do you ensure high code quality? What tools are you using for code analysis? Let me know in the comments.

Also, if you’d like to learn more about the quality of your code, or about playing with data, don’t miss this year’s edition of I T.A.K.E. Unconference. A couple of sessions that I find interesting: Aki Salmi – Refactoring Legacy Code – a true Story and Martin Naumann – Open Data Wizardry 101.
