7

SonarQube is a software product that runs various coding-style rules and other metrics, similar to FxCop or ReSharper. It defines breaking the style rules as:

"MAINTAINABILITY ISSUE

This is commonly referred to as technical debt. Issues associated with maintainability are named “code smells” in our products."

https://www.sonarsource.com/why-us/code-quality/

However, I would normally think of technical debt as "A piece of code which doesn't follow the overall architectural pattern."

For example, say we have a business rules layer, but as a quick fix we put one bit of business logic in the UI layer.

Or, say we have a project which is very OOP, but we add a procedural helper class for a new feature.

The code might well meet all the style rules, have no duplication, etc., and be fine on its own. It's only considered technical debt because it doesn't fit the pattern of the other code in the project. To make the project 'nice' again we need to go back and refactor it so that everything works the same way.

I would think that this kind of difference would be hard to pick up with code analysis rule sets, and it seems disingenuous to call style rule violations "technical debt".

Am I wrong or right? Do some objective rules or some class of rules make a good definition of technical debt or at least a type of technical debt?

CLARIFICATION:

This tool will tell management that a line of code such as

this.myvariable

is 2 minutes' worth of tech debt, which seems wrong.

However, it will also do the same thing for cyclomatic complexity or duplicated code, which seems less wrong.
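To make the "less wrong" half concrete: cyclomatic complexity, unlike naming style, is an objective count of independent paths through a function, and converting it into a time estimate is just arithmetic over a threshold. Below is a minimal sketch of that idea in Python; the counting is simplified and the 3-minutes-per-point formula is invented purely for illustration (real tools like SonarQube use their own remediation tables).

```python
import ast

# Simplified: each of these node types adds one decision point / path.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    # Base complexity is 1; every branch point adds 1.
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

def debt_minutes(complexity: int, threshold: int = 10,
                 minutes_per_point: int = 3) -> int:
    # Hypothetical debt formula: only complexity above the threshold costs.
    return max(0, complexity - threshold) * minutes_per_point

src = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
cc = cyclomatic_complexity(src)
print(cc, debt_minutes(cc))  # prints: 3 0
```

The measurement itself is mechanical and repeatable; it is the translation into "minutes of debt" that is a modelling choice, which is exactly where the estimates from different tools diverge.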

Ewan
  • 83,178

8 Answers

15

Can technical debt be detected by code analysis?

This is like asking if a speedometer will make you a safer driver.

However, I would normally think of technical debt as "A piece of code which doesn't follow the overall architectural pattern."

Technical debt has many more faces than that. The metaphor is about borrowing time from your future self. Just like driving, there are many ways to get into trouble that the tool simply won't help you with. But that doesn't mean the tool is useless. Paying attention to it can help. It's just not everything.

Technical debt is a concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution.

Technical debt is commonly associated with extreme programming, especially in the context of refactoring. That is, it implies that restructuring existing code (refactoring) is required as part of the development process. Under this line of thinking refactoring is not only a result of poorly written code, but is also done based on an evolving understanding of a problem and the best way to solve that problem.

Technical debt may also be known as design debt.

techopedia.com

Uncle Bob will tell you that the quick gains you get by procrastinating design is no excuse to make a mess.

Fowler will argue that some debt is prudent, some reckless, and some accidental.

Cunningham seems satisfied as long as you pay down the debt to make room for a new feature before adding that feature.

All I care about is whether you're making your fellow programmers miserable. I've been in development environments where change was fast and easy. Others where it was excruciating and glacial. The big difference had nothing to do with having, or not having, a tool that wagged its finger at you when your design was weird.

The biggest thing missing from this tool is an ability to call out bad names. Give things bad names and you're in for a world of technical debt. Bad names are loan sharks that charge staggering interest and want you to pay every time you look at them. How's a tool going to spot that? Best thing for this is asking a fellow developer if the name makes sense.

Am I wrong or right? Do some objective rules or some class of rules make a good definition of technical debt or at least a type of technical debt?

Does obeying the speed limit make me a good driver? Would you like me to stop doing donuts on your lawn at 25 miles an hour?

Or as explained by a 5 year old: "Does this bug you? I'm not touching you."

This thing might help, but not if you think it takes care of everything.

candied_orange
  • 119,268
2

That is a hard question.

On one side, Ward Cunningham originally used the term to illustrate the difference between a theoretical waterfall model and an iteration-based incremental development model. If you only start writing your code once all possible requirements are known, all architectural decisions and plans are done, and everything to be coded is already perfectly optimized, you will have the fastest and most efficient code writing possible, with no waste and no maintenance problems (and probably no need for maintenance, since all possible requirements were known before development started). But at the same time, the whole process becomes very, very long, and in a real-life project where requirements change over time it might not even be possible, even at a theoretical level. The incremental development model, on the other hand, lets you start coding earlier, deliver earlier, and adapt to changing needs and newly discovered requirements faster, which Cunningham argued is the better process because it leads to the most appropriate product in the shortest time. But it also carries a risk: if you do not maintain your code when new information becomes available, problems follow.

As such, he was not writing about bad code quality; he assumed developers would try to do the best possible job they could under both processes. From this point of view you would not even need to look at the code: if it was not developed with a waterfall model (a somewhat unrealistic process where every requirement is known up front), but in an iterative/incremental/agile way, it most probably has technical debt.

On the other hand, research on this topic has already shown that the currently available tools use different definitions of code quality (and thus of technical debt), measure different properties, give results that might not correlate, report severities that might not match the real severity of the issues found, and produce fixing-time estimates that can be off by a factor of 20.

And to make the situation a bit more interesting, we also have to acknowledge that code analysis tools are themselves software products under development: on the exact same code, different versions of the same detection tool can report different numbers. Some releases report lower numbers thanks to bugs being fixed or detection algorithms improving, while other releases report larger numbers when detectors for new issue types are added.

Our research on this topic (also contains the citation of research I mentioned above): https://www.researchgate.net/publication/357875475_Reproducibility_in_the_technical_debt_domain

0

You can use CppDepend to easily create your own rules using CQLinq. The rules can concern architecture, design, and implementation, including the coupling between projects, types, and methods. And you can specify the technical debt for each rule, as described here: http://www.cppdepend.com/technicaldebt

warnif count > 0
// flag methods whose cyclomatic complexity exceeds 10
from m in Methods
where m.CyclomaticComplexity > 10
select new {
   m,
   m.CyclomaticComplexity,
   // 3 minutes of estimated debt per complexity point over the threshold
   Debt = (3 * (m.CyclomaticComplexity - 10)).ToMinutes().ToDebt(),
   // fully covered methods accrue less annual interest
   AnnualInterest = (m.PercentageCoverage == 100 ? 10 : 120).ToMinutes().ToAnnualInterest()
}
0

There are all sorts of things that could be technical debt. Not following coding standards is probably the least of them.

Huge files and functions are a problem, and static analysis tools can flag them.
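As a sketch of how simple that kind of check really is, here is what "flag functions that are too long" amounts to in Python, using the standard `ast` module. The 50-line threshold is an arbitrary choice for illustration; real tools make it configurable.

```python
import ast

def long_functions(source: str, max_lines: int = 50):
    """Return (name, length) for each function longer than max_lines."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno/lineno give the span of the whole definition.
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                flagged.append((node.name, length))
    return flagged

# Example: with a deliberately tiny threshold, a 3-line function is flagged.
sample = "def f():\n    x = 1\n    return x\n"
print(long_functions(sample, max_lines=2))  # prints: [('f', 3)]
```

Mechanical size checks like this are cheap and reliable; the point of the answer is that the expensive kind of debt is precisely the kind such a scan cannot see.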

But the thing tools won't really help with is an over-complicated and undocumented mess of code, where nobody knows how it works any more. People keep tacking more bits on to fix problems, relying on ever more tests to check they haven't broken anything.

Simon B
  • 9,772
0

There are trivial cases: my compiler will tell me when I use a deprecated function, that is, a function that works today but is outdated and will go away. You won't be able to avoid fixing that code, so it's technical debt.

That’s technical debt: Work that you will have to do at some point in the future, when low code quality makes working with your code base expensive.
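This trivial case is easy to see in any language with deprecation support. Below is a small Python sketch of the same mechanism using the standard `warnings` module; the function names are invented for illustration.

```python
import warnings

def old_parse(text):
    # A library marks this function as deprecated; every use gets flagged.
    warnings.warn("old_parse() is deprecated; use parse() instead",
                  DeprecationWarning, stacklevel=2)
    return text.split(",")

def parse(text):
    return text.split(",")

# Capture the warning the way a test runner or linter would surface it.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    old_parse("a,b")

print(len(caught), caught[0].category.__name__)  # prints: 1 DeprecationWarning
```

Each call site the warning fires at is a known, unavoidable future fix-up: technical debt with a due date, which is exactly why this case is detectable by tooling.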

gnasher729
  • 49,096
0

As a developer, you have to write code that works (which can be more or less difficult depending on the situation), and then you have to avoid stupid bugs. Unit tests are very, very good at finding stupid bugs at almost zero effort, so they are very useful during refactoring, where there is a risk of stupid bugs (but totally useless for all the geniuses who never, ever in their life make stupid mistakes :-). That's a situation where unit tests won't save you time immediately, but will in the long term. And if you write unit tests anyway, it's most effective to write them along with the code, because you know the subject best at that point.

Unit tests are also very, very useful in situations where a solution is hard to find, but easy to verify. In that situation unit tests may be unavoidable, because you can't get your code into a state where it works correctly without verifying the results.

Another situation where you need unit tests is when you have complicated rules that need to be followed. Handling Unicode correctly, for example, can be quite complicated; you can write unit tests that specifically target the hard cases, which forces you to write code that gets those cases right.
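As an illustration of pinning down the hard cases of a complicated rule set, here is a sketch in Python: case-insensitive comparison in Unicode needs `str.casefold()`, not `str.lower()`, and the tests below encode the edge case that would otherwise slip through. The `ci_equal` helper is invented for this example.

```python
import unittest

def ci_equal(a: str, b: str) -> bool:
    # casefold() handles Unicode-specific foldings that lower() misses.
    return a.casefold() == b.casefold()

class CaseInsensitiveTests(unittest.TestCase):
    def test_ascii(self):
        self.assertTrue(ci_equal("Hello", "hELLO"))

    def test_german_sharp_s(self):
        # 'ß' casefolds to 'ss'; a lower()-based comparison gets this wrong.
        self.assertTrue(ci_equal("straße", "STRASSE"))

    def test_not_equal(self):
        self.assertFalse(ci_equal("straße", "strasse!"))
```

Run with `python -m unittest`. Once the hard cases are written down as tests, the implementation is forced to handle them, and stays forced to as the code evolves.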

gnasher729
  • 49,096
0

Your understanding of technical debt is slightly off.

Technical debt is anything that makes maintenance more difficult than necessary. It can range from trivial issues like bad variable names and inconsistent formatting, to severe architectural problems, missing documentation, or lack of automated processes. Anything where you pay an ongoing cost (in development time and project risk) that could have been avoided by investing time in fixing the underlying issues.

What you describe is not really technical debt but more a question of overall uniformity, which is IMHO not necessarily desirable. A good rule is "use the right tool for the job". It is perfectly fine to have, say, a calculation engine in a functional style alongside a GUI in object-oriented code. Different patterns are appropriate for different problems, and trying to fit everything into a single pattern will not necessarily lead to better overall maintainability. It's about choosing the right pattern for each challenge, not the same pattern for every challenge.
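A minimal sketch of that mixed-style point, with invented names: the calculation engine is pure functions, and an object-oriented front-end layer holds state and delegates the math rather than re-implementing it.

```python
from dataclasses import dataclass

# Functional engine: pure functions, same inputs always give same outputs.
def net_price(gross: float, discount: float) -> float:
    return gross * (1.0 - discount)

def with_tax(price: float, rate: float) -> float:
    return price * (1.0 + rate)

# Object-oriented layer: holds UI-ish state, delegates to the engine.
@dataclass
class CheckoutView:
    gross: float
    discount: float = 0.0
    tax_rate: float = 0.2

    def total_label(self) -> str:
        total = with_tax(net_price(self.gross, self.discount), self.tax_rate)
        return f"Total: {total:.2f}"

print(CheckoutView(gross=100.0, discount=0.1).total_label())  # prints: Total: 108.00
```

Neither half is debt just because it differs in style from the other; each part uses the pattern that fits its problem.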

You are correct that not all technical debt can be detected and flagged by automated tools. Automated tools are good for detecting the more trivial issues though, so they have value.

JacquesB
  • 61,955