28

We have a large legacy code base here, with worse code than you can imagine.

We have now defined some quality standards and want them fulfilled both in completely new code and whenever someone touches the legacy code.

We enforce them with Sonar (a code analysis tool), which already reports some thousands of violations.

Now a discussion has come up about lowering those standards for the legacy code. Because it's legacy.

The discussion is about readability rules, like how many nested ifs/fors a piece of code may have.

How can I argue against lowering our code quality standards for legacy code?

Kromster
keiki

8 Answers

73

Code is just a means to an end. What that end might be varies, but typically it's profit.

If it is profit, then to successfully argue for anything you need to show that it will improve profit - e.g. by increasing sales, reducing maintenance costs, reducing development costs, reducing wages, reducing retraining costs, etc. If you can't or don't show that it will improve profit, then you're mostly just saying it would be a fun way to flush money down the toilet.

Brendan
44

Me, myself, and I have been both producer and maintainer of legacy code. If your tool is generating "thousands of violations" (or even hundreds, for that matter), forget the tool; it's inapplicable to the situation...

I assume the original developers are long gone and unavailable for discussion, so nobody is around who understands the whys and wherefores of the design and coding style. Correcting hundreds or thousands of violations isn't going to be a matter of rewriting a few lines of code here and there. Instead, it undoubtedly requires refactoring and re-doing the functional decomposition. Try that on any large existing codebase without intimately understanding its current design, and you're bound to introduce a whole new set of bugs and problems: a new can of worms even worse than the one you have now (or worse than the one your tool *thinks* you have now).

The only sensible way to address "thousands of violations" would be a rewrite from scratch: a long and expensive effort, and one that's almost impossible to sell to management. And in this case, management is probably right...

Legacy code typically just requires tweaks: for y2k, or when stock quotes went from 256ths to decimal. I did loads of both of that cr*p, and lots of other similar stuff. It's typically pretty "pinpoint" work, in that you can "read through" the sometimes-bad style, bad functional decomposition, and so on, and locate the collection of places that need to be modified. What happens "between those places", i.e. the more high-level flow, can remain a mystery to you. Just make sure you understand the localized functionality you are changing, and then test, test, test for any side effects your localized knowledge won't be able to anticipate.
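To make that "test, test, test" advice concrete, here is a minimal characterization-test sketch in JUnit 5. `LegacyPricing.quote` is a hypothetical stand-in for whatever legacy routine you are about to tweak; the expected values are whatever the unmodified code actually returns today, recorded before the change, not what a spec says they should be:

```java
// Sketch of a characterization test (JUnit 5). LegacyPricing.quote is a
// hypothetical stand-in for the legacy routine being tweaked; the expected
// values were recorded by running the existing, unmodified code.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class LegacyPricingCharacterizationTest {

    @Test
    void pinsDownCurrentBehaviourBeforeTheTweak() {
        // Recorded outputs of the current code for representative inputs.
        assertEquals(107, LegacyPricing.quote(100, "NY"));
        assertEquals(100, LegacyPricing.quote(100, "OR"));
        assertEquals(0, LegacyPricing.quote(0, "NY"));
    }
}
```

Run these before and after your localized change; any difference is exactly the kind of side effect your localized knowledge couldn't anticipate.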

If you're unable to see your way through code like that, then you may not be the best person to maintain legacy code. Some people can start with a blank screen and write beautiful programs, but can't start with a large codebase of other people's code and maintain it. Other people can maintain code, but can't start from scratch. Some can do both. Make sure the right people are maintaining your legacy code.

The occasional time you might want to redesign and rewrite your legacy codebase from scratch is when the business (or other) requirements change to such an extent that seat-of-the-pants "tweaks" can no longer accommodate them. At that point, you might as well start by writing a new functional requirements document, making sure all stakeholders are on board. It's basically a whole new ballgame.

The one-and-only *wrong* thing to do is to treat legacy code maintenance the same way you'd go about new development. And that one wrong thing seems to be exactly the path you'd like to go down :) Take my word for it: that ain't what you want to do.

13

The question is not how good or bad that code is. The question is how much benefit you get for how much investment.

So you have a tool that finds thousands of things it doesn't like. You have the choice of ignoring the tool's results or of investing work to change thousands of things. That's a lot of stuff. People will not be focused, not while making the changes and not while reviewing them, and the changes will not be properly tested, because it takes ages to figure out how to test (or even just try out) every one, so there will be bugs. Until you find and fix those bugs, you will have invested lots of time and money for an overall negative result.

On the other hand, tools can find things that they don't merely "dislike" but that are actual or potential bugs. If the legacy code is still in wide use, the right tool can genuinely find bugs. So turning compiler warnings on one by one, checking whether each is useful (not all are), and fixing the useful ones can be worthwhile.
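As a sketch of what "useful" looks like (the class below is contrived): javac lets you enable one lint category at a time, and some categories flag genuine bugs rather than style preferences.

```java
// Contrived example: compile with "javac -Xlint:fallthrough Tariff.java"
// so that only this one lint category is enabled. The warning it produces
// here points at a real potential bug, not a style nit.
public class Tariff {
    static int rate(int band) {
        switch (band) {
            case 1:
                band += 1;   // falls through: -Xlint:fallthrough warns here
            case 2:
                return band * 10;
            default:
                return 0;
        }
    }

    public static void main(String[] args) {
        System.out.println(rate(1)); // prints 20, which may surprise the author
    }
}
```

Fix everything one category reports, then enable the next; that keeps the work focused and reviewable.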

gnasher729
7

If it ain't broke don't fix it

This practically ought to be tattooed on every software developer and manager so that they never forget it.

Yes, it breaks all modern coding conventions. Yes, it's written in FORTRAN-77 with a wrapper in C so it can be invoked from Python. Yes, those are unstructured GOTOs. Yes, there's one subroutine three thousand lines long. Yes, the variable names only make sense to a Croat. etc. etc.

But if it has been tested in anger over a decade or more and has passed, leave it alone! If the modification needed is small, then keep it as small and as local as possible. Anything else runs the greater risk of breaking something that did not need fixing.

Of course, sometimes you find that it really is an unmaintainable kluge and that the only way to accommodate the "small" modification is to rewrite from scratch. In that case you have to go back to whoever is asking for the change and tell them what it will cost (both in dollars or man-hours for the rewrite, and in blood, sweat, and tears for the inevitable new bugs).

A long time ago there was a rash of failures of one of Digital's disk drives. They ended up recalling and replacing all of them at gord-knows-what cost. Apparently there was a blob of glue holding a filter inside the HDA. The engineering manifest said of the glue: "Non-parametrically specified, NO SUBSTITUTES ACCEPTABLE". A bean-counter "saved" under a cent per disk drive by sourcing some other glue. It gave off a vapour inside the HDA which killed the drive after a couple of years in service. It wasn't broke until he fixed it. Doubtless "it was only a tiny change..."

nigel222
5

There is definitely no point in reducing the quality level for legacy code. There is also no need to fix warnings immediately. Just ensure that all your new changes, in every component of the project, follow the high quality standard: no commit should ever increase the warning count.

It is a uniform and dead-simple approach.

It allows the old legacy code to keep working as-is (because no unnecessary modifications are made to it), it ensures new code is of high quality, and no additional cost is involved. A sketch of such a ratchet check follows.
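This "never increase the warning count" rule is easy to enforce in CI. Below is a sketch only: the plain-text count files are an assumed convention of your pipeline (e.g. a count exported from the analysis report), not anything Sonar produces by itself.

```java
// Sketch of a CI ratchet step, assuming the pipeline stores the previous
// violation count in baseline.txt and writes the current count to current.txt.
import java.nio.file.Files;
import java.nio.file.Path;

public class WarningRatchet {
    public static void main(String[] args) throws Exception {
        Path baselineFile = Path.of("baseline.txt");
        int baseline = Integer.parseInt(Files.readString(baselineFile).trim());
        int current = Integer.parseInt(Files.readString(Path.of("current.txt")).trim());

        if (current > baseline) {
            System.err.printf("Violation count rose from %d to %d; failing the build.%n",
                    baseline, current);
            System.exit(1);
        }

        // The equal-or-lower count becomes the new baseline,
        // so the total can only ever ratchet downwards.
        Files.writeString(baselineFile, Integer.toString(current));
    }
}
```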

Basilevs
3

Risk control….

  • The code in the large legacy codebase has at least been tested by real-life usage of the software.
  • It is very unlikely that the code in the large legacy codebase is covered by automated unit tests.
  • Hence any change to the code in the large legacy codebase is high risk.

Cost….

  • Making code pass a code checker while you are writing it is cheap.
  • Doing so at a later stage is much more costly.

Benefit

  • You hope that keeping to a coding standard leads to a better design of the code.

  • But it is very unlikely that having someone spend many weeks changing code just to remove warnings from a coding-standard checking tool will result in an improvement to the design of the code.

  • Simple code is clearer and tends to be easier to understand, unless the complex code has been split into short methods by someone who does not understand it….

Recommendation….

  • If a section of code is shown to have lots of bugs, or needs lots of changes to add new functionality, then spend the time to understand 100% of what it does.
  • Also spend the time adding good unit tests to that section of code, as you have a proven need for them.
  • You could then consider refactoring the code (which you now understand) so it passes the new code checker too; see the sketch after this list.
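Here is a sketch of what that refactoring step can look like. `Order` and the discount rule are invented for illustration; the nesting-depth rule is the kind the question mentions.

```java
// Hypothetical before/after of a typical nesting-depth violation.
public class DiscountExample {

    record Order(boolean active, double total) {}

    // Before: three nested ifs trip a max-nesting-depth rule.
    static double discountBefore(Order order) {
        double discount = 0.0;
        if (order != null) {
            if (order.active()) {
                if (order.total() > 100.0) {
                    discount = order.total() * 0.05;
                }
            }
        }
        return discount;
    }

    // After: guard clauses keep the happy path flat. Behaviour is unchanged,
    // which is exactly what the unit tests added beforehand should prove.
    static double discountAfter(Order order) {
        if (order == null || !order.active() || order.total() <= 100.0) {
            return 0.0;
        }
        return order.total() * 0.05;
    }

    public static void main(String[] args) {
        Order o = new Order(true, 200.0);
        System.out.println(discountBefore(o) == discountAfter(o)); // true
    }
}
```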

Personal experience….

  • In over 20 years of working as a programmer, I have never seen a case where blindly changing old code to remove all output from a new coding-standard checker had any benefit, apart from increasing the income of the contractors instructed to do so.
  • Likewise when old code had to be made to pass code reviews (TickIt/ISO 9001 done wrong) that required it to look like the new coding standard.
  • I have seen many cases where using coding-standard checking tools on new code had great benefits by reducing ongoing costs.
  • I have never seen a company fail due to the costs of maintaining old code that is used in a product with lots of customers.
  • I have seen companies fail due to missing the income from customers by being too slow to market….
Ian
0

Is the discussion about lowering the thresholds of the tool? Or did I misunderstand? The question is written in a confusing way. If that's not what you meant, please disregard my answer.

IMHO it would be a good idea to reduce the tool's sensitivity. Why? Because then it would highlight the hairiest pieces of code. The alternative is producing reports with thousands of violations, which become useless through their sheer size.

...other than that, just talk about it with the people involved and hear out their reasons too.

dagnelies
0

I would try to separate the legacy code from the new, clean code. In Eclipse, for example, you could do this by putting them into different projects which use each other: a 'clean' project and a 'legacy' project.

This way you can analyze both projects in Sonar with different quality standards. The standards should differ only in the importance of the rules; the rules themselves should be the same. That way you can see in the 'legacy' project what needs to be 'fixed' to meet the standards of the 'clean' project.

Whenever you manage to lift some legacy code to the higher standard, you can move it to the 'clean' project.

In everyday work you cannot manage to lift every class you touch to the 'clean' project's standards, simply for lack of time. But you should try to get there in small steps, improving the legacy code a bit every time you touch it.

MrSmith42