7

A major objective of software development is to deliver features implemented in good-quality code.

Knowledgeable developers are expected to write software that performs well to the extent of their understanding (that is, within the software module they are working on). For example, they should be able to choose appropriate algorithms and data structures when implementing various parts of the system. In other words, the first implementation they write to satisfy a unit test should already have the performance characteristics the final product is supposed to have.
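(A toy illustration of my own, in Python, of what I mean by "appropriate data structures": using a set instead of a list for membership tests.)

```python
# Membership tests: O(n) on a list, O(1) on average on a set.
user_ids_list = list(range(1_000_000))
user_ids_set = set(user_ids_list)

print(999_999 in user_ids_list)  # scans the list element by element
print(999_999 in user_ids_set)   # hash lookup, roughly constant time
```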

However, when the system is large and complex, consisting of many components written by different people/teams and involving multiple vendors, is it more productive for a company to separate the role of performance engineer from that of software developer, so that software developers can focus on features and correctness that meet the specification, while the performance engineer focuses on measuring and improving performance without affecting features or correctness?

Put simply, are there two different hats, requiring two different mind-sets?

Motivation:

From the answers I got to another question of mine, An alternative to requiring red in TDD: reverting code change?, it dawned on me that during the main software development phase, meeting functional requirements is a must-have while software performance is a nice-to-have.

  • Tests on functional requirements have deterministic outcomes, usually Pass/Fail, or Accept/Reject.
  • Performance characteristics don't have a fixed Accept/Reject target, but they do have metrics that can drive the optimization effort.
  • Future changes in functional requirements may make it necessary to sacrifice/adjust certain performance objectives.
  • Personally, I do agree that there should be a quick feedback cycle between feature implementation and software performance monitoring.
  • In this question I'm asking whether these two are separate specializations, even though some people might be specialized in both.

Possible outcomes

When feature implementation and optimization are done by different developers:

  1. They get into conflicts and nothing gets done. (contributed by @quant_dev)
  2. The resulting software has better performance.
  3. The resulting software has poor performance.

Related:

  • https://softwareengineering.stackexchange.com/q/59292/620
  • (Will ask in a separate question soon.) Is it possible for performance issues to arise in large software systems that cannot be forecast even by very knowledgeable developers? Would a dedicated performance engineer be able to solve them?
rwong
  • 17,140

4 Answers

7

No, no, no, you can't separate something that is intrinsically a part of the job.

Let programmers code any crap that makes all the tests tick green, and then tell the other team to rewrite everything to make it also run fast? Doesn't work that way.

It is the job of the same programmer who writes the code to also think about performance. If you free them of that obligation, what would motivate them to learn and do better each time?

Having said that, there is a career path if you choose to specialize in performance tuning. But it's not a typical day job; rather, you offer your consulting services to various clients to help with their performance issues. Obviously, to be able to do that, you must already have moved beyond writing code that merely works and gained insight into how to turn working code into fast code.

5

Yes, it is absolutely useful to have dedicated performance engineers, if your system is above medium complexity.

The reason is that locating the bottleneck of a system in actual use is very different from general non-wasteful programming. It absolutely has to be done on a realistic system, not a development sandbox. It requires skills (analytic rather than synthetic) and tools (tracers, online profilers) that are very different from those of application programming. Even the greatest language experts usually cannot predict what the bottleneck will be just from inspecting the source code, and indeed this is not a worthwhile thing to do. Never assume; always measure.
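As a minimal sketch of "never assume; always measure" (the workload function here is only a placeholder), a first measurement pass with Python's built-in cProfile could look like this:

```python
import cProfile
import pstats


def workload():
    # Placeholder for the real code path under investigation.
    total = 0
    for i in range(10_000):
        total += sum(range(i % 500))
    return total


profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Show the ten most expensive call sites by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

The numbers come from running the system, not from reading it; on a realistic deployment the same idea applies with whatever tracer or online profiler the platform provides.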

Also, once you have measured, the part of a system that has to be changed to improve performance is usually very small compared to the entire system. And solving the problem may well require a different programming language than the original one (C instead of Python, Assembler instead of C, ...). It simply isn't worthwhile to expend this kind of effort on every part of a system to begin with, and it makes no economic sense to require all your developers to know assembler just because it might turn out to be required in a few places.

Kilian Foth
  • 110,899
4

I don't believe you can split these roles completely, because performance tuning often leads to different design solutions (flatter code structure, more repetition, less indirection, less abstraction) than focusing totally on development speed, maintainability and code aesthetics. Fast code is often ugly code. If you have 2 separate teams, one working on performance improvements and another working on functionality, they will inevitably clash and argue about design changes, which can be irreversible. Simply put, if a piece of software must be fast AND maintainable, it needs to be written by a single team which knows BOTH performance tuning AND the principles of "nice design". And yes, such a team is harder to build. Sorry ;-)

quant_dev
  • 5,227
1

Here's an example of one of my experiences in performance tuning.

Here's another.

If I were back in academia, and I could tell today's programmers how to deal with this issue, I would make sure they were competent in the random-pausing technique.

As it is, they come out of school thinking a) performance is all about big-O, and b) profilers are how you find performance problems. This is because that's all their teachers know, and their teachers have never done much performance tuning on real (large) software, as in the links above. They still think measuring = finding, and they still think they're looking for "methods where time is spent", as if the meaning of that is even understood. Here's a list of the myths.
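The real technique is pausing the running program under a debugger and reading the entire call stack. As a rough in-process approximation in Python (slow_work is only a stand-in), you can snapshot the main thread's stack at a few random moments:

```python
import random
import sys
import threading
import time
import traceback


def sample_main_stack(main_thread_id, samples=5):
    """Print a few randomly timed snapshots of the main thread's call stack."""
    for n in range(samples):
        time.sleep(random.uniform(0.2, 0.6))  # pause at an unpredictable moment
        frame = sys._current_frames().get(main_thread_id)
        if frame is None:
            return
        print(f"--- sample {n + 1} ---")
        print("".join(traceback.format_stack(frame)))


def slow_work():
    # Stand-in for the real program; whatever dominates the samples is the suspect.
    total = 0.0
    for _ in range(100):
        total += sum(i ** 0.5 for i in range(200_000))
    return total


sampler = threading.Thread(
    target=sample_main_stack,
    args=(threading.main_thread().ident,),
    daemon=True,
)
sampler.start()
slow_work()
```

Whatever call-stack lines appear on most of the samples are where the time is going, and you see the full chain of callers, not a flat table of "methods where time is spent".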

So, to answer your question, a) quality testing should make sure stress-tests are included, to detect performance problems, and b) the programmers should be the ones to find and fix the performance problems, just like any other bug.
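For (a), a stress test with an explicit time budget can be as small as this sketch (handle_request and the one-second budget are made-up placeholders):

```python
import time


def handle_request(payload):
    # Stand-in for the real code path under test.
    return sorted(payload)


def test_large_payload_within_budget():
    payload = list(range(1_000_000, 0, -1))
    start = time.perf_counter()
    handle_request(payload)
    elapsed = time.perf_counter() - start
    # Budget is illustrative only; real budgets come from the requirements.
    assert elapsed < 1.0, f"took {elapsed:.2f}s, budget is 1.0s"
```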

When programmers get enough experience, they learn to avoid design approaches that lead to performance problems, such as over-complicated data structure design, unnormalized data, and habits they've been taught as part of OOP: over-reliance on notification, too much "new"-ing, and information hiding that leads to program-counter-swallowing black holes.

With experience, they learn how to use their OOP skills judiciously and sparingly, so as not to cause many performance problems. They also learn how to find and remove the remaining ones they do cause.

Performance problems are just like bugs. Experienced programmers make fewer of them, but if they're not making any, they're not working. In any case it's their job to find and fix them, but in my experience, sadly, few of them actually know how, and we all experience the result.

Mike Dunlavey
  • 12,905