8

I work at a small software/web development company.

I have gotten into the habit of optimizing prematurely. I know it is considered evil and promotes bad code, but after working at this firm for a long while, I have come to see it as a necessary evil.

It has never caused me an issue so far, but it might once I take on partners or a successor.

Should I change my current practice now to prepare for that case, or should I not worry about it?

MattyD
  • 2,295

8 Answers

14

IMO, 'optimising prematurely' is only bad if it reduces readability. Programming tends to be a write-once, read-many activity, so if your optimisations make the code significantly harder to understand, I would be concerned.

I would guess that the effort required to refactor your code would not be worth the benefit (in business terms), but it's amazing what the odd comment does to enhance code clarity. This is especially true for 'unusual' code (i.e. the optimisations you are concerned about).
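
The point about a comment rescuing 'unusual' code can be illustrated with a hypothetical example (not from the answer itself): a bit-twiddling trick that is opaque without its comment, next to the readable version it replaces.

```python
# Hypothetical illustration: an 'optimised' bit trick versus readable code.

def is_power_of_two_clear(n: int) -> bool:
    """Readable version: repeatedly divide by two."""
    if n <= 0:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

def is_power_of_two_fast(n: int) -> bool:
    # Bit trick: a positive power of two has exactly one bit set,
    # so n & (n - 1) clears that bit and yields zero.
    # Non-obvious at a glance -- hence the comment.
    return n > 0 and (n & (n - 1)) == 0

# Both versions agree on a sanity range.
assert all(is_power_of_two_clear(n) == is_power_of_two_fast(n)
           for n in range(1, 1000))
```

One comment line is all it takes to keep the faster version maintainable.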

GavinH
  • 676
12

Although this question has been answered, IMO the biggest drawback of premature optimization is the time wasted: first in optimizing without the necessary performance information, and then in the (frequent) rewrites needed because the implementation went askew while the focus was on optimization. Again, IMO, it is not a good programming practice.

Gaurav
  • 3,739
3

Sounds like what you call optimising is what I call cutting corners.

If it truly is optimising, then there is only one answer: document it well.

If it is cutting corners, then it really is up to you to judge where the benefits outweigh the risks. Be prepared to justify the habit, or change it if it is not justifiable (in your own view... until you get involved with others, and then use peer review as your measuring stick).

asoundmove
  • 1,667
2

When it comes to micro-optimizations, such as squeezing every last nanosecond out of a for loop, it often isn't worth your time. When working on a web/database application, make sure your database interactions are more or less optimal, and more importantly, make sure they will scale. Unless you're working on very high-performance code, scientific data processing, or something of that sort, other procedural optimizations won't help you much.
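
A minimal sketch of the database point above, using an in-memory SQLite table with an assumed, illustrative schema: the N+1 query pattern works fine on tiny data but does not scale, while a single aggregate query does.

```python
import sqlite3

# Assumed toy schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
             " user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25)])

# N+1 style: one query per user. Fine for 2 users, painful for 100,000.
totals_slow = {}
for (uid,) in conn.execute("SELECT id FROM users"):
    (s,) = conn.execute("SELECT SUM(total) FROM orders WHERE user_id = ?",
                        (uid,)).fetchone()
    totals_slow[uid] = s

# Scalable: a single aggregate query with GROUP BY.
totals_fast = dict(conn.execute(
    "SELECT user_id, SUM(total) FROM orders GROUP BY user_id"))

assert totals_slow == totals_fast
```

Fixing a query pattern like this typically buys far more than any micro-optimization inside the loop body.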

Michael
  • 1,327
1

Optimizing without profiling is always wasted time & energy. Shooting in the dark. Voodoo stuff. Bad.

Before optimizing do profile and/or analyze to make sure that:

  • You do actually have a performance problem

and

  • You know where the bottleneck is

Don't try guessing. Measure twice, cut once.

And after optimizing, profile again to make sure you solved the problem.
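
A minimal sketch of the measure-first workflow above, using Python's standard `cProfile` and `pstats` modules; the workload functions are invented for illustration.

```python
import cProfile
import io
import pstats

# Invented workload: one genuinely hot function, one trivial one.
def slow_part():
    return sum(i * i for i in range(200_000))

def fast_part():
    return len("hello") * 3

def workload():
    slow_part()
    fast_part()

# Profile first, so the data (not intuition) names the bottleneck.
profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()

# The report identifies slow_part as the hot spot; optimize that,
# then profile again to confirm the problem is actually solved.
assert "slow_part" in report
```

The same two-step loop (profile, optimize, re-profile) applies whatever profiler your stack provides.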

Maglob
  • 3,849
0

In small shops, many developers are also the designers and analysts. When you have a bigger picture (did I just say that?) of the problem, you are more likely to plan ahead. If the application performs poorly, it's a direct reflection on you, since you're the only programmer.

Tightening your estimates can be a little trick to make you speed up and cut out unnecessary development. Any kick in the pants will do.

JeffO
  • 36,956
0

On your current state

It has always worked for you? "If it is not broken, don't fix it" is another golden rule, and one I should stick to more; it has caused me a lot of headaches when I did not. It may never cause you a problem, but who knows.

As has been said before, "do the simplest thing that could possibly work". I found Rob Pike's rules of programming very enlightening.

What to consider

I assume you mean mostly speed optimization, because there is also optimization for readability, size, and so on.

Premature optimization is always bad, because:

  • It takes time to optimize. You might have spent that time making something better, or, say, calling your girlfriend ;).
  • Optimization usually means making things more concrete (and therefore less general).
  • It will most likely be unreadable to other programmers. Every programmer knows the simple algorithms, so when faced with something more complex they will waste time puzzling over what it means: thinking at a low level, parsing more information, and quite likely making errors in understanding your optimized code.
  • Donald Knuth, an authority on programming, said so, and if he claims something is a certain way, it almost certainly is.
  • You have to measure first where your system is slow, and THEN optimize, not the other way around. For example, the Quake engine used an O(N) string compare to look up variables instead of O(1) hashing; it used a scripted language, QuakeC, which cost about 10 percent of its performance; and the "famous" PVS is simply O(1) hashing with simple compression (visibility is precomputed from each section of the world to each other section, so which objects are visible is obtained through a hash-map lookup, and since that information is huge, it is stored compressed in memory). Michael Abrash's Graphics Programming Black Book is full of "measure first" examples, and of examples where he was wrong in estimating what the problem was (chapter 17).
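
The string-compare-versus-hashing contrast from the Quake example can be sketched with made-up variable names and Python's `timeit`: an O(N) linear scan of (name, value) pairs against an O(1) average-case dict lookup.

```python
import timeit

# Invented lookup table of 2,000 named variables, for illustration.
names = [f"cvar_{i}" for i in range(2_000)]
pairs = [(n, i) for i, n in enumerate(names)]  # linear-scan table, O(N)
table = dict(pairs)                            # hash table, O(1) average

def linear_lookup(name):
    # Quake-style scan: compare strings until a match is found.
    for key, value in pairs:
        if key == name:
            return value
    return None

target = names[-1]  # worst case for the linear scan
t_linear = timeit.timeit(lambda: linear_lookup(target), number=2_000)
t_hash = timeit.timeit(lambda: table[target], number=2_000)

# Both agree on the answer; only the cost differs.
assert linear_lookup(target) == table[target]
```

On any realistic run the hash lookup wins by a wide margin on the worst-case key, which is exactly the kind of fact a measurement, rather than a guess, establishes.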

But premature optimization is not always bad, because:

  • There are cases where, by optimizing, you come to see a better solution. But that is more a matter of chance.

So you have to weigh the pros and cons of doing premature optimizations.

I think that, because we are all human, you just have to burn yourself a few times with premature optimization before you stop doing it.

user712092
  • 1,412
  • 10
  • 19
0

The Knuth quote comes from a context where you are expected to write the best code you can in the first place. The "optimization" part is where you give up on good code to get an expected performance boost, by writing less clear code or doing smart tricks.

This is usually not a good idea, as you rarely know ahead of time where it will be needed (it is the profiler's job to tell you), so you end up with sub-optimal code and no benefit.

Note that "as good code as you can" may include a normal amount of trickery that, in your experience, is necessary anyway; but then it is your job to document these things well for future you.

In my experience, compilers do a very good job of generating good machine code from good, simple code, but may be confused by smart code. Smart code has its place for working around issues (runtime- or compiler-related), but do not overdo it.
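
An illustrative contrast (not from the answer itself) of "good, simple code" versus "smart code": a bit-trick average borrowed from C lore, which buys nothing in Python but costs readability.

```python
# Illustrative only: the 'smart' version saves nothing measurable here.

def average_clear(a: int, b: int) -> int:
    """Straightforward; any decent compiler or JIT handles this well."""
    return (a + b) // 2

def average_smart(a: int, b: int) -> int:
    # Overflow-avoiding trick from C lore: (a & b) + ((a ^ b) >> 1).
    # Pointless in Python (integers are arbitrary-precision), and the
    # next reader must stop and verify the identity before trusting it.
    return (a & b) + ((a ^ b) >> 1)

# Both versions agree on a sanity range of non-negative inputs.
assert all(average_clear(a, b) == average_smart(a, b)
           for a in range(50) for b in range(50))
```

The trick is legitimate in a context where it works around a real issue (e.g. fixed-width integer overflow in C); copied into code that has no such issue, it is exactly the kind of smartness the answer warns against.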