15

Since I discovered the power of the final keyword in Java a few years ago, it has helped me make my code much more readable, since people can easily see which variables are read-only. It may also give the JIT a small boost, and while that's a very limited one, it can't hurt, especially when targeting embedded devices such as Android platforms.

But above all, it helps make a design more robust to change and guides other committers when they need to modify the code base.

However, while browsing the JDK source code, I have never stumbled upon this keyword, whether on classes, methods, or variables. The same applies to various systems I have had to review.

Is there a reason for this? Is it a common design paradigm to leave everything mutable from the inside?

Aurelien Ribon
  • 261
  • 2
  • 6

5 Answers

15

The problem with using final to convey that something is read-only is that it only really works for primitive types like int and char. All objects in Java are actually referred to through a (kind of) pointer. As a result, when you use the final keyword on an object, you are only saying that the reference is read-only; the object itself is still mutable.
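A minimal sketch of that distinction, using java.util.ArrayList purely for illustration:

import java.util.ArrayList;
import java.util.List;

public class FinalReferenceDemo {
    public static void main(String[] args) {
        final List<String> names = new ArrayList<>();

        names.add("Alice");           // fine: the object behind the reference is still mutable
        // names = new ArrayList<>(); // won't compile: the final reference cannot be reassigned
    }
}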

It might have been used more if it did actually make the object read-only. In C++ that's exactly what const does and as a result it is a much more useful and heavily used keyword.

One place I use the final keyword heavily is with parameters to avoid any confusion created by things like this:

public void someMethod(FancyObject myObject) {
    // Reassigning the parameter only rebinds this method's local copy
    // of the reference; the caller's reference is untouched.
    myObject = new FancyObject();
    myObject.setProperty(7);
}
...
public static void main(final String[] args) {
    ...
    FancyObject myObject = new FancyObject();
    someOtherObject.someMethod(myObject);
    myObject.getProperty(); // Not 7! main still holds the original object.
}

In this example it seems obvious why this doesn't work, but if someMethod(FancyObject) is big and complicated, confusion can ensue. Why not avoid it?
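A minimal sketch, reusing the hypothetical FancyObject from above, of how a final parameter turns that reassignment into a compile-time error:

public void someMethod(final FancyObject myObject) {
    // myObject = new FancyObject(); // won't compile: cannot assign to a final parameter
    myObject.setProperty(7);         // mutating the object itself is still allowed
}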

It's also part of the Sun (or Oracle now I guess) coding standards.

Gyan
  • 2,835
9

I suspect that the reason you don't see it is that the JDK is designed to be extended (it is the base on which all of your programs are built). It wouldn't be uncommon at all for application code to make use of it. I wouldn't expect to see a lot of it in library code either (as libraries are often designed for extension as well).

Try taking a look at some good open source projects and I think you'll find more.

In contrast to what you're seeing in Java, in the .NET world I've heard the argument many times that classes should be sealed (similar to final applied to a class) by default, and that the developer should have to explicitly unseal them.

Robert Harvey
  • 200,592
Steven Evers
  • 28,180
3

I agree that the final keyword improves readability. It makes it much easier to see when a variable will never be reassigned. However, in Java it is extremely verbose, particularly (in my opinion) when used on parameters. This is not to say that people shouldn't use it because it is verbose, but rather that they don't use it because it is verbose.

Some other languages, such as Scala, make final declarations much easier (val). In these languages, final declarations can be more common than mutable variables.
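As a rough illustration of the verbosity complaint, here is the same (invented) method with and without final on every parameter and local:

// Without final: compact, but nothing says which values are meant to be read-only.
static double totalPrice(double unitPrice, int quantity, double taxRate) {
    double subtotal = unitPrice * quantity;
    return subtotal + subtotal * taxRate;
}

// With final everywhere: the intent is explicit, at the cost of a lot of extra noise.
static double totalPriceAllFinal(final double unitPrice, final int quantity, final double taxRate) {
    final double subtotal = unitPrice * quantity;
    return subtotal + subtotal * taxRate;
}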

Note that there are many different uses of the final keyword; a short sketch of each follows the list below. Your post mostly covers items 2 and 3.

  1. final Classes (JLS 8.1.1.2)
  2. final Fields (JLS 8.3.1.2)
  3. final Methods (JLS 8.4.3.3)
  4. final Variables (JLS 4.12.4)
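To put the four side by side, a minimal sketch (class and member names are invented for illustration) with one example of each:

// 1. final class: cannot be subclassed (JLS 8.1.1.2)
public final class Temperature {

    // 2. final field: must be assigned exactly once (JLS 8.3.1.2)
    private final double celsius;

    public Temperature(double celsius) {
        this.celsius = celsius;
    }

    // 3. final method: cannot be overridden (JLS 8.4.3.3); redundant in a final class, but legal
    public final double inFahrenheit() {
        // 4. final local variable: cannot be reassigned (JLS 4.12.4)
        final double ratio = 9.0 / 5.0;
        return celsius * ratio + 32.0;
    }
}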
schmmd
  • 139
  • 1
2

Well, to start with, the official Java Code Conventions neither favor nor prohibit any particular use of final. That is, JDK developers are free to choose whichever way they prefer.

I can't read their minds, but for me, the preference on whether to use final or not has been a matter of focus: a matter of whether I have enough time to concentrate thoroughly on the code or not.

  • Say, in one of my projects we could afford to spend, on average, a day per 100 lines of code. In that project, I had a distinct perception of final as garbage that just obscures things already expressed clearly enough in the code. It looks like the JDK developers fall into that category, too.
     
    On the other hand, it was the exact opposite in another project, where we spent an hour on average per 100 lines of code. There, I found myself firing final like a machine gun into my own and others' code, simply because it was the quickest way to detect the intent of the person who wrote the code before me and, similarly, the quickest way to communicate my own intent to whoever would work on my code later.

It may also give the JIT a small boost, and while that's a very limited one, it can't hurt

Reasoning like the above is slippery. Premature optimization can do harm; Donald Knuth goes as far as calling it "the root of all evil". Don't let it trap you. Write dumb code.

gnat
  • 20,543
  • 29
  • 115
  • 306
2

Recently I discovered the joy of the "Save Actions" feature in the Eclipse IDE. I can force it to reformat my code, insert missing @Override annotations, and do some nifty stuff like removing unnecessary parentheses in expressions or putting the final keyword everywhere, automatically, every time I hit Ctrl + S. I activated some of those triggers and, boy, does it help a lot!

It turned out that many of those triggers act like a quick sanity check for my code.

  • I intended to override a method, but the annotation didn't show up when I hit Ctrl + S? Perhaps I screwed up the parameter types somewhere!
  • Some parentheses were removed from the code on save? Maybe that logical expression is too difficult for a programmer to take in quickly; otherwise, why would I have added those parentheses in the first place? (See the sketch after this list.)
  • That parameter or local variable isn't final. Does it have to change its value?
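For example, a made-up condition before and after that clean-up; if the "before" parentheses felt necessary, the expression probably deserves a second look:

// As originally written, with parentheses added "just to be safe":
static boolean mayDrive(int age, boolean hasLicense, boolean hasPermit, boolean isSupervised) {
    return ((age >= 18) && ((hasLicense) || (hasPermit && isSupervised)));
}

// After the save action, only the parentheses that actually affect meaning remain:
static boolean mayDriveCleaned(int age, boolean hasLicense, boolean hasPermit, boolean isSupervised) {
    return age >= 18 && (hasLicense || hasPermit && isSupervised);
}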

It turned out that the fewer variables change, the less trouble I have at debug time. How many times have you been following some variable's value only to find that it somehow changes from, say, 5 to 7? "How the hell can that be?!" you ask yourself, and then spend the next couple of hours stepping in and out of countless methods to find out that you made a mistake in your logic. And in order to fix it, you have to add one more flag, a couple of conditions, and carefully change some values here and there.

Oh, I hate debugging! Each time I run the debugger I feel like my time is running out, and I desperately need that time to make at least some of my childhood dreams come true! To hell with debugging! finals mean no more mysterious value changes. More finals => fewer flimsy parts in my code => fewer bugs => more time to do good stuff!

As for final classes and methods, I don't really care. I love polymorphism. Polymorphism means reuse, which means less code, which means fewer bugs. The JVM does a pretty good job with devirtualization and method inlining anyway, so I don't see the value in killing opportunities for code reuse for dubious performance benefits.


Seeing all those finals in the code is somewhat distracting at first and takes time to get used to. Some of my teammates still get very surprised to see so many final keywords. I wish there were a setting in the IDE for special syntax coloring for it; I would happily switch it to some shade of gray (like annotations) so it wouldn't be too distracting when reading code. Eclipse currently has a separate color setting for return and one for all other keywords, but not one for final.