7

I got into a debate on this question that boiled down to whether it is a good idea for a specialization of a class to add business rules. Unfortunately that point got trampled in the comments, so I'm asking it again as a separate question.

I believe two things:

  • An object is responsible for its internal consistency
  • A specialization/child class has more specific rules than the super class, which can be seen as the general case.

The logical result of this is that a specialization might accept only some input values for a method, or might change other values in order to stay consistent. But isn't that OK, since guarding its internal consistency is what an object should do?

A point many people made is that some code could break if it made assumptions, for example that setting the width would not change the height of a square. But wouldn't that be bad code, since it makes assumptions about how the object does something instead of just telling it what to do and not worrying about the rest?

If we didn't write code like that, almost all overriding would be problematic. How often does an override add an extra failure condition, or extra internal logic whose effects are visible through other parts of the interface? Maybe the point an old professor of mine once made is correct: "you should only ever use inheritance to overload the constructor". At the time that seemed rather strict, but now it seems like the only way to guarantee that these kinds of problems never happen. To use the old square/rectangle analogy again:

public class Rectangle
{
    private int width, height;

    public Rectangle(int width, int height) { this.width = width; this.height = height; }

    // Width and height are independently settable on the general case.
    public void SetWidth(int width) { this.width = width; }
    public void SetHeight(int height) { this.height = height; }
}

public class Square : Rectangle
{
    public Square(int diameter) : base(diameter, diameter) { }

    // Sets both dimensions via the base class so the square stays consistent.
    public void SetDiameter(int diameter) { SetWidth(diameter); SetHeight(diameter); }
}

Note: I hope we can keep this question a little less ad hominem than the question that inspired it. I've been on Stack Exchange for more than three years, but I was quite intimidated by the kind of responses here.

Roy T.
  • 654

4 Answers

7

The trap a lot of people fall into is looking at inheritance as a means to codify any relationship or similarity between two classes. That's not the case. Inheritance is useful for certain limited kinds of relationships and is actually harmful when used outside those contexts. Lack of substitutability is one reason why.

The crucial point a lot of people miss about the square-rectangle example is that it is perfectly substitutable if you reverse the relationship. In object-oriented design, a rectangle is a specialized form of a square. The reason that's hard to see is that people want to organize classes by the similarities between the classes themselves, perhaps following real-world taxonomies, when they should really be concerned with organizing classes by what methods the calling code will need to use on a mixed collection of those classes. That's where the idea of substitutability comes in.

Think of it this way. You have a bunch of code that sets the width of squares and you want to throw some rectangles into the mix. You can set the width on either a square or rectangle all day without violating substitutability, but independently setting the height only applies to the rectangle, so it should not be a part of the base class. You're adding to the specialized class, not changing the common behavior.
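
A minimal sketch of that reversed relationship, in the question's C# (the member names here are my own illustration, not from the answer):

public class Square
{
    protected int width;

    public Square(int width) { this.width = width; }

    // Setting the width is the behaviour every caller may rely on,
    // whether it holds a Square or a Rectangle.
    public void SetWidth(int width) { this.width = width; }
    public int GetWidth() { return width; }
}

public class Rectangle : Square
{
    private int height;

    public Rectangle(int width, int height) : base(width) { this.height = height; }

    // The specialization adds an independently settable height;
    // it does not change what SetWidth means on the base class.
    public void SetHeight(int height) { this.height = height; }
    public int GetHeight() { return height; }
}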

In other words, don't make it a false choice and say to yourself, "I have no choice but to violate substitutability." If you can't do something without violating substitutability, then you either need to change your inheritance relationship, or not use inheritance at all.

Karl Bielefeldt
  • 148,830
3

A point many people made is that some code could break if it made assumptions, for example that setting the width would not change the height of a square. But wouldn't that be bad code, since it makes assumptions about how the object does something instead of just telling it what to do and not worrying about the rest?

I'm allowed to assume the object follows its specifications. If the specification for Rectangles says that the width and height are independently modifiable, then any implementation must conform. (If you don't require conformance, it's impossible to reason about your program.) Now, you could argue that the specification for Rectangles never said that setWidth can't change the height, but if you attempt to list all the things something must not do you'll find that the list is infinite:

  • setWidth mustn't reformat my hard drive
  • setWidth mustn't delete files in my home folder
  • setWidth mustn't make changes to the Windows Registry
  • setWidth mustn't change another object's state
  • setWidth mustn't go into an infinite loop
  • setWidth mustn't post to Twitter on my behalf
  • ...

The only sensible way to specify something is to list the things it must and may do and assume anything not listed is forbidden. So if the spec for setWidth says it changes the rectangle's width, I assume it doesn't change the height.

How often does an override add an extra failure condition...

Doing this will definitely bring you pain and misery. Any program written according to the specification assumes a certain operation can only fail because of A, B, or C. If you introduce a new failure condition D, no caller can possibly handle it.
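
A hedged sketch of that situation (the guard values and exception types are my own invention): callers of the base class handle the failures its contract declares, and a subclass that invents a new one breaks all of them.

public class Rectangle
{
    private int width;

    public virtual void SetWidth(int width)
    {
        // Failure A: declared in the base contract, so callers can handle it.
        if (width < 0) throw new ArgumentOutOfRangeException(nameof(width));
        this.width = width;
    }
}

public class Square : Rectangle
{
    public override void SetWidth(int width)
    {
        // Failure D: invented by the subclass; no caller of Rectangle expects it.
        if (width > 100) throw new InvalidOperationException("squares may not exceed 100");
        base.SetWidth(width);
    }
}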

A word to the wise, though - if you need this kind of substitutability, inheritance is probably not what you want. You start with some type Foo and then you realize you want a ShinyFoo. Later you want a TransparentFoo. Eventually you'll want a ShinyTransparentFoo and then you'll be in trouble. You don't run into this sort of problem if you use an interface and rely on composition to reuse behavior.
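
A sketch of the interface-plus-composition alternative (all type names here are hypothetical): each trait wraps any IFoo rather than extending a fixed parent, so combinations compose instead of multiplying subclasses.

public interface IFoo
{
    void Render();
}

public class PlainFoo : IFoo
{
    public void Render() { /* base behaviour */ }
}

public class ShinyFoo : IFoo
{
    private readonly IFoo inner;
    public ShinyFoo(IFoo inner) { this.inner = inner; }
    public void Render() { inner.Render(); /* then add shine */ }
}

public class TransparentFoo : IFoo
{
    private readonly IFoo inner;
    public TransparentFoo(IFoo inner) { this.inner = inner; }
    public void Render() { inner.Render(); /* then add transparency */ }
}

// A shiny, transparent Foo without defining a ShinyTransparentFoo class:
// IFoo foo = new ShinyFoo(new TransparentFoo(new PlainFoo()));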

Doval
  • 15,487
0

Take the classic example of inheritance, Dog : Animal, where Dog is a class that inherits from Animal. While any Animal can eat, a Dog can also bark, jump, and swim, so you add these methods to Dog.

It makes sense from a relational point of view. Why should Animal be able to bark? Why shouldn't Dog be able to bark? Indeed, nobody is claiming otherwise. You see this type of relation often in code. However, whenever you need to use bark, you also need to know whether you have a Dog, so any usage of bark adds a direct dependency on Dog any way you slice it. Even if you take an Animal, check whether it is a Dog, and then act accordingly, you're no longer acting on the general case of handling Animal. Nothing stops you from doing this, but you've lost any advantage you had from inheritance.
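
A brief sketch of that downcast dependency (my own minimal types): the moment the caller needs Bark, it must name the concrete Dog type.

public abstract class Animal
{
    public void Eat() { /* all animals eat */ }
}

public class Dog : Animal
{
    public void Bark() { /* only dogs bark */ }
}

public static class Kennel
{
    public static void MakeNoise(Animal animal)
    {
        // The downcast reintroduces a direct dependency on Dog,
        // so this code no longer handles the general Animal case.
        if (animal is Dog dog)
        {
            dog.Bark();
        }
    }
}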

To truly take advantage of inheritance, you must let derived classes represent different implementations of the super class. It isn't enough that a class "is a type of" another class; it must also play the part of the super class, with minor exceptions such as its creation, which should be the only point in your program that knows the specific implementation being used.

So perhaps a better example wouldn't be Dog : Animal but rather Square : Drawable, where Drawable is an object that can be called to draw itself regardless of how.
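
A minimal sketch of that relationship (the rendering logic is just a stand-in): callers hold a Drawable and never need to know which concrete shape it is.

public abstract class Drawable
{
    // Every Drawable can draw itself; how it does so is its own business.
    public abstract void Draw();
}

public class Square : Drawable
{
    private readonly int side;

    public Square(int side) { this.side = side; }

    public override void Draw()
    {
        // Stand-in rendering logic.
        System.Console.WriteLine($"Square with side {side}");
    }
}

// Caller code works on the general case:
// Drawable shape = new Square(4);
// shape.Draw();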

Neil
  • 22,848
0

The logical result of this is that a specialization might accept only some input values for a method, or might change other values in order to stay consistent. But isn't that OK, since guarding its internal consistency is what an object should do?

The issue isn't really what the object does internally to implement the message it is asked to perform (e.g. setWidth). The issue is what the caller understands the end result of the message to be.

Both a setWidth method on a rectangle that only sets width, and a setWidth method on a square that sets both width and height, are perfectly valid so long as it is clear to anyone making the calls what these methods do.

The problem is when you say a square is a rectangle, and then look at the setWidth method of a rectangle. That method puts the object into a particular state. It is understood by all who call the method that it will update the width and only the width.

When you say a square is a rectangle you are saying to anyone working with the square that everything you knew about rectangles still holds when working with squares. And one of the things that the programmer knew about rectangles was that setWidth updated only the width. That is known behaviour.

So the programmer knows what this method does; she knows that a square is a rectangle; therefore she knows that setWidth will only update the width.

Except it doesn't only update the width; it now updates the height as well. The program has just lied to the programmer: it has broken its contract.
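
A hedged sketch of that broken contract (I've made SetWidth virtual and added a getter purely for illustration):

public class Rectangle
{
    protected int width, height;

    public virtual void SetWidth(int width) { this.width = width; }
    public int GetHeight() { return height; }
}

public class Square : Rectangle
{
    // Keeps the square internally consistent, but silently changes
    // the height, which no caller of Rectangle expects.
    public override void SetWidth(int width)
    {
        this.width = width;
        this.height = width;
    }
}

// A caller relying on rectangle behaviour:
// int before = shape.GetHeight();
// shape.SetWidth(10);
// For a Rectangle the height is still `before`; for a Square it is now 10.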

This example might seem trivial, but it becomes much more important when you factor in polymorphism.

You might not know what shape you have; you might not know whether you have a rectangle or a square. If you know that a square is a rectangle, then you know that whatever object you have will behave the way you expect a rectangle to behave when you apply rectangle behaviour to it.

But of course, if it doesn't, you have a serious problem: you can no longer trust your objects to behave the way you expect them to, and thus you cannot trust your code to do what you expect it to do.

Instead, what you need to do is break the connection: tell the programmer that a square is NOT a rectangle, that they need to know whether they have a square, and that they need to understand what behaviour a square specifically has, because it is not the same as a rectangle.

Cormac Mulhall
  • 5,176