28

One of the most common claims I see when discussing the pros and cons of microservice vs monolithic architecture is that monolithic applications have, or inevitably trend toward, 'tight coupling.'

To be honest, I'm not seeing why this is true if your developers know how to leverage any standard OO language. If I need a certain part of my application to handle, say, payments, and multiple other parts to interact with the payments system, can I not leverage the abstraction features of the language I'm using (classes, access control, interfaces) to create modularization between different application functions?

For example, if I'm using Java, I could create a 'PaymentsDAO' (data access object), or maybe 'PaymentsClient', which can expose functions that the rest of the code can use to interact with the payments database, etc. If one sub-team in my team wants to work on payments, they can continue to write code in the PaymentsDAO, publish that code to the central repo, etc, while I simply use the DAO's function signatures, which would not change, and continue to write code wherever I need it, right? Where's the coupling? If payments code changes, I don't need to change anything in my code, or understand the changes, to account for that.
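Concretely, I'm imagining something like this in Java (a minimal sketch; the interface, the `InMemoryPaymentsClient` class, and the method names are all made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// A narrow, stable contract: the rest of the codebase depends only on
// these signatures, not on how payments are stored or processed.
interface PaymentsClient {
    String recordPayment(String orderId, long amountCents);
    long totalForOrder(String orderId);
}

// The payments sub-team owns this implementation and can change it freely
// without touching callers, as long as the interface stays the same.
class InMemoryPaymentsClient implements PaymentsClient {
    private final Map<String, Long> totals = new HashMap<>();

    @Override
    public String recordPayment(String orderId, long amountCents) {
        totals.merge(orderId, amountCents, Long::sum);
        return UUID.randomUUID().toString(); // opaque payment reference
    }

    @Override
    public long totalForOrder(String orderId) {
        return totals.getOrDefault(orderId, 0L);
    }
}
```

My team would code against `PaymentsClient`; the payments sub-team could swap the implementation at will.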

Is the only drawback of this 'coupling' that I need to git pull more often, since the payments code would need to be in the same deployment as my change, as opposed to a separate deployment, and then consumed over the network through an API call?

To be honest, I'm not seeing a strong case for the 'tight coupling', and I want someone to change my view here because my current team at work is using a microservice architecture :D I'm more certain about the other pros of MSA, like scalability, flexibility of technology stacks across microservices, fault tolerance of a dist system, and less deployment complexity, but I'm still uncertain on coupling.

Bob Dole
  • 421

5 Answers

33

Why X when you can do Y?

The problem with questions like these is that no one has ever claimed that you can't take the more difficult route if you really want to. People aren't pointing out that the difficult route is impossible, but rather that it is needlessly difficult.

So, yes, if you avoid a certain approach and instead spend all the needed effort making sure things don't go wrong, that inherently means that things aren't going wrong and you didn't need to use that approach. But can you really be sure that:

  • You're not spending more effort than if you had followed the suggested approach
  • You and every other developer will not make any mistakes in avoiding those problems
  • You'll be able to recover when you come across such a problem, and not just realize you've painted yourself into a corner?

The odds are not in your favor.


Different types of coupling

There are degrees of loose coupling. It's not a binary state. This is not a complete list, but for the purposes of this discussion, some degrees are:

  • Level 0 - No abstractions, hardcoded references
  • Level 1 - Abstractions (interfaces), but everything in one project
  • Level 2 - One solution, abstractions in separate projects
  • Level 3 - Abstractions are not part of the project but are loaded separately (think NuGet)
  • Level 4 - Independently hosted microservices

You're at level 1/2 now. And it avoids some of the more blatant problems with tight coupling, which I tend to refer to as "code coupling". There are different types of coupling, and not all of them are related to the code of a specific application. Each subsequent level solves additional problems with certain kinds of coupling.
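To make levels 0 and 1 concrete, here's a minimal Java sketch (all class names are invented for illustration):

```java
// Level 1 contract: callers depend on this interface, not a concrete class.
interface PaymentProcessor {
    boolean charge(long amountCents);
}

// A concrete implementation (a stand-in for a real processor).
class FakeCardProcessor implements PaymentProcessor {
    public boolean charge(long amountCents) {
        return amountCents > 0; // placeholder for a real charge attempt
    }
}

// Level 0: a hardcoded reference to the concrete class. Swapping
// processors means editing this class (and every class like it).
class CheckoutLevel0 {
    private final FakeCardProcessor processor = new FakeCardProcessor();
    boolean checkout(long amountCents) { return processor.charge(amountCents); }
}

// Level 1: the concrete class is injected behind the interface, so it
// can change without the caller ever knowing.
class CheckoutLevel1 {
    private final PaymentProcessor processor;
    CheckoutLevel1(PaymentProcessor processor) { this.processor = processor; }
    boolean checkout(long amountCents) { return processor.charge(amountCents); }
}
```

Levels 2 and 3 keep this exact shape; what changes is where the interface and the implementation physically live.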

Level 3 moves the abstraction out of the solution, so that it can live its own life. This is most useful when more than one solution consumes the abstraction, or when the developers of the abstraction are a completely different team from the developers of the main product. Maybe it's a different team within the company, maybe a different company altogether.

You've probably used several third party libraries. How much did you know about their teams, who developed them, or the internals of the library they created? Not much. And that's the point, because the abstraction is so loosely coupled to the end product that you don't even need to know anything (other than its interface) about it anymore.

But level 3 still suffers a specific problem: if the abstraction gets updated, then all of its consumers will have to rebuild (or at the very least re-release) to account for these new changes. This requires an active line of communication, at least one-way (e.g. a third party developer's news feed announcing a patch to fix issues or expand the feature set).

This is what level 4 (microservices) solves. Because it's now an independently deployed application, any published change goes live for every consumer at the same time. The consumers don't need to change anything or re-release their own application.
As long as the interface or public URL doesn't change, the two services don't even need to communicate anything about any of their changes.
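A minimal sketch of that level-4 contract using only the JDK's built-in HTTP server and client (the `/payments/status` endpoint is invented for illustration): the consumer knows only the URL and the response shape, nothing about how the service behind it is built or deployed.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class PaymentsServiceDemo {
    // Stand-in for the independently deployed payments service. Its
    // internals can change completely as long as the URL and response
    // shape stay stable.
    static int startPaymentsService() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/payments/status", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server.getAddress().getPort();
    }

    // The consumer's entire knowledge of the service is this one URL.
    static String checkStatus(int port) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/payments/status")).build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```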

A very clear example of why this is desirable is e.g. an identity provider.

  • Levels 0/1/2 all amount to building authorization into this specific application.
  • Level 3 makes it possible to standardize identity management across all of the library's consumers, but in the end each consumer is still choosing to update to new versions of the library individually.
  • By having this as a microservice (level 4), any changes made to the identity provider (e.g. changed rights, user access, different security protocols) automatically deploy everywhere at the same time, and the developers of the consuming applications don't even need to be kept in the loop at any stage.
Flater
  • 58,824
10

Distributed Systems

The problem here is that you're still thinking of this in terms of a single program running on a single machine.

Let's say you have three services:

  1. A Payment System.
  2. A Background Task that takes 1 hour.
  3. A Video Upload Service.

Let's say you build this as a single service and deploy it. It goes live and starts handling payments, uploading, and background tasks. Everything works fine, and your code is well structured.

That is, until the problems start...

Deployment

You find out there is a major problem with the payment system, and you need to change some configuration. This is a trivial change, and only requires a quick restart.

Problem: A quick restart breaks the background task; you now have to rerun it, delaying results (or even leaving you with a corrupt state!)

Scaling

Your service is going viral; you need to ensure you have the resources to handle the load! You can scale based on CPU or Requests/s.

Problem: The payment system uses little CPU, so needs to be scaled based on Requests/s. Video uses a lot of CPU (transcode), but few requests... so needs to scale based on CPU. Your infrastructure only allows you to pick one.

Diagnostics

A random rare bug is occurring. This breaks everything, including payment - costing the company big time!

Problem: You don't know which service the bug occurs in. Furthermore... it might be an unexpected result of multiple subsystems interfering.

Dependencies

You need to upgrade a library to fix a production bug in the Background worker.

Problem: The upload service uses the same library... but the upgraded version breaks it. Fixing one service breaks another.


As you can see, while programming languages do have tools to help decouple code, that by itself doesn't decouple deployment and execution. Microservices help decouple deployment and execution, as well as helping decouple code.

NPSF3000
  • 285
8

If I need a certain part of my application to handle, say, payments, and multiple other parts to interact with the payments system, can I not leverage the abstraction features of the language I'm using (classes, access control, interfaces) to create modularization between different application functions?

Not in practice.

The problem is that all of those abstraction features live in the code that developers control. They can change them, ignore them, destroy them. And companies will push developers to cut corners to meet a deadline, or hit some quota. Even if you believe that all engineers will refuse to cut corners to get a promotion, they’ll still screw up. Once you get a dozen or so programmers making a few dozen changes a week, the chance that someone adds coupling somewhere it shouldn’t go increases rapidly over time.

Sure, people can do the same thing with microservices, but it is harder to do and easier to notice.

Telastyn
  • 110,259
3

What is ‘tight coupling’?

Tight coupling means that components know a lot about what the others do. It does not matter whether they communicate by calling functions, passing interfaces or making HTTP requests—if one side relies on a deep understanding of the other's data, they are tightly coupled, and will have to be adjusted a lot as the project continues.

If the application is developed as one product by one team, it will tend toward tight coupling, even if you make it a bunch of micro-services. Everybody understands what the other parts do, and just slapping the logic anywhere is simply easier than creating proper abstractions.

You have to put in conscious effort to insert good abstractions that reduce the coupling—and again it does not matter whether the abstractions are interfaces, network calls, or for that matter just views in a database. What matters is that they expose only a specific aspect of the problem and hide all the other details.
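For illustration, a minimal Java sketch of the difference (all types invented): a report that walks an order's internal structure is coupled to every detail of it, while one that receives only the aspect it needs is not.

```java
import java.util.List;

// The "raw" domain type: internal structure that may change at any time.
class Order {
    List<long[]> lines; // each line: {quantity, unitPriceCents}
    Order(List<long[]> lines) { this.lines = lines; }
}

// Tightly coupled: the report digs through the order's internals, so any
// change to how orders are modeled ripples into reporting.
class TightReport {
    static long total(Order order) {
        long sum = 0;
        for (long[] line : order.lines) sum += line[0] * line[1];
        return sum;
    }
}

// Loosely coupled: the abstraction exposes only the one aspect the
// report needs and hides everything else about how orders are stored.
interface Billable {
    long totalCents();
}

class LooseReport {
    static long total(Billable billable) {
        return billable.totalCents();
    }
}
```

The same contrast holds whether `Billable` is a Java interface, a network endpoint, or a database view.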

Reducing coupling

You can only reduce coupling to the point that the problem you are solving can itself be decomposed. Payments probably need to know about the services being paid for all the way until you generate the payment orders to send to the bank. There you have an abstraction: the payment orders are generic, but the code generating them is coupled to other parts of your accounting. And no DAO can change anything about that. As long as you need the data, it is coupled.

Taking advantage of loose coupling

An advantage of micro-service architecture is that it can take advantage of loose coupling if and when you can achieve it.

So you need authentication? If you have a monolithic application in Java, you need a middleware fitting the framework you use—and if it does not fit well, you'll have to adapt it. But if you use micro-services, you just install this other service and it absolutely does not matter that it's implemented in Go.

Then you need to generate the billing information. Hey, that component does the job and we know how to use it because we've used it on that other project already and it totally does not matter that it is a blob of ageing PHP.

And that cloud service that provides nice graphs for the managers? Much easier to integrate in a micro-service architecture again.

So that's the benefit of micro-service architecture with loose coupling—for the functionality that you can abstract, you can more easily integrate off-the-shelf components, and you have a bigger choice of components, because you are not tied to the application server.

Micro-service architecture still has advantages in things like scaling even for components that are tightly coupled, but that is a different question.

Deployment

Is the only drawback of this 'coupling' that I need to git pull more often, since the payments code would need to be in the same deployment as my change, as opposed to a separate deployment, and then consumed over the network through an API call?

For most software, you want to be deploying the whole solution as a unit no matter how many components it is split over and how tightly or loosely coupled they are.

Deploying parts of the solution is a configuration management nightmare. You will end up doing it if you have millions of users connected night and day and disruption to the service would give you a bad reputation. Then you do rolling updates and switch components separately. And of course you carefully evolve all your interfaces to be backward compatible so everything still works during the update.

But for a typical corporate information system, you test the new set of components (of which some may not have changed) and deploy the new set of components as a unit. Less risk of trouble. So the coupling is not as much of a concern.

Jan Hudec
  • 18,410
0

"Monolithic" is a conflated term; let's break it down:

  • Monolithic binary (distribution)
  • Monolithic project (all code in one place, but might be distributed in separate binaries)
  • Inversion of Control (within a large project)
  • Direct function calls between modules (totally valid if all modules are in your control and they will be deployed as a single binary)
  • Big ball of mud - where there is no "cohesion" - this is the bad thing.

If I need a certain part of my application to handle, say, payments, and multiple other parts to interact with the payments system, can I not leverage the abstraction features of the language I'm using (classes, access control, interfaces) to create modularization between different application functions?

You can, and you might use Inversion of Control.

For example, if I'm using Java, I could create a 'PaymentsDAO' (data access object), or maybe 'PaymentsClient', which can expose functions that the rest of the code can use to interact with the payments database, etc. If one sub-team in my team wants to work on payments, they can continue to write code in the PaymentsDAO, publish that code to the central repo, etc, while I simply use the DAO's function signatures, which would not change, and continue to write code wherever I need it, right?

Where's the coupling?

Having Inversion of Control in a single binary works. Distributing your coupling (microservices) over network calls, and breaking up deployment into smaller binaries, risks making that coupling harder to manage. It's funny how someone can say "looser coupling" and everyone just nods and repeats "looser coupling".

I'm more certain about the other pros of MSA, like scalability

You might be referring to Function-as-a-Service (like Lambda). They are actually optimised for low loads, and will drain your bank account as you scale (unless you reserve capacity). Also, you shouldn't over-engineer for an assumed scalability problem. A single web service with an AWS RDS database can scale quite well—you can keep adding read-only replicas too, which is nice.

flexibility of technology stacks across microservices

Make sure you use the word "polyglot"; you'll feel smarter. While code bases will stratify over time, you should try to minimise the number of languages.

fault tolerance of a dist system

But at the expense of greater complexity, which can lead to more bugs and worse quality for your user.

less deployment complexity

Instead of deploying a single binary, you need to have deployment pipelines for 30. I call that more complexity.

but I'm still uncertain on coupling.

That's a great start.

Kind Contributor
  • 890