
Our architecture currently follows an "API-first" approach in building our product. The product is divided across multiple teams, each handling a different microservice.

[Image: diagram of the current architecture — shared JSON Schema files, generated DTOs per microservice, RabbitMQ as the message backbone]

The above image succinctly shows our approach. JSON Schema files are shared between all microservices and are used to auto-generate DTO classes for each microservice; these schema files act as contracts between the services. RabbitMQ is the messaging backbone for all the microservices. Messages are published and consumed using the DTO classes: a DTO is serialized into JSON and sent across RabbitMQ, and on the receiving side the JSON is deserialized back into a DTO.
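To make the publish/consume flow concrete, here is a minimal sketch of the round trip, assuming a hypothetical `OrderCreatedDto` generated from a shared schema (the names are illustrative, not from the original setup):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical DTO that would be auto-generated from a shared JSON Schema file.
@dataclass
class OrderCreatedDto:
    order_id: str
    amount: float

def publish(dto: OrderCreatedDto) -> bytes:
    # Serialize the DTO to JSON before handing the bytes to the RabbitMQ client.
    return json.dumps(asdict(dto)).encode("utf-8")

def consume(body: bytes) -> OrderCreatedDto:
    # Deserialize the JSON payload back into the DTO on the consuming side.
    return OrderCreatedDto(**json.loads(body.decode("utf-8")))

message = publish(OrderCreatedDto(order_id="o-1", amount=9.99))
roundtrip = consume(message)
```

The actual RabbitMQ publish/consume calls are omitted; the point is only that the DTO shape on both ends is dictated by the shared schema.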

Since the underlying domain model does not map directly to the JSON Schema, there is a translation layer that extracts the relevant properties from the DTO classes and populates the domain model for the business logic.
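Such a translation layer typically looks like the sketch below, assuming a hypothetical `CustomerDto` wire shape and a leaner internal `Customer` model (both names are illustrative):

```python
from dataclasses import dataclass

# Hypothetical DTO: the wire shape dictated by the shared schema.
@dataclass
class CustomerDto:
    id: str
    first_name: str
    last_name: str
    marketing_opt_in: bool

# Hypothetical domain model: only what the business logic needs.
@dataclass
class Customer:
    id: str
    display_name: str

def to_domain(dto: CustomerDto) -> Customer:
    # The translation layer picks out only the properties the domain cares about.
    return Customer(id=dto.id, display_name=f"{dto.first_name} {dto.last_name}")

customer = to_domain(CustomerDto("c-7", "Ada", "Lovelace", True))
```

Every new or renamed schema property needs a corresponding line here, which is exactly why this layer grows with each schema change.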

The problem we are currently facing is that whenever a JSON schema file changes, every service using that file, along with the translation layer in each of those services, must be adapted to absorb the change. The translation layer also grows with every change to the JSON schema.

One possible solution may be to generate the JSON schema files directly from the domain model. Although this does not obviate the need for a translation layer, it would keep the layer short enough to maintain. The new architecture is shown below.

[Image: diagram of the proposed architecture — JSON schemas derived from each service's domain model]

The main problem here is that this is no longer an "API-first" approach. Two microservices may share a domain class name but differ in meaning, e.g. Person(age, height, address) versus Person(strengths, weaknesses, insecurities). In that case, a translation is not possible.

One possibility is to somehow share the JSON schemas across all the microservices, using a shared configuration service (like Netflix Archaius).

My questions are:

  1. Is such an architecture feasible?
  2. If yes, how can I share the JSON files (probably using a shared configuration)?
  3. If no, what alternative approach should I use?
lennon310

2 Answers


The problem we are facing currently is if there is a change in JSON schema files, all the services using the files, along with the translation layer in the respective services need to be adapted to imbibe the change

A microservice architecture is meant to allow independent development of each service. So each microservice should be responsible for its own API, including any schemas it exposes, and for ensuring backward compatibility according to whatever policy you have. This ensures that services can be deployed on different schedules without affecting other services. If schemas are shared between services without a clear owner or change-management policy, it becomes difficult to claim that the microservices are actually independent; you might even have a distributed monolith.

Overall, I would recommend carefully weighing the advantages and disadvantages of each kind of architecture. Microservices have distinct advantages, but also disadvantages. It sounds a bit like you are trying to get the advantages of independent microservices without actually committing to, and accepting the cost of, potential code duplication and strict version/change management.

Edit:

It is perfectly acceptable to have different definitions of something depending on the context. A UI service might define a line as a pair of screen coordinates and a colour, while for a map service it might be a sequence of geo-coordinates in some specific projection. If there are conflicts over what a schema should contain, chances are it should not be a single thing, since it is evidently used in different contexts.

If two services need to agree on how to communicate, the teams should talk to each other to decide how something should be done. But the team that implements a service should own its API and any associated schemas, have the final say in their design, and support that API for however long you decide.

Now, designing a good API is very difficult, since you need to balance ease of use, generality, flexibility, security, backward compatibility, etc. This typically requires experience, so microservices are often most successful when you already have a good understanding of how they should be designed.

JonasH

You have come to the realization that independence does not mean independence, huh. You have "loosely coupled" services because of the message queue (at least I hope your services are not expecting responses to the messages they send to the queue), but the services are still coupled; only the direction of the coupling has changed (people like to skip this tiny detail). This does not mean you have a distributed monolith, but it does mean you are working with a system, which is more complex, because there are 4 different code bases in 4 different languages that are out of your reach to change.

If I understand you correctly, you have Service 1, which emits {x, z, y}, and Service 2, which consumes it and translates it into its own domain (which is a very good design). Service 1 is thus independent of Service 2, and Service 2 is also independent of Service 1.

BUT the translation logic depends on Service 1. So if you change Service 1 to emit {q, x, z}, Service 2 will fail, because it might have relied on y, which is no longer present.
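One common mitigation on the consuming side is a "tolerant reader": ignore unknown fields and fall back to defaults for missing ones, so a renamed or added field degrades gracefully instead of crashing the consumer. A minimal sketch (the field names follow the {x, y, z} example above; the default for y is an assumption):

```python
import json

def consume(body: bytes) -> dict:
    # Tolerant reader: take only the fields we know, ignore anything unknown,
    # and substitute a default when an expected field is missing.
    payload = json.loads(body.decode("utf-8"))
    return {
        "x": payload.get("x"),
        "z": payload.get("z"),
        "y": payload.get("y", 0),  # default keeps the consumer alive if y disappears
    }

# Service 1 now emits {q, x, z}; the consumer still produces a usable record.
record = consume(b'{"q": 1, "x": 2, "z": 3}')
```

This does not fix a semantic break (y silently becoming 0 may still be wrong for the business logic), but it turns a hard failure into something you can detect and handle.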

You already have Service 1 responsible for the schema it emits, so this is basically the most you can do.

There simply isn't a good fix for this, as far as I know, other than not introducing breaking changes to your events: do not rename fields and do not remove fields. Greg Young's book, Versioning in an Event Sourced System, might give you some ideas, even though it is not exclusively about messaging.
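If a breaking change is truly unavoidable, one pattern from the event-sourcing world is to version events and "upcast" old shapes into the current one at the consumer's edge, so the business logic only ever sees the latest version. A minimal sketch, with an assumed v1 event {name, age} being upcast to a v2 shape {full_name, age}:

```python
import json

def upcast_v1_to_v2(event: dict) -> dict:
    # Rewrite the old event shape into the current one; business logic
    # downstream only ever handles version 2.
    return {"version": 2, "full_name": event["name"], "age": event["age"]}

def handle(body: bytes) -> dict:
    event = json.loads(body.decode("utf-8"))
    if event.get("version", 1) == 1:  # events without a version field are v1
        event = upcast_v1_to_v2(event)
    return event

latest = handle(b'{"name": "Ada", "age": 36}')
```

The upcaster chain grows over time, but it confines version knowledge to one place instead of scattering it through the translation layer.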

Blaž Mrak