
I'm working on a large C# application that is still under development, so we have some room for structural refactoring.

The application is divided into 10 microservices, some of which communicate with each other.

That intercommunication is done via two different approaches at the moment: A) using Apache Kafka as an event bus, and B) an internal facade/service-object approach used when one microservice needs to request data from another. I want to focus on the latter approach.

This facade/service is implemented by literally copying all the important data type definitions and hard-coding the HTTP requests inside each project (i.e. every microservice that needs to communicate with a given microservice keeps its own duplicated local copies of all the types involved in that microservice's requests and responses).
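For illustration, here is a minimal sketch of the pattern I mean (all the names are made up):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// This DTO is copy-pasted into every microservice that calls the orders service.
public record OrderDto(Guid Id, decimal Total, string Status);

// Local "facade" with the HTTP request hard-coded inside the consuming project.
public class OrdersFacade
{
    private static readonly HttpClient Http = new()
    {
        BaseAddress = new Uri("http://orders-service/")
    };

    public Task<OrderDto?> GetOrderAsync(Guid id) =>
        Http.GetFromJsonAsync<OrderDto>($"api/orders/{id}");
}
```

If the orders service changes `OrderDto` or the route, every copy of this code has to be found and updated by hand.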

I'm not very well versed in microservices and REST APIs, but this seems to me like a recipe for disaster. If microservice A is called by 5 other microservices and we change A's API, we now need to update all 5 of those projects with the new data types or changes that were made, rather than having some sort of shared library with that facade or something like that.

I know managing API versioning is difficult and painful, but this certainly doesn't seem right to me.

I haven't found a straightforward guide that addresses a problem like this.

Can anyone shed some light on this? How should we manage those duplicated data type definitions? And lastly, how should we manage versioning properly between those different APIs?

2 Answers


I'm a little confused by your description of the code base. You say it's a microservice architecture, yet code is duplicated across the services; that is, by definition, not really a service. It also violates the idea of a facade, since a facade should not have any hard-coded request logic inside it: the implementation of the interface's function calls should be decoupled from the interface itself (see the sketch below). The only time you should have to make a change across microservices is when you change the function call or the API route name (for components and services, respectively). Done as intended, implementation updates are picked up by all microservices as soon as you update the implementation.
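As a minimal C# sketch of that decoupling (the names and the shared "contracts" assembly are assumptions, not your actual code):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Lives in a single shared contracts assembly; consumers depend only on this.
public record OrderDto(Guid Id, decimal Total, string Status);

public interface IOrderClient
{
    Task<OrderDto?> GetOrderAsync(Guid id);
}

// The implementation lives in one place. Changing the route or the transport
// means updating this class, not five consuming services.
public class HttpOrderClient : IOrderClient
{
    private readonly HttpClient _http;
    public HttpOrderClient(HttpClient http) => _http = http;

    public Task<OrderDto?> GetOrderAsync(Guid id) =>
        _http.GetFromJsonAsync<OrderDto>($"api/orders/{id}");
}
```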

Services are (or should be) essentially containerized apps with their dependencies hosted locally, so that if another service changes, no redeployment is necessary for its sister services. This suits distributed architectures where the services may not be on the same machine and shared libraries are impossible.

Components, by contrast, use shared libraries so that duplication is minimized across an application for storage reasons; this is mostly useful in monolithic applications. But if one component requires a certain version of a library and another component requires a different version of the same library, you get the "matrix from hell" situation.

I believe your quickest solution is to add an event queue, an event log, and a message broker, so that no calls are made FROM the services; instead, all events are sent TO the services. Keep in mind that an event can be anything, so you can keep your HTTP logic and just send the payload to the queue instead of to another microservice. You could even put an event processor in front of the broker with encoding logic, so that any API changes can be made backward compatible.
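A rough sketch of what that could look like with the Kafka bus you already have (the topic name and event shape are invented for the example):

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;

// Instead of an HTTP call from service to service, publish an event to the bus.
public class OrderEventPublisher
{
    private readonly IProducer<string, string> _producer =
        new ProducerBuilder<string, string>(
            new ProducerConfig { BootstrapServers = "localhost:9092" }).Build();

    public Task PublishOrderRequestedAsync(Guid orderId) =>
        _producer.ProduceAsync("order-requested", new Message<string, string>
        {
            Key = orderId.ToString(),
            Value = JsonSerializer.Serialize(new { OrderId = orderId })
        });
}
```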

This gives you the benefit of easily adding and deleting services in a central location, and future-proofs (HA!) your API.

See these diagrams (not mine): Mediator Topology, Broker Topology.

  • Each service should store its own data.

If you follow that principle, you should be using events to populate each service with the data it needs from other services. Then no direct calls are needed.
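A minimal sketch of that idea, assuming the Kafka bus from the question (the topic, the event type, and the local store are hypothetical):

```csharp
using System;
using System.Text.Json;
using Confluent.Kafka;

public record CustomerChanged(Guid Id, string Name, string Email);

// Some local persistence owned by this service.
public interface ILocalCustomerStore
{
    void Upsert(CustomerChanged customer);
}

// Keeps this service's own copy of customer data up to date from events,
// so no direct call to the customer service is ever needed.
public class CustomerProjectionWorker
{
    public void Run(ILocalCustomerStore store)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "billing-service",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<string, string>(config).Build();
        consumer.Subscribe("customer-updated");

        while (true)
        {
            var result = consumer.Consume();
            var evt = JsonSerializer.Deserialize<CustomerChanged>(result.Message.Value);
            if (evt is not null)
                store.Upsert(evt); // update the local copy; never call the other service
        }
    }
}
```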

The only time I ever introduce direct API calls is when I have to call something outside of my architecture.