In RabbitMQ streams or Kafka, messages are not deleted after being consumed. If a consumer application is replicated across multiple Kubernetes pods, how can you ensure that each pod picks up a different message? For instance, if you store the consumer offset in a database, all pods connect to the same database and see the same offset, so they will all try to read the same next message. What strategies can be employed to prevent this?
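For context, Kafka's standard answer to this is consumer groups: the partitions of a topic are divided among the members of a group, so each partition (and therefore each message) is read by exactly one pod, and offsets are tracked per partition rather than shared. A minimal, broker-free sketch of a range-style partition assignor illustrating the idea (names and logic are illustrative, not Kafka's actual implementation):

```python
def assign_partitions(consumers, num_partitions):
    """Range-style assignment: every partition is owned by exactly
    one consumer, so no two pods ever read the same message."""
    consumers = sorted(consumers)           # deterministic ordering, like Kafka's assignors
    base = num_partitions // len(consumers) # partitions each consumer gets at minimum
    extra = num_partitions % len(consumers) # the first `extra` consumers get one more
    assignment, start = {}, 0
    for i, consumer in enumerate(consumers):
        count = base + (1 if i < extra else 0)
        assignment[consumer] = list(range(start, start + count))
        start += count
    return assignment

# Three replicated pods sharing a 6-partition topic:
print(assign_partitions(["pod-0", "pod-1", "pod-2"], 6))
# → {'pod-0': [0, 1], 'pod-1': [2, 3], 'pod-2': [4, 5]}
```

With a real client this is handled automatically: consumers that subscribe with the same `group_id` trigger a rebalance, and the broker-side group coordinator hands each pod a disjoint set of partitions.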
