Nothing makes a DBA's blood run cold like split-brain.
That's the term for the scenario where you start writing updates to one node in a cluster while it is still applying updates replicated from another node. There's a high chance that you commit an update to some row, only to have it overwritten by a replicated change that originally occurred earlier.
Meanwhile, the transaction you just committed replicates to the other node. But because it was based on an outdated version of the data, it writes the wrong values there too. Now neither node has the "true" state of the data.
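To make that concrete, here's a hypothetical timeline for a single row. The `accounts` table and all the numbers are invented for illustration, and row-based replication (where events carry whole row images) is assumed:

```sql
-- Both nodes start with: id = 42, balance = 500.

-- T1, node A (the original primary): a withdrawal of 100 commits.
UPDATE accounts SET balance = 400 WHERE id = 42;
-- The row image (balance = 400) is queued for replication to node B.

-- T2, node B is promoted and accepts writes before applying A's event:
UPDATE accounts SET balance = 450 WHERE id = 42;  -- computed from the stale value 500

-- T3: node B applies A's replicated row image; balance becomes 400,
--     silently wiping out B's own committed withdrawal.
-- T4: node A applies B's replicated row image; balance becomes 450,
--     wiping out A's withdrawal. The correct value, 350, exists on neither node.
```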
Recovering from this kind of mistake is incredibly difficult. You'd have to reconstruct the correct sequence of changes from logs somehow, restore the whole cluster from backup, and then re-run the changes in the correct order. And of course the application owner insists you do it without downtime, so you can't interrupt ongoing traffic or take the database offline for the restore.
That's virtually impossible, but soon you'll have managers all the way up the hierarchy yelling at you to fix it immediately.
This is why synchronous replication is worth the cost, even though it delays failover: it protects against split-brain, which is, frankly, more important than high throughput.
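In MySQL Group Replication, this trade-off is controlled by the `group_replication_consistency` system variable (MySQL 8.0.14 and later). A minimal sketch, raising it from the default EVENTUAL so that a freshly promoted primary refuses new transactions until it has applied its replication backlog:

```sql
-- Make a newly elected primary hold client transactions until it has
-- applied everything replicated from the old primary.
SET GLOBAL group_replication_consistency = 'BEFORE_ON_PRIMARY_FAILOVER';

-- Check the current level (EVENTUAL is the default):
SELECT @@GLOBAL.group_replication_consistency;
```

Stricter levels such as AFTER go further and make each write wait until the other members have applied it; that's the synchronous behavior described above, bought with extra commit latency.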
If you see frequent replication lag, switching to EVENTUAL is not a good solution. The real fix is either to reduce the query traffic or to increase the servers' capacity until the Group Replication cluster can keep up with the workload.
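Whether the cluster is keeping up is measurable. One way to watch the applier backlog, using the statistics Group Replication exposes in performance_schema (a monitoring sketch, not a complete health check):

```sql
-- Transactions each member has received but not yet certified or applied.
-- Queues that stay non-zero indicate the cluster can't keep up.
SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_IN_QUEUE,
       COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
FROM performance_schema.replication_group_member_stats;
```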
Increasing capacity is done in one of two ways:
Scale up: get faster CPUs, faster storage drives, more RAM.
Scale out: split the data over multiple clusters, and distribute data updates more or less evenly between the clusters, so each cluster only has to handle a fraction of the traffic (a routing sketch follows below).
Eventually, you may need to use both, because there's only so far you can scale up.
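One common way to implement that scale-out split is deterministic key-based routing: hash a sharding key and take it modulo the number of clusters, so every client computes the same mapping. A minimal sketch; the customer id, the choice of key, and the four-cluster count are all invented for illustration:

```sql
-- Deterministically map a sharding key (here, a customer id) to one of
-- four clusters; every client computes the same mapping, so all updates
-- for a given customer land on the same cluster.
SELECT CRC32(12345) % 4 AS cluster_id;
```

In practice the routing logic usually lives in the application or in a proxy layer, and the hard part is rebalancing data when the number of clusters changes.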