
After upgrading our first node, it has a different schema version (according to nodetool describecluster). This caused our Spark jobs to hang because of recurring "schema agreement not reached" errors from metadata.SchemaAgreementChecker.

Is this different schema version intentional? Will the problem go away after all nodes are updated (first upgrade all nodes, then run nodetool upgradesstables)?

Can the Spark jobs be configured to overcome the hang?

Many thanks in advance.

Best Regards,

Sven


2 Answers


Issues like this during an upgrade are common. Nodes on different versions cannot stream data to each other, and that includes schema. Continue with the upgrade, and once all of the nodes have been upgraded, run nodetool describecluster to check the schema version.

If there is still a schema disagreement once you're done, run a rolling restart of the affected node(s). Once a node comes up, it checks for (and resolves) schema agreement.
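The check-then-restart procedure above can be sketched roughly as follows (the node address and service name are assumptions; they vary by install and packaging):

```shell
# Check schema agreement across the cluster; a healthy cluster
# lists exactly one entry under "Schema versions":
nodetool describecluster

# If a node still disagrees after the full upgrade, restart it so it
# re-checks (and resolves) schema agreement on startup:
nodetool -h 10.0.0.1 drain          # hypothetical address; flushes writes before stopping
sudo systemctl restart cassandra    # service name varies by packaging
```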

Aaron

It is normal for upgraded nodes to have a different schema version. This is by design.

This however should be transparent to clients including your Spark application so it shouldn't have caused the jobs to hang.
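To confirm the disagreement really is just between upgraded and not-yet-upgraded nodes, you can compare the schema versions the nodes themselves report in the system tables (a sketch; the node address is a placeholder):

```shell
# Schema version held by the node you connect to:
cqlsh 10.0.0.1 -e "SELECT schema_version FROM system.local;"

# Schema versions that node sees on its peers:
cqlsh 10.0.0.1 -e "SELECT peer, schema_version FROM system.peers;"
```

If the only nodes reporting a second version are the ones you have not yet upgraded, that matches the expected behavior described above.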

If you are able to provide additional information about the issue, we would be happy to review it. Most importantly, we need the steps to replicate the problem, which should include:

  • software stack + full version numbers
  • full error message + full stacktrace
  • minimal sample code that replicates the issue

Cheers!

Erick Ramirez