Unable to write into REFERENCE table from Spark

I am trying to write into a reference table.
I am using spark-connector 2.0.2

The Spark code is trying to write a DataFrame using something like this:

df.write.format("com.memsql.spark.connector").mode("overwrite").save(target_reference_memsql_table)

But, it throws this error:

 ERROR Client: Application diagnostics message: User class threw exception: com.memsql.spark.SaveToMemSQLException: SaveToMemSQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 22 in stage 2.0 failed 4 times, most recent failure: Lost task 22.3 in stage 2.0 (TID 79, <ip>, executor 10): java.sql.SQLException: This instance is not the master aggregator. Modifying a reference table is not permitted on child aggregators and leaves.
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:996)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2526)
    at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1618)
    at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1549)

This error can occur when the ddlEndpoint option is configured incorrectly. Could you please check that it points to the Master Aggregator?
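
For reference, this is roughly where that setting goes. A minimal sketch, assuming the 3.0-beta option names discussed later in this thread; the host, credentials, and database below are placeholders:

    import org.apache.spark.sql.SparkSession

    // ddlEndpoint must point at the Master Aggregator: reference-table writes
    // (and other DDL) are rejected on child aggregators and leaves.
    val spark = SparkSession.builder()
      .appName("reference-table-write")
      .config("spark.datasource.memsql.ddlEndpoint", "master-agg-host:3306") // placeholder host
      .config("spark.datasource.memsql.user", "app_user")                    // placeholder
      .config("spark.datasource.memsql.password", "app_password")            // placeholder
      .config("spark.datasource.memsql.database", "app_db")                  // placeholder
      .getOrCreate()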

Does that work in spark-connector versions prior to the 3.0 beta? I have used the master aggregator host in the configuration, but I think it still fires queries on the child aggregators and leaf nodes.

Yes, it works in the 3.0 beta. Could you try specifying the writeToMaster option? (.option("writeToMaster", "true"))
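
Putting it together, the write would look roughly like this. A sketch assuming the 3.0-beta data source name "memsql"; the table name is a placeholder:

    df.write
      .format("memsql")                      // 3.0-beta data source name; 2.x used "com.memsql.spark.connector"
      .option("writeToMaster", "true")       // route this reference-table write through the Master Aggregator
      .mode("overwrite")
      .save("app_db.target_reference_table") // placeholder table name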

Thank you so much for the help.
Yes, writeToMaster worked.

Also, since I now need to use the 3.0 beta version, could you please let me know if I need to add the dmlEndpoints config to the Spark configuration for DML statements to work in a distributed manner?

Or will DML statements run on the leaves in a distributed manner by default, even if I don't put the child aggregator and leaf nodes under the dmlEndpoints option?

Without the dmlEndpoints option, all queries are sent to the master aggregator and then executed on the leaves. For proper load balancing, specify all of the aggregators (including the master, if you like) in the dmlEndpoints option so the Spark connector can correctly distribute the parts of the workload that can be distributed.
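
As a rough sketch of that (3.0-beta option names, placeholder host names), the endpoints can be set once in the Spark configuration:

    // ddlEndpoint stays on the Master Aggregator; dmlEndpoints lists the aggregators
    // the connector may load-balance DML over (child aggs, plus the master if desired).
    spark.conf.set("spark.datasource.memsql.ddlEndpoint", "master-agg-host:3306")
    spark.conf.set("spark.datasource.memsql.dmlEndpoints",
      "master-agg-host:3306,child-agg-1:3306,child-agg-2:3306")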

okay.

Thanks for the help. Much appreciated