Solution
See this Databricks article: Connecting to SQL Databases using JDBC.
import org.apache.spark.sql.SaveMode

val df = spark.table("...")
println(df.rdd.partitions.length)

// Given the number of partitions printed above, users can reduce the partition
// count by calling coalesce() or increase it by calling repartition() to manage
// the number of JDBC connections.
df.repartition(10)
  .write
  .mode(SaveMode.Append)
  .jdbc(jdbcUrl, "product_MysqL", connectionProperties)
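For context, each output partition opens its own JDBC connection during the write, so controlling the partition count directly controls the number of concurrent connections to the database. Below is a minimal, self-contained sketch of the same idea; the jdbcUrl, credentials, and table names are placeholders, and coalesce() is used rather than repartition() when you only want to reduce the partition count, since it avoids a full shuffle.

import java.util.Properties
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("jdbc-write-example").getOrCreate()

// Placeholder connection details -- replace with your own.
val jdbcUrl = "jdbc:mysql://dbhost:3306/mydb"
val connectionProperties = new Properties()
connectionProperties.put("user", "username")
connectionProperties.put("password", "password")

// Hypothetical source table used for illustration.
val df = spark.table("source_table")

// Each output partition opens its own JDBC connection, so coalescing to a
// small number caps the concurrent connections without a full shuffle.
df.coalesce(4)
  .write
  .mode(SaveMode.Append)
  .jdbc(jdbcUrl, "target_table", connectionProperties)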