postgresql – Apache Spark: JDBC connection not working

I asked this question before but did not get any answer (Not able to connect to postgres using jdbc in pyspark shell).

I have successfully installed Spark 1.3.0 on my local Windows machine and run the sample programs to test it using the pyspark shell.

Now I want to run MLlib's Correlations on data stored in PostgreSQL, but I am not able to connect to PostgreSQL.

I have successfully added the required jar (and tested this jar) to the classpath by running:

pyspark --jars "C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar"

I can see that the jar was added successfully in the Environment UI.

When I run the following code in the pyspark shell:

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="jdbc",url="jdbc:postgresql://[host]/[dbname]",dbtable="[schema.table]")

I get this error:

>>> df = sqlContext.load(source="jdbc",dbtable="[schema.table]")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\ACERNEW3\Desktop\Spark\spark-1.3.0-bin-hadoop2.4\python\pyspark\sql\context.py", line 482, in load
    df = self._ssql_ctx.load(source, joptions)
  File "C:\Users\ACERNEW3\Desktop\Spark\spark-1.3.0-bin-hadoop2.4\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
  File "C:\Users\ACERNEW3\Desktop\Spark\spark-1.3.0-bin-hadoop2.4\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o20.load.
: java.sql.SQLException: No suitable driver found for jdbc:postgresql://[host]/[dbname]
        at java.sql.DriverManager.getConnection(DriverManager.java:602)
        at java.sql.DriverManager.getConnection(DriverManager.java:207)
        at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:94)
        at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:125)
        at org.apache.spark.sql.jdbc.DefaultSource.createRelation(JDBCRelation.scala:114)
        at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:290)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:679)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:667)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Thread.java:619)

I had this exact problem with MySQL/MariaDB, and got a big clue from this question.

So your pyspark command should be:

pyspark --conf spark.executor.extraClassPath=<jdbc.jar> --driver-class-path <jdbc.jar> --jars <jdbc.jar> --master <master-URL>
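As a concrete sketch of that command (the jar location below is the path from the question, and `local[*]` is just an assumed master URL for a local run; substitute your own values):

```shell
pyspark --conf spark.executor.extraClassPath=C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar ^
        --driver-class-path C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar ^
        --jars C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar ^
        --master local[*]
```

The point of repeating the jar in all three options is that `--jars` alone only ships the file to the cluster; `--driver-class-path` and `spark.executor.extraClassPath` are what actually put the PostgreSQL driver class on the driver and executor classpaths, where `java.sql.DriverManager` can find it.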

Also pay attention to errors when pyspark starts up, such as "Warning: Local jar ... does not exist, skipping." and "ERROR SparkContext: Jar not found at ...". These probably mean you misspelled the path.
