Spark cluster: Cannot assign requested address. Service 'Driver' could not bind on a random free port.
19/07/09 17:00:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
19/07/09 17:00:49 WARN Utils: Service 'Driver' could not bind on a random free port. You may check whether configuring an appropriate binding address.
Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'Driver' failed after 16 retries (on a random free port)! Consider explicitly setting the appropriate binding address for the service 'Driver' (for example spark.driver.bindAddress for SparkDriver) to the correct binding address.
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:128)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1283)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:989)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:364)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)
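In plain terms, the driver tried to bind its RPC endpoint to an address that is not available on the host it is running on, retried 16 times (the default value of spark.port.maxRetries), and gave up. If the bind address does need to be pinned, as the message suggests, here is a minimal sketch of the relevant settings; the concrete addresses are assumptions, chosen to match the master IP that appears later in this article:
import org.apache.spark.SparkConf;

// Minimal sketch: bind locally on all interfaces, but advertise a reachable address.
SparkConf conf = new SparkConf()
        .set("spark.driver.bindAddress", "0.0.0.0")   // interface the driver's sockets bind to
        .set("spark.driver.host", "192.168.31.205")   // address executors use to reach the driver (assumed)
        .set("spark.port.maxRetries", "16");          // the retry count seen in the log (the default)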
The code inside the jar file that my Spark job runs is as follows:
SparkSession.Builder builder = SparkSession.builder()
        /*.master("local[*]")*/
        .appName("SparkCalculateRecommend")
        .config("spark.mongodb.input.uri", "mongodb://xx:[email protected]:27018/sns.igomoMemberInfo_Spark_input")
        .config("spark.mongodb.output.uri", "mongodb://xx:[email protected]:27018/sns.igomoMemberInfo_Spark_output")
        .config("spark.driver.bindAddress", "127.0.0.1")
        .config("spark.executor.memory", "1g")
        .config("es.nodes", esIpAddr)
        .config("es.port", "9200")
        .config("es.nodes.wan.only", "true");
The Java code that submits the job to the cluster is as follows:
SparkLauncher spark = new SparkLauncher()
        .setDeployMode("cluster")
        .setMainClass("com.fsdn.zaodian.spark.XunMeiSpark")
        .setMaster("spark://192.168.31.205:8180")
        .setConf(SparkLauncher.EXECUTOR_MEMORY, "512m")
        .setConf(SparkLauncher.EXECUTOR_CORES, "2")
        .setSparkHome("/data/spark-2.4.3")
        // .setSparkHome("D:\\soft\\spark-2.4.3-bin-hadoop2.7")
        .setVerbose(true)
        .setAppResource("/data/spark-2.4.3/examples/jars/zaodian-0.0.1-SNAPSHOT.jar")
        .addAppArgs(memberIds);
After stepping through a series of pitfalls, it turned out that the deployMode parameter had been set incorrectly; commenting it out successfully resolved the problem. The code after commenting it out is as follows:
SparkLauncher spark = new SparkLauncher()
        // .setDeployMode("cluster")
        .setMainClass("com.fsdn.zaodian.spark.XunMeiSpark")
        .setMaster("spark://192.168.31.205:8180")
        .setConf(SparkLauncher.EXECUTOR_MEMORY, "512m")
        .setConf(SparkLauncher.EXECUTOR_CORES, "2")
        .setSparkHome("/data/spark-2.4.3")
        // .setSparkHome("D:\\soft\\spark-2.4.3-bin-hadoop2.7")
        .setVerbose(true)
        .setAppResource("/data/spark-2.4.3/examples/jars/zaodian-0.0.1-SNAPSHOT.jar")
        .addAppArgs(memberIds);
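With setDeployMode commented out, SparkLauncher falls back to the default client mode. For completeness, a minimal sketch of how such a launcher is typically started and observed; the listener body is illustrative only, not from the original article:
import org.apache.spark.launcher.SparkAppHandle;

// startApplication() submits asynchronously and returns a handle to watch the app;
// it throws IOException, so the enclosing method must declare or catch it.
SparkAppHandle handle = spark.startApplication(new SparkAppHandle.Listener() {
    @Override
    public void stateChanged(SparkAppHandle h) {
        System.out.println("Spark app state: " + h.getState());
    }
    @Override
    public void infoChanged(SparkAppHandle h) {
        // the application id becomes available here via h.getAppId()
    }
});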
The detailed reasoning that led to the fix is as follows.
After many experiments: submitting to the Spark cluster from the Linux shell runs normally, but submitting to the same cluster from Java code raises the error above. I therefore suspected that the submission parameters were most likely at fault.
Launch parameters logged after submitting via shell:
Spark Executor Command:
"/usr/java/jdk1.8.0_171/bin/java"
"-cp" "/data/spark-2.4.3/conf/:/data/spark-2.4.3/jars/*"
"-Xmx512M" "-Dspark.ui.port=4349"
"-Dspark.driver.port=35938" "org.apache.spark.executor.CoarseGrainedExecutorBackend"
"--driver-url" "spark://CoarseGrainedScheduler@okdiz:35938"
"--executor-id" "1"
"--hostname" "192.168.31.207"
"--cores" "1"
"--app-id" "app-20190710174108-0011"
"--worker-url" "spark://[email protected]:20157"
Launch parameters logged after submitting via Java code:
Launch Command: "/usr/java/jdk1.8.0_171/bin/java"
"-cp" "/data/spark-2.4.3/conf/:/data/spark-2.4.3/jars/*"
"-Xmx1024M" "-Dspark.executor.memory=512m"
"-Dspark.driver.supervise=false"
"-Dspark.app.name=com.fsdn.zaodian.spark.XunMeiSpark"
"-Dspark.submit.deployMode=cluster"
"-Dspark.ui.port=4349"
"-Dspark.master=spark://192.168.31.205:8180"
"-Dspark.jars=file:/data/spark-2.4.3/examples/jars/zaodian-0.0.1-SNAPSHOT.jar"
"-Dspark.rpc.askTimeout=10s"
"-Dspark.executor.cores=2" "org.apache.spark.deploy.worker.DriverWrapper"
"spark://[email protected]:10440"
"/data/spark-2.4.3/work/driver-20190710181012-0001/zaodian-0.0.1-SNAPSHOT.jar"
"com.fsdn.zaodian.spark.XunMeiSpark"
"420388870078501"
Comparing the two commands shows quite a few differing entries. After further experiments, the entry "-Dspark.submit.deployMode=cluster" turned out to be what prevented the driver from binding to a port on the cluster. I wrote this article specifically to record the fix.
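To summarize the mechanics: with -Dspark.submit.deployMode=cluster, the standalone master launches the driver on a worker through org.apache.spark.deploy.worker.DriverWrapper (visible in the second log), and on that host the driver's port bind failed; in the default client mode the driver runs on the submitting host, where it binds without trouble. If client mode is what is intended, it can also be stated explicitly rather than implied by omission, as in this sketch:
// Same launcher as above, but with the deploy mode stated explicitly instead of omitted.
SparkLauncher launcher = new SparkLauncher()
        .setDeployMode("client")   // explicit: run the driver on the submitting host
        .setMainClass("com.fsdn.zaodian.spark.XunMeiSpark")
        .setMaster("spark://192.168.31.205:8180")
        .setSparkHome("/data/spark-2.4.3")
        .setAppResource("/data/spark-2.4.3/examples/jars/zaodian-0.0.1-SNAPSHOT.jar")
        .addAppArgs(memberIds);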