Continuing from the previous posts in this series:
http://www.jb51.cc/article/p-pwoyusmt-bod.html
http://www.jb51.cc/article/p-kiwxozhi-bod.html
http://www.jb51.cc/article/p-nksepega-bod.html
Next we install Spark. Spark requires Scala, so the script below installs Scala first.
- Install Spark (the setup is essentially identical on both machines):
#!/bin/bash
# Environment variables file
PATH_FILE="/etc/profile"
# Scala package path and install prefix
SCALA_TAR="/home/hdp/Downloads/scala-2.11.8.tgz"
SCALA_INSTALL_HOME="/usr/local"
# Spark package path and install prefix
SPARK_TAR="/home/hdp/Downloads/spark-2.0.2-bin-hadoop2.7.tgz"
SPARK_INSTALL_HOME="/usr/local"

# Install Scala: remove any previous install
if [ -d "$SCALA_INSTALL_HOME/scala" ]
then
    sudo rm -rf "$SCALA_INSTALL_HOME/scala"
fi
# Unpack Scala
sudo tar -zxvf "$SCALA_TAR" -C "$SCALA_INSTALL_HOME"
# Rename the directory
sudo mv "$SCALA_INSTALL_HOME/scala-2.11.8" "$SCALA_INSTALL_HOME/scala"
# Make the hadoop user the owner
sudo chown -R hadoop "$SCALA_INSTALL_HOME/scala"
# Set environment variables; note that `sudo echo ... >> $PATH_FILE` would
# fail because the redirection runs as the calling user, so pipe through
# `sudo tee -a` instead
if [ -z "$SCALA_HOME" ]
then
    echo "export SCALA_HOME=\"$SCALA_INSTALL_HOME/scala\"" | sudo tee -a "$PATH_FILE" > /dev/null
    echo "export PATH=\"\${SCALA_HOME}/bin:\$PATH\"" | sudo tee -a "$PATH_FILE" > /dev/null
    # Reload environment variables (affects this script's shell only)
    source /etc/profile
fi

# Install Spark: remove any previous install
if [ -d "$SPARK_INSTALL_HOME/spark" ]
then
    sudo rm -rf "$SPARK_INSTALL_HOME/spark"
fi
# Unpack Spark
sudo tar -zxvf "$SPARK_TAR" -C "$SPARK_INSTALL_HOME"
# Rename the directory
sudo mv "$SPARK_INSTALL_HOME/spark-2.0.2-bin-hadoop2.7" "$SPARK_INSTALL_HOME/spark"
# Make the hadoop user the owner
sudo chown -R hadoop "$SPARK_INSTALL_HOME/spark"
# Set environment variables
if [ -z "$SPARK_HOME" ]
then
    echo "export SPARK_HOME=\"$SPARK_INSTALL_HOME/spark\"" | sudo tee -a "$PATH_FILE" > /dev/null
    echo "export PATH=\"\${SPARK_HOME}/bin:\$PATH\"" | sudo tee -a "$PATH_FILE" > /dev/null
    # Reload environment variables (affects this script's shell only)
    source /etc/profile
fi
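After the script finishes, a quick sanity check (a minimal sketch, assuming the /usr/local install paths used above) confirms both tools are in place:

source /etc/profile
scala -version                               # should report version 2.11.8
/usr/local/spark/bin/spark-submit --version  # should report version 2.0.2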
Note: adjust the package and install paths at the top of the script to match your own environment.
- Configure conf/spark-env.sh with the following:
export JAVA_HOME=/usr/lib/jvm/java
export SCALA_HOME=/usr/local/scala
export HADOOP_HOME=/usr/local/hadoop
export YARN_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$YARN_HOME/etc/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin
export SPARK_MASTER_IP=d155
export SPARK_LOCAL_DIRS=/usr/local/spark
export SPARK_LIBRARY_PATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib/native
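A freshly unpacked Spark distribution ships only a template for this file; assuming the default layout, create conf/spark-env.sh from the bundled template and then append the export lines above:

cd /usr/local/spark/conf
cp spark-env.sh.template spark-env.sh
# now append the export lines above to the new spark-env.sh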
- Configure the conf/slaves file by adding one line:
d156
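As with spark-env.sh, the slaves file starts out as a template; a minimal sketch, assuming the conf/slaves.template shipped with the default distribution:

cd /usr/local/spark/conf
cp slaves.template slaves
echo "d156" >> slaves   # register d156 as a worker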
Start and stop the cluster from $SPARK_HOME on the master:
./sbin/start-all.sh
./sbin/stop-all.sh
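To confirm the daemons came up, check jps on both machines (this assumes d155 is the master and d156 the worker, and that passwordless ssh was configured in the earlier posts):

jps            # on d155: a Master process should be listed
ssh d156 jps   # on d156: a Worker process should be listed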
Check the status: open http://d155:8080/ in a browser; the Spark master web UI should show the registered worker.
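As an optional smoke test, a shell can be attached to the standalone master (a sketch; spark://d155:7077 assumes the default master service port):

/usr/local/spark/bin/spark-shell --master spark://d155:7077
# in the shell, sc.parallelize(1 to 100).sum() should return 5050.0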