Step 1:
Install Java: I won't cover installing from the package on Oracle's site, since there are plenty of guides for that online; here we install from a PPA instead.
1. Add the PPA
- sudo add-apt-repository ppa:webupd8team/java
- sudo apt-get update
2. Install oracle-java8-installer
- sudo apt-get install oracle-java8-installer
The installer prompts you to accept Oracle's license terms: choose OK, then Yes.
3. Set the system default JDK
- sudo update-java-alternatives -s java-8-oracle
4. Check that the JDK installed correctly:
- java -version
- javac -version
Step 2:
Change the hostname:
- sudo vim /etc/hostname
Do this on every node, then reboot for the change to take effect; a sample layout follows.
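For example, with the three-node layout used throughout this guide, each node's /etc/hostname holds just its own name (one name per node, not all three):
- master
- slave1
- slave2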
Configure the hosts file:
Fill in each node's IP address and its hostname mapping, as in the sketch below.
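A minimal sketch of /etc/hosts, assuming the nodes sit on 192.168.1.0/24 (substitute your real addresses):
- 192.168.1.100 master
- 192.168.1.101 slave1
- 192.168.1.102 slave2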
Create a dedicated hadoop user:
- sudo addgroup hadoop
- sudo adduser --ingroup hadoop hadoop
Example: sudo usermod -aG sudo hadoop
Here -a appends the user to the group(s) rather than replacing its memberships, and -G names the group(s) to add.
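You can confirm the memberships took effect with the standard groups command:
- groups hadoop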
Step 3:
Passwordless SSH login:
- ssh-keygen -t rsa
- cd ~/.ssh
- cp id_rsa.pub authorized_keys
Run the three commands above on every node.
Then copy the master's authorized_keys to each slave:
- scp /home/hadoop/.ssh/authorized_keys hadoop@slave1:~/.ssh/
- scp /home/hadoop/.ssh/authorized_keys hadoop@slave2:~/.ssh/
Test:
- ssh master
- ssh slave1
- ssh slave2
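Note that the scp above only distributes the master's key, so passwordless login works from the master outward but not from the slaves back. If you also need slave-to-master logins, one common approach (using ssh-copy-id, which ships with stock Ubuntu OpenSSH) is to run on each slave:
- ssh-copy-id hadoop@master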
Step 4:
Configure the Hadoop files:
My Hadoop install location:
- /home/hadoop/hadoop273
Create the directories referenced by the configs below:
- mkdir /home/hadoop/tmp
- mkdir /home/hadoop/dfs
- mkdir /home/hadoop/dfs/name
- mkdir /home/hadoop/dfs/data
Point hadoop-env.sh and yarn-env.sh at the JDK by setting JAVA_HOME; a sample line follows.
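With the PPA install from step 1, the JDK lives under /usr/lib/jvm/java-8-oracle, so the line in both files is:
- export JAVA_HOME=/usr/lib/jvm/java-8-oracle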
Configure the slaves file:
- slave1
- slave2
Configure core-site.xml:
- <configuration>
- <property>
- <name>fs.defaultFS</name>
- <value>hdfs://master:9000</value>
- </property>
- <property>
- <name>io.file.buffer.size</name>
- <value>131072</value>
- </property>
- <property>
- <name>hadoop.tmp.dir</name>
- <value>file:/home/hadoop/tmp</value>
- <description>Abase for other temporary directories.</description>
- </property>
- </configuration>
Configure hdfs-site.xml:
- <configuration>
- <property>
- <name>dfs.namenode.secondary.http-address</name>
- <value>master:9001</value>
- </property>
- <property>
- <name>dfs.namenode.name.dir</name>
- <value>file:/home/hadoop/dfs/name</value>
- </property>
- <property>
- <name>dfs.datanode.data.dir</name>
- <value>file:/home/hadoop/dfs/data</value>
- </property>
- <property>
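- <!-- this cluster has two DataNodes (slave1, slave2), hence two replicas -->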
- <name>dfs.replication</name>
- <value>2</value>
- </property>
- <property>
- <name>dfs.webhdfs.enabled</name>
- <value>true</value>
- </property>
- <property>
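- <!-- disables HDFS permission checking; on 2.x the preferred key name is dfs.permissions.enabled -->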
- <name>dfs.permissions</name>
- <value>false</value>
- </property>
- </configuration>
Configure mapred-site.xml (a stock 2.x tarball ships this as mapred-site.xml.template; copy it to mapred-site.xml first):
- <configuration>
- <property>
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
- <property>
- <name>mapreduce.jobhistory.address</name>
- <value>master:10020</value>
- </property>
- <property>
- <name>mapreduce.jobhistory.webapp.address</name>
- <value>master:19888</value>
- </property>
- <property>
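- <!-- legacy MRv1 JobTracker setting; ignored when mapreduce.framework.name is yarn -->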
- <name>mapred.job.tracker</name>
- <value>master:9001</value>
- </property>
- </configuration>
Step 5:
Format HDFS (on 2.x this form still works but is deprecated; bin/hdfs namenode -format is the current equivalent):
- bin/hadoop namenode -format
Start Hadoop (start-all.sh is likewise deprecated; ./sbin/start-dfs.sh followed by ./sbin/start-yarn.sh does the same job):
- ./sbin/start-all.sh
Check that everything came up, as shown below.
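A quick check, given the config above: run the JDK's jps tool on each node. The master should list roughly NameNode, SecondaryNameNode, and ResourceManager, while each slave should list DataNode and NodeManager. The HDFS web UI at http://master:50070 and the YARN UI at http://master:8088 (the 2.x defaults) should also respond.
- jps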
Common problems:
During passwordless SSH setup:
Fix for the ssh error "Host key verification failed.":
Add the following to /etc/ssh/ssh_config (note this disables host-key checking system-wide, which is only sensible on a private test cluster):
- StrictHostKeyChecking no
- UserKnownHostsFile /dev/null
Fix for the warning Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now:
- vim ~/.bash_profile
- export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
- export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
- source ~/.bash_profile
If the warning WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable appears when running hadoop namenode -format, it can be silenced by setting HADOOP_OPTS in hadoop-env.sh:
- export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"