How to Deploy a Fully Distributed Hadoop 2.2 Cluster on CentOS 6.5 (Part 3)

In the previous article I walked through deploying Hadoop 2.2 in pseudo-distributed mode. Today, let's look at how to deploy a fully distributed cluster on CentOS 6.5.

First, the system environment:
No.  Item            Description
1    OS              CentOS 6.5 (deploying on Linux is recommended)
2    Hadoop version  Hadoop 2.2.0, the first stable release in the Hadoop 2.x line
3    Java            JDK 1.7, 64-bit (build 1.7.0_25-b15)


Deployment layout

No.  IP address     Node
1    192.168.46.28  hp1 (master)
2    192.168.46.29  hp2 (slave)
3    192.168.46.30  hp3 (slave)


Deployment steps

No.  Operation
1    Configure passwordless SSH login
2    Set the environment variables for JAVA (required), MAVEN, and ANT
3    Set the Hadoop environment variables
4    Configure the core-site.xml file
5    Configure the hdfs-site.xml file
6    Configure the mapred-site.xml file
7    Configure the yarn-site.xml file
8    Configure the slaves file
9    Distribute the installation to the slave machines
10   Format the NameNode (on the master only)
11   Start the cluster with sbin/start-all.sh
12   Run jps to check the Java processes on the master and slaves
13   Check web page access and cluster status information
14   Run a test MapReduce job to validate the cluster



1. First, set up SSH trust between the cluster nodes, so that the Hadoop daemons can communicate without password prompts.

Generate a key pair:    ssh-keygen -t rsa -P ''
Copy the key for trust: ssh-copy-id -i ~/.ssh/id_rsa.pub root@hp2
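
To cover all three machines in one pass, a loop like the following can be used (a minimal sketch; it assumes the hostnames hp1, hp2, and hp3 are resolvable, e.g. via /etc/hosts):

Shell code:

# Push the public key to every node, then spot-check one of them.
for host in hp1 hp2 hp3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done
ssh root@hp2 hostname   # should print "hp2" with no password prompt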
2. Configure the environment variables for Java, Maven, Ant, Hadoop, and so on, as follows:

Shell code:

export PATH=.:$PATH

export JAVA_HOME="/usr/local/jdk"
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin

export HADOOP_HOME=/root/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export CLASSPATH=.:$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export ANT_HOME=/usr/local/ant
export CLASSPATH=$CLASSPATH:$ANT_HOME/lib
export PATH=$PATH:$ANT_HOME/bin

export MAVEN_HOME="/usr/local/maven"
export CLASSPATH=$CLASSPATH:$MAVEN_HOME/lib
export PATH=$PATH:$MAVEN_HOME/bin
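
After saving the exports (typically in /etc/profile or ~/.bash_profile), reload them and verify that the tools are on the PATH:

Shell code:

source /etc/profile   # or wherever the exports were added
java -version         # should report build 1.7.0_25-b15
hadoop version        # should report Hadoop 2.2.0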

3. Configure the core-site.xml file:

Xml code:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.46.28:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop/tmp</value>
  </property>
</configuration>
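
Note that fs.default.name is the deprecated Hadoop 1.x key; it still works in 2.x, but the preferred name is fs.defaultFS, so the same setting could equally be written as:

Xml code:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.46.28:9000</value>
</property>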


4. Configure the hdfs-site.xml file:

Xml code:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0. See the License for the
  specific language governing permissions and limitations under the License.
  See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/hadoop/nddir</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/hadoop/dddir</value>
  </property>

  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
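
The NameNode and DataNode directories above, plus the hadoop.tmp.dir from core-site.xml, do not exist yet and must be created by hand on each node:

Shell code:

# Create the metadata, block-storage, and temp directories
mkdir -p /root/hadoop/nddir /root/hadoop/dddir /root/hadoop/tmp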

5. Configure the mapred-site.xml file:

Xml code:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0. See the License for the
  specific language governing permissions and limitations under the License.
  See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hp1:8021</value>
    <final>true</final>
    <description>The host and port that the MapReduce JobTracker runs at.</description>
  </property>

  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>

  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>
</configuration>
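
Note that mapred.job.tracker is a leftover MRv1 (JobTracker) setting; when MapReduce runs on YARN, as it does in this cluster, the property that actually routes jobs to YARN is mapreduce.framework.name, which would typically be added here as well:

Xml code:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>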


6. Configure the yarn-site.xml file. (Note that the shuffle service value in Hadoop 2.2 is mapreduce_shuffle, with an underscore; the older mapreduce.shuffle value from pre-2.2 guides no longer works.)

Xml code:

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0. See the License for the
  specific language governing permissions and limitations under the License.
  See accompanying LICENSE file.
-->
<configuration>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hp1:8032</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hp1:8030</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hp1:8031</value>
  </property>

  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hp1:8033</value>
  </property>

  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hp1:8088</value>
  </property>

</configuration>

7. Configure the slaves file:

192.168.46.28
192.168.46.29
192.168.46.30

(Since the master's IP is listed in slaves as well, hp1 will also run a DataNode and a NodeManager alongside the master daemons, which matches the jps output below.)

Once everything is configured, note again that the directories referenced in hdfs-site.xml, along with Hadoop's HDFS tmp directory, must be created by hand under the Hadoop root directory (the mkdir sketch above covers this). With all of that in place, we can distribute the whole Hadoop installation to the slaves, format the NameNode on the master, and start the cluster, as sketched below. jps should then show the following processes on the master and the slaves.
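
Shell code (a minimal sketch, assuming the Hadoop home is /root/hadoop on every node):

# Distribute the configured installation to the slaves
scp -r /root/hadoop root@hp2:/root/
scp -r /root/hadoop root@hp3:/root/

# Format HDFS on the master, then start all daemons
cd /root/hadoop
bin/hdfs namenode -format
sbin/start-all.sh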
jps on the master shows:

4335 SecondaryNameNode
4464 ResourceManager
4553 NodeManager
4102 NameNode
4206 DataNode
6042 Jps

jps on a slave shows:

1727 DataNode
1810 NodeManager
2316 Jps

Once jps confirms that the Java processes are correct, we can open the web UIs to check on the cluster.
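
For reference, the relevant addresses in this setup are the NameNode UI (Hadoop 2.2's default port 50070) and the ResourceManager UI (port 8088, as configured in yarn-site.xml above):

http://192.168.46.28:50070   # NameNode / HDFS status
http://192.168.46.28:8088    # ResourceManager / running applications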
(Screenshots of the cluster status pages omitted.)
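Finally, per step 14, the cluster can be validated by running one of the example jobs that ship with Hadoop; a minimal sketch, assuming the standard location of the 2.2.0 examples jar:

Shell code:

cd /root/hadoop
# Estimate pi with 10 map tasks and 100 samples per map
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 10 100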
At this point we have successfully deployed the Hadoop cluster. When installing, follow my steps in this order and it is hard to go wrong.
