Hadoop 2.7 Job Submission Explained (Part 1)


Let's walk through the submission flow using a WordCount example:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * @author: LUGH1
 * @date: 2019-4-8
 * @description:
 */
public class WordCount {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.88.130:9000");
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCount.class);

        job.setMapperClass(WdMapper.class);
        job.setReducerClass(WdReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.setInputPaths(job, new Path("/test/word.txt"));
        FileOutputFormat.setOutputPath(job, new Path("/test/output"));

        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}

class WdMapper extends Mapper<Object, Text, Text, IntWritable> {
    @Override
    protected void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] split = line.split(" ");
        for (String word : split) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}

class WdReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable i : values) {
            count += i.get();
        }
        context.write(key, new IntWritable(count));
    }
}

The above is a simple WordCount program, so I won't explain it line by line. In the main method we first obtain a Job object, apply a series of settings to it, and finally call waitForCompletion():

public static void main(String[] args) throws IOException, InterruptedException {
  //.... details omitted .....
  boolean result = job.waitForCompletion(true);  // submit the job via the waitForCompletion() method provided by the Job class
  System.exit(result ? 0 : 1);
}

Next, let's look at the Job class whose waitForCompletion() method we just called (the class is large, so only the parts we need are shown):

public class Job extends JobContextImpl implements JobContext {
  private static final Log LOG = LogFactory.getLog(Job.class);
  public static enum JobState {DEFINE, RUNNING}; // the two possible job states
  private static final long MAX_JOBSTATUS_AGE = 1000 * 2;  // refresh the status at most every 2000 ms
  public static final String OUTPUT_FILTER = "mapreduce.client.output.filter";
  public static final String COMPLETION_POLL_INTERVAL_KEY = "mapreduce.client.completion.pollinterval";
  static final int DEFAULT_COMPLETION_POLL_INTERVAL = 5000;
  public static final String PROGRESS_MONITOR_POLL_INTERVAL_KEY = "mapreduce.client.progressmonitor.pollinterval";
  static final int DEFAULT_MONITOR_POLL_INTERVAL = 1000;
  public static final String USED_GENERIC_PARSER = "mapreduce.client.genericoptionsparser.used";
  public static final String SUBMIT_REPLICATION = "mapreduce.client.submit.file.replication";
  public static final int DEFAULT_SUBMIT_REPLICATION = 10;
  public static enum TaskStatusFilter { NONE, KILLED, FAILED, SUCCEEDED, ALL }
  static {
    ConfigUtil.loadResources();  // load the configuration files
  }
  private JobState state = JobState.DEFINE;  // a Job starts out in the DEFINE state
  private JobStatus status;
  private long statustime;
  private Cluster cluster;
  private ReservationId reservationId;

  // the methods most relevant to job submission:
  boolean waitForCompletion(boolean verbose)   // submit the job and wait for it to finish
  submit()                                     // submit the job
  setUseNewAPI()                               // switch to the new MapReduce API
  connect()                                    // make sure a Cluster object exists
  getJobSubmitter(FileSystem fs, ClientProtocol submitClient)
  isUber()                         // "uber" mode: MapTasks and ReduceTasks run on the same node
  setPartitionerClass()            // the Mapper output may be distributed to multiple Reducers by a Partitioner
  setMapSpeculativeExecution()     // whether speculative Mappers should act as stand-ins
  setReduceSpeculativeExecution()  // whether speculative Reducers should act as stand-ins
  setCacheFiles()
}

The Job class contains many static variables and static blocks. In Java these are initialized when the class is loaded, so the moment main() calls Job.getInstance(conf), all of them are processed. They simply set a few parameters, such as the job's default DEFINE state, and load some configuration files. The config-loading method looks like this:

public static void loadResources() {
    addDeprecatedKeys();
    Configuration.addDefaultResource("mapred-default.xml");
    Configuration.addDefaultResource("mapred-site.xml");
    Configuration.addDefaultResource("yarn-default.xml");
    Configuration.addDefaultResource("yarn-site.xml");
}
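To make this concrete, here is a minimal, hedged sketch (a standalone snippet of my own, not part of the Hadoop source; the class name is arbitrary) showing that once the Job class has been loaded, the values from these default resources are visible through the job's Configuration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ConfLoadCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Creating the Job loads the Job class, which runs its static block and
        // therefore the ConfigUtil.loadResources() shown above.
        Job job = Job.getInstance(conf);
        // With no override this prints the mapred-default.xml value "local";
        // on a cluster with mapred-site.xml configured it typically prints "yarn".
        System.out.println(job.getConfiguration().get("mapreduce.framework.name"));
    }
}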

Loading the resources simply loads Hadoop's configuration files, so by the time we call waitForCompletion() they are already in place. Now let's look at waitForCompletion():

// the Job class in org.apache.hadoop.mapreduce
public boolean waitForCompletion(boolean verbose) throws IOException, InterruptedException, ClassNotFoundException {
  if (state == JobState.DEFINE) {   // only a job in the DEFINE state may be submitted, preventing duplicate submission
    submit();  // submit the job
  }
  if (verbose) { // after submitting, monitor the job until it finishes
    monitorAndPrintJob();   // periodically report the job's progress
  } else {   // otherwise just periodically ask whether the job has completed
    // get the completion poll interval from the client.
    int completionPollIntervalMillis = Job.getCompletionPollInterval(cluster.getConf());
    while (!isComplete()) {
      try {
        Thread.sleep(completionPollIntervalMillis);
      } catch (InterruptedException ie) {
      }
    }
  }
  return isSuccessful();
}


From the point of view of the submission flow this method could hardly be simpler: it is essentially a call to Job.submit(), except that it first checks that the job is in the DEFINE state, which guarantees a job cannot be submitted more than once. As noted above, JobState has only two values, DEFINE and RUNNING. The Job() constructor sets the state to DEFINE when the Job object is created, and once the job has been submitted successfully the state is switched to RUNNING, which closes the door on further submissions.

Under normal circumstances Job.submit() returns quickly, because its only task is to hand the job over; it does not wait for the job to run or finish. Job.waitForCompletion(), on the other hand, does not return until the job has completed. While it waits, if the verbose parameter is true it periodically reports the job's progress; otherwise it just periodically checks whether the job is done.
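As a hedged sketch of what that DEFINE-state guard buys us (assuming a job configured as in the WordCount example above; the exception type is what ensureState() throws in current Hadoop versions):

// Sketch: the second submit() is expected to fail the DEFINE-state check,
// because the first call has already moved the job into the RUNNING state.
job.submit();                 // state: DEFINE -> RUNNING
try {
    job.submit();             // rejected: the job is no longer in the DEFINE state
} catch (IllegalStateException e) {
    System.err.println("second submit rejected: " + e.getMessage());
}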

So far our submission flow is:

[WordCount.main() -> Job.waitForCompletion() -> Job.submit()]

So next, let's take a look at the submit() method:

public void submit() throws IOException, InterruptedException, ClassNotFoundException {
  ensureState(JobState.DEFINE);  // make sure the job is still in the DEFINE state
  setUseNewAPI();                // use the new MapReduce API
  connect();                     // make sure the cluster member points at a Cluster object
  final JobSubmitter submitter =
      getJobSubmitter(cluster.getFileSystem(), cluster.getClient()); // obtain a JobSubmitter instance
  status = ugi.doAs(new PrivilegedExceptionAction<JobStatus>() {  // ugi.doAs handles access control
    public JobStatus run() throws IOException, InterruptedException, ClassNotFoundException {
      return submitter.submitJobInternal(Job.this, cluster);      // this is what actually submits the job
    }
  });
  state = JobState.RUNNING;      // switch the job state to RUNNING
  LOG.info("The url to track the job: " + getTrackingURL());
}

Let's look at the connect() method first:

private synchronized void connect() throws IOException, InterruptedException, ClassNotFoundException {
  if (cluster == null) {  // if cluster is null, create a Cluster instance
    cluster =
      ugi.doAs(new PrivilegedExceptionAction<Cluster>() {
                 public Cluster run() throws IOException, InterruptedException, ClassNotFoundException {
                   return new Cluster(getConfiguration());  // create the Cluster
                 }
               });
  }
}

So the job of connect() is to ensure this node has a Cluster object, creating one if there isn't one yet. Let's look at the Cluster class (partially shown):

public class Cluster {
  @InterfaceStability.Evolving
  public static enum JobTrackerStatus {INITIALIZING, RUNNING};  // job tracker status
  private ClientProtocolProvider clientProtocolProvider;  // YarnClientProtocolProvider on a cluster, LocalClientProtocolProvider in local mode
  private ClientProtocol client;        // on a cluster, this is the channel and protocol for talking to the outside world
  private UserGroupInformation ugi;     // used for access control
  private Configuration conf;           // configuration
  private FileSystem fs = null;         // file system
  private Path sysDir = null;           // system directory
  private Path stagingAreaDir = null;   // staging area directory
  private Path jobHistoryDir = null;    // job history directory
  private static final Log LOG = LogFactory.getLog(Cluster.class);

  // ServiceLoader<ClientProtocolProvider> is a ServiceLoader specialized for the
  // ClientProtocolProvider class, loaded here via ServiceLoader.load().
  // ServiceLoader implements Iterable and provides an iterator(), so it can be
  // used in a for loop; its load() method loads classes through the ClassLoader.
  private static ServiceLoader<ClientProtocolProvider> frameworkLoader =
      ServiceLoader.load(ClientProtocolProvider.class);

  static {
    ConfigUtil.loadResources();  // load the configuration files
  }

  // constructors
  public Cluster(Configuration conf) throws IOException {
    this(null, conf);
  }

  public Cluster(InetSocketAddress jobTrackAddr, Configuration conf) throws IOException {
    this.conf = conf;
    this.ugi = UserGroupInformation.getCurrentUser();
    initialize(jobTrackAddr, conf);  // call the initialize method
  }

  // the goal is to create the ClientProtocolProvider and the ClientProtocol
  private void initialize(InetSocketAddress jobTrackAddr, Configuration conf) throws IOException {
    synchronized (frameworkLoader) {  // only one thread at a time may run this block
      for (ClientProtocolProvider provider : frameworkLoader) {  // iterate over the providers found by frameworkLoader
        LOG.debug("Trying ClientProtocolProvider : " + provider.getClass().getName());
        ClientProtocol clientProtocol = null;
        try {
          if (jobTrackAddr == null) {  // create the clientProtocol via the provider's create() method
            clientProtocol = provider.create(conf);
          } else {
            clientProtocol = provider.create(jobTrackAddr, conf);
          }
          if (clientProtocol != null) {
            clientProtocolProvider = provider;
            client = clientProtocol;  // the ClientProtocol object has been created: a YARNRunner or a LocalJobRunner
            LOG.debug("Picked " + provider.getClass().getName()
                + " as the ClientProtocolProvider");
            break;  // on success, stop looping
          } else {  // on failure, log it
            LOG.debug("Cannot pick " + provider.getClass().getName()
                + " as the ClientProtocolProvider - returned null protocol");
          }
        } catch (Exception e) {
          LOG.info("Failed to use " + provider.getClass().getName()
              + " due to error: ", e);
        }
      }
    }

    if (null == clientProtocolProvider || null == client) {  // check that both the ClientProtocolProvider and the ClientProtocol were created
      throw new IOException(
          "Cannot initialize Cluster. Please check your configuration for "
              + MRConfig.FRAMEWORK_NAME
              + " and the correspond server addresses.");
    }
  }
}

So the Job class's connect() method simply ensures there is a cluster instance; if there isn't one, it is created through the Cluster constructor. Before construction, ConfigUtil.loadResources() loads some configuration and the static variable frameworkLoader is initialized; the Cluster constructor then always calls Cluster.initialize(). As for ClientProtocolProvider and ClientProtocol: the user submits a job to the RM node and asks the RM to arrange its execution, so the RM plays the role of a service provider while the user is the client. The two sides therefore need a protocol that governs how they interact and how the service is provided. In the Hadoop code this Protocol is elevated all the way to the level of the computation framework; even the question of whether the YARN framework is used falls under it. ClientProtocol plays exactly that role, and ClientProtocolProvider, as the name implies, provides the ClientProtocol, acting somewhat like a Factory.

As for ServiceLoader&lt;ClientProtocolProvider&gt;, it is what loads the ClientProtocolProvider implementations.
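Here is a hedged, standalone sketch of that mechanism (the provider file name below is the standard one for this interface in Hadoop 2.x, but treat the snippet as illustrative rather than part of the Hadoop source):

import java.util.ServiceLoader;
import org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider;

public class ListProviders {
    public static void main(String[] args) {
        // Implementations are declared in files named
        // META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
        // inside the MapReduce client jars; ServiceLoader reads those files and
        // instantiates each listed class, which is exactly what Cluster.initialize() relies on.
        ServiceLoader<ClientProtocolProvider> loader =
            ServiceLoader.load(ClientProtocolProvider.class);
        for (ClientProtocolProvider provider : loader) {
            // typically prints LocalClientProtocolProvider and YarnClientProtocolProvider
            System.out.println("found provider: " + provider.getClass().getName());
        }
    }
}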

Let's first look at the ClientProtocolProvider class itself. It is clearly an abstract class, which means only concrete classes that extend it can be instantiated:

public abstract class ClientProtocolProvider {

  public abstract ClientProtocol create(Configuration conf) throws IOException;

  public abstract ClientProtocol create(InetSocketAddress addr, Configuration conf) throws IOException;

  public abstract void close(ClientProtocol clientProtocol) throws IOException;

}

Next, let's look at the two subclasses of this abstract class, YarnClientProtocolProvider and LocalClientProtocolProvider:

package org.apache.hadoop.mapred;

public class YarnClientProtocolProvider extends ClientProtocolProvider {
  @Override
  public ClientProtocol create(Configuration conf) throws IOException {
    if (MRConfig.YARN_FRAMEWORK_NAME.equals(conf.get(MRConfig.FRAMEWORK_NAME))) {
      return new YARNRunner(conf);  // YARNRunner implements the ClientProtocol interface
    }
    return null;
  }

  @Override
  public ClientProtocol create(InetSocketAddress addr, Configuration conf) throws IOException {
    return create(conf);
  }

  @Override
  public void close(ClientProtocol clientProtocol) throws IOException {
    if (clientProtocol instanceof YARNRunner) {
      ((YARNRunner) clientProtocol).close();
    }
  }
}

public class LocalClientProtocolProvider extends ClientProtocolProvider {
  @Override
  public ClientProtocol create(Configuration conf) throws IOException {
    String framework =
        conf.get(MRConfig.FRAMEWORK_NAME, MRConfig.LOCAL_FRAMEWORK_NAME);
    if (!MRConfig.LOCAL_FRAMEWORK_NAME.equals(framework)) {
      return null;
    }
    conf.setInt(JobContext.NUM_MAPS, 1);  // the number of map tasks is 1
    return new LocalJobRunner(conf);      // LocalJobRunner implements the ClientProtocol interface
  }

  @Override
  public ClientProtocol create(InetSocketAddress addr, Configuration conf) {
    return null;  // LocalJobRunner doesn't use a socket
  }

  @Override
  public void close(ClientProtocol clientProtocol) {
    // no clean up required
  }
}

Now let's come back to the Cluster.initialize() method:

ServiceLoader implements Iterable and provides an iterator() function, so it can be used in a for loop. It also provides a load() method that loads classes through the ClassLoader, as well as the ability to parse the provider configuration files. Once frameworkLoader has been loaded as a ServiceLoader object, its internal LinkedHashMap holds the two provider entries mentioned above, so its iterator() can visit them one after the other.

The Cluster constructor then calls initialize(), whose purpose is to create the ClientProtocolProvider and the ClientProtocol.

But ClientProtocolProvider is an abstract class, which means only concrete classes that extend it can be instantiated. The Hadoop source contains exactly two such classes: LocalClientProtocolProvider and YarnClientProtocolProvider.

 

Naturally, the ClientProtocol supplied by these two providers differs as well. In fact ClientProtocol is an interface, and it too has exactly two implementations, LocalJobRunner and YARNRunner, only one of which can actually be in use at a time.

The for loop in initialize() is driven by the ServiceLoader's iterator() described above. In effect it loops over the two ClientProtocolProviders, with the aim of creating the ClientProtocol the user asked for through ClientProtocolProvider.create(), i.e. either a LocalJobRunner or a YARNRunner. As soon as one creation succeeds the loop can stop, because only one choice is possible; if both attempts fail the program cannot continue, because it has no way of getting the RM to provide compute service. Whether creation succeeds depends on the configuration items mentioned earlier. Since ClientProtocolProvider is abstract, the classes actually tried in turn are LocalClientProtocolProvider and YarnClientProtocolProvider. Assuming the first iteration tries the former, the job flow is:

[WordCount.main() -> Job.waitForCompletion() -> Job.submit() -> Job.connect() -> Cluster.Cluster() -> Cluster.initialize() -> LocalClientProtocolProvider.create()]

If it is the latter, the job flow is:

[WordCount.main() -> Job.waitForCompletion() -> Job.submit() -> Job.connect() -> Cluster.Cluster() -> Cluster.initialize() -> YarnClientProtocolProvider.create()]

Here we assume the job is submitted in YARN mode, so the second flow applies.

YarnClientProtocolProvider.create() ultimately returns a new YARNRunner(conf) object.
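Which branch we end up in is decided entirely by mapreduce.framework.name. A hedged sketch (the constant values are the standard ones; the class itself is just for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class PickFramework {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // MRConfig.FRAMEWORK_NAME is "mapreduce.framework.name".
        // "yarn" makes YarnClientProtocolProvider.create() return a YARNRunner,
        // while the default "local" makes LocalClientProtocolProvider return a LocalJobRunner.
        conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.YARN_FRAMEWORK_NAME);
        System.out.println(conf.get(MRConfig.FRAMEWORK_NAME)); // prints "yarn"
    }
}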

Good, back to our Job.submit() method. At this point connect() is done, and what follows is the call to getJobSubmitter(). This function creates a JobSubmitter object, and Job.submit() then calls its submitJobInternal() method to complete the job submission. The two arguments used to construct the JobSubmitter are the two arguments passed to getJobSubmitter(), namely cluster.getFileSystem() and cluster.getClient(). cluster.getClient() returns the YARNRunner or LocalJobRunner; cluster.getFileSystem() returns, for YARNRunner, the URL of the file system on the RM node, and for LocalJobRunner a local directory with the relative path "mapred/system".

Next, let's get to know the JobSubmitter class (partially shown):

 

package org.apache.hadoop.mapreduce;

class JobSubmitter {
  protected static final Log LOG = LogFactory.getLog(JobSubmitter.class);
  private static final String SHUFFLE_KEYGEN_ALGORITHM = "HmacSHA1"; // the shuffle key algorithm
  private static final int SHUFFLE_KEY_LENGTH = 64;
  private FileSystem jtFs;
  private ClientProtocol submitClient;
  private String submitHostName;
  private String submitHostAddress;

  JobSubmitter(FileSystem submitFs, ClientProtocol submitClient) {
    this.submitClient = submitClient;  // on a cluster this is the YARNRunner
    this.jtFs = submitFs;
  }

  // the methods most relevant to job submission:
  compareFs(FileSystem srcFs, FileSystem destFs)  // compare whether two file systems are the same
  getPathURI()
  checkSpecs()
  copyRemoteFiles()
  copyAndConfigureFiles()
  copyJar(Path originalJarPath, Path submitJarFile, short replication)
  addMRFrameworkToDistributedCache()
  submitJobInternal(Job job, Cluster cluster)     // submit the job to the cluster
  writeNewSplits(JobContext job, Path jobSubmitDir)
  getJobSubmitter(FileSystem fs, ClientProtocol submitClient)  // under the hood this just calls the JobSubmitter constructor
}

 

Now let's look at the submitJobInternal() method:

JobStatus submitJobInternal(Job job, Cluster cluster)
    throws ClassNotFoundException, InterruptedException, IOException {

  // validate the jobs output specs -- check the output format and related settings
  checkSpecs(job);

  Configuration conf = job.getConfiguration();  // get the configuration
  addMRFrameworkToDistributedCache(conf);       // add the MR framework to the distributed cache

  Path jobStagingArea = JobSubmissionFiles.getStagingDir(cluster, conf);  // get the staging directory path

  // configure the command line options correctly on the submitting dfs
  InetAddress ip = InetAddress.getLocalHost();  // the IP address of this node (the submitting host)
  if (ip != null) {
    submitHostAddress = ip.getHostAddress();  // this node's IP address as a string
    submitHostName = ip.getHostName();        // this node's host name
    conf.set(MRJobConfig.JOB_SUBMITHOST, submitHostName);   // write them into conf
    conf.set(MRJobConfig.JOB_SUBMITHOSTADDR, submitHostAddress);
  }
  JobID jobId = submitClient.getNewJobID();  // get a new JobID (unique per job)
  job.setJobID(jobId);                       // set the job's ID
  Path submitJobDir = new Path(jobStagingArea, jobId.toString());  // the job's temporary subdirectory is named after the job ID
  JobStatus status = null;
  try {
    conf.set(MRJobConfig.USER_NAME,
        UserGroupInformation.getCurrentUser().getShortUserName());  // the user name
    conf.set("hadoop.http.filter.initializers",
        "org.apache.hadoop.yarn.server.webproxy.amfilter.AmFilterInitializer");  // filter initializer for the HTTP interface
    conf.set(MRJobConfig.MAPREDUCE_JOB_DIR, submitJobDir.toString());  // the submit directory for this job
    LOG.debug("Configuring job " + jobId + " with " + submitJobDir
        + " as the submit dir");

    // get delegation token for the dir  /* prepare the access tokens */
    TokenCache.obtainTokensForNamenodes(job.getCredentials(),
        new Path[] { submitJobDir }, conf);  // the tokens needed for talking to the NameNode

    populateTokenCache(conf, job.getCredentials());

    // generate a secret to authenticate shuffle transfers -- the key used for the Mapper-to-Reducer data flow
    if (TokenCache.getShuffleSecretKey(job.getCredentials()) == null) {
      KeyGenerator keyGen;
      try {
        keyGen = KeyGenerator.getInstance(SHUFFLE_KEYGEN_ALGORITHM);
        keyGen.init(SHUFFLE_KEY_LENGTH);
      } catch (NoSuchAlgorithmException e) {
        throw new IOException("Error generating shuffle secret key", e);
      }
      SecretKey shuffleKey = keyGen.generateKey();
      TokenCache.setShuffleSecretKey(shuffleKey.getEncoded(), job.getCredentials());
    }
    if (CryptoUtils.isEncryptedSpillEnabled(conf)) {
      conf.setInt(MRJobConfig.MR_AM_MAX_ATTEMPTS, 1);
      LOG.warn("Max job attempts set to 1 since encrypted intermediate" +
              "data spill is enabled");
    }

    copyAndConfigureFiles(job, submitJobDir);  // copy the jar and other executables to HDFS; by default 10 replicas, spread across nodes

    Path submitJobFile = JobSubmissionFiles.getJobConfPath(submitJobDir);  // the path of the job configuration file

    // Create the splits for the job
    LOG.debug("Creating splits at " + jtFs.makeQualified(submitJobDir));
    int maps = writeSplits(job, submitJobDir);  // compute the number of map tasks (how this number is determined deserves a post of its own)
    conf.setInt(MRJobConfig.NUM_MAPS, maps);
    LOG.info("number of splits:" + maps);

    // write "queue admins of the queue to which job is being submitted" to job file.
    String queue = conf.get(MRJobConfig.QUEUE_NAME, JobConf.DEFAULT_QUEUE_NAME);  // the default scheduling queue is "default"

    AccessControlList acl = submitClient.getQueueAdmins(queue);
    conf.set(toFullPropertyName(queue, QueueACL.ADMINISTER_JOBS.getAclName()),
        acl.getAclString());  // set the ACLs

    // removing jobtoken referrals before copying the jobconf to HDFS
    // as the tasks don't need this setting, actually they may break
    // because of it if present as the referral will point to a
    // different job.
    TokenCache.cleanUpTokenReferral(conf);  // clean up cached token referrals

    if (conf.getBoolean(
        MRJobConfig.JOB_TOKEN_TRACKING_IDS_ENABLED,
        MRJobConfig.DEFAULT_JOB_TOKEN_TRACKING_IDS_ENABLED)) {
      // Add HDFS tracking ids (only if the tracking mechanism is enabled)
      ArrayList<String> trackingIds = new ArrayList<String>();
      for (Token<? extends TokenIdentifier> t :
          job.getCredentials().getAllTokens()) {
        trackingIds.add(t.decodeIdentifier().getTrackingId());  // collect all tracking ids
      }
      conf.setStrings(MRJobConfig.JOB_TOKEN_TRACKING_IDS,
          trackingIds.toArray(new String[trackingIds.size()]));  // record them in conf
    }

    // Set reservation info if it exists
    ReservationId reservationId = job.getReservationId();
    if (reservationId != null) {
      conf.set(MRJobConfig.RESERVATION_ID, reservationId.toString());
    }

    // Write job file to submit dir
    writeConf(conf, submitJobFile);  // write conf out as an .xml file

    //
    // Now, actually submit the job (using the submit name)
    //
    printTokens(jobId, job.getCredentials());

    // submit the job through YARNRunner.submitJob() or LocalJobRunner.submitJob()
    status = submitClient.submitJob(
        jobId, submitJobDir.toString(), job.getCredentials());
    if (status != null) {
      return status;  // return the job status
    } else {
      throw new IOException("Could not launch job");
    }
  } finally {
    if (status == null) {
      LOG.info("Cleaning up the staging area " + submitJobDir);
      if (jtFs != null && submitJobDir != null)
        jtFs.delete(submitJobDir, true);  // delete the temporary directory
    }
  }
}

From submitJobInternal() we can see that two kinds of resources and information have to accompany the job submission:

one kind is handed to the resource manager (RM) itself, for the RM to use when setting up and scheduling the job;

the other kind is not used by the RM directly, but by the nodes that actually perform the computation. The former includes the submitting node's IP address, host name, user name and job ID, information about the MapReduce input data files, and the tokens provided for the submission. This information is packaged and handed to the RM; that is job submission in the narrow sense and the main thread of the flow. The latter consists of the jar executable, external libraries and other things the job needs at run time; if the input files are local, they belong to this category too. These resources are not handed to the RM, because the RM itself has no use for them, but they must be copied or moved into the global HDFS file system so that the nodes that actually carry out the computation can fetch them.

To upload these resources and this information, a directory has to be created for the job in HDFS. HDFS has a directory dedicated to job submission, called the staging directory, so JobSubmissionFiles.getStagingDir() is used here to get its path from the cluster. A temporary subdirectory named after the job ID, the submitJobDir in the code, is then created under the staging directory, and from then on everything related to this job is uploaded into that subdirectory.
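A hedged sketch of looking that directory up yourself (assuming default settings; the concrete path in the comment is just the usual default, and the per-job subdirectory later holds files such as job.jar, job.xml and the split files):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.JobSubmissionFiles;

public class ShowStagingDir {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Cluster cluster = new Cluster(conf);
        // With default settings this resolves to something like
        // /tmp/hadoop-yarn/staging/<user>/.staging; submitJobDir then becomes
        // <stagingArea>/<jobId>, e.g. .../.staging/job_1554700000000_0001.
        Path stagingArea = JobSubmissionFiles.getStagingDir(cluster, conf);
        System.out.println("staging area: " + stagingArea);
    }
}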

The method also sets the number of maps, the scheduling queue, and so on, and finally calls the submitJob() method of the YARNRunner (or LocalJobRunner) object created in connect(). With that, our job has been handed over to the RM, and the flow so far is:

[WordCount.main() -> Job.waitForCompletion() -> Job.submit() -> Job.connect() -> Cluster.Cluster() -> Cluster.initialize() -> YarnClientProtocolProvider.create() -> JobSubmitter.submitJobInternal() -> YARNRunner.submitJob()]

Continue reading in Hadoop 2.7 Job Submission Explained (Part 2).

