Integrating HBase and Hive on a single Windows machine


First, configure hbase-site.xml with the ZooKeeper port that HBase will use:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
    <property>
        <name>hbase.master</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://127.0.0.1:9000/hbase/</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>D:/hbase-1.2.5/tmp</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>D:/hbase-1.2.5/zoo</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>60010</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2185</value>
    </property>
</configuration>

Note that ZooKeeper uses port 2185 here.
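Before moving on, start HBase so that it registers with this ZooKeeper instance. A minimal sketch, assuming HBase is unpacked at D:\hbase-1.2.5 as in the config above:

rem Start the local HBase instance
cd D:\hbase-1.2.5\bin
start-hbase.cmd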

Next, hive-site.xml:

<configuration>

    <!-- MySQL connection settings -->
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value>
        <description>password to use against metastore database</description>
    </property>

    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>

    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>

    <property>
        <name>javax.jdo.option.DetachAllOnCommit</name>
        <value>true</value>
        <description>detaches all objects from session so that they can be used after transaction is committed</description>
    </property>

    <property>
        <name>javax.jdo.option.NonTransactionalRead</name>
        <value>true</value>
        <description>reads outside of transactions</description>
    </property>

    <property>
        <name>datanucleus.readOnlyDatastore</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.fixedDatastore</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.autoCreateSchema</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateTables</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateColumns</name>
        <value>true</value>
    </property>

    <!-- HiveServer2 settings -->
    <property>
        <name>hive.support.concurrency</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hive.server2.thrift.min.worker.threads</name>
        <value>5</value>
    </property>
    <property>
        <name>hive.server2.thrift.max.worker.threads</name>
        <value>100</value>
    </property>

    <!-- Authentication: CUSTOM or NONE -->
    <!--
    <property>
        <name>hive.server2.authentication</name>
        <value>NONE</value>
    </property>
    <property>
        <name>hive.server2.custom.authentication.class</name>
        <value>tv.huan.hive.auth.HuanPasswdAuthenticationProvider</value>
    </property>
    <property>
        <name>hive.server2.custom.authentication.file</name>
        <value>D:/apache-hive-2.1.1-bin/conf/user.password.conf</value>
    </property>
    -->

    <property>
        <name>hive.server2.transport.mode</name>
        <value>binary</value>
    </property>
    <property>
        <name>hive.hwi.listen.host</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>hive.server2.webui.host</name>
        <value>0.0.0.0</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>

    <property>
        <name>hive.server2.thrift.client.user</name>
        <value>root</value>
    </property>
    <property>
        <name>hive.server2.thrift.client.password</name>
        <value>123456</value>
    </property>

    <!--
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://127.0.0.1:9083</value>
        <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
    </property>
    -->
    <property>
        <name>hive.server2.thrift.http.port</name>
        <value>11002</value>
    </property>
    <property>
        <name>hive.server2.thrift.port</name>
        <value>11006</value>
    </property>

    <!-- HBase integration: point Hive at the same ZooKeeper instance HBase uses -->
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2185</value>
    </property>

    <property>
        <name>hive.aux.jars.path</name>
        <value>file:///D:/apache-hive-2.1.1-bin/lib/hive-hbase-handler-2.1.1.jar,file:///D:/apache-hive-2.1.1-bin/lib/protobuf-java-2.5.0.jar,file:///D:/apache-hive-2.1.1-bin/lib/hbase-common-1.2.5.jar,file:///D:/apache-hive-2.1.1-bin/lib/hbase-client-1.2.5.jar,file:///D:/apache-hive-2.1.1-bin/lib/hbase-server-1.2.5.jar,file:///D:/apache-hive-2.1.1-bin/lib/zookeeper-3.4.6.jar,file:///D:/apache-hive-2.1.1-bin/lib/guava-14.0.1.jar</value>
    </property>
    <property>
        <name>hive.querylog.location</name>
        <value>D:/apache-hive-2.1.1-bin/logs</value>
    </property>

</configuration>

This configuration adds the HBase connection settings to Hive and loads the jars the integration needs.

When starting the Hive shell and the metastore service, set an extra environment variable (it appears to work without it as well). start_metastore.cmd contains:

cd D:\apache-hive-2.1.1-bin\bin
SET HIVE_AUX_JARS_PATH=D:\apache-hive-2.1.1-bin\auxlib\
hive --service metastore

The auxlib directory holds the jar files configured in hive.aux.jars.path.
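For JDBC access, HiveServer2 must also be running. A minimal sketch, assuming the thrift port (11006) and the root/123456 credentials from the hive-site.xml above:

rem Start HiveServer2 once the metastore is up
hive --service hiveserver2

rem From another console, connect with beeline using the configured port and credentials
beeline -u "jdbc:hive2://127.0.0.1:11006" -n root -p 123456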

Creating an HBase table from Hive

The CREATE TABLE statement:

CREATE TABLE iteblog(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "iteblog", "hbase.mapred.output.outputtable" = "iteblog");

In the column mapping, :key maps the Hive key column to the HBase row key, and cf1:val maps value to the column val in column family cf1.

After the statement succeeds, check the result:
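A quick way to verify, using the stock clients. In the Hive shell:

show tables;

And in the hbase shell, the new table should be listed as well:

list
describe 'iteblog'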


Create a staging table:

create table pokes(foo int, bar string)
row format delimited fields terminated by ',';

Insert data in either of the following two ways:

1. insert into pokes(foo,bar) values (4,'KK');

2. load data local inpath 'd:/pokes.txt' overwrite into table pokes;
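For the load variant, d:/pokes.txt is a plain comma-delimited text file. A hypothetical sample matching the pokes schema:

1,AA
2,BB
3,CC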



Then run in Hive:

insert overwrite table iteblog select * from pokes;

View the rows of the iteblog table in the hbase shell:
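scan 'iteblog'

Each row copied from pokes should appear with its key as the HBase row key and its value stored in cf1:val.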


This shows that inserting records through the HBase + Hive integration works.

Now run one more insert from Hive:

insert into iteblog(key,value) values (99,'KK');
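Confirm the new row from the hbase shell; with the default (non-binary) mapping the int key is stored as the string '99':

get 'iteblog', '99'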


At this point, creating an HBase table from Hive is complete!

Mapping an existing HBase table in Hive

The local development HBase instance already contains a table named info_user.
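If you need to recreate it for testing, a sketch for the hbase shell (the single column family id follows from the column mapping used below; the sample values are made up):

create 'info_user', 'id'
put 'info_user', '1', 'id:id', '1'
put 'info_user', '1', 'id:name', 'Tom'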



Hive does not have this table yet.


The info_user table in HBase has two fields, id and name. The CREATE EXTERNAL TABLE statement to map it in Hive:

CREATE EXTERNAL TABLE info_user(rowid string, id string, name string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,id:id,id:name")
TBLPROPERTIES("hbase.table.name" = "info_user", "hbase.mapred.output.outputtable" = "info_user");
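After that, the HBase data is queryable straight from Hive, and because the table is EXTERNAL, dropping it in Hive leaves the underlying HBase table intact:

select * from info_user;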




