Before downloading, check which Hadoop versions are compatible, and install Hadoop first (see the Hadoop installation reference).
Since Hadoop 2.7.3 is installed here, Hive 2.0.1 was downloaded directly.
Download link
Note: since Hive runs on top of Hadoop, each Hive release works with multiple Hadoop versions. In general, Hive supports both newer and older Hadoop versions.
1. Unpack the archive; the Hive package then sits under /opt/apache-hive-2.0.1-bin.
- [root@hadoop001 opt]# tar -zxvf apache-hive-2.0.1-bin.tar.gz
2. Grant ownership of the installation directory to the hadoop user
- [root@hadoop001 opt]# chown hadoop:hadoop -R apache-hive-2.0.1-bin/
3. Switch back to the hadoop user and add the Hive environment variables
- [hadoop@hadoop001 ~]$ vim ~/.bash_profile
Add the Hive path to the file.
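The exact lines were omitted in the original; a minimal sketch, assuming Hive was unpacked to /opt/apache-hive-2.0.1-bin (this bin directory also shows up in the PATH printed when Hive starts in step 8):
- export HIVE_HOME=/opt/apache-hive-2.0.1-bin
- export PATH=$PATH:$HIVE_HOME/bin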
Reload the profile so the variables take effect
- [hadoop@hadoop001 ~]$ source ~/.bash_profile
4. Hive metadata
Hive can store its metadata in three ways: in the embedded Derby database, in a local database such as MySQL, or via a remote metastore service.
Here we use the second option and set up a local MySQL database as the metastore.
First, install MySQL
- [root@hadoop001 ~]# yum -y install mysql-server
After installation, configure MySQL to start at boot
- [root@hadoop001 hadoop]# chkconfig mysqld on
Start MySQL
- [root@hadoop001 hadoop]# service mysqld start
Since this is a fresh installation, initialize the root password first
- [root@hadoop001 hadoop]# mysqladmin -u root password 'hive'
Then log in as root, entering the password hive when prompted
- [root@hadoop001 hadoop]# mysql -uroot -p
Create a hive user with password hive, and create the hive metastore database
- mysql> insert into mysql.user(Host, User, Password) values("localhost", "hive", password("hive"));
- Query OK, 1 row affected, 3 warnings (0.00 sec)
- mysql> create database hive;
- Query OK, 1 row affected (0.00 sec)
- mysql> grant all on hive.* to hive@'%' identified by 'hive';
- Query OK, 0 rows affected (0.00 sec)
- mysql> grant all on hive.* to hive@'localhost' identified by 'hive';
- Query OK, 0 rows affected (0.00 sec)
- mysql> grant all on hive.* to hive@'hadoop001' identified by 'hive';
- Query OK, 0 rows affected (0.00 sec)
- mysql> flush privileges;
- Query OK, 0 rows affected (0.00 sec)
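As an optional check (not in the original post), the new account's host entries can be listed to confirm the grants took effect:
- mysql> select Host, User from mysql.user where User = 'hive';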
5. Modify the Hive configuration file
Create a temporary directory for Hive and grant it entirely to the hadoop user
- [root@hadoop001 hive]# mkdir -p /tmp/hive/iotmp
- [root@hadoop001 hive]# chown hadoop:hadoop -R /tmp/hive/
Then generate hive-site.xml from the bundled template
- [root@hadoop001 hive]# cp /opt/apache-hive-2.0.1-bin/conf/hive-default.xml.template /opt/apache-hive-2.0.1-bin/conf/hive-site.xml
The following properties need to be modified
- <property>
-   <name>javax.jdo.option.ConnectionURL</name>
-   <value>jdbc:mysql://hadoop001:3306/hive</value>
-   <description>JDBC connect string for a JDBC metastore</description>
- </property>
- <property>
-   <name>javax.jdo.option.ConnectionDriverName</name>
-   <value>com.mysql.jdbc.Driver</value>
-   <description>Driver class name for a JDBC metastore</description>
- </property>
- <property>
-   <name>javax.jdo.option.ConnectionPassword</name>
-   <value>hive</value>
- </property>
- <property>
-   <name>hive.hwi.listen.port</name>
-   <value>9999</value>
-   <description>This is the port the Hive Web Interface will listen on; 9999 is the default (3306 would collide with MySQL)</description>
- </property>
- <property>
-   <name>datanucleus.schema.autoCreateAll</name>
-   <value>true</value>
-   <description>Creates the necessary schema on startup if one doesn't exist. Set this to false after creating it once.</description>
- </property>
- <property>
-   <name>javax.jdo.option.ConnectionUserName</name>
-   <value>hive</value>
-   <description>Username to use against metastore database</description>
- </property>
- <property>
-   <name>hive.exec.local.scratchdir</name>
-   <value>/tmp/hive/iotmp</value>
-   <description>Local scratch space for Hive jobs</description>
- </property>
- <property>
-   <name>hive.downloaded.resources.dir</name>
-   <value>/tmp/hive/iotmp</value>
-   <description>Temporary local directory for added resources in the remote file system.</description>
- </property>
- <property>
-   <name>hive.querylog.location</name>
-   <value>/tmp/hive/iotmp</value>
-   <description>Location of Hive run time structured log file</description>
- </property>
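Instead of relying on datanucleus.schema.autoCreateAll, Hive 2.x also ships a schematool that creates the metastore schema explicitly. A possible one-time invocation, run as the hadoop user once the JDBC driver from step 6 is in place:
- [hadoop@hadoop001 ~]$ schematool -dbType mysql -initSchema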
6. Configure the MySQL JDBC driver
Download the MySQL JDBC driver package and move it into $HIVE_HOME/lib
- [root@hadoop001 lib]# mv /opt/soft/mysql-connector-java-5.1.17.jar /opt/apache-hive-2.0.1-bin/lib/
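A quick check (not in the original post) that the driver jar landed in the right place:
- [root@hadoop001 lib]# ls /opt/apache-hive-2.0.1-bin/lib/ | grep mysql
- mysql-connector-java-5.1.17.jar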
7. Start Hadoop
- start-dfs.sh
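HDFS must be up before Hive starts. As a quick verification (not in the original post), jps should list the NameNode and DataNode processes; the exact list depends on the cluster layout:
- [hadoop@hadoop001 ~]$ jps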
8. Start Hive and create a test table
- [hadoop@hadoop001 conf]$ hive
- which: no hbase in (/usr/java/jdk1.8.0_40//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin:/opt/hadoop-2.7.3/bin:/opt/hadoop-2.7.3/sbin:JAVA_HOME/bin:/opt/apache-hive-2.0.1-bin/bin)
- SLF4J: Class path contains multiple SLF4J bindings.
- SLF4J: Found binding in [jar:file:/opt/apache-hive-2.0.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
- SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
- SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
- SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
- Logging initialized using configuration in jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-common-2.0.1.jar!/hive-log4j2.properties
- Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
- hive> show databases;
- OK
- default
- Time taken: 1.079 seconds, Fetched: 1 row(s)
- hive> create table test(x int);
- OK
- Time taken: 0.56 seconds
- hive> show tables;
- OK
- test
- Time taken: 0.075 seconds, Fetched: 1 row(s)
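As an optional sanity check (not part of the original run), a row can be inserted and read back; Hive 2.0 accepts a plain INSERT ... VALUES on a regular table, though it launches a full job to do so:
- hive> insert into test values (1);
- hive> select * from test;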
9. View the metadata of the new test table in MySQL
- [root@hadoop001 apache-hive-2.0.1-bin]# mysql -u root -p
- mysql> use hive;
- mysql> show tables;
- +---------------------------+
- | Tables_in_hive |
- +---------------------------+
- | BUCKETING_COLS |
- | CDS |
- | COLUMNS_V2 |
- | DATABASE_PARAMS |
- | DBS |
- | FUNCS |
- | FUNC_RU |
- | GLOBAL_PRIVS |
- | PARTITIONS |
- | PARTITION_KEYS |
- | PARTITION_KEY_VALS |
- | PARTITION_PARAMS |
- | PART_COL_STATS |
- | ROLES |
- | SDS |
- | SD_PARAMS |
- | SEQUENCE_TABLE |
- | SERDES |
- | SERDE_PARAMS |
- | SKEWED_COL_NAMES |
- | SKEWED_COL_VALUE_LOC_MAP |
- | SKEWED_STRING_LIST |
- | SKEWED_STRING_LIST_VALUES |
- | SKEWED_VALUES |
- | SORT_COLS |
- | TABLE_PARAMS |
- | TAB_COL_STATS |
- | TBLS |
- | TBL_PRIVS |
- | VERSION |
- +---------------------------+
- 30 rows in set (0.00 sec)
Query the TBLS table to see the record for the newly created test table.
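For example (column names taken from the standard Hive metastore schema):
- mysql> select TBL_ID, DB_ID, TBL_NAME, TBL_TYPE from TBLS;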
At this point, the Hive installation is complete.
Source: http://www.cnblogs.com/yongjian/p/6607984.html