Software environment:
Linux: CentOS 6.7
Hadoop version: 2.6.5
ZooKeeper version: 3.4.8
<br/>
Host configuration:
Three machines in total, m1, m2, and m3; the username on every host is centos
192.168.179.201: m1
192.168.179.202: m2
192.168.179.203: m3
m1: Zookeeper, Namenode, DataNode, ResourceManager, NodeManager, Master, Worker
m2: Zookeeper, Namenode, DataNode, ResourceManager, NodeManager, Worker
m3: Zookeeper, DataNode, NodeManager, Worker
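The hostname mappings above should be present in /etc/hosts on every node; a sketch of the expected entries (keep the default localhost line):
127.0.0.1 localhost
192.168.179.201 m1
192.168.179.202 m2
192.168.179.203 m3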
<br/>
Cluster setup:
I. Set up Hive with basic functionality (note: Hive only needs to be installed on one node)
1. Download the Hive 2.1.1 release package
http://www.apache.org/dyn/closer.cgi/hive/
2. Extract the package
tar -zxvf apache-hive-2.1.1-bin.tar.gz -C /home/centos/soft
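The environment variables below assume Hive lives at /home/centos/soft/hive, so rename the unpacked directory to match (apache-hive-2.1.1-bin is the default directory name inside the Apache tarball):
mv /home/centos/soft/apache-hive-2.1.1-bin /home/centos/soft/hive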
3. Configure environment variables
vi /etc/profile
# Hive
export HIVE_HOME=/home/centos/soft/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib
export PATH=$PATH:$HIVE_HOME/bin
source /etc/profile
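A quick sanity check that the variables took effect (hive --version only works once the tarball is in place under $HIVE_HOME):
echo $HIVE_HOME ## should print /home/centos/soft/hive
hive --version ## should report Hive 2.1.1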
4. Configure MySQL (note: switch to the root user)
- Remove the MySQL packages bundled with CentOS
rpm -qa | grep mysql
rpm -e mysql-libs-5.1.66-2.el6_3.i686 --nodeps
- Install the MySQL server
yum -y install mysql-server
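The server must be running before it can be initialized; on CentOS 6 (SysV init) start it and enable it at boot:
service mysqld start
chkconfig mysqld on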
- Initialize MySQL
(1) Set the MySQL root password (run with root privileges)
cd /usr/bin
./mysql_secure_installation
(2) Enter the current password of the MySQL root user; initially root has no password, so just press Enter
Enter current password for root (enter for none):
(3) Set the password for the MySQL root user (it must match the Hive configuration below; here it is set to 123)
Set root password? [Y/n] Y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
(4) Remove anonymous users
Remove anonymous users? [Y/n] Y
... Success!
(5) Disallow remote root login? Choose N so that remote connections stay allowed
Disallow root login remotely? [Y/n] N
... Success!
(6) Remove the test database
Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
(7) Reload the privilege tables
Reload privilege tables now? [Y/n] Y
... Success!
(8) Done
All done! If you've completed all of the above steps, your MySQL
installation should now be secure.
Thanks for using MySQL!
(9) Log in to MySQL
mysql -uroot -p
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123' WITH GRANT OPTION;
FLUSH PRIVILEGES;
exit;
At this point the MySQL configuration is complete
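To confirm the grant works, you can log in from another node (this assumes the mysql client is installed there):
mysql -h m1 -uroot -p123 -e "SELECT user, host FROM mysql.user;"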
<br/>
5. Configure Hive
1. Copy the hive-env.sh.template file to hive-env.sh, then edit hive-env.sh
JAVA_HOME=/home/centos/soft/jdk
HADOOP_HOME=/home/centos/soft/hadoop
HIVE_HOME=/home/centos/soft/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=$SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.6.0.jar
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$HADOOP_HOME/lib:$HIVE_HOME/lib
export HADOOP_OPTS="-Dorg.xerial.snappy.tempdir=/tmp -Dorg.xerial.snappy.lib.name=libsnappyjava.jnilib $HADOOP_OPTS"
2. Configure the hive-site.xml file: copy hive-default.xml.template to hive-site.xml, then edit hive-site.xml (delete all of its contents, leaving only an empty <configuration></configuration> element)
Configuration reference:
hive.server2.thrift.port – TCP listening port; defaults to 10000.
hive.server2.thrift.bind.host – host the TCP service binds to; defaults to localhost.
hive.server2.thrift.min.worker.threads – minimum number of worker threads; defaults to 5.
hive.server2.thrift.max.worker.threads – maximum number of worker threads; defaults to 500.
hive.server2.transport.mode – defaults to binary (TCP); the alternative value is http.
hive.server2.thrift.http.port – HTTP listening port; defaults to 10001.
hive.server2.thrift.http.path – service endpoint name; defaults to cliservice.
hive.server2.thrift.http.min.worker.threads – minimum worker threads in the server pool; defaults to 5.
hive.server2.thrift.http.max.worker.threads – maximum worker threads in the server pool; defaults to 500.
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://m1:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateColumns</name>
<value>true</value>
</property>
<!-- Location of the Hive warehouse on HDFS -->
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/hive</value>
<description>location of default database for the warehouse</description>
</property>
<!-- Local directory for temporary resource files -->
<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/centos/soft/hive/tmp/resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<!-- Before version 0.9, Hive required hive.exec.dynamic.partition to be set to true explicitly; since 0.9 it defaults to true -->
<property>
<name>hive.exec.dynamic.partition</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<!-- Change the log locations -->
<property>
<name>hive.exec.local.scratchdir</name>
<value>/home/centos/soft/hive/tmp/HiveJobsLog</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/home/centos/soft/hive/tmp/ResourcesLog</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.querylog.location</name>
<value>/home/centos/soft/hive/tmp/HiveRunLog</value>
<description>Location of Hive run time structured log file</description>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/home/centos/soft/hive/tmp/OpertitionLog</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<!-- Configure the HWI (Hive Web Interface) -->
<property>
<name>hive.hwi.war.file</name>
<value>/home/centos/soft/hive/lib/hive-hwi-2.1.1.jar</value>
<description>This sets the path to the HWI war file, relative to ${HIVE_HOME}. </description>
</property>
<property>
<name>hive.hwi.listen.host</name>
<value>m1</value>
<description>This is the host address the Hive Web Interface will listen on</description>
</property>
<property>
<name>hive.hwi.listen.port</name>
<value>9999</value>
<description>This is the port the Hive Web Interface will listen on</description>
</property>
<!-- HiveServer2 no longer needs the hive.metastore.local option: if hive.metastore.uris is empty the metastore is local, otherwise it is remote. For a remote metastore, just set hive.metastore.uris -->
<!-- property>
<name>hive.metastore.uris</name>
<value>thrift://m1:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property -->
<property>
<name>hive.server2.thrift.bind.host</name>
<value>m1</value>
</property>
<property>
<name>hive.server2.thrift.port</name>
<value>10000</value>
</property>
<property>
<name>hive.server2.thrift.http.port</name>
<value>10001</value>
</property>
<property>
<name>hive.server2.thrift.http.path</name>
<value>cliservice</value>
</property>
<!-- HiveServer2的WEB UI -->
<property>
<name>hive.server2.webui.host</name>
<value>m1</value>
</property>
<property>
<name>hive.server2.webui.port</name>
<value>10002</value>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>755</value>
</property>
<!-- If the jar referenced by hive.aux.jars.path below is a local file, remember to prefix the path with file://, otherwise the jar will not be found and you will get an org.apache.hadoop.hive.contrib.serde2.RegexSerDe error -->
<property>
<name>hive.aux.jars.path</name>
<value>file:///home/centos/soft/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar</value>
</property>
<property>
<name>hive.server2.enable.doAs</name>
<value>false</value>
</property>
<!-- property>
<name>hive.server2.authentication</name>
<value>NOSASL</value>
</property -->
<property>
<name>hive.auto.convert.join</name>
<value>false</value>
</property>
<property>
<name>spark.dynamicAllocation.enabled</name>
<value>true</value>
<description>Enable dynamic resource allocation</description>
</property>
<!-- When using Hive on Spark, omitting the following setting can lead to out-of-memory (PermGen) errors -->
<property>
<name>spark.driver.extraJavaOptions</name>
<value>-XX:PermSize=128M -XX:MaxPermSize=512M</value>
</property>
</configuration>
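The local scratch and log directories referenced in hive-site.xml are not always created automatically; creating them up front avoids startup errors (paths taken verbatim from the configuration above):
mkdir -p /home/centos/soft/hive/tmp/{resources,HiveJobsLog,ResourcesLog,HiveRunLog,OpertitionLog}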
3. Configure the log location by editing the Log4j properties file (in Hive 2.x the template is named hive-log4j2.properties.template; older releases use hive-log4j.properties)
cp hive-log4j2.properties.template hive-log4j2.properties
vi hive-log4j2.properties
property.hive.log.dir=/home/centos/soft/hive/tmp ## move the hive.log file into the ${HIVE_HOME}/tmp directory
mkdir ${HIVE_HOME}/tmp
4. Configure the $HIVE_HOME/bin/hive-config.sh file
## Add the following three lines
export JAVA_HOME=/home/centos/soft/jdk
export HIVE_HOME=/home/centos/soft/hive
export HADOOP_HOME=/home/centos/soft/hadoop
## And modify the following line
HIVE_CONF_DIR=$HIVE_HOME/conf
<br/>
6. Put the MySQL JDBC driver jar into the $HIVE_HOME/lib directory
cp /home/centos/soft/tar.gz/mysql-connector-java-5.1.6-bin.jar /home/centos/soft/hive/lib/
<br/>
7. Copy the jline jar
Copy jline-2.12.jar from the $HIVE_HOME/lib directory into $HADOOP_HOME/share/hadoop/yarn/lib, and delete the older jline jar from $HADOOP_HOME/share/hadoop/yarn/lib (otherwise the Hive CLI fails to start because of the jline version conflict); see the commands below
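For example (jline-0.9.94.jar is the version typically shipped with Hadoop 2.6.x; check what is actually present in the directory before deleting):
cp $HIVE_HOME/lib/jline-2.12.jar $HADOOP_HOME/share/hadoop/yarn/lib/
rm $HADOOP_HOME/share/hadoop/yarn/lib/jline-0.9.94.jar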
<br/>
8. Copy tools.jar from the $JAVA_HOME/lib directory into $HIVE_HOME/lib
cp $JAVA_HOME/lib/tools.jar ${HIVE_HOME}/lib
<br/>
9. Initialize Hive
Choose either MySQL or Derby as the metastore database
Note: first check MySQL for leftover Hive metadata from a previous installation; if any exists, delete it before initializing
schematool -dbType mysql -initSchema ## MySQL as the metastore database
Here mysql means MySQL is used to store the Hive metadata; to use Derby as the metastore database instead, run
schematool -dbType derby -initSchema ## Derby as the metastore database
The script hive-schema-2.1.0.mysql.sql creates the initial tables in the configured metastore database
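To confirm the initialization succeeded, you can query the schema version (the -info option is available in Hive 2.x):
schematool -dbType mysql -info ## should report the metastore schema version, e.g. 2.1.0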
<br/>
10. Start the metastore service
Before running Hive, the metastore service must be started, otherwise Hive will fail with an error
./hive --service metastore
Then open another terminal window and start the Hive process there
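To keep the services alive after the terminal closes, a common pattern is to run them under nohup (the log paths here are just a suggestion):
nohup hive --service metastore > /home/centos/soft/hive/tmp/metastore.log 2>&1 &
nohup hive --service hiveserver2 > /home/centos/soft/hive/tmp/hiveserver2.log 2>&1 &
With HiveServer2 up, you can also connect via Beeline using the host and port configured in hive-site.xml above:
beeline -u jdbc:hive2://m1:10000 -n centos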
<br/>
11. Test
hive
show databases;
show tables;
create table book (id bigint, name string) row format delimited fields terminated by '\t';
select * from book;
select count(*) from book;
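To push a few rows through the table, load a tab-separated file and query it (the file path and contents below are made up for illustration):
echo -e "1\tHadoop in Action\n2\tProgramming Hive" > /home/centos/book.txt
hive -e "LOAD DATA LOCAL INPATH '/home/centos/book.txt' INTO TABLE book;"
hive -e "SELECT * FROM book;" ## should print the two rows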