I. Start the Hive client
    hive
II. Create the tables
At the hive prompt:
CREATE TABLE IF NOT EXISTS `test_01`(
  `id` INT, `name` STRING, `age` INT, `score` FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
CREATE EXTERNAL TABLE IF NOT EXISTS `test_02`(
  `id` INT, `name` STRING, `age` INT, `score` FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
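To confirm which kind of table each statement produced, `DESCRIBE FORMATTED` reports the table type in its output ("Table Type" row); a quick check at the hive prompt:

```sql
-- test_01 should report MANAGED_TABLE (the default),
-- test_02 should report EXTERNAL_TABLE.
DESCRIBE FORMATTED test_01;
DESCRIBE FORMATTED test_02;
```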
III. Prepare the data
In an Ubuntu terminal:
vi /home/hadoop/share/mydata/hive/score.txt
Contents:
1,'zhang',20,120
2,'zhao',19,119
3,'qian',18,118
4,'li',21,121
vi /home/hadoop/share/mydata/hive/score02.txt
Contents:
5,'wang',20,120
6,'zhou',19,119
7,'wu',18,118
8,'hu',21,121
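As an alternative to editing in vi, both files can be written non-interactively with a heredoc; the /tmp directory below is a stand-in for the lab's /home/hadoop/share/mydata/hive path:

```shell
# Stand-in directory; the lab uses /home/hadoop/share/mydata/hive
DIR=/tmp/hive-demo
mkdir -p "$DIR"

# Write score.txt (four comma-delimited rows)
cat > "$DIR/score.txt" <<'EOF'
1,'zhang',20,120
2,'zhao',19,119
3,'qian',18,118
4,'li',21,121
EOF

# Write score02.txt
cat > "$DIR/score02.txt" <<'EOF'
5,'wang',20,120
6,'zhou',19,119
7,'wu',18,118
8,'hu',21,121
EOF

wc -l "$DIR"/score*.txt
```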
IV. Load the data
1. At the hive prompt:
load data local inpath '/home/hadoop/share/mydata/hive/score.txt' overwrite into table test_01;
load data local inpath '/home/hadoop/share/mydata/hive/score.txt' overwrite into table test_02;
select * from test_01;
select * from test_02;

2. In an Ubuntu terminal:
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_01
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02
hadoop fs -cat /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_01/score.txt
hadoop fs -cat /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02/score.txt

V. Drop the tables
1. At the hive prompt:
drop table test_01;
drop table test_02;
2. In an Ubuntu terminal:
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db

VI. Recreate the tables
At the hive prompt:
CREATE TABLE IF NOT EXISTS `test_01`(
  `id` INT, `name` STRING, `age` INT, `score` FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
CREATE EXTERNAL TABLE IF NOT EXISTS `test_02`(
  `id` INT, `name` STRING, `age` INT, `score` FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;
select * from test_01;  -- empty: the managed table's data was deleted with the table
select * from test_02;  -- still returns the old rows: the external table's files survived the drop

VII. Reload the data
1. At the hive prompt:
load data local inpath '/home/hadoop/share/mydata/hive/score02.txt' overwrite into table test_01;
load data local inpath '/home/hadoop/share/mydata/hive/score02.txt' overwrite into table test_02;
select * from test_01;
select * from test_02;

2. In an Ubuntu terminal:
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_01
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02
hadoop fs -cat /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02/*

VIII. Load more data
1. At the hive prompt
Note that OVERWRITE is NOT used this time:
load data local inpath '/home/hadoop/share/mydata/hive/score02.txt' into table test_02;
2. In an Ubuntu terminal:
hadoop fs -cat /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02/*
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02

3. At the hive prompt
Note that OVERWRITE IS used this time:
load data local inpath '/home/hadoop/share/mydata/hive/score02.txt' overwrite into table test_02;
select * from test_02;

4. In an Ubuntu terminal:
hadoop fs -ls /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02
hadoop fs -cat /mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02/*

IX. Conclusions
Unless a type is specified, Hive creates a managed (internal) table by default; creating an external table requires the EXTERNAL keyword.
Dropping an external table deletes only the metadata; the stored data files are preserved. Dropping a managed table deletes both the metadata and the data.
With LOAD DATA, whether the table is managed or external, if the source file already lives on HDFS the operation is a move: the file is moved from its HDFS path into the table's directory under the Hive warehouse. (With the LOCAL keyword, as used above, the file is copied from the local filesystem instead.)
With LOAD DATA ... OVERWRITE, the files already in the table directory are cleared and replaced by the file being loaded. Without OVERWRITE, the new file is added alongside the existing ones; on a filename collision, Hive renames the incoming file to keep names unique (e.g. by appending a suffix such as _copy_1).
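A practical consequence of the external-table behavior: because the data files survive a DROP, the table can be re-attached by creating a new external table whose LOCATION points at the surviving directory. A sketch, assuming the warehouse path used in this lab (the table name test_02_restored is hypothetical):

```sql
-- Re-attach surviving data to a new external table; no LOAD DATA needed.
CREATE EXTERNAL TABLE IF NOT EXISTS `test_02_restored`(
  `id` INT, `name` STRING, `age` INT, `score` FLOAT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE
LOCATION '/mylab/soft/apache-hive-3.1.2-bin/working/metastore.warehouse/testdb.db/test_02';

-- The old rows are visible immediately.
SELECT * FROM test_02_restored;
```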
X. References
    https://blog.csdn.net/henrrywan/article/details/90612741