HQL Operations: DQL Commands
-- Test data: /home/hadoop/data/emp.dat
7369,SMITH,CLERK,7902,2010-12-17,800,,20
7499,ALLEN,SALESMAN,7698,2011-02-20,1600,300,30
7521,WARD,SALESMAN,7698,2011-02-22,1250,500,30
7566,JONES,MANAGER,7839,2011-04-02,2975,,20
7654,MARTIN,SALESMAN,7698,2011-09-28,1250,1400,30
7698,BLAKE,MANAGER,7839,2011-05-01,2850,,30
7782,CLARK,MANAGER,7839,2011-06-09,2450,,10
7788,SCOTT,ANALYST,7566,2017-07-13,3000,,20
7839,KING,PRESIDENT,,2011-11-07,5000,,10
7844,TURNER,SALESMAN,7698,2011-09-08,1500,0,30
7876,ADAMS,CLERK,7788,2017-07-13,1100,,20
7900,JAMES,CLERK,7698,2011-12-03,950,,30
7902,FORD,ANALYST,7566,2011-12-03,3000,,20
7934,MILLER,CLERK,7782,2012-01-23,1300,,10
-- Create the table and load the data
CREATE TABLE emp (
empno int,
ename string,
job string,
mgr int,
hiredate DATE,
sal int,
comm int,
deptno int
)row format delimited fields terminated by ",";
-- Load the data
LOAD DATA LOCAL INPATH '/home/hadoop/data/emp.dat' INTO TABLE emp;
Basic Queries
-- A query may omit the FROM clause
select 8*888;
select current_date ;
-- Using column aliases
select 8*888 product;
select current_date as currdate;
-- Full-table query
select * from emp;
-- Select specific columns
select ename, sal, comm from emp;
-- Using functions
select count(*) from emp;
-- count(colname) counts by column and does not count NULL values
select sum(sal) from emp;
select max(sal) from emp;
select min(sal) from emp;
select avg(sal) from emp;
-- Use the LIMIT clause to restrict the number of rows returned
select * from emp limit 3;
WHERE Clause
Column aliases cannot be used in the WHERE clause.
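Because the alias is resolved after WHERE is evaluated, the expression has to be repeated. A small sketch against the emp table above (ann_sal is just an illustrative alias):

```sql
-- Fails: WHERE cannot see the alias defined in SELECT
-- select ename, sal*12 ann_sal from emp where ann_sal > 20000;
-- Works: repeat the expression in the WHERE clause
select ename, sal*12 ann_sal from emp where sal*12 > 20000;
```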

[Figure: comparison operators (比較運算符.png)]
Note: NULL normally does not participate in operations; when it does, the result is NULL. NULL<=>NULL evaluates to true.
Logical Operators
-- Comparison operators when NULL participates
select null=null; -- NULL
select null==null; -- NULL
select null<=>null;--true
select null is null;--true
-- Use IS NULL to test for null
select * from emp where comm is null;
-- Using IN
select * from emp where deptno in (20, 30);
-- BETWEEN ... AND ... is inclusive on both ends
select * from emp where sal between 1000 and 2000;
-- Using LIKE
select ename, sal from emp where ename like '%L%';
-- Using RLIKE (regular expressions): names beginning with A or S
select ename, sal from emp where ename rlike '^(A|S).*';
Group by
Grouped aggregation works the same as in MySQL.
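For example, grouping against the emp table above (WHERE filters rows before aggregation, HAVING filters groups after it):

```sql
-- Average salary per department
select deptno, avg(sal) avg_sal from emp group by deptno;
-- Keep only departments whose average salary exceeds 2000
select deptno, avg(sal) avg_sal from emp group by deptno having avg(sal) > 2000;
```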
Table Joins
Hive supports only equi-joins; non-equi joins are not supported.
- Inner join
- Outer join
- Left outer join
- Right outer join
- Full outer join

[Figure: join types (表鏈接.png)]
Hive executes joins from left to right and launches one MapReduce job for each pair of JOIN objects.
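A sketch of an equi-join, assuming a hypothetical dept table with columns (deptno, dname) that is not created above:

```sql
-- Inner equi-join on deptno (dept is an assumed dimension table)
select e.ename, e.sal, d.dname
from emp e join dept d on e.deptno = d.deptno;
-- Left outer join: keeps every emp row, with NULL dname when no dept matches
select e.ename, d.dname
from emp e left join dept d on e.deptno = d.deptno;
```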
Cartesian product: by default, Hive does not allow Cartesian product queries;
-- Check the default value
set hive.strict.checks.cartesian.product;
-- > hive.strict.checks.cartesian.product=true
set hive.strict.checks.cartesian.product=false;
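With the strict check disabled, a Cartesian product can be written as a join without a condition — a sketch against the 14-row emp table:

```sql
set hive.strict.checks.cartesian.product=false;
-- Every emp row paired with every emp row: 14 * 14 = 196 rows
select count(*) from emp a cross join emp b;
```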
Sorting Clauses
order by: globally ordered
ORDER BY performs a global sort and uses only one reducer.
The sort column must appear in the SELECT clause.
For large data sets, ORDER BY is inefficient.
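For example, a global sort over emp (one reducer is used regardless of mapreduce.job.reduces):

```sql
-- All employees, highest salary first
select ename, sal from emp order by sal desc;
-- Multiple sort keys: department ascending, then salary descending
select * from emp order by deptno, sal desc;
```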
sort by: locally ordered
SORT BY produces one sorted file per reducer; the data is ordered within each reducer, giving a locally ordered result.
-- Set the number of reducers
set mapreduce.job.reduces=2;
-- List employees by salary in descending order
select * from emp sort by sal desc;
-- Write the query result to files (descending by salary). Two output files are produced, each internally sorted by salary in descending order
insert overwrite local directory '/mnt/hadoop/output/sortsal' select * from emp sort by sal desc;
distribute by: partitioned sorting
- distribute by sends specific rows to specific reducers, which helps subsequent aggregation and sorting
- distribute by is similar to the MR partitioning step; combined with sort by, it makes the data within each partition ordered
- distribute by must be written before sort by
-- Start 2 reducer tasks; partition by deptno first, then sort by sal+comm within each partition
set mapreduce.job.reduces=2;
-- Write the result to files and inspect the output
insert overwrite local directory '/mnt/hadoop/output/distBy' select empno, ename, job, deptno, sal + nvl(comm, 0) salcomm
from emp
distribute by deptno
sort by salcomm desc;
-- In the example above, the data all landed in the same partition, so the partitioning is not visible
-- Distribute the data into 3 partitions so that every partition has data
set mapreduce.job.reduces=3;
insert overwrite local directory '/mnt/hadoop/output/distBy1' select empno, ename, job, deptno, sal + nvl(comm, 0) salcomm
from emp
distribute by deptno
sort by salcomm desc;
Cluster by
When DISTRIBUTE BY and SORT BY use the same column, CLUSTER BY can be used instead. CLUSTER BY sorts in ascending order only; a sort direction cannot be specified.
select * from emp distribute by deptno sort by deptno;
select * from emp cluster by deptno;
Sorting summary:
- order by: global sort
- sort by: locally ordered (within each reducer)
- distribute by: partitions rows by the given expression; often combined with sort by to make each partition's data ordered
- cluster by: when distribute by and sort by use the same column, cluster by is a shorthand