Installing Elasticsearch 6.5.4 with Docker and syncing MySQL data to Elasticsearch with Logstash

  • Pull the image
docker pull elasticsearch:6.5.4 

6.5.4: Pulling from library/elasticsearch
a02a4930cb5d: Downloading [===================>                               ]     30MB/75.17MB
dd8a94cca3f9: Downloading [=>                                                 ]  6.421MB/188.1MB
bd73f551dee4: Download complete 
70de352c4efc: Downloading [===================>                               ]  2.637MB/6.859MB
0b5ae4c7310f: Waiting 
489d9f8b18f1: Waiting 
8ba96caf5951: Waiting 
f1df04f27c5f: Waiting 
  • List images
docker images

REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
elasticsearch                  6.5.4               93109ce1d590        5 weeks ago         774MB
  • Start a container
    elasticsearch/jvm.options defaults to -Xms2g -Xmx2g. My machine has only 1 GB of RAM, so I need to set -Xms/-Xmx myself;
    with enough memory you can keep the defaults. Starting the container:
docker run -d --name elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.5.4 

d2953375ec7ea5eef1f84d9d39f3f0678a17274d7698716456034c1563aab864

With little memory (1 GB in my case), set -Xms and -Xmx explicitly:

docker run -d --name elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" -e "discovery.type=single-node" elasticsearch:6.5.4                    
ed40afba226b0ca3a148f41d142d195529b902726b0019742a83a8d595ed5583

Port 9300: transport, used for communication between ES nodes
Port 9200: HTTP, used for communication between ES and external clients

  • Check the running containers
docker ps 

CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS                      PORTS                                                                                        NAMES
d2953375ec7e        elasticsearch:6.5.4            "/usr/local/bin/dock…"   37 seconds ago      Exited (1) 36 seconds ago                                                                                                elasticsearch
 curl -v 127.0.0.1:9200 

* Rebuilt URL to: 127.0.0.1:9200/
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9200 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:9200
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 494
< 
{
  "name" : "JFvwCOs",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "gFw-ERtCRs-5vc-zEMBbIg",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
* Connection #0 to host 127.0.0.1 left intact
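The JSON body returned above can also be checked programmatically. A minimal Python sketch; the fields below are copied from the curl response above, trimmed for brevity:

```python
import json

# Root-endpoint response body, abridged from the curl output above
response_body = """
{
  "name" : "JFvwCOs",
  "cluster_name" : "docker-cluster",
  "version" : { "number" : "6.5.4", "lucene_version" : "7.5.0" },
  "tagline" : "You Know, for Search"
}
"""

info = json.loads(response_body)
assert info["version"]["number"] == "6.5.4"      # the version we pulled
assert info["cluster_name"] == "docker-cluster"  # default docker cluster name
```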
  • Install the head plugin
    The simplest route is the elasticsearch-head-chrome extension, available in the Chrome Web Store.

Below is the Docker route:

docker pull mobz/elasticsearch-head:5

* Pulling from mobz/elasticsearch-head
75a822cd7888: Pulling fs layer 
57de64c72267: Pulling fs layer 
4306be1e8943: Pulling fs layer 
871436ab7225: Waiting 
0110c26a367a: Waiting 
1f04fe713f1b: Waiting 
723bac39028e: Waiting 
7d8cb47f1c60: Waiting 
7328dcf65c42: Waiting 
b451f2ccfb9a: Waiting 
304d5c28a4cf: Waiting 
4cf804850db1: Waiting 

Start head:

docker run -d -p 9100:9100 --name elasticsearch-head mobz/elasticsearch-head:5
a31c966d1eec8c83fceefd0515df2f9e91986f08315d0a0d07b9ae261086d7d4
  • Then open 127.0.0.1:9100 in a browser


    [screenshot: the elasticsearch-head page]

    If this page appears, elasticsearch-head installed successfully.
    However, it shows "cluster health: not connected": head could not reach elasticsearch, so elasticsearch needs CORS enabled.

  • elasticsearch CORS configuration
    1. Enter the elasticsearch container
 docker exec -it 9d53699397a8 /bin/bash
[root@9d53699397a8 elasticsearch]# 

2. Install vim

[root@9d53699397a8 elasticsearch]# yum install -y vim

3. Edit /usr/share/elasticsearch/config/elasticsearch.yml

vim elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1


# CORS settings for the head plugin
http.cors.enabled: true
http.cors.allow-origin: "*"

4. Restart the container

 docker restart 9d53699397a8
[screenshot: elasticsearch-head now connected to the cluster]
  • Using Logstash to sync MySQL data to elasticsearch
    1. Download
 wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz

2. Extract

tar -zvxf logstash-6.5.4.tar.gz 

3. Adjust the JVM heap
jvm.options defaults to:
-Xms1g
-Xmx1g
My machine has little memory, so I lower it:

/opt/logstash-6.5.4/config# vim jvm.options 

-Xms512m
-Xmx512m

4. Run a quick test

 /opt/logstash-6.5.4/bin#./logstash -e 'input { stdin { } } output { stdout {} }'

5. Install the jdbc input and elasticsearch output plugins

/opt/logstash-6.5.4# bin/logstash-plugin install logstash-input-jdbc
Validating logstash-input-jdbc
Installing logstash-input-jdbc
Installation successful
/opt/logstash-6.5.4# bin/logstash-plugin install logstash-output-elasticsearch
Validating logstash-output-elasticsearch
Installing logstash-output-elasticsearch
Installation successful

6. Download mysql-connector-java
7. Write the config file sync_table.conf
Note: rows deleted from MySQL are not synced to ES; only inserts and updates are picked up.

/opt/logstash-6.5.4/config# vim sync_table.conf
  
input {
  jdbc {
    # MySQL JDBC connection settings
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=utf-8&useSSL=false"
    jdbc_user => "root"
    jdbc_password => "123456"

    # Path to the MySQL JDBC driver jar; if this path is wrong you will get
    # "com.mysql.cj.jdbc.Driver could not be loaded"
    jdbc_driver_library => "/opt/logstash-6.5.4/sync_config/mysql-connector-java-8.0.12.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_paging_enabled => true
    jdbc_page_size => "50000"

    jdbc_default_timezone => "Asia/Shanghai"

    # The SQL can live in a file (statement_filepath) or be written inline, as here.
    # To match camelCase entity fields, alias the columns in the SQL, e.g.:
    #   select d_name as dName, c_id as cId from area where update_time >= :sql_last_value order by update_time asc
    statement => "select * from area where update_time >= :sql_last_value order by update_time asc"
    # statement_filepath => "./config/jdbc.sql"

    # Cron-like schedule; this runs the sync once a minute
    # (fields: minute hour day-of-month month day-of-week)
    schedule => "* * * * *"
    #type => "jdbc"

    # Whether to record the last run; if true, the last value of the
    # tracking_column is saved to the file given by last_run_metadata_path
    #record_last_run => true

    # Whether to track a specific column's value. If record_last_run is true you
    # can name the column to track, and this must then be true; otherwise the
    # timestamp of the last run is tracked instead.
    use_column_value => true

    # Required when use_column_value is true: the database column to track.
    # It must be monotonically increasing, often the MySQL primary key.
    tracking_column => "update_time"

    tracking_column_type => "timestamp"

    last_run_metadata_path => "area_logstash_capital_bill_last_id"

    # Whether to clear the last_run_metadata_path record; if true, every run
    # starts from scratch and re-reads all rows
    clean_run => false

    # Whether to lowercase column names
    #lowercase_column_names => false
  }
}

filter {
  date {
    match => [ "update_time", "yyyy-MM-dd HH:mm:ss" ]
    timezone => "Asia/Shanghai"
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    # Index name (custom); comparable to a database. Matches indexName in the
    # entity annotation @Document(indexName = "sys_core", type = "area")
    index => "sys_core"
    # Index type, comparable to a table; matches type in @Document(...)
    document_type => "area"
    # The table needs an id column; its value becomes the document id
    document_id => "%{id}"
    template_overwrite => true
  }

  # Debug output; comment this out in production
  stdout {
      codec => json_lines
  }
}
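To make the incremental mechanism concrete, here is a rough Python sketch of the behavior that tracking_column, :sql_last_value and last_run_metadata_path combine to give. The rows and the sync_once helper are illustrative only, not Logstash's real implementation:

```python
# Illustrative model: each scheduled run selects rows whose tracked column is
# >= the saved :sql_last_value, emits them, and persists the new high-water mark.

rows = [
    {"id": 1, "name": "Beijing",  "update_time": "2019-01-23 22:36:24"},
    {"id": 2, "name": "Shanghai", "update_time": "2019-01-24 06:52:53"},
]

def sync_once(rows, last_value):
    """One scheduled run: pick rows with update_time >= :sql_last_value in
    order, then return the batch plus the value to persist for the next run
    (what last_run_metadata_path stores)."""
    batch = sorted((r for r in rows if r["update_time"] >= last_value),
                   key=lambda r: r["update_time"])
    new_last = batch[-1]["update_time"] if batch else last_value
    return batch, new_last

# First run (clean_run state): everything is picked up
batch, last_value = sync_once(rows, "1970-01-01 00:00:00")
assert len(batch) == 2

# A new row appears; the next run only scans from the saved high-water mark
rows.append({"id": 3, "name": "Shenzhen", "update_time": "2019-01-25 10:00:00"})
batch, last_value = sync_once(rows, last_value)
assert [r["id"] for r in batch] == [2, 3]
```

Because the statement uses >=, a row that shares the previous high-water mark (id 2 above) is re-emitted on the next run; document_id => "%{id}" makes this harmless, since the re-emitted row simply overwrites the same Elasticsearch document.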
8. Start
/opt/logstash-6.5.4# bin/logstash -f config/sync_table.conf

9. Syncing multiple tables
To sync three tables, tableA, tableB and tableC, create three config files (sync_tableA.conf, sync_tableB.conf, sync_tableC.conf),
changing only the SQL statement and the index name in each.
Once the files exist, register them in /opt/logstash-6.5.4/config/pipelines.yml:

- pipeline.id: table1
  path.config: "/opt/logstash-6.5.4/sync_config/sync_tableA.conf"
- pipeline.id: table2
  path.config: "/opt/logstash-6.5.4/sync_config/sync_tableB.conf"
- pipeline.id: table3
  path.config: "/opt/logstash-6.5.4/sync_config/sync_tableC.conf"

Then start:

/opt/logstash-6.5.4# bin/logstash

The data then syncs successfully:

[2019-01-24T22:40:00,333][INFO ][logstash.inputs.jdbc     ] (0.013511s) SELECT version()
[2019-01-24T22:40:00,340][INFO ][logstash.inputs.jdbc     ] (0.002856s) SELECT version()
[2019-01-24T22:40:00,349][INFO ][logstash.inputs.jdbc     ] (0.009841s) SELECT version()
[2019-01-24T22:40:00,408][INFO ][logstash.inputs.jdbc     ] (0.005667s) SELECT count(*) AS `count` FROM (select * from area where update_time >= '2019-01-23 22:36:24' order by update_time asc) AS `t1` LIMIT 1
[2019-01-24T22:40:00,410][INFO ][logstash.inputs.jdbc     ] (0.002467s) SELECT count(*) AS `count` FROM (select * from dictionaries where update_time >= '2019-01-24 06:52:53' order by update_time asc) AS `t1` LIMIT 1
[2019-01-24T22:41:00,361][INFO ][logstash.inputs.jdbc     ] (0.000663s) SELECT version()

10. Single-node setup (only one node): cluster status is yellow and shards are Unassigned


[screenshots: cluster health shows yellow; replica shards are Unassigned]

Why is the cluster yellow?
We are running a single elasticsearch node, and the default configuration gives each shard one replica. A replica can never be allocated on the same node as its primary, so the replica shards have nowhere to go and the cluster reports yellow. Adding a second node to the cluster is the proper fix; if you don't want to, you can delete the unassignable replica shards instead. That is not a good long-term solution, but it is acceptable for testing, so let's try it.
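To put numbers on it: Elasticsearch 6.x defaults to 5 primary shards and 1 replica per index, so one node leaves 5 replica copies unassigned. A toy model (my own simplification, which ignores all other allocation rules):

```python
def unassigned_replicas(primaries: int, replicas_per_primary: int, nodes: int) -> int:
    """Replica copies that cannot be placed, given that a replica may never
    share a node with its primary (simplified model for illustration)."""
    placeable_per_primary = min(replicas_per_primary, nodes - 1)
    return primaries * (replicas_per_primary - placeable_per_primary)

assert unassigned_replicas(5, 1, 1) == 5  # one node: all replicas unassigned -> yellow
assert unassigned_replicas(5, 1, 2) == 0  # second node: replicas placed -> green
assert unassigned_replicas(5, 0, 1) == 0  # replicas removed: nothing unassigned -> green
```

The last case is exactly what the number_of_replicas=0 request below does.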

Deleting the replica shards resolves it:

curl -H "Content-Type: application/json"   -X PUT http://localhost:9200/_settings -d  '{"number_of_replicas":0}'
{"acknowledged":true}

 curl -v http://localhost:9200/_cluster/health?pretty
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> GET /_cluster/health?pretty HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 470
< 
{
  "cluster_name" : "docker-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 10,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

[screenshot: cluster health now green]
  • Raising the maximum number of results Elasticsearch returns

To resolve this exception:

Caused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.

curl -H "Content-Type: application/json"   -X PUT http://localhost:9200/_settings -d  '{"max_result_window":2147483647}'

Notes:

1. size may not exceed the index.max_result_window setting, which defaults to 10,000.

2. For paged search, combine from and size: from is the starting offset and size is the number of documents to return; from defaults to 0 and size to 10.
For configuring this via the UI, see: https://blog.csdn.net/chenhq_/article/details/77507956
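For reference, the from/size arithmetic works out as follows; the helper below is my own sketch, not an Elasticsearch API:

```python
def page_to_from_size(page: int, page_size: int = 10, max_result_window: int = 10000):
    """Translate a 1-based page number into Elasticsearch from/size parameters,
    rejecting requests past index.max_result_window (default 10,000)."""
    start = (page - 1) * page_size
    if start + page_size > max_result_window:
        raise ValueError("from + size exceeds index.max_result_window; "
                         "use the scroll or search_after APIs instead")
    return {"from": start, "size": page_size}

assert page_to_from_size(1) == {"from": 0, "size": 10}
assert page_to_from_size(3, 20) == {"from": 40, "size": 20}
# page_to_from_size(1001) would raise: 10000 + 10 > 10000, the exact
# condition behind the "Result window is too large" error above
```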

© Copyright belongs to the author; for reprints or content cooperation, contact the author.
Platform note: the article content (including any images or videos) was uploaded and published by the author and represents only the author's own views; Jianshu is an information-publishing platform providing storage services only.
