Filebeat Output Introduction


title: filebeat Output

date: 2017-06-11 05:22:31

categories: elk

tags: filebeat


Filebeat Output

Filebeat Elasticsearch Output

Configuration:

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  template.enabled: true
  template.path: "filebeat.template.json"
  template.overwrite: false
  index: "filebeat"
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  ssl.certificate: "/etc/pki/client/cert.pem"
  ssl.key: "/etc/pki/client/cert.key"

Define the host IP and port, and add the https protocol:

output.elasticsearch:
  hosts: ["localhost"]
  protocol: "https"
  username: "admin"
  password: "s3cr3t"

  • compression_level

    • The gzip compression level, range 1-9
  • worker

    • The number of workers publishing events to Elasticsearch (a combined sketch follows the indices example below)
  • index

    • The index name to write events to.
    • The default is "filebeat-%{+yyyy.MM.dd}" (for example, "filebeat-2015.04.26").
  • indices

An array of index selector rules that support conditionals, format-string-based field access, and name mappings.

- index
    - The index format string to use. If a field used by the format string is missing, the rule fails.
- mapping
    - A dictionary that maps matched index names to new names
- default
    - The default string value to use if `mapping` does not find a match
- when
    - A condition that must be met for the rule to be applied

```
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  index: "logs-%{+yyyy.MM.dd}"
  indices:
    - index: "critical-%{+yyyy.MM.dd}"
      when.contains:
        message: "CRITICAL"
    - index: "error-%{+yyyy.MM.dd}"
      when.contains:
        message: "ERR"

```
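
A minimal sketch combining the tuning options above (compression_level, worker) with the index setting; the values shown are illustrative, not recommendations:

```
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  compression_level: 3   # gzip level, range 1-9 (illustrative)
  worker: 2              # workers publishing events (illustrative)
  index: "filebeat"
```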
  • pipeline

    • A format string specifying the ID of the ingest node pipeline to write events to

        output.elasticsearch:
          hosts: ["http://localhost:9200"]
          pipeline: my_pipeline_id
      
  • pipelines

    filebeat.prospectors:
    - paths: ["/var/log/app/normal/*.log"]
      fields:
        type: "normal"
    - paths: ["/var/log/app/critical/*.log"]
      fields:
        type: "critical"

    output.elasticsearch:
      hosts: ["http://localhost:9200"]
      index: "filebeat-%{+yyyy.MM.dd}"
      pipelines:
        - pipeline: critical_pipeline
          when.equals:
            type: "critical"
        - pipeline: normal_pipeline
          when.equals:
            type: "normal"
    
  • template

    output.elasticsearch:
      hosts: ["localhost:9200"]
      template.name: "filebeat"
      template.path: "filebeat.template.json"
      template.overwrite: false
  • template.versions

    output.elasticsearch:
      hosts: ["localhost:9200"]
      template.path: "filebeat.template.json"
      template.overwrite: false
      template.versions.2x.path: "filebeat.template-es2x.json"

  • max_retries

    • The number of times to retry publishing an event after a publishing failure
  • bulk_max_size

    • The maximum number of events to bulk in a single Elasticsearch bulk API index request. The default is 50
  • timeout

    • The http request timeout in seconds for the Elasticsearch request
    • The default is 90
  • flush_interval

    • The number of seconds to wait for new events between two bulk API index requests (a combined sketch follows this list)
  • ssl
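
A short sketch putting the batching and retry options above together; values marked illustrative are not necessarily the defaults:

```
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  max_retries: 3        # resend attempts after a failure (illustrative)
  bulk_max_size: 50     # default maximum events per bulk request
  timeout: 90           # default http request timeout in seconds
  flush_interval: 1     # seconds to wait for new events (illustrative)
```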

Filebeat Logstash Output

The Logstash server must have the Beats input plugin installed; Filebeat uses the lumberjack protocol to send events to Logstash.

output.logstash:
  hosts: ["localhost:5044"]

  • Metadata Fields
    • @metadata
      • Filebeat uses the @metadata field to send metadata to Logstash

      • The contents of the @metadata field only exist in Logstash and are not included in any events sent from Logstash

      • For more information about the @metadata field, see the Logstash documentation

          {
              ...
              "@metadata": { 
                "beat": "filebeat", 
                "type": "<event type>" 
              }   
          }
        

Logstash to Elasticsearch

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" 
    document_type => "%{[@metadata][type]}" 
  }
}

  • enabled

  • hosts

  • compression_level

  • worker

  • loadbalance

    output.logstash:
      hosts: ["localhost:5044", "localhost:5045"]
      loadbalance: true
      index: filebeat
    
  • pipelining

    • Processes events asynchronously; disabled by default (see the tuning sketch after this list)
  • index

  • ssl

  • timeout

  • max_retries

  • bulk_max_size

    • The maximum number of events to bulk in a single Logstash request
    • The default is 2048.
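
A hedged tuning sketch for the Logstash output; pipelining is shown in its integer form (number of in-flight batches), which is an assumption that may vary by Filebeat version, and the other values are illustrative:

```
output.logstash:
  hosts: ["localhost:5044"]
  pipelining: 2         # asynchronous batches in flight; 0 disables (assumed form)
  timeout: 30           # illustrative network timeout in seconds
  bulk_max_size: 2048   # default maximum events per Logstash request
```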

Kafka Output

The Kafka output sends the events to Apache Kafka.

output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: '%{[type]}'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

  • enabled

  • hosts

  • version

  • username

  • password

  • topic

  • topics (an array of topic selector rules; see the sketch after this list)

    • topic
    • mapping
    • default
    • when
  • partition

    • random.group_events
    • round_robin.group_events
    • hash.hash
    • hash.random
  • client_id

  • worker

  • codec

  • metadata

    • refresh_frequency
    • retry.max
    • retry.backoff
  • max_retries

  • bulk_max_size

  • timeout

  • broker_timeout

  • channel_buffer_size

  • keep_alive

  • compression

  • max_message_bytes

  • required_acks

  • flush_interval

  • ssl
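
Mirroring the indices selector shown earlier, a hedged sketch of topic selection; the topic names and conditions are illustrative:

```
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: "logs"                  # fallback topic (illustrative)
  topics:
    - topic: "critical-logs"     # route CRITICAL messages to their own topic
      when.contains:
        message: "CRITICAL"
    - topic: "error-logs"
      when.contains:
        message: "ERR"
```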

Redis Output

This output works with Redis 3.2.4.

output.redis:
  hosts: ["localhost"]
  password: "my_password"
  key: "filebeat"
  db: 0
  timeout: 5

  • enabled
  • hosts
  • port
  • index
  • key
    output.redis:
      hosts: ["localhost"]
      key: "%{[fields.list]:fallback}"  
  • keys

    • key

    • mapping

    • default

    • when

        output.redis:
          hosts: ["localhost"]
          key: "default_list"
          keys:
            - key: "info_list"   # send to info_list if `message` field contains INFO
              when.contains:
                message: "INFO"
            - key: "debug_list"  # send to debug_list if `message` field contains DEBUG
              when.contains:
                message: "DEBUG"
            - key: "%{[type]}"
              mapping:
                "http": "frontend_list"
                "nginx": "frontend_list"
                "mysql": "backend_list"
      
  • password

  • db

  • datatype

The Redis data type to use for publishing events. If the data type is list, the Redis RPUSH command is used and all events are added to the list whose name is defined under key. If the data type is channel, the Redis PUBLISH command is used, meaning all events are pushed to Redis's pub/sub mechanism; the channel name is the one defined under key. The default value is list.
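
For example, a minimal sketch switching from the default list (RPUSH) to a pub/sub channel (PUBLISH); the channel name is illustrative:

```
output.redis:
  hosts: ["localhost"]
  key: "filebeat_channel"   # interpreted as the channel name when datatype is channel
  datatype: "channel"
```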

  • codec
  • worker
  • loadbalance
  • timeout
  • max_retries
  • bulk_max_size
  • ssl
  • proxy_url
  • proxy_use_local_resolver

File Output

The File output dumps the transactions into a file where each transaction is in JSON format. Currently, this output is used for testing, but it can also serve as input for Logstash (a sketch follows the option list below).

output.file:
  path: "/tmp/filebeat"
  filename: filebeat
  #rotate_every_kb: 10000
  #number_of_files: 7
  • enabled
  • path
    • The path under which the output files are saved
  • filename
    • The name of the generated files
  • rotate_every_kb
    • The maximum size, in kilobytes, a file may reach before it is rotated
  • number_of_files
    • The number of rotated files to keep
  • codec
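
A hedged Logstash input sketch for consuming the file output above; the path follows from the path and filename settings shown, and the json codec assumes one JSON event per line:

```
input {
  file {
    path => "/tmp/filebeat/filebeat"   # path + filename from output.file above
    codec => "json"                    # each line holds one JSON-encoded event
  }
}
```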

Console Output

output.console:
  pretty: true
  • pretty
  • codec
  • enabled
  • bulk_max_size

Codec Output

output.console:
  codec.json:
    pretty: true

output.console:
  codec.format:
    string: '%{[@timestamp]} %{[message]}'

Logging Output

logging.level: warning
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/mybeat
  name: mybeat.log
  rotateeverybytes: 10485760 # = 10MB
  keepfiles: 7

Debugging

By default, Filebeat sends all its output to syslog. When you run Filebeat in the foreground, you can use the -e command line flag to redirect the output to standard error instead. For example:

filebeat -e

The default configuration file is filebeat.yml (the location of the file varies by platform). You can use a different configuration file by specifying
the -c flag. For example:

filebeat -e -c myfilebeatconfig.yml

You can increase the verbosity of debug messages by enabling one or more debug selectors. For example, to view the published transactions, you can start Filebeat with the publish selector like this:

filebeat -e -d "publish"

If you want all the debugging output (fair warning, it’s quite a lot), you can use *, like this:

filebeat -e -d "*"

Support

https://www.elastic.co/support/matrix#show_compatibility
