redis.conf configuration details (Redis 4.0)

The first argument to ./redis-server must be the path of the configuration file:

./redis-server /path/to/redis.conf

Unit conversion rules in the configuration file:

k is computed with a factor of 1000, while kb is exactly 1024; units are case-insensitive.

1k => 1000 bytes
1kb => 1024 bytes
1m => 1000000 bytes
1mb => 1024*1024 bytes
1g => 1000000000 bytes
1gb => 1024*1024*1024 bytes
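The conversion rules above can be sketched in Python (a hypothetical parser of mine, not Redis's actual code):

```python
# Hypothetical sketch of redis.conf size parsing; names and structure are mine.
UNITS = {
    "k": 1000, "kb": 1024,
    "m": 1000**2, "mb": 1024**2,
    "g": 1000**3, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    """Parse a redis.conf size like '1gb' or '512K' into bytes (case-insensitive)."""
    v = value.strip().lower()
    # Try the two-letter suffixes ('kb') before the one-letter ones ('k').
    for suffix in sorted(UNITS, key=len, reverse=True):
        if v.endswith(suffix):
            return int(v[: -len(suffix)]) * UNITS[suffix]
    return int(v)  # bare number of bytes

print(parse_memory("1kb"))  # 1024
print(parse_memory("1k"))   # 1000
```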

Including multiple configuration files

Redis also supports including other configuration files. This is mainly used for layering: a base file holds the common settings, and per-server files included by it hold machine-specific overrides.
Note: the CONFIG REWRITE command rewrites redis.conf only and never touches included files. Redis applies configuration on a "last directive wins" basis: when the same directive appears several times, the last occurrence takes effect. So if you want the included file to always win, put the include line at the end of the file; otherwise, put it at the beginning.

include /path/to/local.conf
include /path/to/other.conf

Modules

Modules are loaded at server startup; if a module cannot be loaded, the server aborts.

loadmodule /path/to/my_module.so
loadmodule /path/to/other_module.so

Network

The bind directive

To be clear: bind configures which interface addresses this Redis server listens on. If it is not set, or set to bind 0.0.0.0, the server listens on all interfaces. The default shipped configuration is bind 127.0.0.1, which means only local connections are accepted.

bind 192.168.1.100 10.0.0.1
bind 127.0.0.1 ::1

The protected-mode directive

Enabled by default. When enabled, if both of the following conditions hold, connections from external IPs are refused:
1. the server is not explicitly bound to an interface;
2. no password is configured.

protected-mode yes

The port directive

Default 6379. If set to 0, Redis does not listen on any TCP socket.

port 6379

The tcp-backlog directive

The size of the accept queue for clients waiting to be accepted; the default is below.
Because the Linux kernel silently caps this value at /proc/sys/net/core/somaxconn, you must also raise the kernel's somaxconn and tcp_max_syn_backlog settings for a larger backlog to take effect.
https://blog.csdn.net/chuixue24/article/details/80486866

tcp-backlog 511
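As a toy illustration of the kernel cap (my own arithmetic, not Redis code, and ignoring the separate SYN backlog), the effective queue is simply the smaller of the two settings:

```python
# The kernel silently caps listen() backlogs at net.core.somaxconn, so the
# effective accept queue is the smaller of the two values. Numbers are examples.
def effective_backlog(tcp_backlog: int, somaxconn: int) -> int:
    return min(tcp_backlog, somaxconn)

print(effective_backlog(511, 128))   # 128: raising tcp-backlog alone did nothing
print(effective_backlog(511, 1024))  # 511: kernel limit raised, config now applies
```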

The unixsocket directive

Used for IPC (inter-process communication) over a Unix domain socket; disabled by default.

unixsocket /tmp/redis.sock
unixsocketperm 700

The timeout directive

Close a client's connection after it has been idle (sent no commands) for this many seconds. Default 0, which disables the feature.

timeout 0

TCP keepalive

The interval, in seconds, for TCP's built-in keepalive probes, used to detect dead peers:

tcp-keepalive 300

General

| Directive | Meaning | Example |
| --- | --- | --- |
| daemonize | By default Redis does not run as a daemon on Linux. When enabled, Redis writes a pid file (default /var/run/redis.pid), used to guard against duplicate instances. | daemonize no |
| supervised | Interaction with the supervision tree when run under upstart or systemd (no / upstart / systemd / auto). | supervised no |
| pidfile | The pid file written when Redis runs as a daemon; default /var/run/redis.pid. If the file cannot be created, Redis still starts and runs normally. | pidfile /var/run/redis_6379.pid |
| loglevel | From most to least verbose: debug, verbose, notice, warning. notice is recommended in production. | loglevel notice |
| logfile | Where to send logs. An empty string means standard output; if started from a console, logs go to the console. When daemonized with no logfile set, logs go to /dev/null. | logfile "" |
| syslog-enabled | Also send logs to the system logger. | syslog-enabled no |
| syslog-ident | Specify the syslog identity. | syslog-ident redis |
| syslog-facility | The syslog facility; must be USER or LOCAL0-LOCAL7. | syslog-facility local0 |
| databases | Number of logical databases (see https://stackoverflow.com/questions/16221563/whats-the-point-of-multiple-redis-databases). | databases 16 |
| always-show-logo | Print the fancy ASCII-art logo at startup. | always-show-logo yes |

Snapshotting (RDB)

| Directive | Meaning | Example |
| --- | --- | --- |
| save <seconds> <changes> | Take an RDB snapshot after <seconds> seconds if at least <changes> writes happened. Comment out every save line to disable persistence; at runtime, save "" does the same. | save 900 1<br>save 300 10<br>save 60 10000 |
| stop-writes-on-bgsave-error | With RDB persistence enabled, if the last BGSAVE failed, Redis refuses client writes as a hard way of signaling the problem. If you have proper monitoring in place and don't want Redis to alert you this way, disable it. | stop-writes-on-bgsave-error yes |
| rdbcompression | Compress the RDB file. Should normally stay yes, unless you want to save CPU during saves. | rdbcompression yes |
| rdbchecksum | Since RDB version 5, a CRC64 checksum is written and verified at save/load time, better protecting file integrity at a cost of about 10% performance. | rdbchecksum yes |
| dbfilename | The RDB snapshot filename; default dump.rdb. | dbfilename dump.rdb |
| dir | The Redis working directory, default the current directory. Note this must be a directory, not a file; dbfilename is written inside it. | dir ./ |
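The way multiple save rules combine can be sketched as follows (a hypothetical illustration; Redis's actual implementation differs in detail). A snapshot triggers as soon as any one rule is satisfied:

```python
# Hedged sketch of how "save <seconds> <changes>" rules combine: a snapshot
# triggers when ANY rule is satisfied. Not Redis's actual code.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]  # from the config above

def should_snapshot(elapsed_seconds: int, changes: int) -> bool:
    """True if any rule's time window AND change count are both met."""
    return any(elapsed_seconds >= s and changes >= c for s, c in SAVE_RULES)

print(should_snapshot(70, 5))    # False: no rule matched yet
print(should_snapshot(301, 10))  # True: the 300-second / 10-change rule fires
```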

Replication (REPLICATION)

| Directive | Meaning | Example |
| --- | --- | --- |
| slaveof <masterip> <masterport> | Master-slave replication: declares this server a slave of the given master. | slaveof 192.168.1.212 6379 |
| masterauth <master-password> | The password to use if the master requires authentication. | |
| slave-serve-stale-data | Whether a slave still answers clients while it is syncing with, or disconnected from, its master; default yes. | slave-serve-stale-data yes |
| slave-read-only | Whether the slave accepts only read requests; default yes. It is probably best never to make slaves writable. | slave-read-only yes |
| repl-diskless-sync | Whether full resynchronization streams the RDB directly over the socket instead of going through a disk file. Still experimental in 4.0. It is faster, and before each transfer the master waits a while so it can serve as many slaves as possible in one pass. | repl-diskless-sync no |
| repl-diskless-sync-delay | The delay before a diskless sync starts; default 5s. | repl-diskless-sync-delay 5 |
| repl-ping-slave-period | How often slaves ping the master; default 10s. | repl-ping-slave-period 10 |
| repl-timeout | Replication timeout; default 60s. It must be larger than repl-ping-slave-period. | repl-timeout 60 |
| repl-disable-tcp-nodelay | When enabled, Redis uses less bandwidth for replication, at the cost of up to about 40ms of extra latency; useful when bandwidth between master and slaves is scarce. Disabled by default. | repl-disable-tcp-nodelay no |
| repl-backlog-size | Size of the master's command buffer used for partial resynchronization. The bigger it is, the less likely a master-slave disconnection will force a full resync. | repl-backlog-size 1mb |
| repl-backlog-ttl | After the master has been disconnected from its slaves for this many seconds, it frees the backlog. A slave never frees its own backlog, since it needs it to negotiate with the master where to resume replication on reconnect. | repl-backlog-ttl 3600 |
| slave-priority | Used by Sentinel to pick the new master; lower values mean higher priority. (One might expect an election by the most up-to-date replica instead; Sentinel does also take the replication offset into account.) | slave-priority 100 |
| min-slaves-to-write | The master stops accepting writes when fewer than this many slaves are connected. Disabled by default. | |
| min-slaves-max-lag | The maximum lag, in seconds, a slave may have and still count as connected for the rule above. | |
| slave-announce-ip | Explicitly announce this slave's IP to the master, e.g. when NAT hides the real address. | slave-announce-ip 5.5.5.5 |
| slave-announce-port | Explicitly announce this slave's port to the master, e.g. behind NAT or port forwarding. | slave-announce-port 1234 |


Security

If the password is not strong, it is better not to use one at all: Redis serves requests so fast that a weak password can be brute-forced very quickly.

requirepass xxxxx

Renaming commands

Dangerous commands can be renamed, or disabled entirely by renaming them to the empty string:

rename-command CONFIG ""

Clients

The default maximum number of simultaneous client connections is 10000 (or lower, if the process file-descriptor limit does not allow that many). Once the limit is reached, new connections receive the error "max number of clients reached".

maxclients 10000

Memory management

Redis can cap its memory usage with maxmemory, combined with an eviction policy: maxmemory-policy selects the strategy (noeviction, volatile-lru, allkeys-lru, volatile-random, allkeys-random, volatile-ttl, and since 4.0 volatile-lfu and allkeys-lfu), and maxmemory-samples controls how many keys the approximated LRU/LFU algorithms sample per eviction.

   maxmemory-policy noeviction
   maxmemory-samples 5

Lazy freeing

Certain server-side deletion operations can be configured to run either synchronously (blocking) or asynchronously in a background thread:

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
slave-lazy-flush no

APPEND ONLY MODE (AOF persistence)

AOF is disabled by default. Note that AOF and RDB can be enabled at the same time without conflict; if both files exist at startup, the AOF file is preferred, since it holds more complete data.

  appendonly no
  appendfilename "appendonly.aof"

The default fsync() frequency is everysec, once per second. See: http://antirez.com/post/redis-persistence-demystified.html

    appendfsync always
    appendfsync everysec
    appendfsync no

Whether the main process skips fsync() while a BGSAVE or AOF rewrite is in progress. Setting it to yes avoids latency spikes caused by fsync() competing with the background disk I/O, at the price of possibly losing up to 30 seconds of log in the worst case; the default no keeps fsync-ing, favoring durability.

    no-appendfsync-on-rewrite no

The minimum file size and growth percentage that trigger an automatic AOF rewrite; by default, rewrite when the file has doubled (grown 100%) since the last rewrite and is at least 64mb.

    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
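The trigger condition can be sketched as follows (assumed logic based on the description above, not Redis's actual code):

```python
# Sketch of the auto-AOF-rewrite condition: rewrite when the file exceeds the
# minimum size AND has grown by the configured percentage over its size after
# the last rewrite. Assumed logic, not Redis's implementation.
AUTO_AOF_REWRITE_PERCENTAGE = 100
AUTO_AOF_REWRITE_MIN_SIZE = 64 * 1024 * 1024  # 64mb

def should_rewrite(current_size: int, base_size: int) -> bool:
    if current_size < AUTO_AOF_REWRITE_MIN_SIZE:
        return False
    growth_pct = (current_size - base_size) * 100 / base_size
    return growth_pct >= AUTO_AOF_REWRITE_PERCENTAGE

mb = 1024 * 1024
print(should_rewrite(100 * mb, 60 * mb))  # False: only ~67% growth
print(should_rewrite(130 * mb, 60 * mb))  # True: >100% growth and above 64mb
```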

Whether to load an AOF file whose tail is truncated (e.g. after a crash); default yes. Note this only covers a truncated tail: corruption in the middle of the file still causes an error.

    aof-load-truncated yes

Whether a rewritten AOF file starts with an RDB preamble (an RDB body followed by an AOF tail), which makes rewrites and restarts faster; disabled by default in 4.0.

    aof-use-rdb-preamble no

Lua scripts

lua-time-limit is the maximum time, in milliseconds, a Lua script may run. Once exceeded, Redis starts answering other clients with an error instead of blocking silently, and only the SCRIPT KILL and SHUTDOWN NOSAVE commands are accepted: the first kills a script that has not yet performed any write, the second shuts the server down.

    lua-time-limit 5000

Redis cluster

An ordinary Redis server cannot be added to a cluster after the fact; a node must be configured as a cluster node from the start.

| Directive | Meaning | Example |
| --- | --- | --- |
| cluster-enabled | Enable cluster mode. | cluster-enabled yes |
| cluster-config-file | A file each cluster node creates and updates automatically; do not edit it by hand. | cluster-config-file nodes-6379.conf |
| cluster-node-timeout | Milliseconds a node may be unreachable before it is considered failed. Most other internal time limits are multiples of this value. | cluster-node-timeout 15000 |
| cluster-slave-validity-factor | A slave may attempt failover only if its data is no older than (node-timeout * factor) + repl-ping-slave-period. The larger the factor, the staler the data a promoted slave may serve; with 0, any slave may always attempt failover. | cluster-slave-validity-factor 10 |
| cluster-migration-barrier | How many slaves a master must keep when donating a "spare" slave to an orphaned master: donation happens only if the donor would still have at least this many slaves left. Default 1, i.e. only a master with at least 2 slaves can donate one. See https://blog.csdn.net/u011535541/article/details/78625330 | cluster-migration-barrier 1 |
| cluster-require-full-coverage | Whether the cluster stops serving when some hash slots are uncovered, i.e. a master and all of its slaves are down. Default yes: stop serving. | cluster-require-full-coverage yes |
| cluster-slave-no-failover | When set on a slave, prevents it from automatically failing over its master (a manual failover is still possible); default no. | cluster-slave-no-failover no |
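The eligibility formula quoted above can be computed directly (values are the defaults from the table; a plain arithmetic sketch of mine):

```python
# The failover-eligibility window: a slave may attempt failover only if its
# last interaction with the master is newer than this many milliseconds.
def max_data_age_ms(node_timeout_ms: int, validity_factor: int,
                    ping_period_s: int = 10) -> float:
    if validity_factor == 0:
        return float("inf")  # any slave may always attempt failover
    return node_timeout_ms * validity_factor + ping_period_s * 1000

print(max_data_age_ms(15000, 10))  # 160000 ms with the defaults above
```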

Cluster Docker/NAT support

In NAT'ed environments (Docker and similar), the address a node auto-detects may not be the one other nodes can reach, so a node can announce its externally reachable IP, client port, and cluster bus port explicitly:

    cluster-announce-ip 10.1.1.5
    cluster-announce-port 6379
    cluster-announce-bus-port 6380

Slow log (SLOWLOG)

The logged time does not include I/O (accepting the client, reading the request, writing the reply) but only the time actually spent executing the command.

| Directive | Meaning | Example |
| --- | --- | --- |
| slowlog-log-slower-than | In microseconds, so by default any command slower than 10 milliseconds is recorded. | slowlog-log-slower-than 10000 |
| slowlog-max-len | Length of the in-memory list of recent slow commands (the slow log is never written to disk). Any size is allowed, but the list consumes memory; SLOWLOG RESET reclaims it. | slowlog-max-len 128 |

Latency monitoring (LATENCY MONITOR)

Samples operations that take longer than the threshold, in milliseconds, so they can later be inspected with the LATENCY command; 0 disables the feature (the default).

latency-monitor-threshold 0

Keyspace event notifications

Publishes key-space and key-event messages over Pub/Sub when keys are changed; the value selects which event classes to emit, and the empty string disables the feature (the default).

notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

Hashes are encoded using a memory-efficient data structure when they have a small number of entries, and the biggest entry does not exceed a given threshold. These thresholds can be configured using the following directives.

hash-max-ziplist-entries 512
hash-max-ziplist-value 64
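As a sketch of the rule above (my own illustration, not Redis's code), a hash stays in the compact encoding only while both thresholds hold:

```python
# Hypothetical check mirroring hash-max-ziplist-entries/-value: a hash uses the
# compact ziplist encoding only while it is small in both count and entry size.
HASH_MAX_ZIPLIST_ENTRIES = 512
HASH_MAX_ZIPLIST_VALUE = 64

def uses_ziplist(hash_fields: dict) -> bool:
    if len(hash_fields) > HASH_MAX_ZIPLIST_ENTRIES:
        return False  # too many entries: converted to a real hash table
    return all(
        len(str(k)) <= HASH_MAX_ZIPLIST_VALUE and len(str(v)) <= HASH_MAX_ZIPLIST_VALUE
        for k, v in hash_fields.items()
    )

print(uses_ziplist({"name": "redis", "port": "6379"}))  # True
print(uses_ziplist({"blob": "x" * 100}))                # False: entry exceeds 64 bytes
```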

Lists are also encoded in a special way to save a lot of space. The number of entries allowed per internal list node can be specified as a fixed maximum size or a maximum number of elements. For a fixed maximum size, use -5 through -1, meaning:

-5: max size: 64 Kb <-- not recommended for normal workloads
-4: max size: 32 Kb <-- not recommended
-3: max size: 16 Kb <-- probably not recommended
-2: max size: 8 Kb <-- good
-1: max size: 4 Kb <-- good

Positive numbers mean store up to exactly that number of elements per list node. The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size), but if your use case is unique, adjust the settings as necessary.

list-max-ziplist-size -2

Lists may also be compressed. Compress depth is the number of quicklist ziplist nodes from each side of the list to exclude from compression. The head and tail of the list are always uncompressed for fast push/pop operations. Settings are:

0: disable all list compression
1: depth 1 means "don't start compressing until after 1 node into the list, going from either the head or tail". So: [head]->node->node->...->node->[tail]; [head] and [tail] will always be uncompressed, inner nodes will compress.
2: [head]->[next]->node->node->...->node->[prev]->[tail]. 2 here means: don't compress head or head->next or tail->prev or tail, but compress all nodes between them.
3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
etc.

list-compress-depth 0
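The depth rule can be illustrated by marking which quicklist nodes stay uncompressed (a toy model of the description above, not Redis's data structure):

```python
# Toy model of list-compress-depth: the first/last `depth` nodes stay plain,
# inner nodes are compressed; depth 0 disables compression entirely.
def compression_map(num_nodes: int, depth: int) -> list:
    if depth == 0:
        return ["plain"] * num_nodes
    return [
        "plain" if i < depth or i >= num_nodes - depth else "compressed"
        for i in range(num_nodes)
    ]

print(compression_map(6, 1))  # plain head and tail, four compressed inner nodes
print(compression_map(6, 2))  # two plain nodes on each side
```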

Sets have a special encoding in just one case: when a set is composed of just strings that happen to be integers in radix 10 in the range of 64-bit signed integers. The following configuration setting sets the limit in the size of the set in order to use this special memory-saving encoding.

set-max-intset-entries 512

Similarly to hashes and lists, sorted sets are also specially encoded in order to save a lot of space. This encoding is only used when the length and elements of a sorted set are below the following limits:

zset-max-ziplist-entries 128
zset-max-ziplist-value 64

HyperLogLog sparse representation bytes limit. The limit includes the 16-byte header. When a HyperLogLog using the sparse representation crosses this limit, it is converted into the dense representation. A value greater than 16000 is totally useless, since at that point the dense representation is more memory efficient.

The suggested value is ~3000 in order to have the benefits of the space-efficient encoding without slowing down PFADD too much, which is O(N) with the sparse encoding. The value can be raised to ~10000 when CPU is not a concern, but space is, and the data set is composed of many HyperLogLogs with cardinality in the 0 - 15000 range.

hll-sparse-max-bytes 3000

Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in order to help rehashing the main Redis hash table (the one mapping top-level keys to values). The hash table implementation Redis uses (see dict.c) performs a lazy rehashing: the more operations you run against a hash table that is rehashing, the more rehashing "steps" are performed, so if the server is idle the rehashing is never complete and some more memory is used by the hash table.

The default is to use this millisecond 10 times every second in order to actively rehash the main dictionaries, freeing memory when possible.

If unsure: use "activerehashing no" if you have hard latency requirements and it is not a good thing in your environment that Redis can reply from time to time to queries with a 2-millisecond delay. Use "activerehashing yes" if you don't have such hard requirements but want to free memory asap when possible.

activerehashing yes

The client output buffer limits can be used to force disconnection of clients that are not reading data from the server fast enough for some reason (a common reason is that a Pub/Sub client can't consume messages as fast as the publisher can produce them).

The limit can be set differently for the three different classes of clients:

normal -> normal clients including MONITOR clients
slave -> slave clients
pubsub -> clients subscribed to at least one pubsub channel or pattern

The syntax of every client-output-buffer-limit directive is the following:

client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>

A client is immediately disconnected once the hard limit is reached, or if the soft limit is reached and remains reached for the specified number of seconds (continuously). So for instance if the hard limit is 32 megabytes and the soft limit is 16 megabytes / 10 seconds, the client will get disconnected immediately if the size of the output buffers reaches 32 megabytes, but will also get disconnected if the client reaches 16 megabytes and continuously overcomes the limit for 10 seconds.

By default normal clients are not limited because they don't receive data without asking (in a push way), but just after a request, so only asynchronous clients may create a scenario where data is requested faster than it can be read. Instead there is a default limit for pubsub and slave clients, since subscribers and slaves receive data in a push fashion.

Both the hard and the soft limit can be disabled by setting them to zero.

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
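The hard/soft semantics described above can be sketched as follows (my own illustration, not Redis's implementation):

```python
# Sketch of the hard/soft output-buffer-limit semantics: hard limit disconnects
# immediately; soft limit disconnects only after being exceeded continuously
# for soft_seconds. A limit of 0 means disabled.
def should_disconnect(buf_bytes, hard, soft, soft_seconds, seconds_over_soft):
    if hard and buf_bytes >= hard:
        return True  # hard limit: disconnect immediately
    if soft and buf_bytes >= soft:
        return seconds_over_soft >= soft_seconds  # soft limit held too long
    return False

mb = 1024 * 1024
# slave class from the config above: hard 256mb, soft 64mb / 60 seconds
print(should_disconnect(300 * mb, 256 * mb, 64 * mb, 60, 0))   # True (hard hit)
print(should_disconnect(100 * mb, 256 * mb, 64 * mb, 60, 10))  # False (soft, only 10s)
print(should_disconnect(100 * mb, 256 * mb, 64 * mb, 60, 61))  # True (soft for 60s+)
```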

Client query buffers accumulate new commands. They are limited to a fixed amount by default in order to avoid that a protocol desynchronization (for instance due to a bug in the client) will lead to unbound memory usage in the query buffer. However you can configure it here if you have very special needs, such as huge MULTI/EXEC requests or alike.

client-query-buffer-limit 1gb

In the Redis protocol, bulk requests, that is, elements representing single strings, are normally limited to 512 mb. However you can change this limit here.

proto-max-bulk-len 512mb

Redis calls an internal function to perform many background tasks, like closing connections of clients in timeout, purging expired keys that are never requested, and so forth. Not all tasks are performed with the same frequency, but Redis checks for tasks to perform according to the specified "hz" value.

By default "hz" is set to 10. Raising the value will use more CPU when Redis is idle, but at the same time will make Redis more responsive when there are many keys expiring at the same time, and timeouts may be handled with more precision.

The range is between 1 and 500, however a value over 100 is usually not a good idea. Most users should use the default of 10 and raise this up to 100 only in environments where very low latency is required.

hz 10

When a child rewrites the AOF file, if the following option is enabled the file will be fsync-ed every 32 MB of data generated. This is useful in order to commit the file to the disk more incrementally and avoid big latency spikes.

aof-rewrite-incremental-fsync yes

Redis LFU eviction (see the maxmemory setting) can be tuned. However it is a good idea to start with the default settings and only change them after investigating how to improve the performance and how the keys' LFU changes over time, which can be inspected via the OBJECT FREQ command.

There are two tunable parameters in the Redis LFU implementation: the counter logarithm factor and the counter decay time. It is important to understand what the two parameters mean before changing them.

The LFU counter is just 8 bits per key; its maximum value is 255, so Redis uses a probabilistic increment with logarithmic behavior. Given the value of the old counter, when a key is accessed, the counter is incremented in this way:

1. A random number R between 0 and 1 is extracted.
2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
3. The counter is incremented only if R < P.

The default lfu-log-factor is 10. This is a table of how the frequency counter changes with a different number of accesses with different logarithmic factors:

| factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
| ------ | -------- | --------- | --------- | ------- | -------- |
| 0 | 104 | 255 | 255 | 255 | 255 |
| 1 | 18 | 49 | 255 | 255 | 255 |
| 10 | 10 | 18 | 142 | 255 | 255 |
| 100 | 8 | 11 | 49 | 143 | 255 |

NOTE: the above table was obtained by running the following commands:

redis-benchmark -n 1000000 incr foo
redis-cli object freq foo

NOTE 2: the counter initial value is 5, in order to give new objects a chance to accumulate hits.

The counter decay time is the time, in minutes, that must elapse for the key's counter to be divided by two (or decremented, if its value is <= 10).

The default value for lfu-decay-time is 1. A special value of 0 means to decay the counter every time it happens to be scanned.

lfu-log-factor 10
lfu-decay-time 1
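The probabilistic increment can be simulated with the formula as quoted above (note: Redis's actual code also subtracts the initial value from the counter before applying the factor, so real numbers differ slightly):

```python
# Simulation of the probabilistic LFU counter increment described above:
# P = 1 / (old_value * lfu_log_factor + 1), counter capped at 255.
import random

def lfu_incr(counter: int, lfu_log_factor: int = 10) -> int:
    if counter >= 255:
        return 255
    if random.random() < 1.0 / (counter * lfu_log_factor + 1):
        counter += 1
    return counter

random.seed(42)
c = 5                      # new keys start at 5
for _ in range(100_000):   # 100K accesses
    c = lfu_incr(c, 10)
print(c)                   # grows only logarithmically despite 100K accesses
```

With factor 10 the counter lands in the same ballpark as the table's 100K-hits row, illustrating why an 8-bit counter suffices.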

########################### ACTIVE DEFRAGMENTATION #######################

WARNING: THIS FEATURE IS EXPERIMENTAL. However it was stress tested even in production and manually tested by multiple engineers for some time.

What is active defragmentation?
-------------------------------

Active (online) defragmentation allows a Redis server to compact the spaces left between small allocations and deallocations of data in memory, thus allowing to reclaim back memory.

Fragmentation is a natural process that happens with every allocator (but less so with Jemalloc, fortunately) and certain workloads. Normally a server restart is needed in order to lower the fragmentation, or at least to flush away all the data and create it again. However thanks to this feature, implemented by Oran Agra for Redis 4.0, this process can happen at runtime in a "hot" way, while the server is running.

Basically, when the fragmentation is over a certain level (see the configuration options below), Redis will start to create new copies of the values in contiguous memory regions by exploiting certain specific Jemalloc features (in order to understand if an allocation is causing fragmentation and to allocate it in a better place), and at the same time will release the old copies of the data. This process, repeated incrementally for all the keys, will cause the fragmentation to drop back to normal values.

Important things to understand:

1. This feature is disabled by default, and only works if you compiled Redis to use the copy of Jemalloc we ship with the source code of Redis. This is the default with Linux builds.
2. You never need to enable this feature if you don't have fragmentation issues.
3. Once you experience fragmentation, you can enable this feature when needed with the command "CONFIG SET activedefrag yes".

The configuration parameters are able to fine-tune the behavior of the defragmentation process. If you are not sure about what they mean, it is a good idea to leave the defaults untouched.

Enable active defragmentation:
activedefrag yes

Minimum amount of fragmentation waste to start active defrag:
active-defrag-ignore-bytes 100mb

Minimum percentage of fragmentation to start active defrag:
active-defrag-threshold-lower 10

Maximum percentage of fragmentation at which we use maximum effort:
active-defrag-threshold-upper 100

Minimal effort for defrag in CPU percentage:
active-defrag-cycle-min 25

Maximal effort for defrag in CPU percentage:
active-defrag-cycle-max 75
