Redis Configuration (Machine Translated)

Background

When using Redis, it is sometimes unclear what a configuration option is for. The file does have comments, but they are in English, which is inconvenient. So I wrote a program that calls the Baidu Translate API to translate them. The program is of course optimized: it does not translate line by line, which would often produce gibberish; instead it groups related comment lines before translating.

Result



compare:0-sign:2   # Redis配置文件示例。

compare:2-sign:5   # 注意,要读取配置文件,Redis必须以文件路径作为第一个参数启动:
compare:5-sign:7   #  ./redis-server /path/to/redis.conf

compare:7-sign:10   # 关于单位的说明:当需要内存大小时,可以以1k 5GB 4M等常用形式指定:

compare:10-sign:17   # 1k => 1000 bytes, 1kb => 1024 bytes, 1m => 1000000 bytes, 1mb => 1024*1024 bytes, 1g => 1000000000 bytes, 1gb => 1024*1024*1024 bytes
compare:17-sign:19   # 单位不区分大小写,所以 1GB、1Gb、1gB 都是一样的。
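A small hypothetical helper (not part of Redis) that applies the unit rules quoted above, including the decimal-vs-binary distinction between `1k` and `1kb` and the case insensitivity:

```python
# Parse redis.conf-style memory sizes: 1k = 1000 bytes, 1kb = 1024 bytes,
# and so on; units are case insensitive.
UNITS = {
    "b": 1,
    "k": 1000, "kb": 1024,
    "m": 1000 ** 2, "mb": 1024 ** 2,
    "g": 1000 ** 3, "gb": 1024 ** 3,
}

def parse_memory(value: str) -> int:
    """Return the number of bytes described by a string like '5GB' or '1k'."""
    value = value.strip().lower()
    num = value.rstrip("kmgb")          # numeric part
    unit = value[len(num):] or "b"      # unit part; default is plain bytes
    return int(num) * UNITS[unit]

print(parse_memory("1k"))    # 1000
print(parse_memory("1kb"))   # 1024
```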
compare:19-sign:21   # #################################包括###################################

compare:21-sign:26   # 在此处包括一个或多个其他配置文件。如果您有一个标准模板,该模板可用于所有Redis服务器,但也需要自定义一些每服务器设置,则这非常有用。包含文件可以包含其他文件,因此请明智地使用此选项。

compare:26-sign:31   # 注意选项“include”不会被来自admin或Redis Sentinel的命令“CONFIG REWRITE”重写。由于Redis总是使用最后处理的行作为配置指令的值,因此最好在该文件的开头放置includes,以避免在运行时覆盖配置更改。

compare:31-sign:34   # 如果您对使用include覆盖配置选项感兴趣,则最好使用include作为最后一行。
compare:34-sign:37   #  include /path/to/local.conf
compare:34-sign:37   #  include /path/to/other.conf
compare:37-sign:39   # #################################模块#####################################

compare:39-sign:42   # 启动时加载模块。如果服务器无法加载模块,它将中止。可以使用多个loadmodule指令。
compare:42-sign:45   #  loadmodule /path/to/my_module.so
compare:42-sign:45   #  loadmodule /path/to/other_module.so
compare:45-sign:47   # #################################网络#####################################

compare:47-sign:52   # 默认情况下,如果未指定“bind”配置指令,Redis将侦听服务器上所有可用网络接口的连接。可以使用“bind”配置指令只监听一个或多个选定接口,后跟一个或多个IP地址。

compare:52-sign:54   # 示例:

compare:54-sign:57   # bind 192.168.1.100 10.0.0.1
compare:54-sign:57   # bind 127.0.0.1 ::1

compare:57-sign:65   # ~~~警告~~~如果运行Redis的计算机直接暴露在internet上,绑定到所有接口是危险的,并且会将实例暴露给internet上的所有人。因此,默认情况下,我们取消对以下bind指令的注释,该指令将强制Redis只侦听IPv4环回接口地址(这意味着Redis只能接受运行在其运行的同一台计算机上的客户端的连接)。
compare:65-sign:70   # 如果您确定要让您的实例监听所有接口,只需注释以下行。~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compare:65-sign:70   bind 127.0.0.1


compare:70-sign:73   # 保护模式是一个安全保护层,以避免访问和利用在internet上保持打开状态的Redis实例。

compare:73-sign:75   # 当“保护模式”处于启用状态时,如果:

compare:75-sign:79   # 1) 服务器没有使用 "bind" 指令显式绑定到一组地址。
compare:75-sign:79   # 2) 未配置密码。

compare:79-sign:83   # 服务器只接受来自连接IPv4和IPv6环回地址127.0.0.1和::1的客户端以及来自Unix域套接字的连接。
compare:83-sign:89   # 默认情况下,已启用保护模式。只有当您确定希望来自其他主机的客户机连接到Redis时(即使未配置身份验证),或者使用“bind”指令显式列出特定的接口集时,才应禁用它。
compare:83-sign:89   protected-mode yes

compare:89-sign:93   # 接受指定端口上的连接,默认值为6379(IANA#815344)。如果指定端口0,Redis将不会侦听TCP套接字。
compare:89-sign:93   port 6379


compare:93-sign:95   # TCP listen() backlog。
compare:95-sign:102   # 在每秒高请求的环境中,您需要高的积压工作,以避免客户端连接速度慢的问题。注意,Linux内核
compare:95-sign:102   #  will silently truncate it to the value of /proc/sys/net/core/somaxconn so
compare:95-sign:102   #  make sure to raise both the value of somaxconn and tcp_max_syn_backlog
compare:95-sign:102   #  in order to get the desired effect.
compare:95-sign:102   tcp-backlog 511


compare:102-sign:104   # Unix套接字。

compare:104-sign:108   # 指定将用于侦听传入连接的Unix套接字的路径。没有默认值,因此未指定时,Redis不会侦听unix套接字。
compare:108-sign:111   #  unixsocket /tmp/redis.sock
compare:108-sign:111   # unixsocketperm 700
compare:111-sign:114   # 在客户端空闲N秒后关闭连接(0表示禁用)
compare:111-sign:114   timeout 0


compare:114-sign:116   # TCP keepalive。

compare:116-sign:119   # 如果非零,则使用SO_KEEPALIVE在没有通信的情况下向客户端发送TCP ack。这有两个原因:

compare:119-sign:123   # 1) 检测失效的对端。
compare:119-sign:123   # 2) 从中间网络设备的角度保持连接存活。

compare:123-sign:127   # 在Linux上,指定的值(以秒为单位)是用于发送ACK的周期。请注意,要关闭连接,需要两倍的时间。在其他内核上,周期取决于内核配置。
compare:127-sign:131   # 此选项的合理值为300秒,这是从Redis 3.2.1开始的新Redis默认值。
compare:127-sign:131   tcp-keepalive 300

compare:131-sign:133   # ################################总则#####################################
compare:133-sign:137   # 默认情况下,Redis不作为守护进程运行。如果需要,请使用“是”。
compare:133-sign:137   #  Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
compare:133-sign:137   daemonize yes

compare:137-sign:148   # 如果从 upstart 或 systemd 运行 Redis,Redis 可以与你的监督树交互。选项:
compare:137-sign:148   #   supervised no      - 无监督交互
compare:137-sign:148   #   supervised upstart - 通过将 Redis 置于 SIGSTOP 模式向 upstart 发信号
compare:137-sign:148   #   supervised systemd - 通过向 $NOTIFY_SOCKET 写入 READY=1 向 systemd 发信号
compare:137-sign:148   #   supervised auto    - 根据 UPSTART_JOB 或 NOTIFY_SOCKET 环境变量检测 upstart 或 systemd
compare:137-sign:148   # 注意:这些监督方式只发出"进程已就绪"的信号,不会持续向监督进程发送存活 ping。
compare:137-sign:148   supervised no


compare:148-sign:151   # 如果指定了pid文件,Redis会在启动时将其写入指定的位置,并在退出时将其删除。

compare:151-sign:155   # 当服务器运行非守护进程时,如果配置中未指定任何pid文件,则不会创建任何pid文件。当服务器被守护时,pid文件
compare:151-sign:155   #  is used even if not specified, defaulting to "/var/run/redis.pid".
compare:155-sign:159   # 创建一个pid文件是最大的努力:如果Redis不能创建它,没有什么不好的事情发生,服务器将启动并正常运行。
compare:155-sign:159   pidfile /var/run/redis_6379.pid

compare:159-sign:167   # 指定服务器详细级别。这可以是:
compare:159-sign:167   #  debug (a lot of information, useful for development/testing)
compare:159-sign:167   #  verbose (many rarely useful info, but not a mess like the debug level)
compare:159-sign:167   #  notice (moderately verbose, what you want in production probably)
compare:159-sign:167   #  warning (only very important / critical messages are logged)
compare:159-sign:167   loglevel notice

compare:167-sign:172   # 指定日志文件名。也可以使用空字符串强制 Redis 把日志写到标准输出。请注意,如果使用标准
compare:167-sign:172   #  output for logging but daemonize, logs will be sent to /dev/null
compare:167-sign:172   logfile ""

compare:172-sign:176   # 要启用系统日志记录,只需将 'syslog-enabled' 设置为 yes,并根据需要更新其他 syslog 参数。
compare:172-sign:176   # syslog-enabled no
compare:176-sign:179   # 指定 syslog 标识。
compare:176-sign:179   # syslog-ident redis
compare:179-sign:182   # 指定 syslog facility。必须是 USER 或 LOCAL0-LOCAL7 之间的值。
compare:179-sign:182   # syslog-facility local0
compare:182-sign:187   # 设置数据库数量。默认数据库是 DB 0,可以使用 SELECT <dbid> 在每个连接上选择不同的数据库,其中 dbid 介于 0 和 'databases'-1 之间。
compare:182-sign:187   databases 16


compare:187-sign:191   # 默认情况下,只有在开始把日志输出到标准输出、并且标准输出是 TTY 时,Redis 才会显示 ASCII 艺术 logo。基本上,这意味着通常 logo 只在交互式会话中显示。
compare:191-sign:195   # 但是,通过将以下选项设置为“是”,可以强制4.0之前的行为,并始终在启动日志中显示ASCII艺术徽标。
compare:191-sign:195   always-show-logo yes


compare:195-sign:197   # ###############################快照################################

compare:197-sign:199   # 将数据库保存在磁盘上:

compare:199-sign:201   # save <seconds> <changes>

compare:201-sign:204   # 如果给定的秒数和针对数据库的给定写操作数同时发生,则将保存数据库。

compare:204-sign:209   # 在下面的示例中,行为是:900 秒(15 分钟)后若至少有 1 个键被修改则保存;300 秒(5 分钟)后若至少有 10 个键被修改则保存;60 秒后若至少有 10000 个键被修改则保存。

compare:209-sign:211   # 注意:您可以通过注释掉所有“保存”行来完全禁用保存。

compare:211-sign:215   # 还可以通过添加带有单个空字符串参数的save指令来删除所有先前配置的保存点,如下例所示:
compare:215-sign:217   # save ""
compare:217-sign:221   save 900 1

save 300 10

save 60 10000
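The save-point semantics above can be sketched in a few lines. This is an illustrative model, not Redis source: a background save is triggered when any rule's time window has elapsed with at least that rule's number of changes since the last save.

```python
# "save <seconds> <changes>" rules from the config above.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_bgsave(elapsed_seconds: int, changes_since_save: int) -> bool:
    # Any single rule being satisfied triggers a snapshot.
    return any(elapsed_seconds >= secs and changes_since_save >= chg
               for secs, chg in SAVE_RULES)

print(should_bgsave(901, 1))     # True  (900s elapsed, 1 key changed)
print(should_bgsave(61, 500))    # False (60s rule needs 10000 changes)
print(should_bgsave(61, 10000))  # True
```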


compare:221-sign:227   # 默认情况下,如果启用了RDB快照(至少一个保存点),并且最新的后台保存失败,则Redis将停止接受写入。这将使用户(以硬方式)意识到数据没有正确地保存在磁盘上,否则很可能没有人会注意到并发生一些灾难。

compare:227-sign:230   # 如果后台保存过程将重新开始工作,Redis将自动允许再次写入。
compare:230-sign:236   # 但是,如果您设置了对Redis服务器和持久性的正确监视,则可能需要禁用此功能,以便Redis可以继续正常工作,即使在磁盘、权限等方面存在问题。
compare:230-sign:236   stop-writes-on-bgsave-error yes

compare:236-sign:242   # 在转储 .rdb 数据库时是否用 LZF 压缩字符串对象?默认设为 'yes',因为这几乎总是划算的。如果想在保存用的子进程里节省一些 CPU,可以设为 'no',但如果存在可压缩的值或键,数据集可能会更大。
compare:236-sign:242   rdbcompression yes


compare:242-sign:247   # 因为RDB版本5,所以CRC64校验和放在文件的末尾。这使得格式更能抵抗损坏,但在保存和加载RDB文件时,性能会受到影响(大约10%),因此可以禁用它以获得最大性能。
compare:247-sign:251   # 在禁用校验和的情况下创建的RDB文件的校验和为零,这将告诉加载代码跳过检查。
compare:247-sign:251   rdbchecksum yes

compare:251-sign:254   # 转储数据库的文件名
compare:251-sign:254   dbfilename dump.rdb


compare:254-sign:256   # 工作目录。

compare:256-sign:259   # 数据库会被写入这个目录中,文件名为上面 'dbfilename' 配置指令所指定的值。

compare:259-sign:261   # 只追加文件也将在此目录中创建。
compare:261-sign:264   # 请注意,必须在此处指定目录,而不是文件名。
compare:261-sign:264   dir ./

compare:264-sign:266   # ################################复制#################################

compare:266-sign:269   # 主副本复制。使用replicaof使Redis实例成为另一个Redis服务器的副本。关于Redis复制,有几点需要尽快理解。

compare:269-sign:274   #    +------------------+      +---------------+
compare:269-sign:274   #    |      Master      | ---> |    Replica    |
compare:269-sign:274   #    | (receive writes) |      |  (exact copy) |
compare:269-sign:274   #    +------------------+      +---------------+

compare:274-sign:285   # 1) Redis 复制是异步的,但你可以配置主节点:当它与至少给定数量的副本失去连接时,停止接受写入。
compare:274-sign:285   # 2) 如果复制链路只断开了较短时间,Redis 副本可以与主节点执行部分重同步。你可能需要根据需求把复制积压缓冲区(replication backlog)配置为合理的大小(见本文件后面的章节)。
compare:274-sign:285   # 3) 复制是自动的,不需要用户干预。发生网络分区后,副本会自动尝试重连主节点并与其重新同步。
compare:285-sign:287   # replicaof <masterip> <masterport>

compare:287-sign:292   # 如果主机受密码保护(使用下面的“requirepass”配置指令),则可以在启动复制同步过程之前通知副本进行身份验证,否则主机将拒绝副本请求。
compare:292-sign:294   # masterauth <master-password>

compare:294-sign:297   # 当复制副本失去与主服务器的连接时,或者当复制仍在进行时,复制副本可以以两种不同的方式操作:

compare:297-sign:301   # 1) 如果将replica serve stale data设置为“yes”(默认值),则replica仍将答复客户端请求,可能包含过期数据,或者如果这是第一次同步,则数据集可能只是空的。

compare:301-sign:307   # 2) 如果replica serve stale data设置为'no',则replica将对所有类型的命令(INFO、replicof、AUTH、PING、SHUTDOWN、REPLCONF、ROLE、CONFIG、SUBSCRIBE、UNSUBSCRIBE、PSUBSCRIBE、PUNSUBSCRIBE、PUBLISH、PUBSUB、COMMAND、POST、HOST:和LATENCY除外)回复错误“SYNC with master in progress”。
compare:307-sign:309   replica-serve-stale-data yes


compare:309-sign:315   # 您可以将副本实例配置为是否接受写入。对副本实例的写入可能有助于存储一些短暂的数据(因为在与主副本重新同步后,写入副本上的数据将很容易被删除),但如果客户端由于配置错误而写入副本实例,则也可能导致问题。

compare:315-sign:317   # 从 Redis 2.6 起,副本默认是只读的。
compare:317-sign:325   # 注意:只读副本并非为暴露给 internet 上不受信任的客户端而设计。它只是一个防止实例被滥用的保护层。默认情况下,只读副本仍然会导出所有管理命令,如 CONFIG、DEBUG 等。在一定程度上,您可以使用 'rename-command' 隐藏所有
compare:317-sign:325   #  administrative / dangerous commands.
compare:317-sign:325   replica-read-only yes


compare:325-sign:327   # 复制同步策略:磁盘或套接字。

compare:327-sign:331   #  -------------------------------------------------------
compare:327-sign:331   # 警告:无盘复制目前正在试验中
compare:327-sign:331   #  -------------------------------------------------------

compare:331-sign:336   # 无法仅通过接收差异来继续复制过程的新副本和重连副本,需要进行所谓的"完全同步":把一个 RDB 文件从主节点传输到副本。传输可以通过两种不同的方式进行:

compare:336-sign:342   # 1) 磁盘备份(disk-backed):Redis 主节点创建一个新进程,把 RDB 文件写到磁盘上。之后父进程把文件增量传输给副本。
compare:336-sign:342   # 2) 无盘(diskless):Redis 主节点创建一个新进程,直接把 RDB 文件写入副本的套接字,完全不接触磁盘。

compare:342-sign:348   # 使用磁盘备份复制时,在 RDB 文件生成期间,更多的副本可以排队,等当前生成 RDB 文件的子进程完成工作后,用同一个 RDB 文件为它们一起服务。而在无盘复制中,一旦传输开始,新到达的副本会排队,当前传输结束后才会开始新的传输。

compare:348-sign:352   # 当使用无盘复制时,主服务器在开始传输之前等待一段可配置的时间(以秒为单位),希望多个副本将到达,并且传输可以并行化。
compare:352-sign:356   # 对于慢速磁盘和快速(大带宽)网络,无磁盘复制工作得更好。
compare:352-sign:356   repl-diskless-sync no


compare:356-sign:360   # 启用无盘复制后,可以配置服务器等待的延迟,以便生成通过套接字将RDB传输到副本的子项。

compare:360-sign:364   # 这一点很重要,因为一旦传输开始,就不可能为到达的新副本提供服务,这些副本将排队等待下一个RDB传输,因此服务器会等待一个延迟,以便让更多副本到达。
compare:364-sign:368   # 延迟以秒为单位指定,默认情况下为5秒。要完全禁用它,只需将其设置为0秒,传输将尽快开始。
compare:364-sign:368   repl-diskless-sync-delay 5


compare:368-sign:372   # 副本以预定义的间隔向服务器发送ping。可以使用repl_ping_replica_period选项更改此间隔。默认值为10秒。
compare:372-sign:374   # repl-ping-replica-period 10

compare:374-sign:376   # 以下选项设置的复制超时:

compare:376-sign:380   #  1) Bulk transfer I/O during SYNC, from the point of view of replica.
compare:376-sign:380   # 2) 从副本的角度来看的主节点超时(数据、ping)。
compare:376-sign:380   # 3) 从主节点的角度来看的副本超时(REPLCONF ACK ping)。

compare:380-sign:384   # 必须确保此值大于为 repl-ping-replica-period 指定的值,否则每当主从之间流量较低时都会检测到超时。
compare:384-sign:386   # repl-timeout 60

compare:386-sign:388   # 同步之后是否在副本套接字上禁用 TCP_NODELAY?

compare:388-sign:393   # 如果选择“是”,Redis将使用较少的TCP数据包和较少的带宽向副本发送数据。但这会增加数据在副本端出现的延迟,对于使用默认配置的Linux内核,延迟时间可达40毫秒。

compare:393-sign:396   # 如果选择“否”,数据出现在副本端的延迟将减少,但复制将使用更多带宽。
compare:396-sign:401   # 默认情况下,我们会针对低延迟进行优化,但在流量非常大的情况下,或者当主副本和副本之间的跳数很多时,将其设置为“是”可能是一个好主意。
compare:396-sign:401   repl-disable-tcp-nodelay no


compare:401-sign:407   # 设置复制积压大小。backlog是一个缓冲区,当副本断开连接一段时间后,它会累积副本数据,因此当副本想要重新连接时,通常不需要完全重新同步,但是部分重新同步就足够了,只需传递副本在断开连接时丢失的数据部分。

compare:407-sign:410   # 复制backlog越大,复制副本断开连接的时间就越长,以后就可以执行部分重新同步。

compare:410-sign:412   # 只有当至少连接了一个副本时,才会分配backlog。
compare:412-sign:414   # repl-backlog-size 1mb

compare:414-sign:419   # 在主服务器有一段时间没有连接副本后,将释放backlog。下面的选项配置从最后一个副本断开连接开始释放backlog缓冲区所需的秒数。

compare:419-sign:423   # 请注意,副本永远不会因超时而释放 backlog,因为它们以后可能被提升为主节点,并且应当能够与其他副本正确地进行"部分重同步":因此它们应当始终保留 backlog。

compare:423-sign:425   # 值为0意味着永远不会释放backlog。
compare:425-sign:427   # repl-backlog-ttl 3600

compare:427-sign:431   # 副本优先级是Redis在信息输出中发布的整数。Redis Sentinel使用它来选择复制副本,以便在主副本不再正常工作时升级为主副本。

compare:431-sign:435   # 优先级较低的副本被认为更适合升级,因此,例如,如果有三个优先级为10、100、25的副本,哨兵将选择优先级为10的副本,即优先级最低的副本。

compare:435-sign:439   # 但是,0的特殊优先级会将副本标记为无法执行master角色,因此Redis Sentinel将永远不会选择优先级为0的副本进行升级。
compare:439-sign:442   # 默认情况下,优先级为100。
compare:439-sign:442   replica-priority 100


compare:442-sign:445   # 如果连接的副本少于N个,且延迟小于或等于M秒,则主服务器可以停止接受写入。

compare:445-sign:447   # N个副本需要处于“联机”状态。

compare:447-sign:450   # 以秒为单位的延迟(必须<=指定值)是根据从副本接收的最后一次ping(通常每秒发送一次)计算得出的。

compare:450-sign:454   # 此选项不保证N个副本将接受写操作,但会将丢失写操作的曝光时间限制在指定的秒数(如果没有足够的副本可用)。

compare:454-sign:456   # 例如,需要至少3个滞后时间小于等于10秒的副本,请使用:

compare:456-sign:459   # min-replicas-to-write 3
compare:456-sign:459   # min-replicas-max-lag 10

compare:459-sign:461   # 将其中一个设置为0将禁用该功能。
compare:461-sign:464   # 默认情况下 min-replicas-to-write 为 0(功能禁用),min-replicas-max-lag 为 10。
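The min-replicas check described above can be sketched as follows. This is an illustrative model (function and parameter names are mine, not Redis internals): a write is accepted only when enough replicas are within the allowed lag.

```python
def accept_writes(replica_lags, min_to_write=3, max_lag=10):
    # A replica counts as "good" when its last-ping lag is within max_lag seconds.
    good = sum(1 for lag in replica_lags if lag <= max_lag)
    return good >= min_to_write

print(accept_writes([1, 2, 3]))    # True: three replicas within 10s of lag
print(accept_writes([1, 2, 30]))   # False: only two good replicas
```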

compare:464-sign:471   # Redis 主节点能够以不同方式列出所连接副本的地址和端口。例如 'INFO replication' 部分提供了此信息,Redis Sentinel 等工具用它来发现副本实例。另一个能获得此信息的地方是主节点的 'ROLE' 命令的输出。

compare:471-sign:474   # 复制副本通常报告的所列IP和地址按以下方式获取:

compare:474-sign:477   # IP:通过检查复制副本用于连接主服务器的套接字的对等地址,自动检测该地址。

compare:477-sign:481   # 端口:该端口在复制握手期间由复制副本通信,通常是复制副本用于侦听连接的端口。

compare:481-sign:487   # 然而,当使用端口转发或网络地址转换(NAT)时,副本实际上可以通过不同的IP和端口对访问。复制副本可以使用以下两个选项,以便向其主机报告一组特定的IP和端口,以便信息和角色都将报告这些值。

compare:487-sign:490   # 如果只需要覆盖端口或IP地址,则不需要同时使用这两个选项。
compare:490-sign:493   # replica-announce-ip 5.5.5.5
compare:490-sign:493   # replica-announce-port 1234
compare:493-sign:495   # #################################安全###################################

compare:495-sign:499   # 要求客户端在处理任何其他命令之前发出AUTH<PASSWORD>。在您不信任其他人可以访问运行redis服务器的主机的环境中,这可能很有用。

compare:499-sign:502   # 对于向后兼容性和因为大多数人不需要auth(例如,他们运行自己的服务器),这应该保持注释。

compare:502-sign:506   # 警告:由于 Redis 速度相当快,外部用户在一台好机器上每秒可以尝试多达 15 万个密码。这意味着你应该使用非常强的密码,否则很容易被破解。
compare:506-sign:508   # requirepass foobared

compare:508-sign:510   # 命令重命名。

compare:510-sign:515   # 可以在共享环境中更改危险命令的名称。例如,CONFIG命令可能会被重命名为一些难以猜测的内容,以便它仍然可以用于内部使用的工具,但不能用于一般客户机。

compare:515-sign:517   # 例子:

compare:517-sign:519   # rename-command CONFIG b840fc02d524045429941c15f59e41cb7be6c52

compare:519-sign:522   # 也可以通过将命令重命名为空字符串来完全终止命令:

compare:522-sign:524   # rename-command CONFIG ""
compare:524-sign:527   # 请注意,更改会被记录到 AOF 文件或传输到副本的命令的名称,可能会导致问题。
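A quick way to produce a hard-to-guess replacement name for a command like CONFIG (this snippet is my own illustration, not part of Redis):

```python
import hashlib
import os

# Hash 32 random bytes to get an unguessable 40-character hex name.
obscure = hashlib.sha1(os.urandom(32)).hexdigest()
print("rename-command CONFIG " + obscure)
```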
compare:527-sign:529   # ##################################客户####################################

compare:529-sign:535   # 设置同时连接的最大客户端数。默认情况下,此限制设置为10000个客户端,但是如果Redis服务器无法将进程文件限制配置为允许指定的限制,则最大允许客户端数将设置为当前文件限制减去32(因为Redis保留一些文件描述符供内部使用)。

compare:535-sign:538   # 一旦达到限制,Redis将关闭所有新连接,并发送一个错误“max number of clients reached”。
compare:538-sign:540   # maxclients 10000
compare:540-sign:542   # #############################内存管理################################

compare:542-sign:546   # 将内存使用限制设置为指定的字节数。当达到内存限制时,Redis将根据所选的逐出策略(请参阅maxmemory策略)尝试删除密钥。

compare:546-sign:551   # 如果Redis无法根据策略删除密钥,或者如果策略设置为“noeviction”,Redis将开始对使用更多内存的命令(如set、LPUSH等)进行错误应答,并继续回复GET等只读命令。

compare:551-sign:554   # 当将Redis用作LRU或LFU缓存或设置实例的硬内存限制(使用“noeviction”策略)时,此选项通常很有用。

compare:554-sign:561   # 警告:如果将副本附加到启用maxmemory的实例,则会减去提供副本所需的输出缓冲区的大小
compare:554-sign:561   #  from the used memory count, so that network problems / resyncs will
compare:554-sign:561   #  not trigger a loop where keys are evicted, and in turn the output
compare:554-sign:561   #  buffer of replicas is full with DELs of keys evicted triggering the
compare:554-sign:561   #  deletion of more keys, and so forth until the database is completely emptied.

compare:561-sign:565   # 简而言之。。。如果附加了副本,建议您设置maxmemory的下限,以便系统上有一些空闲RAM用于副本输出缓冲区(但如果策略为“noeviction”,则不需要此限制)。
compare:565-sign:567   # maxmemory <bytes>

compare:567-sign:570   # MAXMEMORY POLICY:达到 maxmemory 时,Redis 将如何选择要删除的内容。您可以在以下行为中进行选择:

compare:570-sign:579   #  volatile-lru -> Evict using approximated LRU among the keys with an expire set.
compare:570-sign:579   #  allkeys-lru -> Evict any key using approximated LRU.
compare:570-sign:579   #  volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
compare:570-sign:579   #  allkeys-lfu -> Evict any key using approximated LFU.
compare:570-sign:579   #  volatile-random -> Remove a random key among the ones with an expire set.
compare:570-sign:579   #  allkeys-random -> Remove a random key, any key.
compare:570-sign:579   #  volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
compare:570-sign:579   #  noeviction -> Don't evict anything, just return an error on write operations.

compare:579-sign:582   # LRU 表示最近最少使用(Least Recently Used),LFU 表示最不经常使用(Least Frequently Used)。

compare:582-sign:585   # LRU、LFU和volatile-ttl都是用近似随机算法实现的。

compare:585-sign:588   # 注意:使用上述任何策略,当没有合适的密钥可收回时,Redis将在写操作时返回错误。

compare:588-sign:594   # 截至撰写时,这些命令是:set setnx setex append incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby getset mset msetnx exec sort

compare:594-sign:596   # 默认值为:
compare:596-sign:598   # maxmemory-policy noeviction

compare:598-sign:604   # LRU、LFU和最小TTL算法不是精确算法,而是近似算法(为了节省内存),因此您可以调整它的速度或精度。默认情况下,Redis将检查五个键并选择最近使用较少的键,您可以使用以下配置指令更改样本大小。

compare:604-sign:607   # 默认值5产生足够好的结果。10非常接近真实的LRU,但需要更多的CPU。3更快,但不太准确。
compare:607-sign:609   # maxmemory-samples 5
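The sampling idea behind approximated LRU can be sketched as below. This is a simplified illustration (names are mine): instead of scanning every key, only a few randomly sampled keys are compared and the least recently used among them is evicted.

```python
import random

def pick_eviction_victim(last_access: dict, maxmemory_samples: int = 5):
    """Sample some keys and return the one with the oldest access time."""
    sample = random.sample(list(last_access),
                           min(maxmemory_samples, len(last_access)))
    return min(sample, key=last_access.get)  # least recently used in the sample

# With a sample size covering all keys, the result is the true LRU key.
clock = {"a": 100, "b": 5, "c": 50}
print(pick_eviction_victim(clock, maxmemory_samples=3))  # b
```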

compare:609-sign:614   # 从Redis 5开始,默认情况下,复制副本将忽略其maxmemory设置(除非在故障转移后升级为master或手动)。这意味着密钥的收回将由主服务器处理,将DEL命令作为主服务器端的密钥收回发送到副本。

compare:614-sign:620   # 此行为确保主副本和副本保持一致,并且通常是您所希望的,但是,如果您的副本是可写的,或者您希望副本具有不同的内存设置,并且您确定对副本执行的所有写入都是等幂的,则可以更改此默认值(但请确保了解您正在执行的操作)。

compare:620-sign:627   # 请注意,由于默认情况下副本不会退出,因此它可能会使用比通过maxmemory设置的内存更多的内存(副本上可能有某些缓冲区更大,或者数据结构有时可能占用更多内存等等)。因此,请确保监视复制副本,并确保它们有足够的内存,在主副本达到配置的maxmemory设置之前,不会出现真正的内存不足情况。
compare:627-sign:629   # replica-ignore-maxmemory yes
compare:629-sign:631   # ############################ LAZY FREEING ####################################

compare:631-sign:640   # Redis有两个原语用于删除键。一个称为DEL,是对对象的阻塞删除。这意味着服务器停止处理新命令,以便以同步方式回收与对象关联的所有内存。如果删除的键与一个小对象相关联,则执行DEL命令所需的时间非常短,与Redis中的大多数其他O(1)或O(log_N)命令相当。但是,如果密钥与包含数百万个元素的聚合值关联,则服务器可以阻塞很长时间(甚至几秒钟)以完成操作。

compare:640-sign:646   # 出于上述原因,Redis还提供了UNLINK(non-blocking DEL)等非阻塞删除原语以及FLUSHALL和FLUSHDB命令的异步选项,以便在后台回收内存。这些命令在固定时间内执行。另一个线程将尽可能快地增量释放背景中的对象。

compare:646-sign:653   # FLUSHALL和FLUSHDB的DEL、UNLINK和ASYNC选项由用户控制。这取决于应用程序的设计,以了解何时使用一个或另一个是一个好主意。然而,Redis服务器有时不得不删除密钥或刷新整个数据库,这是其他操作的副作用。具体来说,在以下情况下,Redis独立于用户调用删除对象:

compare:653-sign:668   # 1) 逐出时:由于 maxmemory 和 maxmemory-policy 配置,为了在不超过指定内存限制的前提下给新数据腾出空间。
compare:653-sign:668   # 2) 过期时:带有生存时间的键(参见 EXPIRE 命令)到期后必须从内存中删除。
compare:653-sign:668   # 3) 某些命令的副作用:这些命令可能在已存在的键上存储数据。例如 RENAME 命令在用新内容替换旧键时可能删除旧内容;类似地,SUNIONSTORE 或带 STORE 选项的 SORT 可能删除已存在的键;SET 命令本身也会删除指定键的任何旧内容,以便用指定字符串替换。
compare:653-sign:668   # 4) 复制过程中:当副本与主节点执行完全重同步时,整个数据库的内容会被删除,以便加载刚传输过来的 RDB 文件。
compare:668-sign:673   # 在上述所有情况下,默认情况是以阻塞的方式删除对象,就像调用DEL一样。但是,您可以使用以下配置指令,具体配置每种情况,以非阻塞方式释放内存,如调用UNLINK:
compare:673-sign:678   lazyfree-lazy-eviction no

lazyfree-lazy-expire no

lazyfree-lazy-server-del no

replica-lazy-flush no

compare:678-sign:680   # #############################仅附加模式###############################

compare:680-sign:685   # 默认情况下,Redis在磁盘上异步转储数据集。这种模式在许多应用程序中已经足够好了,但是Redis进程的问题或断电可能会导致几分钟的写操作丢失(取决于配置的保存点)。

compare:685-sign:692   # Append Only文件是一种替代的持久性模式,它提供了更好的持久性。例如,使用默认的数据fsync策略(请参阅配置文件的后面部分)Redis在服务器断电之类的剧烈事件中可能会丢失一秒钟的写入,或者在Redis进程本身出现问题时丢失一次写入,但操作系统仍在正常运行。

compare:692-sign:696   # 可以同时启用AOF和RDB持久性,而不会出现问题。如果在启动时启用了AOF,Redis将加载AOF,即具有更好的持久性保证的文件。
compare:696-sign:698   #  Please check http://redis.io/topics/persistence for more information.
compare:698-sign:700   appendonly no

compare:700-sign:702   # 仅追加文件的名称(默认:"appendonly.aof")
compare:702-sign:704   appendfilename "appendonly.aof"


compare:704-sign:708   # fsync()调用告诉操作系统实际在磁盘上写入数据,而不是在输出缓冲区中等待更多数据。一些操作系统将真正刷新磁盘上的数据,一些其他操作系统将尝试尽快这样做。

compare:708-sign:710   # Redis支持三种不同的模式:

compare:710-sign:714   # no:不调用 fsync,只让操作系统在它想要的时候刷新数据。最快。
compare:710-sign:714   # always:每次写入仅追加日志后都调用 fsync。最慢,最安全。
compare:710-sign:714   # everysec:每秒调用一次 fsync。折中方案。

compare:714-sign:722   # 默认值是“everysec”,因为这通常是速度和数据安全之间的正确折衷。这取决于您是否可以将其放宽到“否”,这将让操作系统在需要时刷新输出缓冲区,以获得更好的性能(但如果您能够接受某些数据丢失的想法,请考虑快照的默认持久性模式),或者相反,使用“始终”这一速度非常慢,但比everysec安全一些。

compare:722-sign:725   # 更多详情请查看以下文章:
compare:722-sign:725   #  http://antirez.com/post/redis-persistence-demystified.html
compare:725-sign:727   # 如果不确定,使用“everysec”。
compare:727-sign:731   # appendfsync always
compare:727-sign:731   appendfsync everysec
# appendfsync no

compare:731-sign:738   # 当AOF fsync策略设置为always或everysec,并且后台保存过程(后台保存或AOF日志后台重写)为
compare:731-sign:738   #  performing a lot of I/O against the disk, in some Linux configurations
compare:731-sign:738   #  Redis may block too long on the fsync() call. Note that there is no fix for
compare:731-sign:738   #  this currently, as even performing fsync in a different thread will block
compare:731-sign:738   #  our synchronous write(2) call.

compare:738-sign:742   # 为了缓解这个问题,可以使用以下选项,防止在BGSAVE或bgrowriteaof正在进行时在主进程中调用fsync()。

compare:742-sign:747   # 这意味着,当另一个子节点正在保存时,Redis的持久性与“appendfsync none”相同。实际上,这意味着在最坏的情况下(使用默认的Linux设置),可能会丢失长达30秒的日志。
compare:747-sign:750   # 如果您有延迟问题,请将其设置为“是”。否则,从耐久性的角度来看,这是最安全的选择。
compare:750-sign:752   no-appendfsync-on-rewrite no


compare:752-sign:756   # 自动重写仅追加文件。Redis能够在AOF日志大小增长指定百分比时隐式地调用BGREWRITEAOF来自动重写日志文件。

compare:756-sign:760   # 这就是它的工作原理:Redis会记住最近一次重写之后AOF文件的大小(如果自重新启动之后没有发生重写,则使用启动时AOF的大小)。

compare:760-sign:766   # 将此基础大小与当前大小进行比较。如果当前大小大于指定的百分比,则会触发重写。此外,还需要为要重写的AOF文件指定最小大小,这对于避免重写AOF文件很有用,即使达到了百分比增长,但仍然很小。
compare:766-sign:769   # 指定0的百分比以禁用自动AOF重写功能。
compare:769-sign:772   auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb
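The rewrite trigger described above (growth percentage plus minimum size) can be modeled in a few lines. This is an illustrative sketch, not Redis source:

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """Mirror auto-aof-rewrite-percentage / auto-aof-rewrite-min-size logic."""
    if percentage == 0 or current_size < min_size:
        return False                       # feature disabled, or file too small
    growth = (current_size - base_size) * 100 // base_size
    return growth >= percentage

print(should_rewrite_aof(130 * 1024 * 1024, 64 * 1024 * 1024))  # True (~103% growth)
print(should_rewrite_aof(60 * 1024 * 1024, 32 * 1024 * 1024))   # False (below 64mb)
```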


compare:772-sign:779   # 在Redis启动过程中,当AOF数据被加载回内存时,可能会发现AOF文件在末尾被截断。当Redis运行的系统崩溃时,可能会发生这种情况,特别是当ext4文件系统在没有data=ordered选项的情况下挂载时(但是,当Redis本身崩溃或中止但操作系统仍然正常工作时,这种情况不会发生)。

compare:779-sign:783   # Redis可以在发生这种情况时出错退出,或者加载尽可能多的数据(现在是默认值),如果发现AOF文件在结尾处被截断,则可以启动。以下选项控制此行为。

compare:783-sign:790   # 如果 aof-load-truncated 设置为 yes,则会加载被截断的 AOF 文件,并且 Redis 服务器开始发出日志,将此事件通知用户。否则,如果该选项设置为 no,服务器将因错误而中止并拒绝启动。当选项设置为 no 时,用户需要在重启服务器之前使用 'redis-check-aof' 实用程序修复 AOF 文件。
compare:790-sign:796   # 请注意,如果在中间发现AOF文件已损坏,则服务器仍将退出并出现错误。此选项仅适用于Redis试图从AOF文件读取更多数据但找不到足够字节的情况。
compare:790-sign:796   aof-load-truncated yes


compare:796-sign:800   # 当重写AOF文件时,Redis能够在AOF文件中使用RDB前导码,以便更快地重写和恢复。启用此选项时,重写的AOF文件由两个不同的节组成:

compare:800-sign:802   # [RDB文件][AOF tail]
compare:802-sign:807   # 加载时,Redis 识别出 AOF 文件以 "REDIS" 字符串开头,于是加载带前缀的 RDB 文件,然后继续加载 AOF 尾部。
compare:802-sign:807   aof-use-rdb-preamble yes

compare:807-sign:809   # ################################ LUA SCRIPTING ###############################

compare:809-sign:811   # Lua脚本的最长执行时间(毫秒)。

compare:811-sign:815   # 如果达到最大执行时间,Redis将记录脚本在允许的最大时间之后仍在执行中,并将开始答复带有错误的查询。

compare:815-sign:822   # 当长时间运行的脚本超过最大执行时间时,只有 SCRIPT KILL 和 SHUTDOWN NOSAVE 命令可用。前者可用于停止尚未调用写命令的脚本。后者是在脚本已经发出写命令、但用户不想等待脚本自然终止的情况下关闭服务器的唯一方法。
compare:822-sign:825   # 将其设置为0或负值,以便在没有警告的情况下无限执行。
compare:822-sign:825   lua-time-limit 5000

compare:825-sign:827   # ###############################REDIS集群###############################

compare:827-sign:831   # 普通的Redis实例不能是Redis集群的一部分;只有作为集群节点启动的节点才可以。为了将Redis实例作为群集节点启动,请启用群集支持取消注释以下内容:
compare:831-sign:833   # cluster-enabled yes

compare:833-sign:839   # 每个群集节点都有一个群集配置文件。此文件不打算手动编辑。它由Redis节点创建和更新。每个Redis集群节点都需要不同的集群配置文件。请确保在同一系统中运行的实例没有重叠的群集配置文件名。
compare:839-sign:841   # cluster-config-file nodes-6379.conf

compare:841-sign:845   # Cluster node timeout是节点在故障状态下必须不可访问的毫秒数。大多数其他内部时间限制是节点超时的倍数。
compare:845-sign:847   # cluster-node-timeout 15000

compare:847-sign:850   # 如果发生故障的主服务器的数据看起来太旧,则其副本将避免启动故障转移。

compare:850-sign:853   # 对于副本来说,没有一种简单的方法可以精确测量其“数据期限”,因此执行以下两项检查:

compare:853-sign:859   # 1) 如果有多个副本能够进行故障转移,它们会交换消息,尝试让具有最佳复制偏移量(即处理了主节点更多数据)的副本占据优势。副本会尝试按偏移量确定自己的排名,并在故障转移开始时施加与排名成比例的延迟。

compare:859-sign:866   # 2) 每个副本都计算最后一次与其主副本交互的时间。这可以是最后收到的ping或命令(如果主服务器仍处于“已连接”状态),也可以是自与主服务器断开连接以来经过的时间(如果复制链接当前已关闭)。如果上一次交互太旧,则复制副本根本不会尝试故障转移。

compare:866-sign:870   # 点“2”可以由用户调整。特别是,如果自上次与主服务器交互以来,经过的时间大于:

compare:870-sign:872   # (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period

compare:872-sign:877   # 因此,例如,如果节点超时时间为30秒,副本有效性因子为10,并且假设默认的repl ping副本周期为10秒,则如果副本无法与主副本通信超过310秒,则副本将不会尝试故障转移。
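The 310-second figure follows directly from the formula. A quick check (variable names are mine, mirroring the config directives):

```python
# Worked example of the replica validity window.
node_timeout = 30                       # seconds (cluster-node-timeout / 1000)
cluster_replica_validity_factor = 10
repl_ping_replica_period = 10           # seconds

max_disconnect = (node_timeout * cluster_replica_validity_factor
                  + repl_ping_replica_period)
print(max_disconnect)  # 310: beyond this, the replica won't try a failover
```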

compare:877-sign:881   # 较大的复制副本有效性因子可能允许具有太旧数据的复制副本故障转移主服务器,而太小的值可能会阻止群集完全选择复制副本。

compare:881-sign:887   # 为了获得最大的可用性,可以将副本有效性因子设置为0,这意味着,无论副本上次与主副本交互的时间如何,副本都将始终尝试故障转移主副本。(然而,他们总是试图应用一个与其偏移秩成比例的延迟)。

compare:887-sign:890   # 零是唯一能够保证当所有分区恢复时群集始终能够继续的值。
compare:890-sign:892   # cluster-replica-validity-factor 10

compare:892-sign:897   # 群集副本能够迁移到孤立的主节点,这些主节点是没有工作副本的主节点。这提高了群集抵抗故障的能力,否则,如果孤立主服务器没有工作副本,则在发生故障时无法进行故障转移。

compare:897-sign:904   # 只有当旧主服务器仍有至少给定数量的其他工作副本时,副本才会迁移到孤立主服务器。这个数字就是“移民壁垒”。迁移屏障为1意味着只有当主副本至少有一个其他工作副本时,副本才会迁移,以此类推。它通常反映集群中每个主节点所需的副本数量。

compare:904-sign:909   # 默认值为1(副本仅在其主副本至少保留一个副本时迁移)。要禁用迁移,只需将其设置为一个非常大的值。可以设置值0,但该值仅用于调试和生产中的危险。
compare:909-sign:911   # cluster-migration-barrier 1

compare:911-sign:917   # 默认情况下,如果Redis集群节点检测到至少有一个未覆盖的哈希槽(没有可用的节点为其提供服务),那么它们将停止接受查询。这样,如果集群部分关闭(例如不再覆盖一系列散列槽),那么所有集群最终都将不可用。一旦所有插槽都被覆盖,它就会自动返回可用。

compare:917-sign:922   # 但是,有时您希望正在工作的集群的子集继续接受对仍然覆盖的部分密钥空间的查询。为此,只需将cluster require full coverage选项设置为no。
compare:922-sign:924   # cluster-require-full-coverage yes

compare:924-sign:928   # 当设置为“是”时,此选项可防止复制副本在主服务器故障期间尝试故障转移其主服务器。但是,如果强制执行手动故障转移,主服务器仍然可以执行手动故障转移。

compare:928-sign:932   # 这在不同的情况下非常有用,特别是在多个数据中心操作的情况下,如果在整个DC故障的情况下不希望一方被提升,我们希望它永远不会被提升。
compare:932-sign:934   # cluster-replica-no-failover no
compare:934-sign:937   # 要设置集群,请确保阅读文档
compare:934-sign:937   #  available at http://redis.io web site.
compare:937-sign:939   # ######################### CLUSTER DOCKER/NAT support  ########################

compare:939-sign:943   # 在某些部署中,Redis集群节点地址发现失败,因为地址是NAT-ted或端口是转发的(典型的情况是Docker和其他容器)。

compare:943-sign:947   # 为了使Redis集群在这样的环境中工作,需要一个静态配置,其中每个节点都知道其公共地址。以下两个选项用于此作用域,分别是:

compare:947-sign:951   # * cluster-announce-ip
compare:947-sign:951   # * cluster-announce-port
compare:947-sign:951   # * cluster-announce-bus-port

compare:951-sign:956   # 每个节点都指示其地址、客户端端口和群集消息总线端口。然后,信息被发布在总线数据包的报头中,以便其他节点能够正确地映射发布信息的节点的地址。

compare:956-sign:959   # 如果不使用上述选项,将使用普通的Redis集群自动检测。

compare:959-sign:964   # 请注意,重新映射时,总线端口可能不在客户端端口+10000的固定偏移量处,因此可以根据重新映射的方式指定任何端口和总线端口。如果未设置总线端口,则通常使用10000的固定偏移量。

compare:964-sign:966   # 例子:
compare:966-sign:970   # cluster-announce-ip 10.1.1.5
compare:966-sign:970   # cluster-announce-port 6379
compare:966-sign:970   # cluster-announce-bus-port 6380
compare:970-sign:972   # #################################慢日志###################################

compare:972-sign:979   # Redis Slow Log是一个系统到日志的查询,它超过了指定的
compare:972-sign:979   #  execution time. The execution time does not include the I/O operations
compare:972-sign:979   #  like talking with the client, sending the reply and so forth, but just
compare:972-sign:979   #  the time needed to actually execute the command (this is the only stage
compare:972-sign:979   #  of command execution where the thread is blocked and can not serve
compare:972-sign:979   #  other requests in the meantime).
compare:979-sign:985   # 您可以使用两个参数配置慢日志:一个参数告诉Redis要超过多少执行时间(微秒),以便记录命令;另一个参数是慢日志的长度。记录新命令时,最旧的命令将从记录的命令队列中删除。
compare:985-sign:990   # 以下时间以微秒表示,因此1000000等于1秒。请注意,负数将禁用慢日志,而值为零将强制记录每个命令。
compare:985-sign:990   slowlog-log-slower-than 10000

compare:990-sign:994   # 这个长度没有限制。只是要注意它会消耗内存。您可以使用slow log RESET回收慢日志使用的内存。
compare:990-sign:994   slowlog-max-len 128
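The two slowlog parameters combine as sketched below (my own illustration of the semantics, not Redis internals): entries slower than the threshold go into a queue capped at slowlog-max-len, oldest dropped first.

```python
from collections import deque

slowlog_log_slower_than = 10_000    # microseconds; negative would disable
slowlog = deque(maxlen=128)         # slowlog-max-len: oldest entries evicted

def record(command: str, duration_us: int) -> None:
    if slowlog_log_slower_than >= 0 and duration_us >= slowlog_log_slower_than:
        slowlog.append((command, duration_us))

record("GET fast", 50)          # too fast, not logged
record("KEYS *", 250_000)       # slow, logged
print(list(slowlog))  # [('KEYS *', 250000)]
```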

compare:994-sign:996   # ###############################延迟监视器##############################

compare:996-sign:1000   # Redis延迟监控子系统在运行时对不同的操作进行采样,以便收集与Redis实例的可能延迟源相关的数据。

compare:1000-sign:1003   # 通过延迟命令,用户可以使用此信息打印图表和获取报告。

compare:1003-sign:1008   # 系统只记录执行时间等于或超过 latency-monitor-threshold 配置指令指定的毫秒数的操作。当该值设为 0 时,延迟监控关闭。
compare:1008-sign:1015   # 默认情况下延迟监控是禁用的,因为如果没有延迟问题,通常并不需要它,而且收集数据会对性能产生影响,虽然影响很小,但在大负载下可以测量出来。如果需要,可以在运行时使用命令 "CONFIG SET latency-monitor-threshold <milliseconds>" 轻松启用延迟监控。
compare:1008-sign:1015   latency-monitor-threshold 0

compare:1015-sign:1017   # ############################事件通知##############################

compare:1017-sign:1020   #  Redis can notify Pub/Sub clients about events happening in the key space.
compare:1017-sign:1020   #  This feature is documented at http://redis.io/topics/notifications

compare:1020-sign:1024   # 例如,如果启用了keyspace事件通知,并且客户端对存储在数据库0中的键“foo”执行DEL操作,则
compare:1020-sign:1024   #  messages will be published via Pub/Sub:

compare:1024-sign:1027   # PUBLISH __keyspace@0__:foo del
compare:1024-sign:1027   # PUBLISH __keyevent@0__:del foo

compare:1027-sign:1030   # 可以在一组类中选择Redis将通知的事件。每个类都由一个字符标识:

compare:1030-sign:1042   #  K     Keyspace 事件,以 __keyspace@<db>__ 前缀发布。
compare:1030-sign:1042   #  E     Keyevent 事件,以 __keyevent@<db>__ 前缀发布。
compare:1030-sign:1042   #  g     通用命令(非特定类型),如 DEL、EXPIRE、RENAME……
compare:1030-sign:1042   #  $     字符串命令
compare:1030-sign:1042   #  l     列表命令
compare:1030-sign:1042   #  s     集合命令
compare:1030-sign:1042   #  h     哈希命令
compare:1030-sign:1042   #  z     有序集合命令
compare:1030-sign:1042   #  x     过期事件(每次键过期时生成的事件)
compare:1030-sign:1042   #  e     逐出事件(键因 maxmemory 被逐出时生成的事件)
compare:1030-sign:1042   #  A     g$lshzxe 的别名,因此 "AKE" 字符串表示所有事件。

compare:1042-sign:1046   # “notify keyspace events”将零个或多个字符组成的字符串作为参数。空字符串表示通知被禁用。

compare:1046-sign:1049   # 示例:要启用列表和一般事件,从事件名称的角度来看,请使用:

compare:1049-sign:1051   # notify-keyspace-events Elg

compare:1051-sign:1054   # 示例 2:要获取过期键的事件流,订阅频道名 __keyevent@0__:expired,使用:

compare:1054-sign:1056   # notify-keyspace-events Ex
compare:1056-sign:1061   # 默认情况下,所有通知都被禁用,因为大多数用户不需要此功能,而且此功能有一些开销。请注意,如果未指定K或E中的至少一个,则不会传递任何事件。
compare:1056-sign:1061   notify-keyspace-events ""
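How the class characters combine, including the `A` alias for `g$lshzxe`, can be shown with a tiny helper (my own illustration, not Redis code):

```python
def expand_flags(flags: str) -> set:
    """Expand a notify-keyspace-events flag string into its event classes."""
    classes = set()
    for ch in flags:
        # 'A' is an alias for all the type classes g$lshzxe.
        classes.update("g$lshzxe" if ch == "A" else ch)
    return classes

print(sorted(expand_flags("AKE")))  # every class enabled
print(sorted(expand_flags("Ex")))   # keyevent channel + expired events
```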

compare:1061-sign:1063   # ##############################高级配置###############################
compare:1063-sign:1069   # 当哈希有少量的条目,并且最大的条目不超过给定的阈值时,使用内存高效的数据结构对其进行编码。可以使用以下指令配置这些阈值。
compare:1063-sign:1069   hash-max-ziplist-entries 512

hash-max-ziplist-value 64

compare:1069-sign:1084   # 列表也以一种特殊的方式进行编码,以节省大量空间。每个内部列表节点允许的条目数可以指定为固定的最大大小或最大元素数。对于固定的最大大小,请使用-5到-1,这意味着:
compare:1069-sign:1084   #  -5: max size: 64 Kb  <-- not recommended for normal workloads
compare:1069-sign:1084   #  -4: max size: 32 Kb  <-- not recommended
compare:1069-sign:1084   #  -3: max size: 16 Kb  <-- probably not recommended
compare:1069-sign:1084   #  -2: max size: 8 Kb   <-- good
compare:1069-sign:1084   #  -1: max size: 4 Kb   <-- good
compare:1069-sign:1084   #  Positive numbers mean store up to _exactly_ that number of elements
compare:1069-sign:1084   #  per list node.
compare:1069-sign:1084   list-max-ziplist-size -2

compare:1084-sign:1100   # 列表也可以压缩。Compress depth是从列表的*每*侧到*从压缩中排除*的快速列表ziplist节点数。名单的头尾
compare:1084-sign:1100   #  are always uncompressed for fast push/pop operations.  Settings are:
compare:1084-sign:1100   #  0: disable all list compression
compare:1084-sign:1100   #  1: depth 1 means "don't start compressing until after 1 node into the list,
compare:1084-sign:1100   #     going from either the head or tail"
compare:1084-sign:1100   #     So: [head]->node->node->...->node->[tail]
compare:1084-sign:1100   #     [head], [tail] will always be uncompressed; inner nodes will compress.
compare:1084-sign:1100   #  2: [head]->[next]->node->node->...->node->[prev]->[tail]
compare:1084-sign:1100   #     2 here means: don't compress head or head->next or tail->prev or tail,
compare:1084-sign:1100   # 但是压缩它们之间的所有节点。
compare:1084-sign:1100   #  3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
compare:1084-sign:1100   #     etc.
compare:1084-sign:1100   list-compress-depth 0

compare:1100-sign:1107   # 集合只在一种情况下有特殊编码:当集合仅由恰好是 10 进制整数、且处于 64 位有符号整数范围内的字符串组成时。以下配置设置了为使用这种特殊的内存节省编码,集合大小的上限。
compare:1100-sign:1107   set-max-intset-entries 512

compare:1107-sign:1113   # 与散列和列表类似,经过排序的集也经过特殊编码,以节省大量空间。此编码仅在排序集的长度和元素低于以下限制时使用:
compare:1107-sign:1113   zset-max-ziplist-entries 128

zset-max-ziplist-value 64


compare:1113-sign:1117   # 超日志稀疏表示字节数限制。限制包括16字节的头。当使用稀疏表示的超日志超过此限制时,它将转换为密集表示。

compare:1117-sign:1120   # 大于16000的值是完全无用的,因为在这一点上,密集表示更节省内存。
compare:1120-sign:1127   # 建议值为~3000,以便在不减慢过多PFADD(稀疏编码为O(N))的情况下获得空间效率编码的优点。如果不考虑CPU,但考虑空间,并且数据集由许多基数在0-15000范围内的超日志组成,则该值可以提高到~10000。
compare:1120-sign:1127   hll-sparse-max-bytes 3000

compare:1127-sign:1138   #  Streams macro node max size / items. The stream data structure is a radix
compare:1127-sign:1138   # 树,由编码内部多个条目的大节点组成。使用此配置,可以设置单个节点的大小上限(字节),以及在追加新的流条目时、切换到新节点之前单个节点可容纳的最大条目数。如果以下任一设置被设为零,则忽略该限制,因此例如可以通过把 max-bytes 设为 0、把 max-entries 设为所需值,来只设置条目数上限。
compare:1127-sign:1138   stream-node-max-bytes 4096

stream-node-max-entries 100


compare:1138-sign:1146   # 主动重哈希(active rehashing)每 100 毫秒 CPU 时间使用 1 毫秒,来帮助对主 Redis 哈希表(将顶层键映射到值的表)进行重哈希。Redis 使用的哈希表实现(见 dict.c)执行惰性重哈希:对正在重哈希的哈希表执行的操作越多,完成的重哈希"步骤"就越多,因此如果服务器空闲,重哈希永远不会完成,哈希表会占用更多内存。

compare:1146-sign:1149   # 默认情况下,每秒触发 10 次这种主动重哈希,以便主动对主字典进行重哈希,尽可能释放内存。

compare:1149-sign:1154   # 如果不确定:如果你有硬性的延迟要求,且环境中 Redis 偶尔以 2 毫秒的延迟回复查询是不可接受的,请使用 'activerehashing no'。
compare:1154-sign:1158   # 如果您没有这么高的要求,但希望在可能的情况下尽快释放内存,请使用“activerehashing yes”。
compare:1154-sign:1158   activerehashing yes


compare:1158-sign:1163   # 客户端输出缓冲区限制可用于强制断开由于某些原因没有足够快地从服务器读取数据的客户端(a
compare:1158-sign:1163   #  common reason is that a Pub/Sub client can't consume messages as fast as the
compare:1158-sign:1163   #  publisher can produce them).

compare:1163-sign:1165   # 对于三种不同类型的客户机,可以设置不同的限制:

compare:1165-sign:1169   #  normal -> normal clients including MONITOR clients
compare:1165-sign:1169   #  replica  -> replica clients
compare:1165-sign:1169   #  pubsub -> clients subscribed to at least one pubsub channel or pattern

compare:1169-sign:1171   # 每个客户机输出缓冲区限制指令的语法如下:

compare:1171-sign:1173   # 客户端输出缓冲区限制<class><hard limit><soft limit><soft seconds>

compare:1173-sign:1182   # 一旦达到硬限制,或达到软限制并保持达到指定秒数(连续)时,客户端将立即断开连接。例如,如果硬限制是32兆字节,而软限制是
compare:1173-sign:1182   #  16 megabytes / 10 seconds, the client will get disconnected immediately
compare:1173-sign:1182   #  if the size of the output buffers reach 32 megabytes, but will also get
compare:1173-sign:1182   #  disconnected if the client reaches 16 megabytes and continuously overcomes
compare:1173-sign:1182   #  the limit for 10 seconds.

compare:1182-sign:1187   # 默认情况下,普通客户机不受限制,因为它们不会在没有请求(以推送方式)的情况下接收数据,而是在请求之后才接收数据,因此只有异步客户机可能会创建这样一个场景:请求数据的速度比读取数据的速度快。

compare:1187-sign:1190   # 相反,pubsub和replica客户机有一个默认限制,因为订阅服务器和副本以推送方式接收数据。
compare:1190-sign:1195   # 硬限制或软限制都可以通过将其设置为零来禁用。
compare:1190-sign:1195   client-output-buffer-limit normal 0 0 0

client-output-buffer-limit replica 256mb 64mb 60

client-output-buffer-limit pubsub 32mb 8mb 60
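The hard/soft limit semantics can be sketched as follows, using the 32mb/16mb/10s example from the text (an illustrative model with names of my choosing, not Redis internals):

```python
def should_disconnect(buf_bytes, over_soft_seconds,
                      hard=32 * 1024 * 1024, soft=16 * 1024 * 1024,
                      soft_seconds=10):
    if hard and buf_bytes >= hard:
        return True     # hard limit: disconnect immediately
    if soft and buf_bytes >= soft and over_soft_seconds >= soft_seconds:
        return True     # soft limit held continuously for long enough
    return False        # a limit of 0 disables that check

print(should_disconnect(33 * 1024 * 1024, 0))   # True: over the hard limit
print(should_disconnect(17 * 1024 * 1024, 5))   # False: soft limit, only 5s
print(should_disconnect(17 * 1024 * 1024, 10))  # True: soft limit for 10s
```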


compare:1195-sign:1201   # 客户端查询缓冲区累积新命令。默认情况下,它们被限制在一个固定的数量,以避免协议取消同步(例如,由于客户端的错误)将导致查询缓冲区中的未绑定内存使用。但是,如果您有非常特殊的
compare:1195-sign:1201   #  needs, such as huge multi/exec requests or alike, you can configure it here.
compare:1201-sign:1203   # client-query-buffer-limit 1gb

compare:1203-sign:1207   # 在Redis协议中,批量请求(即表示单个字符串的元素)通常被限制为512mb。不过,您可以在这里更改此限制。
compare:1207-sign:1209   # proto-max-bulk-len 512mb

compare:1209-sign:1213   # Redis calls an internal function to perform many background tasks, like closing connections of clients in timeout, purging expired keys that are never requested, and so forth.

compare:1213-sign:1216   # Not all tasks are performed with the same frequency, but Redis checks for tasks to perform according to the specified "hz" value.

compare:1216-sign:1221   # By default "hz" is set to 10. Raising the value will use more CPU when Redis is idle, but at the same time will make Redis more responsive when there are many keys expiring at the same time, and timeouts may be handled with more precision.
compare:1221-sign:1226   # The range is between 1 and 500, however a value over 100 is usually not a good idea. Most users should use the default of 10 and raise this up to 100 only in environments where very low latency is required.
compare:1221-sign:1226   hz 10


compare:1226-sign:1231   # Normally it is useful to have an HZ value which is proportional to the number of connected clients. This is useful in order, for instance, to avoid too many clients being processed for each background task invocation, which would cause latency spikes.

compare:1231-sign:1235   # Since the default HZ value is conservatively set to 10, Redis offers, and enables by default, an adaptive HZ value which is temporarily raised when there are many connected clients.
compare:1235-sign:1242   # When dynamic HZ is enabled, the actual configured HZ is used as a baseline, but multiples of the configured HZ value are actually used as needed once more clients are connected. In this way an idle instance uses very little CPU time while a busy instance is more responsive.
compare:1235-sign:1242   dynamic-hz yes
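One plausible way to picture the adaptive behavior (an illustration only: the function name `effective_hz` and the 200-clients-per-tick threshold are assumptions, not the exact serverCron heuristic):

```python
def effective_hz(configured_hz, num_clients, clients_per_tick=200, max_hz=500):
    """Raise hz so each background-task tick handles a bounded number of
    clients; a sketch of the dynamic-hz idea, not the real serverCron code."""
    hz = configured_hz
    while num_clients / hz > clients_per_tick and hz < max_hz:
        hz *= 2   # temporarily double the tick rate under client pressure
    return min(hz, max_hz)

print(effective_hz(10, 100))    # 10: an idle instance keeps the baseline
print(effective_hz(10, 10000))  # 80: a busy instance ticks more often
```

This matches the description above: the configured value is a baseline, and a multiple of it is used while many clients are connected, capped by the maximum allowed hz.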

compare:1242-sign:1248   # When a child rewrites the AOF file, if the following option is enabled the file will be fsync-ed every 32 MB of data generated. This is useful in order to commit the file to the disk more incrementally and avoid big latency spikes.
compare:1242-sign:1248   aof-rewrite-incremental-fsync yes

compare:1248-sign:1254   # When redis saves RDB file, if the following option is enabled the file will be fsync-ed every 32 MB of data generated. This is useful in order to commit the file to the disk more incrementally and avoid big latency spikes.
compare:1248-sign:1254   rdb-save-incremental-fsync yes
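The incremental-fsync idea behind both options can be sketched as follows (a simplified illustration; the function and the chunked writer are hypothetical, the real logic lives inside Redis's I/O layer):

```python
import os
import tempfile

SYNC_EVERY = 32 * 1024 * 1024  # fsync once per 32 MB generated, as described above

def write_with_incremental_fsync(fd, chunks, sync_every=SYNC_EVERY):
    """Write chunks, fsync-ing every `sync_every` bytes instead of only at the
    end, so dirty pages reach disk gradually and no single flush is huge."""
    unsynced = 0
    for chunk in chunks:
        os.write(fd, chunk)
        unsynced += len(chunk)
        if unsynced >= sync_every:
            os.fsync(fd)   # commit this slice to disk now
            unsynced = 0
    os.fsync(fd)           # final flush covers the remainder

# Tiny demo using a 1 KB interval instead of 32 MB:
with tempfile.NamedTemporaryFile(delete=False) as f:
    write_with_incremental_fsync(f.fileno(), [b"x" * 600] * 4, sync_every=1024)
    path = f.name
print(os.path.getsize(path))  # 2400 bytes written
os.remove(path)
```

Without the periodic fsync, the kernel could accumulate the whole file in the page cache and flush it in one burst, which is exactly the latency spike these options avoid.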


compare:1254-sign:1259   # Redis LFU eviction (see the maxmemory setting) can be tuned. However it is a good idea to start with the default settings and only change them after investigating how to improve the performances and how the keys' LFU changes over time, which can be inspected via the OBJECT FREQ command.

compare:1259-sign:1263   # There are two tunable parameters in the Redis LFU implementation: the counter logarithm factor and the counter decay time. It is important to understand what the two parameters mean before changing them.

compare:1263-sign:1268   # The LFU counter is just 8 bits per key, so its maximum value is 255, and Redis uses a probabilistic increment with logarithmic behavior. Given the value of the old counter, when a key is accessed, the counter is incremented in this way:

compare:1268-sign:1272   # 1. A random number R between 0 and 1 is extracted.
compare:1268-sign:1272   #  2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
compare:1268-sign:1272   #  3. The counter is incremented only if R < P.
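The three steps above can be simulated directly (a sketch following the formula as stated here; `lfu_incr` is a hypothetical name, the real code lives in Redis's eviction logic):

```python
import random

def lfu_incr(counter, lfu_log_factor=10):
    """One probabilistic, logarithmic increment of an 8-bit LFU counter."""
    if counter == 255:
        return counter                            # saturated: never exceeds 255
    r = random.random()                           # 1. random R between 0 and 1
    p = 1.0 / (counter * lfu_log_factor + 1)      # 2. P = 1/(old_value*lfu_log_factor+1)
    if r < p:
        counter += 1                              # 3. increment only if R < P
    return counter

# New keys start at 5 (as noted further down); hammer the counter with 100k hits:
c = 5
for _ in range(100_000):
    c = lfu_incr(c)
print(c)  # stays well below 255: growth is logarithmic in the number of hits
```

The higher the counter gets, the smaller P becomes, which is exactly why millions of hits are needed to approach 255 in the table below.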

compare:1272-sign:1276   # The default lfu-log-factor is 10. This is a table of how the frequency counter changes with a different number of accesses, with different logarithmic factors:

compare:1276-sign:1288   #  +--------+------------+------------+------------+------------+------------+
compare:1276-sign:1288   #  | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
compare:1276-sign:1288   #  +--------+------------+------------+------------+------------+------------+
compare:1276-sign:1288   #  | 0      | 104        | 255        | 255        | 255        | 255        |
compare:1276-sign:1288   #  +--------+------------+------------+------------+------------+------------+
compare:1276-sign:1288   #  | 1      | 18         | 49         | 255        | 255        | 255        |
compare:1276-sign:1288   #  +--------+------------+------------+------------+------------+------------+
compare:1276-sign:1288   #  | 10     | 10         | 18         | 142        | 255        | 255        |
compare:1276-sign:1288   #  +--------+------------+------------+------------+------------+------------+
compare:1276-sign:1288   #  | 100    | 8          | 11         | 49         | 143        | 255        |
compare:1276-sign:1288   #  +--------+------------+------------+------------+------------+------------+

compare:1288-sign:1290   # NOTE: The above table was obtained by running the following commands:

compare:1290-sign:1293   #  redis-benchmark -n 1000000 incr foo
compare:1290-sign:1293   #  redis-cli object freq foo

compare:1293-sign:1296   # NOTE 2: The counter initial value is 5 in order to give new objects a chance to accumulate hits.

compare:1296-sign:1300   # The counter decay time is the time, in minutes, that must elapse in order for the key counter to be divided by two (or decremented if it has a value less than or equal to 10).

compare:1300-sign:1303   # The default value for the lfu-decay-time is 1. A special value of 0 means to decay the counter every time it happens to be scanned.
compare:1303-sign:1306   # lfu-log-factor 10
compare:1303-sign:1306   # lfu-decay-time 1
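The decay rule can be sketched as well (assumed logic derived from the description above; Redis actually stores a per-key last-decrement timestamp, elided here, and `lfu_decay` is a hypothetical name):

```python
def lfu_decay(counter, elapsed_minutes, lfu_decay_time=1):
    """Apply the decay rule described above (lfu_decay_time > 0 assumed)."""
    periods = elapsed_minutes // lfu_decay_time
    for _ in range(periods):
        if counter > 10:
            counter //= 2   # high counters are halved per elapsed period
        elif counter > 0:
            counter -= 1    # low counters (<= 10) only decay by one
    return counter

print(lfu_decay(100, 1))  # 50: halved after one idle minute
print(lfu_decay(8, 1))    # 7: low counters decay by one
```

Halving high counters while only decrementing low ones lets rarely used keys fade quickly without immediately zeroing keys that just accumulated a few hits.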

compare:1306-sign:1308   # ########################## ACTIVE DEFRAGMENTATION #######################

compare:1308-sign:1312   # WARNING: this feature is experimental. However it was stress tested even in production, and manually tested by multiple engineers for some time.

compare:1312-sign:1315   # What is active defragmentation?
compare:1312-sign:1315   #  -------------------------------

compare:1315-sign:1319   # Active (online) defragmentation allows a Redis server to compact the spaces left between small allocations and deallocations of data in memory, thus allowing to reclaim back memory.

compare:1319-sign:1326   # Fragmentation is a natural process that happens with every allocator (but, fortunately, less so with Jemalloc) and certain workloads. Normally a server restart is needed in order to lower the fragmentation, or at least to flush away all the data and create it again. However thanks to this feature, implemented by Oran Agra for Redis 4.0, this process can happen at runtime in a "hot" way, while the server is running.

compare:1326-sign:1334   # Basically when the fragmentation is over a certain level (see the configuration options below) Redis will start to create new copies of the values in contiguous memory regions by exploiting certain specific Jemalloc features (in order to understand if an allocation is causing fragmentation and to allocate it in a better place), and at the same time will release the old copies of the data. This process, repeated incrementally for all the keys, will cause the fragmentation to drop back to normal values.

compare:1334-sign:1336   # Important things to understand:

compare:1336-sign:1340   # 1. This feature is disabled by default, and only works if you compiled Redis to use the copy of Jemalloc we ship with the source code of Redis. This is the default with Linux builds.

compare:1340-sign:1343   # 2. You never need to enable this feature if you don't have fragmentation issues.

compare:1343-sign:1346   # 3. Once you experience fragmentation, you can enable this feature when needed with the command "CONFIG SET activedefrag yes".
compare:1346-sign:1350   # The configuration parameters are able to fine tune the behavior of the defragmentation process. If you are not sure about what they mean it is a good idea to leave the defaults untouched.
compare:1350-sign:1353   # Enabled active defragmentation
compare:1350-sign:1353   # activedefrag yes
compare:1353-sign:1356   # Minimum amount of fragmentation waste to start active defrag
compare:1353-sign:1356   # active-defrag-ignore-bytes 100mb
compare:1356-sign:1359   # Minimum percentage of fragmentation to start active defrag
compare:1356-sign:1359   # active-defrag-threshold-lower 10
compare:1359-sign:1362   # Maximum percentage of fragmentation at which we use maximum effort
compare:1359-sign:1362   # active-defrag-threshold-upper 100
compare:1362-sign:1365   # Minimal effort for defrag in CPU percentage
compare:1362-sign:1365   # active-defrag-cycle-min 5
compare:1365-sign:1368   # Maximal effort for defrag in CPU percentage
compare:1365-sign:1368   # active-defrag-cycle-max 75
compare:1368-sign:1372   # Maximum number of set/hash/zset/list fields that will be processed from
compare:1368-sign:1372   # the main dictionary scan
compare:1368-sign:1372   # active-defrag-max-scan-fields 1000

The original configuration file is attached below:

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf

################################## MODULES #####################################

# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 127.0.0.1

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised no

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes

################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.

################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000

############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

############################# LAZY FREEING ####################################

# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and the ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# as if DEL was called. However you can configure each case specifically
# so that memory is instead released in a non-blocking way, as if UNLINK
# was called, using the following configuration directives:

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
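The lazy-free idea above can be sketched in Python (a toy model, not Redis internals): the key is unlinked from the keyspace immediately in O(1), while a background thread reclaims the value later.

```python
# Toy model of UNLINK: the keyspace removal is immediate, the actual
# object teardown is handed off to a background worker thread.
import queue
import threading

class LazyStore:
    def __init__(self):
        self.keys = {}
        self.freed = []                 # record of objects torn down
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._free_loop, daemon=True)
        self._worker.start()

    def unlink(self, key):
        # Fast path: remove the value from the keyspace right away...
        value = self.keys.pop(key, None)
        if value is not None:
            # ...and let the background thread reclaim it later.
            self._q.put((key, value))

    def _free_loop(self):
        while True:
            key, value = self._q.get()
            self.freed.append(key)      # stands in for incremental freeing
            self._q.task_done()

store = LazyStore()
store.keys["big-set"] = set(range(100_000))
store.unlink("big-set")                 # returns immediately
store._q.join()                         # wait for the background free (demo only)
```

In the real server the worker frees aggregated values incrementally, so a multi-million element key never blocks the event loop.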

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result in a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled, on startup Redis will load the AOF, as that is the
# file with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data to disk
# instead of waiting for more data in the output buffer. Some OSes will really
# flush data to disk, while others will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# For more details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no
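A toy Python sketch of where fsync() lands for each of the three policies (an illustration of the semantics described above, not Redis code; the file path and class name are made up for the demo):

```python
# Toy append-only writer showing the three fsync policies.
import os
import tempfile
import time

class AppendLog:
    def __init__(self, path, policy="everysec"):
        self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
        self.policy = policy
        self.last_fsync = time.monotonic()

    def append(self, data: bytes):
        os.write(self.fd, data)           # lands in the OS page cache first
        if self.policy == "always":
            os.fsync(self.fd)             # safest, slowest: fsync every write
        elif self.policy == "everysec":
            now = time.monotonic()
            if now - self.last_fsync >= 1.0:
                os.fsync(self.fd)         # at most ~1 second of writes at risk
                self.last_fsync = now
        # policy "no": never fsync here, the OS flushes when it wants

path = os.path.join(tempfile.gettempdir(), "toy.aof")
log = AppendLog(path, policy="always")
log.append(b"*1\r\n$4\r\nPING\r\n")       # a RESP-encoded command, AOF-style
os.close(log.fd)
```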

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync no". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file by implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the base size by the specified percentage, the rewrite is
# triggered. You also need to specify a minimal size for the AOF file to be
# rewritten: this is useful to avoid rewriting the AOF file even if the
# percentage increase is reached but the file is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
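The trigger condition described above can be expressed as a small helper (a sketch of the documented behavior; the function name is mine, not Redis's):

```python
# Auto AOF rewrite trigger: rewrite when growth over the base size reaches
# the configured percentage AND the file is above the minimum size.
def should_rewrite_aof(current_size, base_size, growth_pct=100,
                       min_size=64 * 1024 * 1024):
    if growth_pct == 0:
        return False                      # percentage 0 disables auto rewrite
    if current_size < min_size:
        return False                      # still too small to bother
    growth = (current_size - base_size) * 100 // max(base_size, 1)
    return growth >= growth_pct

mb = 1024 * 1024
print(should_rewrite_aof(130 * mb, 64 * mb))  # True: >100% growth, above 64mb
print(should_rewrite_aof(100 * mb, 64 * mb))  # False: only ~56% growth
print(should_rewrite_aof(10 * mb, 4 * mb))    # False: 150% growth, under min
```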

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise, if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user is required
# to fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle, the
# server will still exit with an error. This option only applies when Redis
# tries to read more data from the AOF file but not enough bytes are found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
#   [RDB file][AOF tail]
#
# When loading, Redis recognizes that the AOF file starts with the "REDIS"
# string, loads the prefixed RDB file, and then continues loading the AOF
# tail.
aof-use-rdb-preamble yes
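The loading-side check can be sketched in Python: an AOF rewritten with the preamble begins with the same "REDIS" magic string that opens an RDB file (the helper name is mine):

```python
# An AOF with an RDB preamble starts with the "REDIS" magic string,
# followed by the RDB version and payload, then the AOF command tail.
def has_rdb_preamble(aof_bytes: bytes) -> bool:
    return aof_bytes.startswith(b"REDIS")

print(has_rdb_preamble(b"REDIS0009..."))           # RDB-style preamble present
print(has_rdb_preamble(b"*2\r\n$6\r\nSELECT..."))  # plain AOF commands only
```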

################################ LUA SCRIPTING  ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time, only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that has not yet called any write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the
# natural termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER  ###############################

# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node, enable cluster support by uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the number of milliseconds a node must be
# unreachable for it to be considered in a failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A replica of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#
# Point "2" can be tuned by the user. Specifically, a replica will not
# perform the failover if, since the last interaction with the master, the
# time elapsed is greater than:
#
#   (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large replica-validity-factor may allow replicas with too old data to
# failover a master, while too small a value may prevent the cluster from
# being able to elect a replica at all.
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10
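The worked example above, expressed as a formula (times in seconds, matching the text; the function name is mine):

```python
# Maximum disconnection time after which a replica refuses to failover:
#   (node-timeout * replica-validity-factor) + repl-ping-replica-period
def max_disconnection_time(node_timeout, validity_factor, ping_period):
    return node_timeout * validity_factor + ping_period

# node-timeout 30s, factor 10, default ping period 10s, as in the text:
print(max_disconnection_time(30, 10, 10))  # 310 seconds
```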

# Cluster replicas are able to migrate to orphaned masters, that is, masters
# left without working replicas. This improves the cluster's ability to
# resist failures, as otherwise an orphaned master can't be failed over
# if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least one hash slot uncovered (no available node is serving it).
# This way, if the cluster is partially down (for example a range of hash
# slots is no longer covered), the whole cluster eventually becomes
# unavailable. It automatically becomes available again as soon as all the
# slots are covered.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to failover
# their master during master failures. However the master can still perform
# a manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted except
# in the case of a total DC failure.
#
# cluster-replica-no-failover no

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, Redis Cluster node address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster work in such environments, a static
# configuration where each node knows its public address is needed. The
# following three options are used for this purpose:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instructs the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# client port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usual.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# the execution time, in microseconds, that a command must exceed in order
# to get logged, and the other parameter is the length of the slow log.
# When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
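The slow log semantics can be modeled with a bounded deque (a sketch of the documented behavior; Redis keeps its own internal list):

```python
# Bounded slow log: entries at or over the threshold (microseconds) are
# kept, and the oldest entry is dropped once max-len is reached.
from collections import deque

class SlowLog:
    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)  # oldest entry evicted first

    def record(self, command, exec_time_us):
        if self.slower_than_us < 0:
            return                  # negative threshold disables the log
        if exec_time_us >= self.slower_than_us:
            self.entries.append((command, exec_time_us))

log = SlowLog(slower_than_us=10000, max_len=2)
log.record("GET fast", 50)                  # under threshold, not logged
log.record("KEYS *", 250000)
log.record("SORT biglist", 80000)
log.record("LRANGE biglist 0 -1", 120000)   # evicts the oldest slow entry
```

With a threshold of zero, the `>=` comparison logs every command, matching the note above.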

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal to or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact that, while very small, can be measured under heavy load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace event notifications are enabled, and a client
# performs a DEL operation on the key "foo" stored in database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" directive takes as its argument a string
#  that is composed of zero or more characters. The empty string means that
#  notifications are disabled.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of expired keys, subscribing to the channel
#             name __keyevent@0__:expired, use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
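The class characters can be expanded as described above ('A' being an alias for "g$lshzxe"); this is a sketch of the flag handling, not the server's actual parser:

```python
# Expand a notify-keyspace-events flag string into the set of classes it
# enables. 'A' is documented as an alias for "g$lshzxe".
def expand_event_flags(flags: str) -> set:
    out = set()
    for ch in flags:
        if ch == "A":
            out.update("g$lshzxe")      # all event classes
        else:
            out.add(ch)
    return out

# "AKE" = all event classes, plus both the K and E delivery prefixes:
print(sorted(expand_event_flags("AKE")))
```

Note that without at least one of `K` or `E` in the result, no events are delivered at all, as the text above warns.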

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
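For quick reference, the negative settings above map to per-node byte limits; a lookup helper (mine, not part of Redis):

```python
# Negative list-max-ziplist-size values select a per-node byte limit;
# positive values cap the number of elements per node instead.
LIST_NODE_SIZE_LIMITS = {
    -1: 4 * 1024,    # 4 Kb, good
    -2: 8 * 1024,    # 8 Kb, good (the default)
    -3: 16 * 1024,   # probably not recommended
    -4: 32 * 1024,   # not recommended
    -5: 64 * 1024,   # not recommended for normal workloads
}

def node_limit(setting):
    if setting > 0:
        return ("elements", setting)
    return ("bytes", LIST_NODE_SIZE_LIMITS[setting])

print(node_limit(-2))   # ('bytes', 8192)
print(node_limit(128))  # ('elements', 128)
```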

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression.  The head and tail of the list
# are always uncompressed for fast push/pop operations.  Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit on the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
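The intset condition can be sketched as follows: every member must be a base-10 string that fits a signed 64-bit integer, and the set must not exceed the configured entry limit (a simplified model; Redis's integer parser is stricter about whitespace and formatting):

```python
# Check whether a set of string members qualifies for the intset encoding.
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def intset_eligible(members, max_entries=512):
    if len(members) > max_entries:
        return False
    for m in members:
        try:
            v = int(m, 10)              # must parse as a base-10 integer
        except ValueError:
            return False
        if not (INT64_MIN <= v <= INT64_MAX):
            return False                # must fit in a signed 64-bit int
    return True

print(intset_eligible({"1", "42", "-7"}))   # True
print(intset_eligible({"1", "hello"}))      # False: not an integer
print(intset_eligible({str(2**64)}))        # False: overflows int64
```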

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 byte header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down PFADD (which is O(N)
# with the sparse encoding) too much. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehash the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not acceptable in your environment that Redis can, from time to time,
# reply to queries with a 2 millisecond delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica  -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reaches 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously exceeds
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard and the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
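The disconnect rule can be modeled like this: the hard limit trips immediately, while the soft limit only trips after being continuously exceeded for the configured number of seconds (a sketch of the documented semantics, not server code):

```python
# Output buffer limit checker: hard limit disconnects at once, the soft
# limit only after being continuously exceeded for soft_seconds.
class OutputBufferLimit:
    def __init__(self, hard, soft, soft_seconds):
        self.hard, self.soft, self.soft_seconds = hard, soft, soft_seconds
        self.soft_since = None   # when the soft limit was first exceeded

    def should_disconnect(self, buffer_size, now):
        if self.hard and buffer_size >= self.hard:
            return True
        if self.soft and buffer_size >= self.soft:
            if self.soft_since is None:
                self.soft_since = now          # start the soft-limit clock
            return now - self.soft_since >= self.soft_seconds
        self.soft_since = None   # dropped below the soft limit: reset
        return False

mb = 1024 * 1024
hard_case = OutputBufferLimit(hard=32 * mb, soft=16 * mb, soft_seconds=10)
print(hard_case.should_disconnect(40 * mb, now=0))   # True: hard limit hit

soft_case = OutputBufferLimit(hard=32 * mb, soft=16 * mb, soft_seconds=10)
print(soft_case.should_disconnect(20 * mb, now=0))   # False: clock starts
print(soft_case.should_disconnect(20 * mb, now=11))  # True: over for > 10s
```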

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to prevent a protocol desynchronization (for
# instance due to a bug in the client) from leading to unbounded memory usage
# in the query buffer. However you can configure it here if you have very
# special needs, such as huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that is, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients that timed out, purging expired keys that
# are never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid processing too many clients for each background task invocation,
# which would cause latency spikes.
#
# Since the default HZ value is conservatively set to 10, Redis offers, and
# enables by default, the ability to use an adaptive HZ value which will
# temporarily rise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used as
# a baseline, but multiples of the configured HZ value will actually be
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When Redis saves an RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes
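The incremental-fsync idea behind both directives above can be sketched by counting where the syncs would land instead of performing them (the 4 MB write chunk size is an arbitrary assumption for the demo):

```python
# Count how many incremental fsyncs a file of total_bytes would get when
# syncing every `step` bytes instead of once at the end.
def count_incremental_fsyncs(total_bytes, step=32 * 1024 * 1024):
    written, pending, syncs = 0, 0, 0
    chunk = 4 * 1024 * 1024            # pretend we write 4 MB at a time
    while written < total_bytes:
        n = min(chunk, total_bytes - written)
        written += n
        pending += n
        if pending >= step:
            syncs += 1                 # os.fsync() would go here
            pending = 0
    return syncs

print(count_incremental_fsyncs(100 * 1024 * 1024))  # 3 syncs for ~100 MB
```

Spreading the fsync cost over many small flushes is what avoids one huge blocking flush (and its latency spike) at the end of the save.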

# Redis LFU eviction (see the maxmemory setting) can be tuned. However it is
# a good idea to start with the default settings and only change them after
# investigating how to improve performance and how the LFU of your keys
# changes over time, which can be inspected via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key; its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented, if it has a
# value <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means
# to decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1
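Steps 1-3 above can be simulated directly (a model of the documented formula, not the server implementation; as NOTE 2 says, new keys start at a counter of 5):

```python
# Probabilistic LFU increment: P = 1 / (old_value * lfu_log_factor + 1),
# increment only when a random R in [0, 1) falls below P.
import random

def lfu_incr(counter, lfu_log_factor=10, rng=random.random):
    if counter >= 255:
        return 255                            # the 8-bit counter saturates
    r = rng()                                 # step 1: random R in [0, 1)
    p = 1.0 / (counter * lfu_log_factor + 1)  # step 2: probability P
    return counter + 1 if r < p else counter  # step 3: conditional increment

# With factor 0 the probability is always 1, so every hit increments,
# which is why the factor-0 row of the table grows linearly:
c = 5                                         # new keys start at 5
for _ in range(10):
    c = lfu_incr(c, lfu_log_factor=0)
print(c)  # 15
```

With larger factors, P shrinks as the counter grows, producing the logarithmic saturation shown in the table above.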

########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING: THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing it to reclaim memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However, thanks to this feature,
# implemented by Oran Agra for Redis 4.0, this process can happen at runtime
# in a "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the
# keys, will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters below fine-tune the behavior of the
# defragmentation process. If you are not sure what they mean, it is
# a good idea to leave the defaults untouched.

# Enable active defragmentation
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 5

# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000
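The threshold and effort settings above work together: below the lower threshold no defragmentation runs, and between the lower and upper thresholds the CPU effort scales from the minimum toward the maximum. The sketch below models that interaction with a simple linear interpolation; the function name and exact scaling are illustrative assumptions, not Redis's internal code.

```python
def defrag_cpu_effort(frag_pct,
                      threshold_lower=10, threshold_upper=100,
                      cycle_min=5, cycle_max=75):
    """Illustrative model: map a fragmentation percentage to a defrag
    CPU-effort percentage using the four config knobs above."""
    if frag_pct < threshold_lower:
        return 0                 # below active-defrag-threshold-lower: no defrag
    if frag_pct >= threshold_upper:
        return cycle_max         # at/above the upper threshold: maximum effort
    # Scale linearly between cycle-min and cycle-max.
    scale = (frag_pct - threshold_lower) / (threshold_upper - threshold_lower)
    return max(cycle_min, int(cycle_min + (cycle_max - cycle_min) * scale))
```

With the defaults, 5% fragmentation triggers no defrag, 10% runs at the minimum 5% CPU effort, and anything at or above 100% fragmentation runs at the 75% ceiling.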

