Apache Flink: what does 'connector.lookup.cache.max-rows' mean?


When I define a MySQL table as a dimension table, the definition looks like this:


CREATE TABLE MyUserTable (
  ...
) WITH (
  'connector.type' = 'jdbc', -- required: specify this table type is jdbc
  
  'connector.url' = 'jdbc:mysql://localhost:3306/flink-test', -- required: JDBC DB url
  
  'connector.table' = 'jdbc_table_name',  -- required: jdbc table name
  
  'connector.driver' = 'com.mysql.jdbc.Driver', -- optional: the class name of the JDBC driver to use to connect to this URL. 
                                                -- If not set, it will automatically be derived from the URL.

  'connector.username' = 'name', -- optional: jdbc user name and password
  'connector.password' = 'password',
  'connector.lookup.cache.max-rows' = '5000', -- optional, max number of rows of lookup cache, over this value, the oldest rows will
                                              -- be eliminated. "cache.max-rows" and "cache.ttl" options must all be specified if any
                                              -- of them is specified. Cache is not enabled as default.
  'connector.lookup.cache.ttl' = '10s', -- optional, the max time to live for each rows in lookup cache, over this time, the oldest rows
                                        -- will be expired. "cache.max-rows" and "cache.ttl" options must all be specified if any of
                                        -- them is specified. Cache is not enabled as default.
  'connector.lookup.max-retries' = '3' -- optional, max retry times if lookup database failed
)

I have defined:
connector.lookup.cache.max-rows=5000
connector.lookup.cache.ttl=10s

If my MySQL table has 10001 rows (more than
connector.lookup.cache.max-rows), does that mean:

  • the first 5000 rows of the MySQL table will be cached
  • after 10 seconds, these 5000 rows will expire and another 5000 rows from the MySQL table will be cached

  • I believe it does not behave the way described above, even when the number of rows (here 10001) exceeds the value specified by connector.lookup.cache.max-rows (here 5000).

    The way I believe this works is that on every cache miss, the connector reads the matching rows from the database.

    Rows are expired from the cache when the TTL elapses, and when the cache is full, the oldest rows are evicted to make room for newly fetched rows.
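    The eviction semantics described above can be sketched in Python. This is a minimal illustrative model, not Flink's actual implementation; the class and parameter names (`LookupCache`, `fetch_from_db`) are assumptions chosen to mirror the config options, with `max_rows` and `ttl` standing in for 'connector.lookup.cache.max-rows' and 'connector.lookup.cache.ttl':

    ```python
    import time
    from collections import OrderedDict

    class LookupCache:
        """Illustrative sketch (not Flink's API): rows are fetched from the
        database only on a cache miss; entries expire after `ttl` seconds,
        and once the cache holds `max_rows` entries the oldest entry is
        evicted to make room for a newly fetched one."""

        def __init__(self, max_rows, ttl, fetch_from_db):
            self.max_rows = max_rows
            self.ttl = ttl
            self.fetch_from_db = fetch_from_db   # stands in for the JDBC query
            self.entries = OrderedDict()         # key -> (row, insert_time)

        def lookup(self, key, now=None):
            now = time.monotonic() if now is None else now
            if key in self.entries:
                row, inserted = self.entries[key]
                if now - inserted < self.ttl:
                    return row                   # cache hit: no DB access
                del self.entries[key]            # TTL elapsed: entry expired
            row = self.fetch_from_db(key)        # cache miss: query the database
            if len(self.entries) >= self.max_rows:
                self.entries.popitem(last=False) # cache full: evict oldest entry
            self.entries[key] = (row, now)
            return row
    ```

    Note that nothing is preloaded: only keys that have actually been looked up occupy cache slots, which is why the "first 5000 rows, then the next 5000" reading from the question does not match this behavior.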


    The Flink documentation describes the lookup cache behavior in more detail.

    Thanks to @David Anderson for the helpful answer!