
What is the correct way to use the pg database pool in Node.js?


I'm using Node and I want to use the pg pool. I have a simple repository like this:

Initialization
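The initialization snippet did not survive on this page; a minimal sketch of what it might look like with the `pg` library follows (the connection settings are assumptions, not from the original):

```javascript
// Hypothetical pool options; the real values depend on your environment.
const poolOptions = {
  connectionString: process.env.DATABASE_URL, // assumed env var
  max: 10,                  // pg's default pool size per process
  idleTimeoutMillis: 30000, // close clients idle for 30 seconds
}

// In the app itself (requires the pg package):
//   const { Pool } = require('pg')
//   const pgPool = new Pool(poolOptions)
```

The key point is that one `Pool` is created per process and injected into the repository, rather than creating a pool (or client) per request.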

Example select code

// Repository factory: receives the shared pool via dependency injection.
export default (
    {
        pgPool,
    }: Object
) => {
    return {
        // Parameterized SELECT; pool.query checks a client out of the pool
        // and releases it automatically when the query settles.
        getExampleData: async (uid: string) => {
            const result = await pgPool.query('SELECT data FROM public.results_table WHERE uid = $1::text;', [uid])
            return result.rows
        },
    }
}
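As a sanity check, the factory can be exercised without a live database by injecting a stub pool. This is a hypothetical test harness, not part of the original question; it uses a plain-JS copy of the factory above, since the Flow-typed original can't run directly:

```javascript
// Plain-JS copy of the factory above, driven by a stub pool whose
// query() resolves the way pg's does ({ rows: [...] }).
const makeRepo = ({ pgPool }) => ({
  getExampleData: async (uid) => {
    const result = await pgPool.query(
      'SELECT data FROM public.results_table WHERE uid = $1::text;',
      [uid]
    )
    return result.rows
  },
})

const stubPool = {
  query: async (text, params) => ({ rows: [{ uid: params[0], data: 'stub' }] }),
}

const repo = makeRepo({ pgPool: stubPool })
repo.getExampleData('abc').then((rows) => console.log(rows[0].uid)) // prints "abc"
```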
My problem is that under stress (a lot of requests) I get the following error:

error: remaining connection slots are reserved for non-replication superuser connections


I'm not sure whether I'm using the pool correctly.
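For single statements, `pool.query` as used in the repository already checks a client out and releases it automatically, which is the usual correct pattern. An explicitly checked-out client is only needed for transactions, and then it must always be released, or connections leak and the slots run out. A sketch assuming node-postgres's standard `pool.connect()` / `client.release()` API:

```javascript
// Transaction sketch: explicit client checkout requires explicit release.
async function inTransaction(pgPool, uid) {
  const client = await pgPool.connect()
  try {
    await client.query('BEGIN')
    const result = await client.query(
      'SELECT data FROM public.results_table WHERE uid = $1::text;', [uid])
    await client.query('COMMIT')
    return result.rows
  } catch (err) {
    await client.query('ROLLBACK')
    throw err
  } finally {
    client.release() // forgetting this leaks a connection per call
  }
}
```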

Part of our pgpool.conf:

# - Pool size -

num_init_children = 100
                                   # Number of pools
                                   # (change requires restart)
max_pool = 3
                                   # (change requires restart)

# - Life time -

child_life_time = 120
                                   # Pool exits after being idle for this many seconds
child_max_connections = 0
                                   # Pool exits after receiving that many connections
                                   # 0 means no exit
connection_life_time = 90
                                   # Connection to backend closes after being idle for this many seconds
                                   # 0 means no close
client_idle_limit = 0
                                   # Client is disconnected after being idle for that many seconds
                                   # (even inside an explicit transactions!)
                                   # 0 means no disconnection
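A likely cause, given these numbers: pgpool-II can cache up to `num_init_children × max_pool` backend connections, and each Node process adds its own pg pool on top. If the Postgres server's `max_connections` is small (the comments below mention a test DB with 25 slots), the slots are exhausted well before pgpool's limits are reached. Back-of-the-envelope check:

```javascript
// Numbers taken from the config above and the comments below.
const numInitChildren = 100
const maxPool = 3
const backendConnections = numInitChildren * maxPool // up to 300 backend connections
const availableSlots = 25                            // test DB mentioned in the comments

console.log(backendConnections > availableSlots) // prints "true"
```

When this inequality holds, Postgres starts refusing connections with exactly the "remaining connection slots are reserved..." error; the fix is to lower `num_init_children × max_pool` (and the Node pool's `max`) below the server's `max_connections`, or raise `max_connections`.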
The following questions/answers should also help you:


I believe this will help.

How many is "a lot of requests"? We use pgpool, haproxy, and PHP/Apache with a lot of clients without any problem. However, we have tuned everything to handle about 2000 simultaneous clients: two servers, one master and one slave, 8 cores each, with 16 GB RAM and a 2 TB HDD over NFS.

About 20/s :-D It's a test DB with 25 connection slots and 1 GB RAM...