Socket.io on Node.js with cluster, express and @socket.io/sticky is creating multiple connections


I have a node.js server running clustered socket.io and express, with stickiness provided by @socket.io/sticky. (We also use Redis for cross-worker communication, but I'm not sure whether that is relevant here.) One of my client types (iOS React Native) uses polling, hence the need for sticky sessions. Socket connections created from that client use polling and are establishing multiple socket connections to the server. Below is an example from my logs (6680 and 6674 are PIDs from the workers).

It first connects to the process on 6680, then to 6674, then back to 6680. My understanding is that "sticky" should prevent this hopping.
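For background on what "sticky" is supposed to guarantee here: HTTP long-polling only works with a cluster if every request carrying a given Engine.IO session id (the sid query parameter) reaches the same worker; only the initial handshake, which has no sid yet, should be load-balanced. A minimal sketch of that routing decision (a hypothetical helper for illustration, not the actual @socket.io/sticky internals):

```javascript
// Hypothetical sketch of sid-based sticky routing, for illustration
// only (not the real @socket.io/sticky source). sessionOwners maps an
// Engine.IO session id to the worker that owns it; requests without a
// sid (the initial handshake) fall back to the load-balancing method.
function pickWorker(url, sessionOwners, workers, balance) {
  const sid = new URL(url, 'http://localhost').searchParams.get('sid');
  if (sid && sessionOwners.has(sid)) {
    return sessionOwners.get(sid); // sticky: reuse the session's worker
  }
  return balance(workers); // handshake: e.g. least-connection
}
```

Seeing the same session's requests land on different PIDs, as in the logs above, suggests the routing layer is being bypassed rather than the balancing method being wrong.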

Here are the relevant parts of my server setup:

const express = require('express');
const app = express();
const cluster = require('cluster');
...
const { setupMaster, setupWorker } = require('@socket.io/sticky');
...
if (cluster.isMaster) {
  console.log(`Master is started with process ID ${process.pid}`);

  const serverhttp =
    process.env.NODE_ENV === 'production'
      ? https.createServer(options, app)
      : http.createServer(app);

  redisCache.setRedisUsers([]);
  setupMaster(serverhttp, {
    loadBalancingMethod: 'least-connection', // either "random", "round-robin" or "least-connection"
  });

  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker) => {
    console.log(`Worker died on process ID ${worker.process.pid}`);
    cluster.fork();
  });
} else {
  console.log(`Worker started at process ID ${process.pid}`);
  app.use(bodyParser.urlencoded({ extended: false }));
  app.use(bodyParser.json());

  const serverhttp =
    process.env.NODE_ENV === 'production'
      ? https.createServer(options, app)
      : http.createServer(app);

  /* app.use routes, cors, static routes; db connection initialization */

  const io = socketManager(serverhttp, redisCache); // initialize socket
  setupWorker(io);

  serverhttp.listen(port, () => {
    console.log(`Starting server on ${port}`);
  });
  serverhttp.keepAliveTimeout = 65 * 1000;
  serverhttp.headersTimeout = 66 * 1000;
}
My configuration is not very different from the code example, except for two things:

  • My master is not listening, but my workers are. (I admit that is a big difference.)
  • My io.sockets.on (or io.on) handlers are registered before setupWorker is called.
  • On point 1, I tried switching to listening only on the master, but then my static routes were not served and I got a web error (Cannot GET /). When I moved the app.use content into the master configuration, the page appeared, but I never saw a socket connection being made, and the DB connection timed out even though it had been initialized. (The HELLO log below was never emitted.) One thing missing from the socket.io site example is express.

On the second point, I don't know why that would make a difference.

So, is there a way to change my master/worker configuration so that it uses the sticky functionality together with express, but also lets the worker instances respond to client connections?
Is my express/https server/socket.io implementation misconfigured somehow? Is there a way to have express/https listen at both the master and the worker level? It seems to me that when I connect through the master, the connection is not being handed off to a worker.
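For comparison, here is the shape of the cluster example from the @socket.io/sticky documentation as I remember it (treat the details as an assumption and check the README): the master is the only process that calls listen(), and the workers never bind the port themselves; setupMaster/setupWorker hand each incoming connection off to a worker, so express routes attached to the worker's server are still served.

```javascript
// Skeleton only (elided pieces marked with ...), reconstructed from
// memory of the @socket.io/sticky README; verify before relying on it.
// port and numCPUs are assumed to be defined elsewhere.
const cluster = require('cluster');
const http = require('http');
const express = require('express');
const { Server } = require('socket.io');
const { setupMaster, setupWorker } = require('@socket.io/sticky');

if (cluster.isMaster) {
  const httpServer = http.createServer(); // no express app on the master
  setupMaster(httpServer, { loadBalancingMethod: 'least-connection' });
  httpServer.listen(port); // only the master binds the port
  for (let i = 0; i < numCPUs; i++) cluster.fork();
} else {
  const app = express();
  // ... routes, static files, middleware live in the worker ...
  const httpServer = http.createServer(app);
  const io = new Server(httpServer);
  setupWorker(io);
  // note: no httpServer.listen() here; connections arrive from the master
}
```

If that reading is right, the fix for the Cannot GET / attempt may be to keep express and its routes in the workers but drop the workers' listen() call, rather than moving app.use into the master.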

Thanks, Joseph

And here is my socketManager module:
    
// (requires added for completeness; the Redis adapter package is assumed
// to be socket.io-redis, which takes { host, port })
const socketio = require('socket.io');
const redisAdapter = require('socket.io-redis');

module.exports = function (serverhttp, redisClient) {
  const config = {
    pingTimeout: 60000,
    pingInterval: 25000,
    transports: ['polling', 'websocket'],
  };

  const io = socketio.listen(serverhttp, config);

  // Redis adapter for cross-worker broadcasts
  io.adapter(
    redisAdapter({
      host: REDIS_HOST,
      port: REDIS_PORT,
    })
  );

  // Support functions....

  io.sockets.on('connection', function (client) {
    logIt('io.sockets.on(connect)', `Will broadcast HELLO ${client.id}`);
    io.emit('HELLO', `from io.emit -- ${client.id}`);
    client.broadcast.emit('HELLO', client.id);

    client.on(USER_CONNECTED, async (user) => {
      logIt('client.on(USER_CONNECTED)', ` ${user.name} ${client.id}`);
      user.socketId = client.id;

      const redisContent = await redisClient.addRedisUser({
        name: user.name,
        id: user.id,
        socketId: client.id,
      });
      io.emit(USER_CONNECTED, user); // broadcast to other clients
    });
    ...
    // other events
  });

  return io;
};