Google Stackdriver not working in Python 3.x multiprocessing


I built an API endpoint with Flask where data is collected from other APIs and combined. To do this efficiently, I use multiprocessing. To keep track of things, I want to log every step with Google Stackdriver.

For some reason, I keep getting errors when I use Google Stackdriver inside my multiprocessing environment. The error and subsequent warning I receive in my MWE are:

Pickling client objects is explicitly not supported.
Clients have non-trivial state that is local and unpickleable.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\...\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\...\Anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Why can't multiprocessing be combined with Google Stackdriver? What should I adjust (and what am I apparently not understanding) to make this work?

As of today (April 2019), Stackdriver logging still does not support multiprocessing. The workarounds are:

  • Make sure your processes are started in spawn mode rather than fork (the default on *nix), which prevents anything from being shared
  • Avoid sharing logging objects explicitly by configuring the logging object separately in each process (a sketch follows the next paragraph)
Using fork multiprocessing with the Google libraries is generally a bad idea; Stackdriver is not the only one that runs into trouble.
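
A minimal sketch combining both workarounds, assuming the same log name, message, and labels as the MWE below (the function and variable names are illustrative): the start method is forced to spawn, and each worker constructs its own logging.Client() inside the child process, so no client object ever has to be pickled.

# Import libs
from google.cloud import logging
import multiprocessing as mp

# Each worker builds its own client; only plain strings and dicts are pickled
def write_log(message, labels):
    logging_client = logging.Client()  # created inside the child process
    logger = logging_client.logger('budget_service')
    logger.log_text(message, labels=labels)
    print('logger succeeded')

if __name__ == '__main__':
    # Force spawn even on *nix, where fork is the default start method
    mp.set_start_method('spawn')

    worker = mp.Process(target=write_log,
                        args=('This is a test',
                              {'deployment': 'develop', 'severity': 'info'}))
    worker.start()
    worker.join()

Because only picklable built-ins cross the process boundary, nothing trips the client's "Pickling client objects is explicitly not supported" guard.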

For reference, the MWE from the question that triggers the error:

project_name = 'budget_service'
message = 'This is a test'
labels = {
    'deployment': 'develop',
    'severity': 'info'
}

# Import libs
from google.cloud import logging
import multiprocessing as mp

# Initialize logging
logging_client = logging.Client()
logger = logging_client.logger(project_name)

# Function to write log
def writeLog(logger):
    logger.log_text(
        text=message,
        labels=labels
    )
    print('logger succeeded')

def testFunction():
    print('test')

# Run without mp
writeLog(logger)

# Run with mp
print(__name__)
if __name__ == '__main__':       
    try:
        print('mp started')

        # Initialize
        manager = mp.Manager()
        return_dict = manager.dict()
        jobs = []

        # Set up workers
        worker_log1 = mp.Process(name='testFunction', target=testFunction, args=[])
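        # Passing the client-backed logger as an argument forces it to be pickled; this is what fails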
        worker_log2 = mp.Process(name='writeLog', target=writeLog, args=[logger])

        # Store in jobs
        jobs.append(worker_log1)
        jobs.append(worker_log2)

        # Start workers
        worker_log1.start()
        worker_log2.start()

        for job in jobs:
            job.join()

        print('mp succeeded')

    except Exception as err:
        print(err)