Amazon EC2: Ray does not distribute across all available CPUs in the cluster

Tags: amazon-ec2, multiprocessing, cluster-computing, distributed-computing, ray

My problem is that Ray is not distributing work to my workers.

I have 16 cores in total, since each of my Ubuntu EC2 AWS instances has 8 CPUs.

However, when I start my Ray cluster and submit my Python script, it only spreads over 8 cores, because only 8 PIDs show up as being used.

Note that I cannot access the Ray dashboard on the EC2 instances; I can only get this information by printing the PIDs in use.

How can I get my script to use all 16 CPUs, so that 16 PIDs show up executing it?
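
Since the dashboard isn't reachable, a minimal sketch of how the registered nodes and CPUs could be inspected directly from the head node (assuming ray.init(address='auto') connects to the already-running cluster):

import ray

ray.init(address='auto')  # connect to the running cluster

# one dict per node that has registered with the head
for node in ray.nodes():
    print(node["NodeManagerAddress"], node["Alive"], node["Resources"].get("CPU"))

# total resources Ray believes the cluster has, e.g. {'CPU': 16.0, ...}
print(ray.cluster_resources())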

Here is my script:

import os
import ray
import time
import xgboost
from xgboost.sklearn import XGBClassifier


def printer():
    print("INSIDE WORKER " + str(time.time()) +"  PID  :    "+  str(os.getpid()))

# decorators allow for futures to be created for parallelization
@ray.remote        
def func_1():
    #model = XGBClassifier()
    count = 0
    for i in range(100000000):
        count += 1
    printer()
    return count
        
@ray.remote        
def func_2():
    #model = XGBClassifier()
    count = 0
    for i in range(100000000):
        count += 1
    printer()
    return count

@ray.remote
def func_3():
    count = 0
    for i in range(100000000):
        count += 1
    printer()
    return count

def main():
    #model = XGBClassifier()

    start = time.time()
    results = []
    
    ray.init(address='auto')
    # append function futures
    for i in range(10):
        results.append(func_1.remote())
        results.append(func_2.remote())
        results.append(func_3.remote())
        
    # run in parallel and get aggregated list
    a = ray.get(results)
    b = 0
    
    #add all values in list together
    for j in range(len(a)):
        b += a[j]
    print(b)
    
    #time to complete
    end = time.time()
    print(end - start)
    
    
if __name__ == '__main__':
    main()
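
Since the same PID can occur on both machines, printing the hostname alongside the PID would make it clear which node each task actually ran on. A minimal sketch of how printer() could be extended (socket is from the standard library):

import os
import socket
import time

def printer():
    # the hostname tells head-node tasks apart from worker-node tasks
    print("INSIDE WORKER " + str(time.time())
          + "  HOST:  " + socket.gethostname()
          + "  PID:  " + str(os.getpid()))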
Here is my config:

# A unique identifier for the head node and workers of this cluster.
cluster_name: basic-ray-123454
# The maximum number of worker nodes to launch in addition to the head
# node. This takes precedence over min_workers. min_workers defaults to 0.
max_workers: 2 # launch two worker nodes in addition to the head node
min_workers: 2 # keep both worker nodes running
# Cloud-provider specific configuration.


provider:
    type: aws
    region: eu-west-2
    availability_zone: eu-west-2a

file_mounts_sync_continuously: False



auth:
    ssh_user: ubuntu
    ssh_private_key: /home/user/.ssh/aws_ubuntu_test.pem
head_node:
    InstanceType: c5.2xlarge
    ImageId: ami-xxxxxxa6b31fd2c
    KeyName: aws_ubuntu_test

    BlockDeviceMappings:
      - DeviceName: /dev/sda1
        Ebs:
          VolumeSize: 200

worker_nodes:
   InstanceType: c5.2xlarge
   ImageId: ami-xxxxx26a6b31fd2c
   KeyName: aws_ubuntu_test


file_mounts: {
  "/home/ubuntu": "/home/user/RAY_AWS_DOCKER/ray_example_2_4/conda_env.yaml"
   }

setup_commands:
  - echo "start initialization_commands"
  - sudo apt-get update
  - sudo apt-get upgrade
  - sudo apt-get install -y python-setuptools
  - sudo apt-get install -y build-essential curl unzip psmisc
  - pip install --upgrade pip
  - pip install ray[all]
  - echo "all files :"
  - ls

  # - conda install -c conda-forge xgboost


head_start_ray_commands:
  - ray stop
  - ulimit -n 65536; ray start --head --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml


worker_start_ray_commands:

  - ray stop
  - ulimit -n 65536; ray start --address=$RAY_HEAD_IP:6379 --object-manager-port=8076

Could you try ray.available_resources() and make sure your cluster actually has 16 CPUs? Also, could you try adding print(os.getpid()) in each task to check whether only 8 tasks are running at the same time?
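
A minimal sketch of that check, assuming it is submitted against the already-started cluster:

import os
import time
import ray

ray.init(address='auto')
print(ray.available_resources())  # should report CPU: 16.0 if both nodes joined

@ray.remote
def which_pid():
    time.sleep(2)  # keep tasks alive long enough that worker processes are not reused
    return os.getpid()

# up to 16 distinct PIDs should come back if all CPUs are picking up tasks
pids = ray.get([which_pid.remote() for _ in range(32)])
print("distinct worker PIDs:", len(set(pids)))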