How can I use multiprocessing with multiple geoprocesses in Python?
I am fairly new to Python and wanted to try out multiprocessing. I have a script that runs fine in IDLE or as an ArcMap toolbox script. After reading these forums and docs.python.org, I tried to fold my working script into a multiprocessing script. However many similar working examples there are on this forum, none of them tackles the data processing the way I was hoping to. I hope this is feasible.

Basically, the script walks a list of elevation rasters (ERDAS IMG format), extracts the cells below a threshold, and finally merges the results together. I currently run the script from the command prompt, because everything else either opens new windows or crashes when I try to run it. The script gives the illusion of working well, except that it seems to move on to the final merge before the workers have completely finished.

I have looked at several examples, and few of them have more than two processes in the worker function, and none of them are arcpy geoprocesses. So I suppose my questions are basically: 1) Should I use something other than pool.apply_async, such as pool.map or pool.apply? 2) Am I correctly returning the paths of the final polygons to the result list?

Any criticism is welcome and greatly appreciated. Thanks in advance.
# Import modules
import arcpy, os, math
from arcpy import env
from arcpy.sa import *
import multiprocessing
import time
# Check out licenses
arcpy.CheckOutExtension("spatial")
# Define functions
def worker_bee(inputRaster, scratch, addNum):
    (path, lName) = os.path.split(inputRaster)
    (sName, ext) = os.path.splitext(lName)
    nameParts = sName.split("_")
    nameNumber = nameParts[-1]
    # Create the scratch subfolder if it does not exist
    subFolder = scratch + "\\" + nameNumber + "_output"
    if not os.path.exists(subFolder):
        os.makedirs(subFolder)
    # Set the workspace to the subfolder
    arcpy.env.workspace = subFolder
    arcpy.env.overwriteOutput = True
    arcpy.env.extent = "MAXOF"
    # Local variables
    Expression = "Shape_Area >= 100"
    poly1 = subFolder + "\\poly1.shp"
    poly2 = subFolder + "\\poly2.shp"
    poly3 = subFolder + "\\poly3.shp"
    poly5 = subFolder + "\\poly5.shp"
    poly6 = subFolder + "\\poly6.shp"
    poly7 = subFolder + "\\poly7.shp"
    outName = scratch + "\\ABL_" + nameNumber + ".shp"
    ### Perform calculations ###
    # Map algebra (replace -9999 with 9999)
    inRasterCon = Con(inputRaster, 9999, inputRaster, "Value = -9999")
    # Filter the DEM to smooth out low outliers
    filterOut = Filter(inRasterCon, "LOW", "DATA")
    # Determine the raster MINIMUM value and calculate the threshold
    filterMinResult = arcpy.GetRasterProperties_management(filterOut, "MINIMUM")
    filterMin = filterMinResult.getOutput(0)
    threshold = float(filterMin) + float(addNum)
    # Map algebra (keep values under the threshold)
    outCon = Con(filterOut <= threshold, 1, "")
    arcpy.RasterToPolygon_conversion(outCon, poly1, "SIMPLIFY", "Value")
    # Dissolve parts
    arcpy.Dissolve_management(poly1, poly2, "", "", "SINGLE_PART", "DISSOLVE_LINES")
    # Select parts larger than 100 sq m
    arcpy.Select_analysis(poly2, poly3, Expression)
    # Eliminate polygon parts
    arcpy.EliminatePolygonPart_management(poly3, poly5, "PERCENT", "0 SquareMeters", "10", "CONTAINED_ONLY")
    # Select parts larger than 100 sq m
    arcpy.Select_analysis(poly5, poly6, Expression)
    # Simplify polygon
    arcpy.SimplifyPolygon_cartography(poly6, poly7, "BEND_SIMPLIFY", "3 Meters", "3000 SquareMeters", "RESOLVE_ERRORS", "KEEP_COLLAPSED_POINTS")
    # Smooth polygon
    outShape = arcpy.SmoothPolygon_cartography(poly7, outName, "PAEK", "3 Meters", "FIXED_ENDPOINT", "FLAG_ERRORS").getOutput(0)
    ### Calculations complete ###
    # Delete the scratch subfolder
    arcpy.Delete_management(subFolder)
    print("Completed " + outShape + "...")
    return outShape
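The core per-tile logic (mask the -9999 NoData value, find the minimum, keep cells within addNum of it) can be sketched without arcpy; `cells_below_threshold` and the sample grid below are hypothetical stand-ins, not part of the original script:

```python
NODATA = -9999

def cells_below_threshold(grid, add_num):
    """Return (row, col) cells whose value is below min(valid) + add_num.

    grid is a list of rows; NODATA cells are ignored, mirroring the
    Con(inputRaster, 9999, ...) masking step in the arcpy script.
    """
    valid = [v for row in grid for v in row if v != NODATA]
    threshold = min(valid) + add_num
    return [(r, c)
            for r, row in enumerate(grid)
            for c, v in enumerate(row)
            if v != NODATA and v <= threshold]

tile = [[12.0, 10.5, NODATA],
        [11.0, 10.0, 10.4]]
print(cells_below_threshold(tile, 0.5))  # [(0, 1), (1, 1), (1, 2)]
```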
resultList = []

def log_result(result):
    resultList.append(result)
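The apply_async-plus-callback pattern used here can be exercised with plain Python in place of the arcpy worker; `fake_worker` and `collect` below are hypothetical stand-ins (a minimal sketch, assuming a fork-capable platform):

```python
import multiprocessing

def fake_worker(n):
    # stand-in for worker_bee: just return a computed value
    return n * n

collected = []

def collect(result):
    # the callback is handed each worker's return value as it completes
    collected.append(result)

pool = multiprocessing.Pool(2)
for n in [1, 2, 3]:
    pool.apply_async(fake_worker, (n,), callback=collect)
pool.close()
pool.join()  # after join, all workers and all callbacks have run
print(sorted(collected))  # [1, 4, 9]
```

Completion order is nondeterministic, which is why the demo sorts before printing.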
if __name__ == "__main__":
    arcpy.env.overwriteOutput = True
    # Read in parameters
    inFolder = raw_input("Input Folder: ")  # arcpy.GetParameterAsText(0)
    addElev = raw_input("Number of elevation units to add to minimum: ")
    # Create the scratch folder workspace
    scratchFolder = inFolder + "\\scratch"
    if not os.path.exists(scratchFolder):
        os.makedirs(scratchFolder)
    # Local variables
    dec_num = str(float(addElev) - int(float(addElev)))[1:]
    outNameNum = dec_num.replace(".", "")
    outMerge = inFolder + "\\ABL_" + outNameNum + ".shp"
    # Print core usage
    cores = multiprocessing.cpu_count()
    print("Using " + str(cores) + " cores...")
    # Start timing
    start = time.clock()
    # List input tiles
    arcpy.env.workspace = inFolder
    inTiles = arcpy.ListRasters("*", "IMG")
    tileList = []
    for tile in inTiles:
        tileList.append(inFolder + "\\" + tile)
    # Create a pool of subprocesses
    pool = multiprocessing.Pool(cores)
    print("Adding jobs to multiprocessing pool...")
    for tile in tileList:
        # Add the job to the multiprocessing pool asynchronously
        pool.apply_async(worker_bee, (tile, scratchFolder, addElev), callback=log_result)
    pool.close()
    pool.join()
    # Merge the temporary outputs
    print("Merging temporary outputs into shapefile " + outMerge + "...")
    arcpy.Merge_management(resultList, outMerge)
    # Clean up temporary data
    print("Deleting temporary data...")
    for result in resultList:
        try:
            arcpy.Delete_management(result)
        except:
            pass
    # Stop timing and report duration
    end = time.clock()
    duration = end - start
    hours, remainder = divmod(duration, 3600)
    minutes, seconds = divmod(remainder, 60)
    print("Completed in %dhrs %dmin %dsec" % (hours, minutes, seconds))
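On question 1: pool.map blocks until every worker has finished and returns the results in input order, which removes the need for a callback and a module-level result list. A minimal sketch with a hypothetical stand-in worker:

```python
import multiprocessing

def fake_tile_job(path):
    # stand-in for worker_bee: pretend to process a tile and
    # return the path of the shapefile it produced
    return path.replace(".img", ".shp")

tiles = ["dem_01.img", "dem_02.img", "dem_03.img"]
pool = multiprocessing.Pool(2)
result_list = pool.map(fake_tile_job, tiles)  # blocks until all tiles are done
pool.close()
pool.join()
print(result_list)  # ['dem_01.shp', 'dem_02.shp', 'dem_03.shp']
```

With map there is no window in which the parent can reach the merge step before the workers finish, because map itself does not return early.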
As far as I know, log_result exists in each process and is only called locally. You would be fine if you used map. What is so special about geoprocesses, and what is a geoprocess anyway? A geoprocess is a GIS operation used to manipulate spatial data.
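Where the callback actually executes can be checked directly by comparing process ids; this is a minimal sketch with hypothetical names (on CPython, apply_async callbacks are invoked by a handler thread in the parent process):

```python
import multiprocessing
import os

callback_pids = []

def pid_worker(_):
    # runs in a child process
    return os.getpid()

def note_pid(result):
    # record which process the callback itself runs in
    callback_pids.append(os.getpid())

pool = multiprocessing.Pool(2)
for i in range(3):
    pool.apply_async(pid_worker, (i,), callback=note_pid)
pool.close()
pool.join()
# every callback ran in the parent, so appending to resultList is safe there
print(set(callback_pids) == {os.getpid()})  # True
```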