Ruby EventMachine queueing problem
I have an HTTP client written in Ruby that makes synchronous requests to URLs. To execute multiple requests quickly, however, I decided to use EventMachine. The idea is to queue all the requests and execute them with EventMachine:
class EventMachineBackend
  ...
  ...
  def execute(request)
    $q ||= EM::Queue.new
    $q.push(request)
    $q.pop { |request| request.invoke }
    EM.run { EM.next_tick { EM.stop } }
  end
  ...
end
Please excuse the use of a global queue variable; I'll refactor it later. Is what I'm doing in EventMachineBackend#execute the right way to use EventMachine queues?

One problem I see with my implementation is that it is essentially synchronous: I push a request, pop and execute it, and then wait for it to complete.
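The synchronous behaviour is easy to demonstrate without EventMachine at all. Below is a minimal sketch using Ruby's built-in Queue and a hypothetical FakeRequest as a stand-in for the real request object; pushing and then immediately popping processes the requests strictly one at a time, the same as having no queue:

```ruby
# Hypothetical stand-in for the request objects in the question:
# invoking a "request" just records its name in a shared log.
FakeRequest = Struct.new(:name, :log) do
  def invoke
    log << name
  end
end

# The pattern from the question: push one item, pop it right back,
# and run it to completion before returning -- the queue adds nothing.
def execute(request, queue)
  queue.push(request)
  queue.pop.invoke
end

log = []
q = Queue.new
%w[a b c].each { |name| execute(FakeRequest.new(name, log), q) }
log # => ["a", "b", "c"] -- strictly sequential
```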
Can anyone suggest a better implementation?

Your request logic has to be asynchronous to work with EventMachine, so I suggest using an asynchronous HTTP client; the examples below use EM::HttpRequest, and the same gem also provides an even better interface for running multiple connections in parallel. If you want to queue requests and run only a fixed number of them in parallel, you can do something like this:
EM.run do
  urls = [...] # regular array with URLs
  active_requests = 0

  # predeclare launch_next so the when_done closure can see the
  # local variable (otherwise Ruby would treat it as a method call)
  launch_next = nil

  # this routine will be used as a callback and will
  # be run when each request finishes
  when_done = proc do
    active_requests -= 1
    if urls.empty? && active_requests == 0
      # if there are no more urls and there are no active
      # requests it means we're done, so shut down the reactor
      EM.stop
    elsif !urls.empty?
      # if there are more urls, launch a new request
      launch_next.call
    end
  end

  # this routine launches a request
  launch_next = proc do
    # get the next url to fetch
    url = urls.pop
    # launch the request, and register the callback
    request = EM::HttpRequest.new(url).get
    request.callback(&when_done)
    request.errback(&when_done)
    # increment the number of active requests; this
    # is important since it will tell us when all requests
    # are done
    active_requests += 1
  end

  # launch three requests in parallel; each will launch
  # a new request when done, so there will always be
  # three requests active at any one time, unless there
  # are no more urls to fetch
  3.times do
    launch_next.call
  end
end
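The control flow above can be exercised without EventMachine or the network by replacing the reactor with a plain array of pending completions. This is only a simulation of the bookkeeping (the in_flight array is a stand-in for the reactor, not an EventMachine API), but it shows that the active_requests counter keeps at most three requests in flight and that the loop shuts down exactly when the last one finishes:

```ruby
urls = (1..7).map { |i| "http://example.com/#{i}" }
in_flight = []        # completion callbacks waiting to fire ("the reactor")
active_requests = 0
max_active = 0
done = false

launch_next = nil     # predeclared so when_done can close over it
when_done = proc do
  active_requests -= 1
  if urls.empty? && active_requests == 0
    done = true       # stands in for EM.stop
  elsif !urls.empty?
    launch_next.call
  end
end

launch_next = proc do
  urls.pop                 # take the next url
  in_flight << when_done   # "the request will finish later"
  active_requests += 1
  max_active = [max_active, active_requests].max
end

3.times { launch_next.call }

# drain the fake reactor: each completion may launch another request
in_flight.shift.call until in_flight.empty?

[done, max_active] # => [true, 3] -- never more than three in flight
```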
Caveat emptor: there may very well be some detail I've missed in the code above.

If you find it hard to follow the logic in my example, welcome to the world of evented programming. It's really tricky to write readable evented code; everything runs backwards. Sometimes it helps to start reading from the end.

I've assumed that you don't want to add more requests after you've started downloading; it doesn't look that way from the code in your question, but if you do, you can rewrite my code to use an EM::Queue instead of a regular array and remove the part that does EM.stop, since you won't be stopping. You can probably also remove the code that tracks the number of active requests, since that's no longer relevant. The important part would look something like this:
launch_next = proc do
  urls.pop do |url|
    request = EM::HttpRequest.new(url).get
    request.callback(&launch_next)
    request.errback(&launch_next)
  end
end
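The way EM::Queue#pop takes a block (the callback fires when an item becomes available, rather than returning a value) can be imitated with a small pure-Ruby class. ToyQueue below is a sketch for illustration only, not the real EM::Queue, but it shows why the worker can be started before any URLs exist: its pop block is simply parked until something is pushed.

```ruby
# Minimal stand-in for EM::Queue: #pop takes a block that is called
# as soon as an item is available (immediately if the queue is non-empty,
# otherwise when the next push arrives).
class ToyQueue
  def initialize
    @items = []
    @waiting = []
  end

  def push(item)
    if (blk = @waiting.shift)
      blk.call(item)     # hand the item straight to a parked consumer
    else
      @items << item
    end
  end

  def pop(&blk)
    if @items.empty?
      @waiting << blk    # deferred until something is pushed
    else
      blk.call(@items.shift)
    end
  end
end

fetched = []
urls = ToyQueue.new

launch_next = proc do
  urls.pop do |url|
    fetched << url       # stands in for EM::HttpRequest + callbacks
    launch_next.call     # ask for the next url; parks if none yet
  end
end

launch_next.call         # worker is now parked on an empty queue
urls.push("http://example.com/a")
urls.push("http://example.com/b")
fetched # => ["http://example.com/a", "http://example.com/b"]
```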
Also, remember that my code doesn't actually do anything with the responses. The response is passed as an argument to the when_done routine (in the first example) when a request finishes. I do the same thing for both success and error, which you may not want to do in a real application.

Nice solution. The only thing is that launch_next doesn't know when a given request finishes, so you need to handle that yourself. At least this works for me.

Does this cause stack level too deep errors? I have a similar recursive solution that can scrape about 1600 pages before hitting stack level too deep. Does your solution avoid this problem by using procs?

This method shouldn't cause stack overflows, since it's not strictly recursive: the procs don't call themselves, they schedule themselves to be called later by the reactor.
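The last point, scheduling instead of recursing, can be shown with a few lines of plain Ruby: a callback that appends the next step to a work queue drained by a flat loop keeps the stack depth constant no matter how many steps run. This trampoline is only an illustration of what the reactor does, not EventMachine code:

```ruby
# Instead of the callback calling itself (recursion, which grows the
# stack), it appends the next step to a work queue that a flat loop
# drains -- the stack never grows with the number of tasks.
tasks = []
count = 0

step = proc do
  count += 1
  tasks << step if count < 50_000   # "schedule", don't recurse
end

tasks << step
tasks.shift.call until tasks.empty? # flat loop: constant stack depth
count # => 50000
```

A directly recursive version of the same loop would blow the stack long before 50,000 iterations.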