C++ asio and active objects


I have implemented a module-based Active Object design pattern. It is a very simple implementation: I have a Scheduler, an ActivationList, Requests, and Futures for obtaining responses. My requirements were:

  • Access to the active object shall be serialized by executing its methods within its own thread (the main requirement and assumption of the Active Object design pattern).
  • The caller shall be able to specify the priority of request execution. That means that whenever more than zero requests are waiting to be executed, they shall be ordered by the priority assigned to each request. Requests with higher priority shall be executed first, so if there are always some requests pending on the ActivationList with higher priority than a given request, that request may never be executed - that is acceptable to me.
  • It shall be possible to specify the maximum number of requests pending in the list (to limit memory usage).
  • It shall be possible to invalidate all pending requests.
  • Requests shall be able to return values (blocking the caller), or be executed without returning a value, either blocking the caller until the request is processed, or not blocking the caller at all when it does not matter to the caller whether the given request was processed.
  • Just before a request is executed, a guard method shall be run to check whether the given request shall execute at all. If not, it shall return some undefined value to the caller (in my current implementation that is boost::none, since every request's return type is boost::optional).

OK, now the question:
Can boost::asio be used to fulfill all of my requirements? My implementation works, but I would like to use something that is probably implemented much better than what I did. I would also like to learn it for the future and not "reinvent the wheel" yet again.

Boost.Asio can be used to encompass the intent of the Active Object pattern: decoupling method execution from method invocation. The additional requirements will need to be handled at a higher level, but that is not overly complicated when using Boost.Asio in conjunction with other Boost libraries.

Scheduler
could use:

  • boost::thread for thread abstraction.
  • boost::thread_group to manage the threads' lifetimes.
  • boost::asio::io_service to provide a thread pool. boost::asio::io_service::work can be used to keep the threads alive when no work is pending.
ActivationList
could be implemented as:

  • A boost::multi_index_container, to obtain the highest-priority method request. With a hinted-position insert(), the insertion order is preserved for requests with the same priority.
  • std::multiset or std::multimap can be used. However, in C++03 the order of requests with the same key (priority) is unspecified.
  • If Request does not need a guard method, then std::priority_queue could be used instead.
Request
could be an unspecified type:

  • boost::function and boost::bind can be used to provide type erasure, binding to callable types without introducing a Request hierarchy.
Futures
could use Boost.Thread's futures support:

  • future.valid() will return true if the Request has been added to the ActivationList.
  • future.wait() will block while waiting for the result to become available.
  • future.get() will block while waiting for the result.
  • If the caller does nothing with the future, the caller will not be blocked.
  • Another benefit of using Boost.Thread's futures is that an exception originating from within a Request will be passed to the future.

Here is a complete example, leveraging various Boost libraries, that should meet the requirements:

// Standard includes
#include <algorithm> // std::find_if
#include <iostream>
#include <string>

// 3rd party includes
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <boost/make_shared.hpp>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread.hpp>
#include <boost/utility/result_of.hpp>

/// @brief scheduler that provides limits with prioritized jobs.
template <typename Priority,
          typename Compare = std::less<Priority> >
class scheduler
{
public:
  typedef Priority priority_type;
private:

  /// @brief method_request is used to couple the guard and call
  ///        functions for a given method.
  struct method_request
  {
    typedef boost::function<bool()> ready_func_type;
    typedef boost::function<void()> run_func_type;

    template <typename ReadyFunctor,
              typename RunFunctor>
    method_request(ReadyFunctor ready,
                   RunFunctor run)
      : ready(ready),
        run(run)
    {}

    ready_func_type ready;
    run_func_type run;
  };

  /// @brief Pair type used to associate a request with its priority.
  typedef std::pair<priority_type,
                    boost::shared_ptr<method_request> > pair_type;

  static bool is_method_ready(const pair_type& pair)
  {
    return pair.second->ready();
  }

public:

  /// @brief Construct scheduler.
  ///
  /// @param max_threads Maximum number of concurrent tasks.
  /// @param max_request Maximum number of pending requests.
  scheduler(std::size_t max_threads,
            std::size_t max_request)
    : work_(io_service_),
      max_request_(max_request),
      request_count_(0)
  {
    // Spawn threads, dedicating them to the io_service.
    for (std::size_t i = 0; i < max_threads; ++i)
      threads_.create_thread(
        boost::bind(&boost::asio::io_service::run, &io_service_));
  }

  /// @brief Destructor.
  ~scheduler()
  {
    // Release threads from the io_service.
    io_service_.stop();
    // Cleanup.
    threads_.join_all();
  }

  /// @brief Insert a method request into the scheduler.
  ///
  /// @param priority Priority of job.
  /// @param ready_func Invoked to check if method is ready to run.
  /// @param run_func Invoked when ready to run.
  ///
  /// @return future associated with the method.
  template <typename ReadyFunctor,
            typename RunFunctor>
  boost::unique_future<typename boost::result_of<RunFunctor()>::type>
  insert(priority_type priority, 
         const ReadyFunctor& ready_func,
         const RunFunctor& run_func)
  {
    typedef typename boost::result_of<RunFunctor()>::type result_type;
    typedef boost::unique_future<result_type> future_type;

    boost::unique_lock<mutex_type> lock(mutex_);

    // If max request has been reached, then return an invalid future.
    if (max_request_ &&
        (request_count_ == max_request_))
      return future_type();

    ++request_count_;

    // Use a packaged task to handle populating promise and future.
    typedef boost::packaged_task<result_type> task_type;

    // Bind does not work with rvalue, and packaged_task is only moveable,
    // so allocate a shared pointer.
    boost::shared_ptr<task_type> task = 
      boost::make_shared<task_type>(run_func);

    // Create method request.
    boost::shared_ptr<method_request> request =
      boost::make_shared<method_request>(
        ready_func,
        boost::bind(&task_type::operator(), task));

    // Insert into priority.  Hint to inserting as close to the end as
    // possible to preserve insertion order for request with same priority.
    activation_list_.insert(activation_list_.end(),
                            pair_type(priority, request));

    // There is now an outstanding request, so post to dispatch.
    io_service_.post(boost::bind(&scheduler::dispatch, this));

    return task->get_future();
  }

  /// @brief Insert a method request into the scheduler.
  ///
  /// @param ready_func Invoked to check if method is ready to run.
  /// @param run_func Invoked when ready to run.
  ///
  /// @return future associated with the method.
  template <typename ReadyFunctor,
            typename RunFunctor>
  boost::unique_future<typename boost::result_of<RunFunctor()>::type>
  insert(const ReadyFunctor& ready_func,
         const RunFunctor& run_func)
  {
    return insert(priority_type(), ready_func, run_func);
  }

  /// @brief Insert a method request into the scheduler.
  ///
  /// @param priority Priority of job.
  /// @param run_func Invoked when ready to run.
  ///
  /// @return future associated with the method.
  template <typename RunFunctor>
  boost::unique_future<typename boost::result_of<RunFunctor()>::type>
  insert(priority_type priority, 
         const RunFunctor& run_func)
  {
    return insert(priority, &always_ready, run_func);
  }

  /// @brief Insert a method request with default priority into the
  ///        scheduler.
  ///
  /// @param run_func Invoked when ready to run.
  ///
  /// @return future associated with the job.
  template <typename RunFunc>
  boost::unique_future<typename boost::result_of<RunFunc()>::type>
  insert(const RunFunc& run_func)
  {
    return insert(&always_ready, run_func);
  }

  /// @brief Cancel all outstanding requests.
  void cancel()
  {
    boost::unique_lock<mutex_type> lock(mutex_);
    activation_list_.clear();
    request_count_ = 0;
  } 

private:

  /// @brief Dispatch a request.
  void dispatch()
  {
    // Get the current highest priority request ready to run from the queue.
    boost::unique_lock<mutex_type> lock(mutex_);
    if (activation_list_.empty()) return;

    // Find the highest priority method ready to run.
    typedef typename activation_list_type::iterator iterator;
    iterator end = activation_list_.end();
    iterator result = std::find_if(
      activation_list_.begin(), end, &is_method_ready);

    // If no methods are ready, then post into dispatch, as the
    // method may have become ready.
    if (end == result)
    {
      io_service_.post(boost::bind(&scheduler::dispatch, this));
      return;
    }

    // Take ownership of request.
    boost::shared_ptr<method_request> method = result->second;
    activation_list_.erase(result);

    // Run method without mutex.
    lock.unlock();
    method->run();    
    lock.lock();

    // Perform bookkeeping.
    --request_count_;
  }

  static bool always_ready() { return true; }

private:

  /// @brief List of outstanding requests.
  typedef boost::multi_index_container<
    pair_type,
    boost::multi_index::indexed_by<
      boost::multi_index::ordered_non_unique<
        boost::multi_index::member<pair_type,
                                   typename pair_type::first_type,
                                   &pair_type::first>,
        Compare
      >
    >
  > activation_list_type;
  activation_list_type activation_list_;

  /// @brief Thread group managing threads servicing pool.
  boost::thread_group threads_;

  /// @brief io_service used to function as a thread pool.
  boost::asio::io_service io_service_;

  /// @brief Work is used to keep threads servicing io_service.
  boost::asio::io_service::work work_;

  /// @brief Maximum number of requests.
  const std::size_t max_request_;

  /// @brief Count of outstanding requests.
  std::size_t request_count_;

  /// @brief Synchronize access to the activation list.
  typedef boost::mutex mutex_type;
  mutex_type mutex_;
};

typedef scheduler<unsigned int, 
                  std::greater<unsigned int> > high_priority_scheduler;

/// @brief adder is a simple proxy that will delegate work to
///        the scheduler.
class adder
{
public:
  adder(high_priority_scheduler& scheduler)
    : scheduler_(scheduler)
  {}

  /// @brief Add a and b with a priority.
  ///
  /// @return Return future result.
  template <typename T>
  boost::unique_future<T> add(
    high_priority_scheduler::priority_type priority,
    const T& a, const T& b)
  {
    // Insert method request
    return scheduler_.insert(
      priority,
      boost::bind(&adder::do_add<T>, a, b));
  }

  /// @brief Add a and b.
  ///
  /// @return Return future result.
  template <typename T>
  boost::unique_future<T> add(const T& a, const T& b)
  {
    return add(high_priority_scheduler::priority_type(), a, b);
  }

private:

  /// @brief Actual add a and b.
  template <typename T>
  static T do_add(const T& a, const T& b)
  {
    std::cout << "Starting addition of '" << a 
              << "' and '" << b << "'" << std::endl;
    // Mimic busy work.
    boost::this_thread::sleep_for(boost::chrono::seconds(2));
    std::cout << "Finished addition" << std::endl;
    return a + b;
  }

private:
  high_priority_scheduler& scheduler_;
};

bool get(bool& value) { return value; }
void guarded_call()
{
  std::cout << "guarded_call" << std::endl; 
}

int main()
{
  const unsigned int max_threads = 1;
  const unsigned int max_request = 4;

  // Scheduler
  high_priority_scheduler scheduler(max_threads, max_request);

  // Proxy
  adder adder(scheduler);

  // Client

  // Add guarded method to scheduler.
  bool ready = false;
  std::cout << "Add guarded method." << std::endl;
  boost::unique_future<void> future1 = scheduler.insert(
    boost::bind(&get, boost::ref(ready)),
    &guarded_call);

  // Add 1 + 100 with default priority.
  boost::unique_future<int> future2 = adder.add(1, 100);

  // Force sleep to try to get scheduler to run request 2 first.
  boost::this_thread::sleep_for(boost::chrono::seconds(1));

  // Add:
  //   2 + 200 with low priority (5)
  //   "test" + "this" with high priority (99)
  boost::unique_future<int> future3 = adder.add(5, 2, 200);
  boost::unique_future<std::string> future4 = adder.add(99,
    std::string("test"), std::string("this"));

  // Max request should have been reached, so add another.
  boost::unique_future<int> future5 = adder.add(3, 300);

  // Check if request was added.
  std::cout << "future1 is valid: " << future1.valid()
          << "\nfuture2 is valid: " << future2.valid()
          << "\nfuture3 is valid: " << future3.valid()
          << "\nfuture4 is valid: " << future4.valid()
          << "\nfuture5 is valid: " << future5.valid()
          << std::endl;

  // Get results for future2 and future3.  Do nothing with future4's results.
  std::cout << "future2 result: " << future2.get()
          << "\nfuture3 result: " << future3.get()
          << std::endl;

  std::cout << "Unguarding method." << std::endl;
  ready = true;
  future1.wait();
}
The example produces the following output:
Add guarded method.
Starting addition of '1' and '100'
future1 is valid: 1
future2 is valid: 1
future3 is valid: 1
future4 is valid: 1
future5 is valid: 0
Finished addition
Starting addition of 'test' and 'this'
Finished addition
Starting addition of '2' and '200'
Finished addition
future2 result: 101
future3 result: 202
Unguarding method.
guarded_call