How to automatically remove completed futures from a std::vector in C++


In the example below, mEventExecutors is a std::vector of futures. I would like to be able to remove futures from the vector as they complete. Can this be done?

void RaiseEvent(EventID messageID)
{
    mEventExecutors.push_back(std::move(std::async([=]{
            auto eventObject = mEventListeners.find(messageID);
            if (eventObject != mEventListeners.end())
            {
                for (auto listener : eventObject->second)
                {
                    listener();
                }
            }
        })
    ));
}

It seems to me that a simple solution is to not use std::async, and to use std::thread instead.

However, you need to be careful: your code currently has plenty of data races. Consider using another mutex, or some other technique, to guard against them.

std::thread{[=]() {
    // Task is running...
    auto eventObject = mEventListeners.find(messageID);
    if (eventObject != mEventListeners.end())
    {
        for (auto listener : eventObject->second)
        {
            listener();
        }
    }
}}.detach(); // detach the thread so that it keeps running
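Alternatively, staying with the std::async approach from the question, completed futures can be pruned with a zero-timeout wait before each insertion. This is an illustrative sketch rather than part of the original answers; the std::future<void> element type and the free-standing PruneFinished helper are assumptions:

```cpp
#include <algorithm>
#include <chrono>
#include <future>
#include <vector>

// assumed element type; the question only shows push_back of std::async results
std::vector<std::future<void>> mEventExecutors;

// erase every future that reports ready via a zero-timeout wait.
// note: a deferred future reports future_status::deferred, never ready,
// so this only prunes tasks that were actually launched asynchronously.
void PruneFinished() {
    mEventExecutors.erase(
        std::remove_if(mEventExecutors.begin(), mEventExecutors.end(),
                       [](std::future<void>& f) {
                           return f.wait_for(std::chrono::seconds(0)) ==
                                  std::future_status::ready;
                       }),
        mEventExecutors.end());
}
```

Calling PruneFinished() at the top of RaiseEvent would keep the vector from growing without bound, at the cost of a linear scan per event.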

The question itself has already been answered by someone else, but it piqued my curiosity: how would one implement a fully featured, thread-safe task manager in the fewest possible lines of code?

I also wondered whether tasks could be awaited as futures, or could optionally be given a completion callback.

That of course raised the question of whether these futures could use the sexy continuation syntax .then(xxx), rather than blocking code.

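Standard C++11 futures have no .then member, which is why the code below uses boost::future. For flavour, a minimal (and wasteful) approximation over std::future can be sketched by dedicating one thread per continuation; the helper name then_ is hypothetical, and the sketch assumes C++14 for the init-captures:

```cpp
#include <future>
#include <utility>

// hypothetical helper: attach a continuation to a std::future by spawning
// an async task that blocks until the antecedent is ready. fine as a sketch,
// wasteful in real code - each pending continuation occupies a whole thread.
template <class T, class F>
auto then_(std::future<T> fut, F cont) {
    return std::async(std::launch::async,
                      [fut = std::move(fut), cont = std::move(cont)]() mutable {
                          return cont(std::move(fut));
                      });
}
```

The continuation receives the antecedent future itself, so it can call get() and observe either the value or the exception, just like boost's .then.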
Here is my attempt.

Christopher Kohlhoff is the author of boost::asio. Studying his excellent work taught me the value of separating a class into:

  • handle - controls the lifetime of the object,
  • service - provides the object's logic and the state shared between object impls, and manages the lifetime of implementation objects if they outlive the handle (anything that relies on callbacks will usually outlive its handle), and
  • impl - provides the per-object state.
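Stripped of any threading, that three-way split can be sketched like this (widget and friends are illustrative names, not taken from the code further down; the sketch assumes C++14 for std::make_unique):

```cpp
#include <memory>

struct widget_impl {            // impl: holds per-object state only
    int value = 0;
};

struct widget_service {         // service: owns the logic shared by all impls
    static widget_service& use() {   // the service controls its own creation
        static widget_service me;
        return me;
    }
    std::unique_ptr<widget_impl> construct() {
        return std::make_unique<widget_impl>();
    }
    void set(widget_impl& impl, int v) { impl.value = v; }
    int  get(widget_impl& impl) const  { return impl.value; }
};

struct widget {                 // handle: controls lifetime, forwards calls
    widget()
        : service_(&widget_service::use())
        , impl_(service_->construct()) {}
    void set(int v) { service_->set(*impl_, v); }
    int  get()      { return service_->get(*impl_); }
private:
    widget_service* service_;
    std::unique_ptr<widget_impl> impl_;
};
```

The real task_manager below follows the same shape, with the service additionally owning a thread pool and using a custom deleter so that destroying a handle waits for its outstanding tasks.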
Here is an example of the calling code:

int main() {
    task_manager mgr;

    // an example of using async callbacks to indicate completion and error
    mgr.submit([] {
                   emit("task 1 is doing something");
                   std::this_thread::sleep_for(1s);
                   emit("task 1 done");
               },
               [](auto err) {
                   if (not err) {
                       emit("task 1 completed");
                   } else {
                       emit("task 1 failed");
                   }
               });

    // an example of returning a future (see later)
    auto f = mgr.submit([] {
        emit("task 2 doing something");
        std::this_thread::sleep_for(1500ms);
        emit("task 2 is going to throw");
        throw std::runtime_error("here is an error");
    }, use_future);

    // an example of returning a future and then immediately using its continuation.
    // note that the continuation happens on the task_manager's thread pool
    mgr.submit([]
               {
                   emit("task 3 doing something");
                   std::this_thread::sleep_for(500ms);
                   emit("task 3 is done");
               },
               use_future)
            .then([](auto f) {
                try {
                    f.get();
                }
                catch(std::exception const& e) {
                    emit("task 3 threw an exception: ", e.what());
                }
            });

    // block on the future of the second example
    try {
        f.get();
    }
    catch (std::exception &e) {
        emit("task 2 threw: ", e.what());
    }
}
That results in the following output:

task 1 is doing something
task 2 doing something
task 3 doing something
task 3 is done
task 1 done
task 1 completed
task 2 is going to throw
task 2 threw: here is an error
Here is the complete code (tested on apple clang, which is more permissive than gcc, so apologies if I have missed a this-> in a lambda):

#define BOOST_THREAD_PROVIDES_FUTURE 1
#define BOOST_THREAD_PROVIDES_FUTURE_CONTINUATION 1
#define BOOST_THREAD_PROVIDES_EXECUTORS 1

/* written by Richard Hodges 2017
 * You're free to use the code, but please give credit where it's due :)
 */
#include <boost/thread/future.hpp>
#include <boost/thread/executors/basic_thread_pool.hpp>
#include <thread>
#include <utility>
#include <unordered_map>
#include <stdexcept>
#include <condition_variable>

// I made a task an object because I thought I might want to store state in it.
// it turns out that this is not strictly necessary

struct task {

};

/*
 * This is the implementation data for one task_manager
 */
struct task_manager_impl {

    using mutex_type = std::mutex;
    using lock_type = std::unique_lock<mutex_type>;

    auto get_lock() -> lock_type {
        return lock_type(mutex_);
    }

    auto add_task(lock_type const &lock, std::unique_ptr<task> t) {
        auto id = t.get();
        task_map_.emplace(id, std::move(t));
    }

    auto remove_task(lock_type lock, task *task_id) {
        task_map_.erase(task_id);
        if (task_map_.empty()) {
            lock.unlock();
            no_more_tasks_.notify_all();
        }
    }

    auto wait(lock_type lock) {
        no_more_tasks_.wait(lock, [this]() { return task_map_.empty(); });
    }

    // for this example I have chosen to express errors as exceptions
    using error_type = std::exception_ptr;

    mutex_type mutex_;
    std::condition_variable no_more_tasks_;


    std::unordered_map<task *, std::unique_ptr<task>> task_map_;
};

/*
 * This stuff is the protocol to figure out whether to return a future
 * or just invoke a callback.
 * Total respect to Christopher Kohlhoff of asio fame for figuring this out
 * I merely step in his footsteps here, with some simplifications because of c++11
 */
struct use_future_t {
};
constexpr auto use_future = use_future_t();

template<class Handler>
struct make_async_handler {
    auto wrap(Handler handler) {
        return handler;
    }

    struct result_type {
        auto get() -> void {}
    };

    struct result_type result;
};

template<>
struct make_async_handler<const use_future_t &> {
    struct shared_state_type {
        boost::promise<void> promise;
    };

    make_async_handler() {
    }

    template<class Handler>
    auto wrap(Handler &&) {
        return [shared_state = this->shared_state](auto error) {
            // boost promises deal in terms of boost::exception_ptr so we need to marshal.
            // this is a small price to pay for the extra utility of boost::promise over
            // std::promise
            if (error) {
                try {
                    std::rethrow_exception(error);
                }
                catch (...) {
                    shared_state->promise.set_exception(boost::current_exception());
                }
            } else {
                shared_state->promise.set_value();
            }
        };
    }


    struct result_type {
        auto get() -> boost::future<void> { return shared_state->promise.get_future(); }

        std::shared_ptr<shared_state_type> shared_state;
    };

    std::shared_ptr<shared_state_type> shared_state = std::make_shared<shared_state_type>();
    result_type result{shared_state};

};

/*
 * Provides the logic of a task manager. Also notice that it maintains a boost::basic_thread_pool
 * The destructor of a basic_thread_pool will not complete until all tasks are complete. So our
 * program will not crash horribly at exit time.
 */
struct task_manager_service {

    /*
     * through this function, the service has full control over how it is created and destroyed.
     */

    static auto use() -> task_manager_service&
    {
        static task_manager_service me {};
        return me;
    }

    using impl_class = task_manager_impl;

    struct deleter {
        void operator()(impl_class *p) {
            service_->destroy(p);
        }

        task_manager_service *service_;
    };

    /*
     * defining impl_type in terms of a unique_ptr ensures that the handle will be
     * moveable but not copyable.
     * Had we used a shared_ptr, the handle would be copyable with shared semantics.
     * That can be useful too.
     */
    using impl_type = std::unique_ptr<impl_class, deleter>;

    auto construct() -> impl_type {
        return impl_type(new impl_class(),
                         deleter {this});
    }

    auto destroy(impl_class *impl) -> void {
        wait(*impl);
        delete impl;
    }

    template<class Job, class Handler>
    auto submit(impl_class &impl, Job &&job, Handler &&handler) {

        auto make_handler = make_async_handler<Handler>();


        auto async_handler = make_handler.wrap(std::forward<Handler>(handler));

        auto my_task = std::make_unique<task>();
        auto task_ptr = my_task.get();

        auto task_done = [
                this,
                task_id = task_ptr,
                &impl,
                async_handler
        ](auto error) {
            async_handler(error);
            this->remove_task(impl, task_id);
        };
        auto lock = impl.get_lock();
        impl.add_task(lock, std::move(my_task));
        launch(impl, task_ptr, std::forward<Job>(job), task_done);

        return make_handler.result.get();
    }

    template<class F, class Handler>
    auto launch(impl_class &, task *task_ptr, F &&f, Handler &&handler) -> void {
        this->thread_pool_.submit([f, handler] {
            auto error = std::exception_ptr();
            try {
                f();
            }
            catch (...) {
                error = std::current_exception();
            }
            handler(error);
        });
    }


    auto wait(impl_class &impl) -> void {
        impl.wait(impl.get_lock());
    }

    auto remove_task(impl_class &impl, task *task_id) -> void {
        impl.remove_task(impl.get_lock(), task_id);
    }


    boost::basic_thread_pool thread_pool_{std::thread::hardware_concurrency()};

};

/*
 * The task manage handle. Holds the task_manager implementation plus provides access to the
 * owning task_manager_service. In this case, the service is a global static object. In an io loop environment
 * for example, asio, the service would be owned by the io loop.
 */
struct task_manager {

    using service_type = task_manager_service;
    using impl_type = service_type::impl_type;
    using impl_class = decltype(*std::declval<impl_type>());

    task_manager()
            : service_(std::addressof(service_type::use()))
            , impl_(get_service().construct()) {}

    template<class Job, class Handler>
    auto submit(Job &&job, Handler &&handler) {
        return get_service().submit(get_impl(),
                                    std::forward<Job>(job),
                                    std::forward<Handler>(handler));
    }

    auto get_service() -> service_type & {
        return *service_;
    }

    auto get_impl() -> impl_class & {
        return *impl_;
    }

private:

    service_type* service_;
    impl_type impl_;
};


/*
 * helpful thread-safe emitter
 */
std::mutex thing_mutex;

template<class...Things>
void emit(Things &&...things) {
    auto lock = std::unique_lock<std::mutex>(thing_mutex);
    using expand = int[];
    void(expand{0,
                ((std::cout << things), 0)...
    });
    std::cout << std::endl;
}

using namespace std::literals;

int main() {
    task_manager mgr;

    // an example of using async callbacks to indicate completion and error
    mgr.submit([] {
                   emit("task 1 is doing something");
                   std::this_thread::sleep_for(1s);
                   emit("task 1 done");
               },
               [](auto err) {
                   if (not err) {
                       emit("task 1 completed");
                   } else {
                       emit("task 1 failed");
                   }
               });

    // an example of returning a future (see later)
    auto f = mgr.submit([] {
        emit("task 2 doing something");
        std::this_thread::sleep_for(1500ms);
        emit("task 2 is going to throw");
        throw std::runtime_error("here is an error");
    }, use_future);

    // an example of returning a future and then immediately using its continuation.
    // note that the continuation happens on the task_manager's thread pool
    mgr.submit([] {
                   emit("task 3 doing something");
                   std::this_thread::sleep_for(500ms);
                   emit("task 3 is done");
               },
               use_future)
            .then([](auto f) {
                try {
                    f.get();
                }
                catch (std::exception const &e) {
                    emit("task 3 threw an exception: ", e.what());
                }
            });

    // block on the future of the second example
    try {
        f.get();
    }
    catch (std::exception &e) {
        emit("task 2 threw: ", e.what());
    }
}