
C++ boost::circular_buffer: how does it handle overwrite shifting?


I have two processes, a producer and a "consumer" that leaves values in the buffer, where they will eventually be overwritten.

But keeping the consumer in sync is a problem. When the buffer is full and a value gets overwritten, index 0 now points at the value just ahead of the one that was overwritten (i.e. the next-oldest value), the value just inserted sits at the last index, and every value between those two indices shifts.

cb.push_back(0)
cb.push_back(1)
cb.push_back(2)

the consumer has read up to cb[1]; cb[2] should == 2 on the next read

cb.push_back(3)

cb[2] now == 1, effectively re-reading an old value

Interestingly, iterators into the circular buffer keep pointing at the same values even once the buffer starts overwriting, which works fine, except when a read reaches the end() iterator: it stays equal to end() even after more values are inserted. So you have to std::prev(iter, 1) when you finish consuming, and then, when you read again after more values have been inserted, std::next(iter, 1), so that you don't re-read a value you have already read.

I believe circular_buffer exists precisely to abstract iterator positions away from you.

The fact that the buffer is circular is not important to you: it is just a queue interface.

You can see very clearly how the circular buffer is meant to be used in this example:

If you want that level of control, you either

  • want to use simpler container primitives and build your own logic, or

  • can write a bounded buffer on top of circular_buffer. Here is a complete example:

    The docs mention:

    Bounded buffers are typically used in producer-consumer scenarios [...]

    [...]

    The bounded_buffer::pop_back() method does not remove the item; the item stays in the circular_buffer, which then replaces it with a new one (inserted by a producer) once the circular_buffer becomes full. This technique is more effective than removing the item explicitly by calling the circular_buffer::pop_back() method of the circular_buffer.

That sounds like it should help you a lot.

Update: here is a demo adapted to use shared memory:

#define BOOST_CB_DISABLE_DEBUG

#include <boost/circular_buffer.hpp>
#include <boost/thread/thread.hpp>
#include <boost/call_traits.hpp>
#include <boost/bind.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <iostream>
#include <cassert> // assert() is used in main()

const unsigned long QUEUE_SIZE     = 1000L;
const unsigned long TOTAL_ELEMENTS = QUEUE_SIZE * 1000L;

namespace bip = boost::interprocess;

template <class T, class Alloc, typename CV = boost::condition_variable, typename Mutex = boost::mutex>
class bounded_buffer {
public:
    typedef boost::circular_buffer<T, Alloc> container_type;
    typedef typename container_type::size_type                  size_type;
    typedef typename container_type::value_type                 value_type;
    typedef typename container_type::allocator_type             allocator_type;
    typedef typename boost::call_traits<value_type>::param_type param_type;

    bounded_buffer(size_type capacity, Alloc alloc = Alloc()) : m_unread(0), m_container(capacity, alloc) {}

    void push_front(param_type item) {
        boost::unique_lock<Mutex> lock(m_mutex);

        m_not_full.wait(lock, boost::bind(&bounded_buffer::is_not_full, this));
        m_container.push_front(item);
        ++m_unread;
        lock.unlock();

        m_not_empty.notify_one();
    }

    void pop_back(value_type* pItem) {
        boost::unique_lock<Mutex> lock(m_mutex);

        m_not_empty.wait(lock, boost::bind(&bounded_buffer::is_not_empty, this));
        *pItem = m_container[--m_unread];
        lock.unlock();

        m_not_full.notify_one();
    }

private:
    bounded_buffer(const bounded_buffer&);              // Disabled copy constructor
    bounded_buffer& operator = (const bounded_buffer&); // Disabled assign operator

    bool is_not_empty() const { return m_unread > 0; }
    bool is_not_full() const { return m_unread < m_container.capacity(); }

    size_type m_unread;
    container_type m_container;
    Mutex m_mutex;
    CV m_not_empty;
    CV m_not_full;
};

namespace Shared {
    using segment = bip::managed_shared_memory;
    using smgr    = segment::segment_manager;
    template <typename T> using alloc = bip::allocator<T, smgr>;
    template <typename T> using bounded_buffer = ::bounded_buffer<T, alloc<T>, bip::interprocess_condition, bip::interprocess_mutex >;
}

template<class Buffer>
class Consumer {

    typedef typename Buffer::value_type value_type;
    Buffer* m_container;
    value_type m_item;

public:
    Consumer(Buffer* buffer) : m_container(buffer) {}

    void operator()() {
        for (unsigned long i = 0L; i < TOTAL_ELEMENTS; ++i) {
            m_container->pop_back(&m_item);
        }
    }
};

template<class Buffer>
class Producer {

    typedef typename Buffer::value_type value_type;
    Buffer* m_container;

public:
    Producer(Buffer* buffer) : m_container(buffer) {}

    void operator()() {
        for (unsigned long i = 0L; i < TOTAL_ELEMENTS; ++i) {
            m_container->push_front(value_type());
        }
    }
};

int main(int argc, char**) {
    using Buffer = Shared::bounded_buffer<int>;

    if (argc>1) {
        std::cout << "Creating shared buffer\n";
        Shared::segment mem(bip::create_only, "test_bounded_buffer", 10<<20); // 10 MiB
        Buffer* buffer = mem.find_or_construct<Buffer>("shared_buffer")(QUEUE_SIZE, mem.get_segment_manager());

        assert(buffer);

        // Initialize the buffer with some values before launching producer and consumer threads.
        for (unsigned long i = QUEUE_SIZE / 2L; i > 0; --i) {
            buffer->push_front(BOOST_DEDUCED_TYPENAME Buffer::value_type());
        }

        std::cout << "running producer\n";
        Producer<Buffer> producer(buffer);
        boost::thread(producer).join();
    } else {
        std::cout << "Opening shared buffer\n";

        Shared::segment mem(bip::open_only, "test_bounded_buffer");
        Buffer* buffer = mem.find_or_construct<Buffer>("shared_buffer")(QUEUE_SIZE, mem.get_segment_manager());

        assert(buffer);

        std::cout << "running consumer\n";
        Consumer<Buffer> consumer(buffer);
        boost::thread(consumer).join();
    }
}


The bounded buffer looks pretty nice, but I'm not sure it's compatible with boost.interprocess shared memory. I may need to roll my own in a struct or something.

I don't see why not? Also, it's a little funny to come back with that objection, since I don't see shared memory mentioned anywhere in your question. I therefore assumed you were using "process" to mean a logical process running in a thread. Here's a live demo, with managed_shared_memory replaced by managed_mapped_file (Coliru doesn't support shm).

It wasn't a rejection; I'm still trying to look at all my options. I didn't think using interprocess was part of the question being asked, so I left it out to keep the question tight. I just assumed the bounded buffer wouldn't automatically be compatible, since it has to be wrapped around a circular_buffer, whereas boost already has compatible vector, deque, circular_buffer, etc. It seems possible, but I need to learn more about creating custom containers that use shared memory. Thanks for the example. I'm still a bit unsure how to create the allocator and assign it to the struct, but this should help a lot.