
Single-producer, single-consumer data structure with double buffering in C++

Tags: c++, concurrency, real-time, producer-consumer, double-buffering

I have an application at $work in which I have to move data between two real-time threads that are scheduled at different frequencies. (The actual scheduling is beyond my control.) The application is hard real-time (one of the threads has to drive a hardware interface), so the data transfer between the threads should be lock-free and wait-free as far as possible.

It is important to note that only one block of data needs to be transferred: because the two threads run at different rates, there are times when two iterations of the faster thread complete between two wake-ups of the slower one; in that case it is acceptable to overwrite the data in the write buffer, so that the slower thread gets only the latest data.

In other words, instead of a queue, a double-buffered solution suffices. The two buffers are allocated during initialization, and the reader and writer threads can call methods of the class to get pointers to one of these buffers.

The C++ code:

#include <mutex>

template <typename T>
class ProducerConsumerDoubleBuffer {
public:
    ProducerConsumerDoubleBuffer() {
        m_write_busy = false;
        m_read_idx = m_write_idx = 0;
    }

    ~ProducerConsumerDoubleBuffer() { }

    // The writer thread using this class must call
    // start_writing() at the start of its iteration
    // before doing anything else to get the pointer
    // to the current write buffer.
    T * start_writing(void) {
        std::lock_guard<std::mutex> lock(m_mutex);

        m_write_busy = true;
        m_write_idx = 1 - m_read_idx;

        return &m_buf[m_write_idx];
    }
    // The writer thread must call end_writing()
    // as the last thing it does
    // to release the write busy flag.
    void end_writing(void) {
        std::lock_guard<std::mutex> lock(m_mutex);

        m_write_busy = false;
    }

    // The reader thread must call start_reading()
    // at the start of its iteration to get the pointer
    // to the current read buffer.
    // If the write thread is not active at this time,
    // the read buffer pointer will be set to the 
    // (previous) write buffer - so the reader gets the latest data.
    // If the write buffer is busy, the read pointer is not changed.
    // In this case the read buffer may contain stale data,
    // it is up to the user to deal with this case.
    T * start_reading(void) {
        std::lock_guard<std::mutex> lock(m_mutex);

        if (!m_write_busy) {
            m_read_idx = m_write_idx;
        }

        return &m_buf[m_read_idx];
    }
    // The reader thread must call end_reading()
    // at the end of its iteration.
    void end_reading(void) {
        std::lock_guard<std::mutex> lock(m_mutex);

        m_read_idx = m_write_idx;
    }

private:
    T m_buf[2];
    bool m_write_busy;
    unsigned int m_read_idx, m_write_idx;
    std::mutex m_mutex;
};
To avoid stale data in the reader thread, the payload structure is versioned. To facilitate bidirectional data transfer between the threads, two instances of the above monstrosity are used, in opposite directions.
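
For illustration, a plausible shape for such a versioned payload (my sketch; the actual structure is not shown in the question):

#include <cstdint>

// Hypothetical versioned payload: the reader compares `version` with the
// last value it saw, so it can tell fresh data from a stale re-read.
struct Payload {
    std::uint64_t version;   // incremented by the writer on every write
    float samples[64];       // the actual data block (placeholder)
};
// Used as ProducerConsumerDoubleBuffer<Payload>.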

Questions:

  • Is this scheme thread-safe? If it is broken, where?
  • Can it be done without the mutex, perhaps using only memory barriers or CAS instructions?
  • Can it be done better?

  • Yes, I think it is broken.

    If the reader does start/end/start in quick succession, it updates its read index to the write index, and can then read data out of the write buffer even while a write is busy.

    The problem is essentially that the writer does not know which buffer the reader is going to use, so the writer would need to ensure that both buffers are valid at all times. It cannot do that if writing data into a buffer takes any time at all [unless I have misunderstood some of the logic that is not shown here]. A concrete failing interleaving is traced below.
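
    Traced against the mutex-based code above (this trace is my annotation, not part of the original answer):

    // writer: start_writing()  // m_write_busy = true; m_write_idx = 1 - m_read_idx
    // reader: start_reading()  // writer busy: m_read_idx unchanged -- still safe
    // reader: end_reading()    // m_read_idx = m_write_idx: the read index now
    //                          // points at the buffer the writer is still filling
    // reader: start_reading()  // writer still busy: m_read_idx unchanged, so this
    //                          // returns &m_buf[m_write_idx] -- a torn read is possible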

    Yes, I think it can be done without locks, using CAS or equivalent logic. I will not attempt to express an algorithm in this space; I am confident one exists, but not that I could write it out correctly on the first try. A web search turned up some plausible candidates. Wait-free IPC using CAS seems to be quite an interesting topic, and the subject of some research.


    After further thought, the algorithm is as follows. You need:

    • 3 buffers: one for the writer to use, one for the reader to use, and one spare. The buffers are ordered: they form a ring (but see the note below).
    • A status for each buffer: FREE, FULL, WRITING, READING.
    • A function that can inspect a buffer's status and, in a single atomic operation, conditionally change it to a different value. I will use CSET for this.
    The writer:

    Find the first buffer that is FREE or FULL
      Fail: assert (should never fail, reader can only use one buffer)
      CSET buffer to WRITING
    Write into the buffer
    CSET buffer to FULL
    
    The reader:

    Find first buffer that is FULL
      Fail: wait (writer may be slow)
      CSET buffer to READING
    Read and consume buffer
    CSET buffer to FREE
    
    Note: this algorithm does not guarantee that buffers are handled in strict arrival order, and no simple change will make it do so. If that matters, the algorithm should be augmented with a sequence number on each buffer, set by the writer, so that the reader can select the most recent buffer.

    I leave the code as an implementation detail.


    The CSET function is not trivial. It must atomically test whether a particular shared memory location is equal to an expected value and, if so, change it to a new value; it returns true if it made the change and false otherwise. The implementation must avoid race conditions when two threads access the same location at the same time, possibly on different processors.


    The C++ standard atomics library contains a set of atomic compare-exchange functions (std::atomic_compare_exchange_*) which should serve the purpose, if available.
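
    A minimal sketch of CSET on top of std::atomic (the BufState enum and both function names are my inventions for illustration; the answer leaves the implementation open), together with the writer's claim step from the pseudocode above expressed with it:

    #include <atomic>

    enum class BufState { FREE, FULL, WRITING, READING };

    // CSET: atomically change `s` from `expected` to `desired`.
    // Returns true iff the change was made.
    bool cset(std::atomic<BufState>& s, BufState expected, BufState desired) {
        return s.compare_exchange_strong(expected, desired);
    }

    // The writer's claim step over the ring of 3 buffers: take the first
    // buffer that is FREE or FULL by CSETting it to WRITING.
    int claim_write_buffer(std::atomic<BufState> (&states)[3]) {
        for (int i = 0; i < 3; ++i) {
            if (cset(states[i], BufState::FREE, BufState::WRITING) ||
                cset(states[i], BufState::FULL, BufState::WRITING))
                return i;
        }
        return -1; // should never happen: the reader holds at most one buffer
    }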

    Very interesting problem! Much trickier than I first thought :-) I like lock-free solutions, so I tried to work one out below.

    There are many ways to think about this system. You can model it as a fixed-size circular buffer/queue (with two entries), but then you lose the ability to update the next value that is available for consumption, which this problem requires.
    #include <atomic>
    #include <cstdint>
    
    template <typename T>
    class ProducerConsumerDoubleBuffer {
    public:
        ProducerConsumerDoubleBuffer() : m_state(0) { }
        ~ProducerConsumerDoubleBuffer() { }
    
        // Never returns nullptr
        T* start_writing() {
            // Increment active users; once we do this, no one
            // can swap the active cell on us until we're done
            auto state = m_state.fetch_add(0x2, std::memory_order_relaxed);
            return &m_buf[state & 1];
        }
    
        void end_writing() {
            // We want to swap the active cell, but only if we were the last
            // ones concurrently accessing the data (otherwise the consumer
            // will do it for us when *it's* done accessing the data)
    
            auto state = m_state.load(std::memory_order_relaxed);
            std::uint32_t flag = (8 << (state & 1)) ^ (state & (8 << (state & 1)));
            state = m_state.fetch_add(flag - 0x2, std::memory_order_release) + flag - 0x2;
            if ((state & 0x6) == 0) {
                // The consumer wasn't in the middle of a read, we should
                // swap (unless the consumer has since started a read or
                // already swapped or read a value and is about to swap).
                // If we swap, we also want to clear the full flag on what
                // will become the active cell, otherwise the consumer could
                // eventually read two values out of order (it reads a new
                // value, then swaps and reads the old value while the
                // producer is idle).
                m_state.compare_exchange_strong(state, (state ^ 0x1) & ~(0x10 >> (state & 1)), std::memory_order_release);
            }
        }
    
        // Returns nullptr if there appears to be no more data to read yet
        T* start_reading() {
            m_readState = m_state.load(std::memory_order_relaxed);
            if ((m_readState & (0x10 >> (m_readState & 1))) == 0) {
                // Nothing to read here!
                return nullptr;
            }
    
            // At this point, there is guaranteed to be something to
            // read, because the full flag is never turned off by the
            // producer thread once it's on; the only thing that could
            // happen is that the active cell changes, but that can
            // only happen after the producer wrote a value into it,
            // in which case there's still a value to read, just in a
            // different cell.
    
            m_readState = m_state.fetch_add(0x2, std::memory_order_acquire) + 0x2;
    
            // Now that we've incremented the user count, nobody can swap until
            // we decrement it
            return &m_buf[(m_readState & 1) ^ 1];
        }
    
        void end_reading() {
            if ((m_readState & (0x10 >> (m_readState & 1))) == 0) {
                // There was nothing to read; shame to repeat this
                // check, but if these functions are inlined it might
                // not matter. Otherwise the API could be changed.
                // Or just don't call this method if start_reading()
                // returns nullptr -- then you could also get rid
                // of m_readState.
                return;
            }
    
            // Alright, at this point the active cell cannot change on
            // us, but the active cell's flag could change and the user
            // count could change. We want to release our user count
            // and remove the flag on the value we read.
    
            auto state = m_state.load(std::memory_order_relaxed);
            std::uint32_t sub = (0x10 >> (state & 1)) | 0x2;
            state = m_state.fetch_sub(sub, std::memory_order_relaxed) - sub;
            if ((state & 0x6) == 0 && (state & (0x8 << (state & 1))) != 0) {
                // Oi, we were the last ones accessing the data when we released our cell.
                // That means we should swap, but only if the producer isn't in the middle
                // of producing something, and hasn't already swapped, and hasn't already
                // set the flag we just reset (which would mean they swapped an even number
                // of times).  Note that we don't bother swapping if there's nothing to read
                // in the other cell.
                m_state.compare_exchange_strong(state, state ^ 0x1, std::memory_order_relaxed);
            }
        }
    
    private:
        T m_buf[2];
    
        // The bottom (lowest) bit will be the active cell (the one for writing).
        // The active cell can only be switched if there's at most one concurrent
        // user. The next two bits of state will be the number of concurrent users.
        // The fourth bit indicates if there's a value available for reading
        // in m_buf[0], and the fifth bit has the same meaning but for m_buf[1].
        std::atomic<std::uint32_t> m_state;
    
        std::uint32_t m_readState;
    };
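
    The structure can be exercised with a simple harness: a producer writes increasing integers while a consumer asserts that the values it observes strictly increase. (The headers and the main() wrapper are my additions so the fragment compiles standalone.)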
    
    #include <cassert>
    #include <thread>

    int main() {
        ProducerConsumerDoubleBuffer<int> buf;
        std::thread producer([&]() {
            for (int i = 0; i != 500000; ++i) {
                int* item = buf.start_writing();
                if (item != nullptr) {      // Always true
                    *item = i;
                }
                buf.end_writing();
            }
        });
        std::thread consumer([&]() {
            int prev = -1;
            for (int i = 0; i != 500000; ++i) {
                int* item = buf.start_reading();
                if (item != nullptr) {
                    assert(*item > prev);   // values must strictly increase
                    prev = *item;
                }
                buf.end_reading();
            }
        });
        producer.join();
        consumer.join();
    }
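
    What follows appears to be a separate, Windows-specific answer: a double buffer built on the Interlocked singly-linked list (SList) API, which manages the pool of writable buffers, with InterlockedExchangePointer used to hand each freshly written buffer over to the reader: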
    
    #include <windows.h>  // SLIST_HEADER, SLIST_ENTRY, Interlocked* primitives
    #include <cstdint>

    const size_t MAX_DATA_SIZE = 512;
    typedef
    //__declspec(align(MEMORY_ALLOCATION_ALIGNMENT))
    struct DataItem_tag
    {
        SLIST_ENTRY listNode;
        uint8_t data[MAX_DATA_SIZE];
        size_t length;
    } DataItem_t;
    
    class CDoubleBuffer
    {
        SLIST_HEADER m_writePointers;
        DataItem_t m_buffers[2];
        volatile DataItem_t *m_readPointer;
    
    public:
        CDoubleBuffer()
            : m_writePointers()
            , m_buffers()
            , m_readPointer(NULL)
        {
            InitializeSListHead(&m_writePointers);
            InterlockedPushEntrySList(&m_writePointers, &m_buffers[0].listNode);
            InterlockedPushEntrySList(&m_writePointers, &m_buffers[1].listNode);
        }
        DataItem_t *beginRead()
        {
            DataItem_t *result = reinterpret_cast<DataItem_t*>(InterlockedExchangePointer((volatile PVOID*)&m_readPointer, NULL));
            return result;
        }
        void endRead(DataItem_t *dataItem)
        {
            if (NULL != dataItem)
            {
                InterlockedPushEntrySList(&m_writePointers, &dataItem->listNode);
            }
        }
        DataItem_t *beginWrite()
        {
            DataItem_t *result = reinterpret_cast<DataItem_t*>(InterlockedPopEntrySList(&m_writePointers));
            return result;
        }
        void endWrite(DataItem_t *dataItem)
        {
            DataItem_t *oldReadPointer = reinterpret_cast<DataItem_t*>(InterlockedExchangePointer((volatile PVOID*)&m_readPointer, dataItem));
            if (NULL != oldReadPointer)
            {
                InterlockedPushEntrySList(&m_writePointers, &oldReadPointer->listNode);
            }
        }
    };
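
    And a test sequence exercising CDoubleBuffer, demonstrating that the latest write wins and that re-reading yields NULL: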
    
    CDoubleBuffer doubleBuffer;
    
    DataItem_t *readValue;
    DataItem_t *writeValue;
    
    // nothing to read yet. Make sure NULL is returned.
    assert(NULL == doubleBuffer.beginRead());
    doubleBuffer.endRead(NULL); // we got nothing, we return nothing.
    
    // First write without read
    writeValue = doubleBuffer.beginWrite();
    assert(NULL != writeValue); // if we get NULL here it is a bug.
    writeValue->length = 0;
    doubleBuffer.endWrite(writeValue);
    
    // Second write without read
    writeValue = doubleBuffer.beginWrite();
    assert(NULL != writeValue); // if we get NULL here it is a bug.
    writeValue->length = 1;
    doubleBuffer.endWrite(writeValue);
    
    // Third write without read - works because it reuses the old buffer for the new write.
    writeValue = doubleBuffer.beginWrite();
    assert(NULL != writeValue); // if we get NULL here it is a bug.
    writeValue->length = 2;
    doubleBuffer.endWrite(writeValue);
    
    readValue = doubleBuffer.beginRead();
    assert(NULL != readValue); // NULL would obviously be a terrible bug.
    assert(2 == readValue->length); // We got the latest and greatest?
    doubleBuffer.endRead(readValue);
    
    readValue = doubleBuffer.beginRead();
    assert(NULL == readValue); // We expect NULL here. Re-reading is not a feature of this implementation!
    doubleBuffer.endRead(readValue);