Self-correcting periodic timer using gettimeofday()

I have a loop that runs once every X usecs. It does some I/O and then sleeps for the remainder of the X usecs. To (roughly) calculate the sleep time, all I do is take a timestamp before and after the I/O and subtract the difference from X. Here is the function I use for the timestamps:

long long getus ()
{
        struct timeval time;
        gettimeofday(&time, NULL);
        return (long long) (time.tv_sec + time.tv_usec);
}
As you can imagine, this starts to drift pretty quickly; the actual time between I/O bursts is usually several milliseconds longer than X. To make it a little more accurate, I figured that if I recorded the start timestamp of the previous cycle, then each time I start a new cycle I could work out how long the last cycle actually took (the time between this start timestamp and the previous one). Knowing how much longer than X it ran, I can adjust this cycle's sleep to compensate.

Here is how I tried to implement it:

    long long start, finish, offset, previous, remaining_usecs;
    long long delaytime_us = 1000000;

    /* Initialise previous timestamp as 1000000us ago*/
    previous = getus() - delaytime_us;
    while(1)
    {
            /* starting timestamp */
            start = getus();

            /* here is where I would do some I/O */

            /* calculate how much to compensate */
            offset = (start - previous) - delaytime_us;

            printf("(%lld - %lld) - %lld = %lld\n", 
                    start, previous, delaytime_us, offset);

            previous = start;

            finish = getus();

            /* calculate to our best ability how long we spent on I/O.
             * We'll try and compensate for its inaccuracy next time around!*/
            remaining_usecs = (delaytime_us - (finish - start)) - offset;

            printf("start=%lld,finish=%lld,offset=%lld,previous=%lld\nsleeping for %lld\n",
                    start, finish, offset, previous, remaining_usecs);

            usleep(remaining_usecs);

    }
It seems to work on the first iteration of the loop, but after that things get messy.

Here is the output from 5 iterations of the loop:

(1412452353 - 1411452348) - 1000000 = 5
start=1412452353,finish=1412458706,offset=5,previous=1412452353
sleeping for 993642

(1412454788 - 1412452353) - 1000000 = -997565
start=1412454788,finish=1412460652,offset=-997565,previous=1412454788
sleeping for 1991701

(1412454622 - 1412454788) - 1000000 = -1000166
start=1412454622,finish=1412460562,offset=-1000166,previous=1412454622
sleeping for 1994226

(1412457040 - 1412454622) - 1000000 = -997582
start=1412457040,finish=1412465861,offset=-997582,previous=1412457040
sleeping for 1988761

(1412457623 - 1412457040) - 1000000 = -999417
start=1412457623,finish=1412463533,offset=-999417,previous=1412457623
sleeping for 1993507
The first line of each block shows how the previous cycle time is calculated. It looks like the first two timestamps are basically 1000000us apart (1412452353 - 1411452348 = 1000005). After that, however, the distance between the start timestamps, and with it the offset, stops looking reasonable at all. Does anyone know what I'm doing wrong?

EDIT: I would also gladly take suggestions for a better way to get an accurate timer while still being able to sleep during the delay (one common alternative is sketched below).
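
One approach that is often recommended on Linux/POSIX systems (my addition here, not something the question itself uses) is to sleep until an absolute deadline with clock_nanosleep() on CLOCK_MONOTONIC, advancing the deadline by one period each cycle. Because the deadline is absolute, time spent on I/O or any oversleep in one cycle is absorbed instead of accumulating as drift, so no manual offset bookkeeping is needed. A minimal sketch, using the 1-second period from the question and a placeholder for the I/O; timespec_add_ns is a hypothetical helper defined here:

#define _POSIX_C_SOURCE 200809L
#include <time.h>

#define PERIOD_NS 1000000000L   /* 1 second, same as delaytime_us above */

/* hypothetical helper: advance a timespec by ns nanoseconds (ns < 1e9 per step handled) */
static void timespec_add_ns(struct timespec *ts, long ns)
{
        ts->tv_nsec += ns;
        while (ts->tv_nsec >= 1000000000L)
        {
                ts->tv_nsec -= 1000000000L;
                ts->tv_sec++;
        }
}

int main(void)
{
        struct timespec deadline;

        /* CLOCK_MONOTONIC is not affected by wall-clock adjustments */
        clock_gettime(CLOCK_MONOTONIC, &deadline);

        while(1)
        {
                /* here is where the I/O would go */

                /* sleep until an absolute deadline; lateness in one cycle
                 * is absorbed rather than accumulating as drift */
                timespec_add_ns(&deadline, PERIOD_NS);
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
        }
}

Compared with the usleep() approach above, a relative sleep always adds the time spent outside the sleep to the period, whereas sleeping to an absolute deadline sidesteps that entirely.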

After some further research, I've found that two things are wrong here. Firstly, I was calculating the timestamp wrong; getus() should return this instead:

long long getus ()
{
        struct timeval time;
        gettimeofday(&time, NULL);
        return (long long) time.tv_sec * 1000000 + time.tv_usec;
}
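
To illustrate why the original return value misbehaves (my own example, not part of the original post): tv_usec wraps back to 0 every second, so tv_sec + tv_usec is not monotonic, and a later call can return a smaller number, which matches the start timestamps above that go backwards between iterations. A small sketch with two made-up instants a couple of microseconds apart:

#include <stdio.h>

int main(void)
{
        /* two illustrative instants 2us apart, straddling a second boundary */
        long sec1 = 1412454787, usec1 = 999999;   /* ...787.999999 s */
        long sec2 = 1412454788, usec2 = 1;        /* ...788.000001 s */

        /* original formula: the later call produces the SMALLER value */
        printf("%ld vs %ld\n", sec1 + usec1, sec2 + usec2);

        /* corrected formula: strictly increasing microsecond counts */
        printf("%lld vs %lld\n",
               (long long) sec1 * 1000000 + usec1,
               (long long) sec2 * 1000000 + usec2);
        return 0;
}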

Secondly, I should be storing the timestamps in an unsigned long long (uint64_t). So getus() should look like this:

uint64_t getus ()
{
        struct timeval time;
        gettimeofday(&time, NULL);
        return (uint64_t) time.tv_sec * 1000000 + time.tv_usec;
}
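
A small follow-on point (my addition, not from the original post): once the timestamps are uint64_t, the %lld format strings used in the loop above no longer match the type; the portable way to print a uint64_t is the PRIu64 macro from <inttypes.h>. For example:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t start = 1412452353000000ULL;   /* illustrative value only */

        /* %lld expects (signed) long long; use PRIu64 for uint64_t */
        printf("start=%" PRIu64 "\n", start);
        return 0;
}
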
I won't be able to test this until tomorrow, though, so I'll report back.