PHP curl_multi_exec: some downloaded images are missing data / incomplete streams


I have implemented a PHP function that checks and downloads a large number of images (>1000) using PHP's curl_multi_init() method, passing them to it as an array.

After a few revisions (at first I was getting things like 0-byte files), I now have a solution that downloads all images, but some of the downloaded image files are incomplete.

It looks to me as if I am calling file_put_contents() "too early", i.e. before some of the image data has been fully received by curl_multi_exec().

Unfortunately, I have not found any similar questions or Google results for my case. I need to use curl_multi_exec, but I do not want to retrieve and save the images via the cURL option CURLOPT_FILE.

Hopefully someone can help me figure out what I am missing and why some of the images I save locally end up corrupted.

Here are some examples of the corrupted images that were retrieved (images omitted):

Below is my multi-cURL PHP function, the one I am currently using. It "works", apart from some partially downloaded files. A sample array of the kind I pass to it appears in the execution example further down:

function cURLfetch(array $resources)
{
    /** Disable PHP timelimit, because this could take a while... */
    set_time_limit(0);

    /** Validate the $resources Array (not empty, null, or alike) */
    $resources_num = count($resources);
    if ( empty($resources) || $resources_num <= 0 ) return false;

    /** Callback-Function for writing data to file */
    $callback = function($resource, $filepath)
    {
        file_put_contents($filepath, $resource);
        /** For Debug only: output <img>-Tag with saved $resource */
        printf('<img src="%s"><br>', str_replace('/srv/www', '', $filepath));
    };

    /**
     * Initialize CURL process for handling multiple parallel requests 
     */
    $curl_instance = curl_multi_init();
    $curl_multi_exec_active = null;
    $curl_request_options = [
                                CURLOPT_USERAGENT => 'PHP-Script/1.0 (+https://website.com/)',
                                CURLOPT_TIMEOUT => 10,
                                CURLOPT_FOLLOWLOCATION => true,
                                CURLOPT_VERBOSE => false,
                                CURLOPT_RETURNTRANSFER => true,
                            ];

    /**
     * Looping through all $resources
     *   $resources[$i][0] = HTTP resource
     *   $resources[$i][1] = Target Filepath
     */
    for ($i = 0; $i < $resources_num; $i++)
    {
        $curl_requests[$i] = curl_init($resources[$i][0]);
        curl_setopt_array($curl_requests[$i], $curl_request_options);
        curl_multi_add_handle($curl_instance, $curl_requests[$i]);
    }

    do {
        try {
            $curl_execute = curl_multi_exec($curl_instance, $curl_multi_exec_active);
        } catch (Exception $e) {
            error_log($e->getMessage());
        }
    } while ($curl_execute == CURLM_CALL_MULTI_PERFORM);


    /** Wait until data arrives on all sockets */
    $h = 0; // initialise a counter
    while ($curl_multi_exec_active && $curl_execute == CURLM_OK)
    {
        if (curl_multi_select($curl_instance) != -1)
        {
            do {
              $curl_data = curl_multi_exec($curl_instance, $curl_multi_exec_active);
              $curl_done = curl_multi_info_read($curl_instance);
              /** Check if there is data... */
              if ($curl_done['handle'] !== NULL)
              {
                  /** Continue ONLY if HTTP statuscode was OK (200) */
                  $info = curl_getinfo($curl_done['handle']);
                  if ($info['http_code'] == 200)
                  {
                      if (!empty(curl_multi_getcontent($curl_requests[$h]))) {
                          /** Curl request successful. Process data using the callback function. */
                          $callback(curl_multi_getcontent($curl_requests[$h]), $resources[$h][1]);
                      }
                      $h++; // count up
                   }
               }
            } while ($curl_data == CURLM_CALL_MULTI_PERFORM);
        }
    }

    /** Close all $curl_requests */
    foreach($curl_requests as $request) {
        curl_multi_remove_handle($curl_instance, $request);
    }
    curl_multi_close($curl_instance);

    return true;
}

/** Start fetching images from an Array */
cURLfetch($curl_httpresources);
In the end I used only regular cURL requests in a classic loop, querying all of the >1000 images and downloading those that came back with "HTTP 200 OK". My initial worry was that the server might cut the connections because of a suspected (misidentified) DDoS, but that worry turned out to be unfounded, which is why this approach works for my case.

Below is the final function I used, with regular cURL requests:

function cURLfetchUrl($url, $save_as_file)
{
    /** Validate $url & $save_as_file (not empty, null, or alike) */
    if ( empty($url) || is_numeric($url) ) return false;
    if ( empty($save_as_file) || is_numeric($save_as_file) ) return false;

    /** Disable PHP timelimit, because this could take a while... */
    set_time_limit(0);

    try {
        /**
         * Set cURL options to be passed to a single request
         */
        $curl_request_options = [
                                    CURLOPT_USERAGENT => 'PHP-Script/1.0 (+https://website.com/)',
                                    CURLOPT_TIMEOUT => 5,
                                    CURLOPT_FOLLOWLOCATION => true,
                                    CURLOPT_RETURNTRANSFER => true,
                                ];

        /** Initialize & execute cURL-Request */
        $curl_instance = curl_init($url);
        curl_setopt_array($curl_instance, $curl_request_options);
        $curl_data = curl_exec($curl_instance);
        $curl_done = curl_getinfo($curl_instance);

        /** cURL request successful */
        if ($curl_done['http_code'] == 200)
        {
            /** Open a new file handle, write the file & close the file handle */
            if (file_put_contents($save_as_file, $curl_data) !== false) {
                // logging if file_put_contents was OK
            } else {
                // logging if file_put_contents FAILED
            }
        }

        /** Close the $curl_instance */
        curl_close($curl_instance);

        return true;

    } catch (Exception $e) {
        error_log($e->getMessage());
        return false;
    }
}
And the execution:

$curl_httpresources = [
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427'
    ,'/srv/www/data/images/1_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427'
    ,'/srv/www/data/images/2_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427'
    ,'/srv/www/data/images/3_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427'
    ,'/srv/www/data/images/4_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427'
    ,'/srv/www/data/images/5_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=mm&r=x&s=427'
    ,'/srv/www/data/images/6_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=identicon&r=x&s=427'
    ,'/srv/www/data/images/7_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=monsterid&r=x&s=427'
    ,'/srv/www/data/images/8_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=wavatar&r=x&s=427'
    ,'/srv/www/data/images/9_unsplash.jpg' ],
    [ 'http://www.gravatar.com/avatar/example?d=retro&r=x&s=427'
    ,'/srv/www/data/images/10_unsplash.jpg' ],
];

/** cURL all request from the $curl_httpresources Array */
if (count($curl_httpresources) > 0)
{
    foreach ($curl_httpresources as $resource)
    {
        cURLfetchUrl($resource[0], $resource[1]);
    }
}

Still, if anyone knows how to correctly retrieve the file data streams using curl_multi, that would be great, since my answer to the initial question only shows a different approach rather than a fix for the initial one.
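For anyone looking for that fix: the sketch below is an assumption, not the original author's code. It reads each transfer's content only after curl_multi_info_read() reports it as done, and maps each handle to its target path directly instead of relying on a separate counter ($h in the question) that can get out of step with the order in which transfers complete:

```php
<?php
/**
 * Sketch of a corrected multi-cURL fetch (an assumption, not from the
 * original post): the body is read only after curl_multi_info_read()
 * reports a transfer as done, and each handle is mapped to its target
 * path instead of using a completion counter.
 */
function cURLfetchMulti(array $resources): bool
{
    if (empty($resources)) return false;
    set_time_limit(0);

    // Works for PHP 7 cURL resources and PHP 8 CurlHandle objects alike.
    $handleId = fn($h) => is_resource($h) ? (int) $h : spl_object_id($h);

    $mh    = curl_multi_init();
    $paths = []; // handle id => target file path

    foreach ($resources as [$url, $filepath]) {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_TIMEOUT        => 10,
            CURLOPT_FOLLOWLOCATION => true,
            CURLOPT_RETURNTRANSFER => true,
        ]);
        curl_multi_add_handle($mh, $ch);
        $paths[$handleId($ch)] = $filepath;
    }

    do {
        $status = curl_multi_exec($mh, $active);
        if ($active && curl_multi_select($mh) === -1) {
            usleep(1000); // avoid busy-looping when select() fails
        }
        // Drain ALL finished transfers; several may complete at once.
        while ($info = curl_multi_info_read($mh)) {
            $ch   = $info['handle'];
            $code = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
            // file:// transfers report code 0; HTTP must be 200.
            if ($info['result'] === CURLE_OK && ($code === 200 || $code === 0)) {
                // The body is only guaranteed to be complete at this point.
                file_put_contents($paths[$handleId($ch)], curl_multi_getcontent($ch));
            }
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
    } while ($active && $status === CURLM_OK);

    curl_multi_close($mh);
    return true;
}
```

The key difference from the question's code is that curl_multi_getcontent() is called on the handle returned by curl_multi_info_read(), so a write can never happen before that particular transfer has finished.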

Just a guess: you may be hitting the server too hard, so it sometimes drops the connection (without raising an error). Loop over the images and download no more than 10 at a time, then try increasing that number. (Paolo)

Thanks for the guess, @Paolo. Interestingly, it is usually the first image that ends up incomplete, and it has also happened while requesting only 10 images in total (see the sample array in the initial question)...
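For completeness, the throttling suggested in this comment could be sketched by chunking the resource array and running one multi-cURL round per batch. The batch size and the $fetch callable are assumptions, not part of the original posts; $fetch stands in for a multi-cURL function like the cURLfetch() from the question:

```php
<?php
// Sketch: hand the resource list to a fetch function in batches of at
// most $batchSize, so no more than that many downloads run in parallel.
// $fetch is a placeholder for a multi-cURL function such as cURLfetch().
function cURLfetchBatched(array $resources, callable $fetch, int $batchSize = 10): void
{
    foreach (array_chunk($resources, $batchSize) as $batch) {
        $fetch($batch); // e.g. cURLfetch($batch)
    }
}
```

With the default batch size, 25 resources would be processed as three rounds of 10, 10, and 5 parallel downloads.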