
Why does downloading a PDF file from Google Drive take so long in Android?

Tags: android, performance, google-drive-api, google-drive-android-api

I'm trying to download a PDF file from Google Drive using the following code:

try {
    URL url = new URL(fileUrl);
    HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
    urlConnection.connect();

    if (urlConnection.getResponseCode() != HttpURLConnection.HTTP_OK) {
        Log.v(TAG, "server returned http " + urlConnection.getResponseCode()
                + urlConnection.getResponseMessage());
    } else {
        Log.v(TAG + "downloading", "server returned http " + urlConnection.getResponseCode()
                + urlConnection.getResponseMessage());
    }

    InputStream inputStream = urlConnection.getInputStream();
    FileOutputStream fileOutputStream = new FileOutputStream(directory);
    int totalSize = urlConnection.getContentLength();
    int total = 0;

    byte[] buffer = new byte[MEGABYTE];
    int bufferLength = 0;
    int i = 0;
    while ((bufferLength = inputStream.read(buffer)) != -1) {
        i += 1;
        Log.v(TAG + "downloading", "downloading mega #" + i);
        total += bufferLength;
        fileOutputStream.write(buffer, 0, bufferLength);
    }

    fileOutputStream.close();
    inputStream.close();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (MalformedURLException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
As the code shows, each iteration of the while loop is supposed to download one megabyte, yet it takes about 2650 iterations to download a 2 MB file (roughly 800 bytes per call to read()).
Any idea how to solve this?

I can't say exactly why this happens, but quoting the Javadoc:

The number of bytes read is, at most, equal to the length of b.

(emphasis mine)

Possible reasons why expectation and reality don't match:

  • MEGABYTE is defined with the wrong size (it should be 1000000)
  • The connection speed is not high enough
  • Something else prevents read from filling the whole buffer on each loop iteration (see the sketch after this list)
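
As an illustration of the last point, here is a minimal sketch of a hypothetical helper (readFully is my own name, not part of the question's code) that keeps calling read() until the buffer is full or the stream ends:

import java.io.IOException;
import java.io.InputStream;

public final class ReadUtil {

    // Keeps calling read() until the buffer is full or the stream ends.
    // A single read() may return fewer bytes than buffer.length, so we
    // accumulate into the buffer at increasing offsets.
    static int readFully(InputStream in, byte[] buffer) throws IOException {
        int offset = 0;
        while (offset < buffer.length) {
            int read = in.read(buffer, offset, buffer.length - offset);
            if (read == -1) {
                break; // end of stream reached before the buffer filled up
            }
            offset += read;
        }
        // Number of bytes actually placed in the buffer; 0 means EOF.
        return offset;
    }
}

Note that looping like this only changes how often write() and the log statement run; it does not make the bytes arrive over the network any faster.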


You could check how many bytes are actually read on each loop iteration, but that probably won't help much.
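
For instance, a stripped-down copy loop (hypothetical class and method names, not taken from the original post) could log the value that read() actually returns each time:

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

import android.util.Log;

public final class DownloadDebug {

    private static final String TAG = "DownloadDebug";
    private static final int MEGABYTE = 1024 * 1024;

    // Copies the stream to the file while logging how many bytes each
    // read() call actually returned, so the real chunk size becomes visible.
    static void copyWithLogging(InputStream inputStream, FileOutputStream fileOutputStream)
            throws IOException {
        byte[] buffer = new byte[MEGABYTE];
        int bufferLength;
        long total = 0;
        while ((bufferLength = inputStream.read(buffer)) != -1) {
            total += bufferLength;
            // On a network stream bufferLength is usually far smaller than
            // MEGABYTE: read() returns whatever has arrived so far.
            Log.v(TAG, "read " + bufferLength + " bytes, " + total + " total");
            fileOutputStream.write(buffer, 0, bufferLength);
        }
    }
}

On a network stream the logged values are typically a few kilobytes per call, which would explain why roughly 2650 reads are needed for a 2 MB file.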

- I define MEGABYTE as 1024*1024. - When I download the same file from a regular browser it is much faster, so I don't think it is a connection-speed problem. - It may indeed be that read can't fill the whole buffer on each loop. Any idea how to solve that?
- Oh well, that definition works too. No, unfortunately I don't know how to solve it. Does the download also take longer compared to the browser download?