python – ConnectionResetError: An existing connection was forcibly closed by the remote host


I am writing a script to download a set of files. I got this working successfully, and now I am trying to add a dynamic printout of the download progress.

For small downloads (they are .mp4 files, by the way), e.g. 5 MB, the progress works great and the file is closed successfully, producing a complete and valid .mp4 file. For larger files, 250 MB and up, it does not work and I get the following error:

(screenshot of the ConnectionResetError traceback)

Here is my code:

import urllib.request
import shutil
import os
import sys
import io

script_dir = os.path.dirname('C:/Users/Kenny/Desktop/')
rel_path = 'stupid_folder/video.mp4'
abs_file_path = os.path.join(script_dir,rel_path)
url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
# Download the file from `url` and save it locally under `file_name`:

with urllib.request.urlopen(url) as response,open(abs_file_path,'wb') as out_file:

    eventID = 123456

    resp = urllib.request.urlopen(url)
    length = resp.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096,length//100)
    else:
        blocksize = 1000000 # just made something up

    # print(length,blocksize)

    buf = io.BytesIO()
    size = 0
    while True:
        buf1 = resp.read(blocksize)
        if not buf1:
            break
        buf.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100,eventID),end='')#print('\rDownloading: {:.1f}%'.format(size/length*100),end='')
    print()

    shutil.copyfileobj(response,out_file)

This works for small files, but with larger files I get the error. Now, if I comment out the progress indicator code, I do NOT get the error for the larger files:

with urllib.request.urlopen(url) as response,open(abs_file_path,'wb') as out_file:

    # eventID = 123456
    # 
    # resp = urllib.request.urlopen(url)
    # length = resp.getheader('content-length')
    # if length:
    #     length = int(length)
    #     blocksize = max(4096,length//100)
    # else:
    #     blocksize = 1000000 # just made something up
    # 
    # # print(length,blocksize)
    # 
    # buf = io.BytesIO()
    # size = 0
    # while True:
    #     buf1 = resp.read(blocksize)
    #     if not buf1:
    #         break
    #     buf.write(buf1)
    #     size += len(buf1)
    #     if length:
    #     print('\r[{:.1f}%] Downloading: {}'.format(size/length*100,eventID),end='')
    # print()

    shutil.copyfileobj(response,out_file)

Does anyone have any ideas? This is the last part of my project, and I would really like to be able to see the progress. Again, this is Python 3.5. Thanks for any help you can provide!

Best Answer
You open your URL twice, once as `response` and once as `resp`. Your progress-bar code consumes the data from `resp`, so when the file is then copied with `copyfileobj(response, out_file)` there is no data left to write (perhaps that is not entirely accurate, since it does work for small files, but you are doing the work twice here, and that is likely the cause of the problem).

To get both the progress bar and a valid file, do the following:

with urllib.request.urlopen(url) as response,open(abs_file_path,'wb') as out_file:

    eventID = 123456

    length = response.getheader('content-length')
    if length:
        length = int(length)
        blocksize = max(4096,length//100)
    else:
        blocksize = 1000000 # just made something up


    size = 0
    while True:
        buf1 = response.read(blocksize)
        if not buf1:
            break
        out_file.write(buf1)
        size += len(buf1)
        if length:
            print('\r[{:.1f}%] Downloading: {}'.format(size/length*100,eventID),end='')
    print()

This simplifies your code:

>only one urlopen, used as `response`
>no BytesIO, write directly to `out_file`
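As an aside, the standard library can also drive the progress callback for you: `urllib.request.urlretrieve` accepts a `reporthook` that it calls after each block is transferred. A minimal sketch (the `percent_done` helper is illustrative, not from the original post; the URL and output path are taken from the question):

```python
import urllib.request

url = 'https://archive.org/download/SF145/SF145_512kb.mp4'
out_path = 'C:/Users/Kenny/Desktop/stupid_folder/video.mp4'

def percent_done(block_num, block_size, total_size):
    # Fraction of the file transferred so far, as a percentage clamped to 100.
    return min(block_num * block_size, total_size) / total_size * 100

def reporthook(block_num, block_size, total_size):
    # urlretrieve calls this after each block; total_size is -1 when
    # the server sends no Content-Length header.
    if total_size > 0:
        print('\r[{:.1f}%] Downloading'.format(
            percent_done(block_num, block_size, total_size)), end='')

# Usage (performs a real network download):
# urllib.request.urlretrieve(url, out_path, reporthook)
```

This avoids managing the read loop yourself, at the cost of less control over the block size.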
