python – Aiohttp, Asyncio: RuntimeError: Event loop is closed

I have two scripts, scraper.py and db_control.py. In scraper.py I have something like this:
...
def scrap(category, field, pages, search, use_proxy, proxy_file):
    ...
    loop = asyncio.get_event_loop()

    to_do = [get_pages(url, params, conngen) for url in urls]
    wait_coro = asyncio.wait(to_do)
    res, _ = loop.run_until_complete(wait_coro)
    ...
    loop.close()

    return [x.result() for x in res]

...

And in db_control.py:

from scraper import scrap
...
while new < 15:
    data = scrap(category, field, pages, search, use_proxy, proxy_file)
    ...
...

In theory, the scraper should be started at unpredictable times and run until enough data has been collected. But when new is not immediately > 15, this error occurs:

File "/usr/lib/python3.4/asyncio/base_events.py",line 293,in run_until_complete
self._check_closed()
  File "/usr/lib/python3.4/asyncio/base_events.py",line 265,in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

But if I run scrap() just once, the script works fine. So I suspect there is some problem with re-creating loop = asyncio.get_event_loop(); I tried this, but nothing changed. How can I solve it? Of course these are only snippets of my code; if you think the problem may lie elsewhere, the full code is available here.

Solution

The methods run_until_complete, run_forever, run_in_executor, create_task, and call_at explicitly check
whether the loop is closed and raise an exception if it is.
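
As a minimal, self-contained illustration (a sketch independent of the question's code, using the Python 3.4-style @asyncio.coroutine decorator to match the traceback above), calling run_until_complete on a closed loop reproduces exactly this error:

import asyncio

@asyncio.coroutine
def noop():
    return 42

loop = asyncio.get_event_loop()
print(loop.run_until_complete(noop()))  # works, prints 42
loop.close()
loop.run_until_complete(noop())  # RuntimeError: Event loop is closed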

Quoting the docs – BaseEventLoop.close:

This is idempotent and irreversible

Unless you have some (good) reason to do otherwise, you can simply omit the close line:

def scrap(category, field, pages, search, use_proxy, proxy_file):
    #...
    loop = asyncio.get_event_loop()

    to_do = [get_pages(url, params, conngen) for url in urls]
    wait_coro = asyncio.wait(to_do)
    res, _ = loop.run_until_complete(wait_coro)
    #...
    # loop.close()
    return [x.result() for x in res]
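
With loop.close() removed, every call to scrap() reuses the same default event loop, so the while new < 15 loop in db_control.py can call it repeatedly without hitting the error.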

If you want a completely fresh loop on each call, you have to create it manually and set it as the default:

def scrap(category, field, pages, search, use_proxy, proxy_file):
    #...
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    to_do = [get_pages(url, params, conngen) for url in urls]
    wait_coro = asyncio.wait(to_do)
    res, _ = loop.run_until_complete(wait_coro)
    #...
    return [x.result() for x in res]
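
If each call should both create and close its own loop, one way to keep that safe (a sketch assuming the same get_pages, urls, params, and conngen names from the question) is to close the fresh loop in a finally block, so every call starts from a brand new loop and closing never breaks the next one:

def scrap(category, field, pages, search, use_proxy, proxy_file):
    #...
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        to_do = [get_pages(url, params, conngen) for url in urls]
        res, _ = loop.run_until_complete(asyncio.wait(to_do))
        return [x.result() for x in res]
    finally:
        loop.close()  # safe here: the next call creates its own new loop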
