scrapy [boto] ERROR: Caught exception reading instance data URLError: <urlopen error [Errno 10051


The following error appeared while running the spider:

2015-09-09 11:13:26 [boto] DEBUG: Retrieving credentials from Metadata server.
2015-09-09 11:13:27 [boto] ERROR: Caught exception reading instance data
Traceback (most recent call last):
  File "D:\anzhuang\Anaconda\lib\site-packages\boto\utils.py", line 210, in retry_url
    r = opener.open(req, timeout=timeout)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 431, in open
    response = self._open(req, data)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 449, in _open
    '_open', req)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 1227, in http_open
    return self.do_open(httplib.HTTPConnection, req)
  File "D:\anzhuang\Anaconda\lib\urllib2.py", line 1197, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno 10051]>
2015-09-09 11:13:27 [boto] ERROR: Unable to read instance data, giving up

Solution

Disabling the s3 download handler in settings.py is enough:

DOWNLOAD_HANDLERS = {'s3': None,}
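In context, the fix is just one extra setting in the project's settings.py. Below is a minimal sketch; the BOT_NAME value is a placeholder for your own project name:

```python
# settings.py -- minimal sketch; 'myproject' is a placeholder name
BOT_NAME = 'myproject'

# Disable Scrapy's built-in S3 download handler so it never imports boto
# or tries to fetch AWS credentials. The key must be the URI scheme
# string 's3'; mapping it to None disables that scheme's handler.
DOWNLOAD_HANDLERS = {
    's3': None,
}
```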

There is some analysis of this error on Stack Overflow that is worth reading when you have time.
Below is an excerpt I copied from the answer:
That particular error message is being generated by boto (boto 2.38.0 py27_0), which is used to connect to Amazon S3. Scrapy doesn't have this enabled by default.

EDIT: In reply to the comments, this appears to be a bug with Scrapy when boto is present (bug here).

In response to "how to disable the Download handler", add the following to your settings.py file:

DOWNLOAD_HANDLERS = {'s3': None,}

Your settings.py file should be in the root of your Scrapy project folder (one level deeper than your scrapy.cfg file).

If you've already got DOWNLOAD_HANDLERS in your settings.py file, just add a new entry for 's3' with a None value.
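For a project that already overrides DOWNLOAD_HANDLERS, the point is to add the 's3' key rather than replace the whole dict. A hedged sketch, where the 'ftp' override is purely hypothetical and stands in for whatever entries your project already defines:

```python
# settings.py -- sketch of merging into an existing DOWNLOAD_HANDLERS
DOWNLOAD_HANDLERS = {
    'ftp': None,   # hypothetical pre-existing entry, kept as-is
    's3': None,    # new entry: None disables the S3 handler (and boto)
}
```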

EDIT 2: I'd highly recommend looking at setting up virtual environments for your projects. Look into virtualenv and its usage. I'd make this recommendation regardless of the packages used for this project, but doubly so with your extreme number of packages.
