I've just started learning Scrapy. I'm trying to crawl a page by following the tutorial, but I'm having a hard time extracting the text from a div.
Here is items.py:
from scrapy.item import Item, Field

class DmozItem(Item):
    name = Field()
    title = Field()
    pass
Here is dmoz_spider.py:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from dmoz.items import DmozItem

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["roxie.com"]
    start_urls = ["http://www.roxie.com/events/details.cfm?eventID=4921702B-9E3D-8678-50D614177545A594"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//div[@id="eventdescription"]')
        items = []
        for site in sites:
            item = DmozItem()
            item['name'] = hxs.select("text()").extract()
            items.append(item)
        return items
Now I'm trying to crawl by running this command from the project's top-level folder:
scrapy crawl dmoz -o scraped_data.json -t json
But the file that gets created contains nothing but '['.
It works perfectly in the console (running each select as a separate command), but somehow it doesn't work as a script. I'm a complete beginner with Scrapy. Could you tell me how to get the data inside the div? Thanks in advance.
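For reference, this is roughly what I did in the Scrapy shell; the session below is reconstructed rather than copied, and I've elided the actual page text with '...', but the selects do return the description there:

scrapy shell "http://www.roxie.com/events/details.cfm?eventID=4921702B-9E3D-8678-50D614177545A594"
# the shell exposes hxs, an HtmlXPathSelector built from the fetched response
>>> hxs.select('//div[@id="eventdescription"]').extract()
[u'<div id="eventdescription">...</div>']    # the whole div comes back
>>> hxs.select('//div[@id="eventdescription"]/text()').extract()
[u'...']                                     # the event description text I want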
Also, this is what I get:
2013-06-19 08:43:56-0700 [scrapy] INFO: Scrapy 0.16.5 started (bot: dmoz)
/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/twisted/web/microdom.py:181: SyntaxWarning: assertion is always true, perhaps remove parentheses?
  assert (oldChild.parentNode is self,
2013-06-19 08:43:56-0700 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2013-06-19 08:43:56-0700 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-06-19 08:43:56-0700 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-06-19 08:43:56-0700 [scrapy] DEBUG: Enabled item pipelines:
2013-06-19 08:43:56-0700 [dmoz] INFO: Spider opened
2013-06-19 08:43:56-0700 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-06-19 08:43:56-0700 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-06-19 08:43:56-0700 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-06-19 08:43:56-0700 [dmoz] DEBUG: Crawled (200)
Best answer