Scrapy stock-data spider (from the MOOC course 《Python网络爬虫与信息提取》) fails to scrape any data

a华丽的冒险 2017-10-16 10:55:05
I'm learning web scraping with the MOOC course 《Python网络爬虫与信息提取》 (http://www.icourse163.org/course/BIT-1001870001). Week 4 of the course includes a sample Scrapy spider that crawls stock data. I ran the code exactly as given in the course slides, but it scraped no data.
Here is the spider code:

# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r'[s][hz]\d{6}', href)
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except:
                continue

    def parse_stock(self, response):
        infoDict = {}
        stockInfo = response.css('.stock-bets')
        name = stockInfo.css('.bets-name').extract()[0]
        keyList = stockInfo.css('dt').extract()
        valueList = stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            key = re.findall(r'>.*</dt>', keyList[i])[0][1:-5]
            #key = keyList[i][0]
            try:
                val = re.findall(r'\d+\.?.*</dd>', valueList[i])[0][0:-5]
                #val = valueList[i][0]
            except:
                val = '--'
            infoDict[key] = val

        infoDict.update({'股票名称': re.findall(r'\s.*\(', name)[0].split()[0] + re.findall(r'\>.*\<', name)[0][1:-1]})
        #infoDict.update({'股票名称': name.split()[0]})
        yield infoDict
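Note that the bare except: continue in parse() silently discards every exception raised inside the loop, so the spider can fail on every link without printing anything. As a debugging aid (my sketch, not part of the course code), the loop can report what it skips via scrapy.Spider's built-in self.logger:

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            try:
                stock = re.findall(r'[s][hz]\d{6}', href)
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except Exception as e:
                # surfaces whatever error the bare except was hiding
                self.logger.warning('skipped %r: %s', href, e)
                continue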




Below is the console output from the run:

C:\Users\DONG LONG RUI\.spyder-py3\myscrapy\BaiduStocks>scrapy crawl stocks
2017-10-16 22:46:55 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: BaiduStocks)
2017-10-16 22:46:55 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'BaiduStocks', 'NEWSPIDER_MODULE': 'BaiduStocks.spiders', 'SPIDER_MODULES': ['BaiduStocks.spiders']}
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled item pipelines:
['BaiduStocks.pipelines.BaidustocksInfoPipeline']
2017-10-16 22:46:55 [scrapy.core.engine] INFO: Spider opened
2017-10-16 22:46:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-16 22:46:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-10-16 22:46:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quote.eastmoney.com/stocklist.html> (referer: None)
2017-10-16 22:46:55 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-16 22:46:55 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 231,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 90141,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 10, 16, 14, 46, 55, 946144),
'log_count/DEBUG': 2,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 10, 16, 14, 46, 55, 345506)}
2017-10-16 22:46:55 [scrapy.core.engine] INFO: Spider closed (finished)



Judging from the stats, only the start page was fetched ('downloader/request_count': 1) and no items were scraped, so parse() never seems to yield a single Request. Could anyone explain what is going wrong? Thanks!
1 reply
Cyouno 2017-10-20
In

    stock = re.findall(r'[s][hz]\d{6}', href)
    url = 'https://gupiao.baidu.com/stock/' + stock + '.html'

stock is a list (re.findall always returns a list), so the string concatenation that builds url fails.
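That matches the log: concatenating a str with a list raises TypeError on every link, the bare except swallows it, and no Request is ever yielded. A minimal sketch of the fix, taking the first match with [0] so that non-stock links raise IndexError and get skipped (which appears to be what the course code intended):

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            try:
                # re.findall returns a list; [0] takes the matched code itself
                stock = re.findall(r'[s][hz]\d{6}', href)[0]
                url = 'https://gupiao.baidu.com/stock/' + stock + '.html'
                yield scrapy.Request(url, callback=self.parse_stock)
            except IndexError:
                # href contained no sh/sz stock code; skip it
                continue

With that change the spider should start issuing requests to the gupiao.baidu.com detail pages.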
