My Scrapy spider never manages to scrape any data. Below is the output from the console session; can anyone take a look and tell me where the problem is?

Jangle_ 2017-02-13 08:52:11
D:\python程序\example2>scrapy crawl country2 --output=123.csv -s LOG_LEVEL=INFO
2017-02-13 20:45:15 [scrapy.utils.log] INFO: Scrapy 1.3.1 started (bot: example2)
2017-02-13 20:45:16 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'example2.spiders', 'FEED_URI': '123.csv', 'LOG_LEVEL': 'INFO', 'CONCURRENT_REQUESTS_PER_DOMAIN': 1, 'SPIDER_MODULES': ['example2.spiders'], 'BOT_NAME': 'example2', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'csv', 'DOWNLOAD_DELAY': 5}
2017-02-13 20:45:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-02-13 20:45:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-02-13 20:45:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-02-13 20:45:16 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-02-13 20:45:16 [scrapy.core.engine] INFO: Spider opened
2017-02-13 20:45:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-02-13 20:45:23 [scrapy.core.engine] INFO: Closing spider (finished)
2017-02-13 20:45:23 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 512,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 14440,
'downloader/response_count': 2,
'downloader/response_status_count/200': 1,
'downloader/response_status_count/404': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 2, 13, 12, 45, 23, 543000),
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 2, 13, 12, 45, 16, 493000)}
2017-02-13 20:45:23 [scrapy.core.engine] INFO: Spider closed (finished)
7 replies
简单点あ 2019-04-15
Quoting reply #6 from a华丽的冒险 (the quoted post is reproduced in full further down this thread):
I'm looking for a solution to this as well.
a华丽的冒险 2017-10-16
I'm in a similar situation to the OP. I've been learning web scraping from the MOOC 《python网络爬虫与信息提取》 (http://www.icourse163.org/course/BIT-1001870001). Week 4 of the course includes sample code that uses Scrapy to crawl stock data. I ran the code exactly as given in the courseware and found that it scraped no data. Here is the spider code:
# -*- coding: utf-8 -*-
import scrapy
import re


class StocksSpider(scrapy.Spider):
    name = 'stocks'
    start_urls = ['http://quote.eastmoney.com/stocklist.html']

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            try:
                stock=re.findall(r'[s][hz]\d{6}',href)
                url='https://gupiao.baidu.com/stock/'+stock+'.html'
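                # note: re.findall returns a list, so the '+' concatenation above raises
                # TypeError; the bare except below then skips every link, so no follow-up
                # Request is ever yielded (matching the log, where only the start URL was crawled)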
                yield scrapy.Request(url,callback=self.parse_stock)
            except:
                continue
    
    def parse_stock(self,response):
        infoDict={}
        stockInfo=response.css('.stock-bets')
        name=stockInfo.css('.bets-name').extract()[0]
        keyList=stockInfo.css('dt').extract()
        valueList=stockInfo.css('dd').extract()
        for i in range(len(keyList)):
            key=re.findall(r'>.*</dt>',keyList[i])[0][1:-5]#
            #key=keyList[i][0]
            try:
                val=re.findall(r'\d+\.?.*</dd>',valueList[i])[0][0:-5]#
                #val=valueList[i][0]
            except:
                val='--'
            infoDict[key]=val
                    
        infoDict.update({'股票名称':re.findall(r'\s.*\(',name)[0].split()[0]+re.findall(r'\>.*\<',name)[0][1:-1]})#
        #infoDict.update({'股票名称':name.split()[0]})
        yield infoDict
        
        
Here is the command-line output from the run:
C:\Users\DONG LONG RUI\.spyder-py3\myscrapy\BaiduStocks>scrapy crawl stocks
2017-10-16 22:46:55 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: BaiduStocks)
2017-10-16 22:46:55 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'BaiduStocks', 'NEWSPIDER_MODULE': 'BaiduStocks.spiders', 'SPIDER_MODULES': ['BaiduStocks.spiders']}
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-10-16 22:46:55 [scrapy.middleware] INFO: Enabled item pipelines:
['BaiduStocks.pipelines.BaidustocksInfoPipeline']
2017-10-16 22:46:55 [scrapy.core.engine] INFO: Spider opened
2017-10-16 22:46:55 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-16 22:46:55 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-10-16 22:46:55 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quote.eastmoney.com/stocklist.html> (referer: None)
2017-10-16 22:46:55 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-16 22:46:55 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 231,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 90141,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 10, 16, 14, 46, 55, 946144),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 10, 16, 14, 46, 55, 345506)}
2017-10-16 22:46:55 [scrapy.core.engine] INFO: Spider closed (finished)
Any help would be appreciated, thanks!
gdky005 2017-03-28
I ran into the same thing and solved it as follows.

1. While crawling Weibo data I got:

2017-03-28 17:52:49 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://weibo.com/robots.txt> (referer: None)
2017-03-28 17:52:49 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET http://weibo.com/p/10050583018062>
2017-03-28 17:52:49 [scrapy.core.engine] INFO: Closing spider (finished)

2. The fix is in settings.py: change ROBOTSTXT_OBEY from True to False.

After that, the data comes through. The cause is that Scrapy checks robots.txt by default to decide whether a page may be crawled; if robots.txt disallows it, the page simply won't be fetched.
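For reference, a minimal sketch of the relevant lines in the project's settings.py (ROBOTSTXT_OBEY is Scrapy's standard setting; only the value changes, everything else stays as generated):

# Obey robots.txt rules
# Setting this to False stops Scrapy from consulting robots.txt before each request,
# so pages the site disallows will be fetched anyway; use responsibly.
ROBOTSTXT_OBEY = False
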
nieoding 2017-02-14
How did you write country2?
Jangle_ 2017-02-14
Embarrassing, really. I bought Lawson's web scraping book to prepare for an internship written test, but I couldn't follow much of it. I tried to copy the examples and realized how little I actually understand.
nieoding 2017-02-14
There are quite a few small mistakes. The biggest one is using rules: rules are for crawling across a site, but you are only scraping a single page, so parse_item never gets triggered.

import scrapy


class Country2Spider(scrapy.Spider):
    name = "country2"
    start_urls = ['http://swu.edu.cn/glfw_jxdw.html']

    def parse(self, response):
        lst = response.xpath('/html/body/div/div/div/div/div/table/tr')
        for x in lst:
            print(x.xpath('td[1]/a/text()')[0].extract())
            print(x.xpath('td[2]/a/text()')[0].extract())
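
A quick way to sanity-check an XPath before wiring it into the spider is Scrapy's interactive shell; the selectors below are only illustrative and have not been verified against this page:

scrapy shell http://swu.edu.cn/glfw_jxdw.html
>>> response.xpath('//table//tr/td[1]/a/text()').extract()
>>> response.xpath('//table//tr/td[2]/a/text()').extract()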

Jangle_ 2017-02-14
# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from example2.items import Example2Item


class Country2Spider(CrawlSpider):
    name = 'country2'
    allowed_domains = ['swu.edu.cn']
    start_urls = ['http://swu.edu.cn/glfw_jxdw.html']
    rules = (
        Rule(LinkExtractor(allow='/index/', deny='/user/'), follow=True),
        Rule(LinkExtractor(allow='/view/', deny='/user/'), callback='parse_item')
    )

    def parse_item(self, response):
        lst = response.xpath('/html/body/div[5]/div[1]/div[2]/div[3]/div[1]/table/tbody')
        trs = lst.xpath('tr')
        items = []
        for x in trs:
            item = Example2Item()
            # 'a.text()' is not valid XPath; it would need to be 'a/text()'
            item['name1'] = x.xpath('td[1]/a.text()')[0].extract()
            item['name2'] = x.xpath('td[2]/a.text()')[0].extract()
            items.append(item)
        return items
