Why do I always get the pile of errors below when running scrapy crawl cnblogs?

zouxfbj 2017-08-02 08:52:46
Today I started learning the Scrapy framework. Whenever I run the crawl command scrapy crawl cnblogs, it always prints the huge pile of output below, and I can't find any explanation online. Hoping the forum can help me figure this out.

scrapy crawl cnblogs, where cnblogs is my spider's name.

My laptop is 64-bit, with Python 2.7 installed under c:\python27.

Before installing Scrapy, every dependency installed successfully. At the Python prompt:
import lxml raised no error,
import twisted raised no error,
import OpenSSL raised no error,
import zope.interface raised no error.
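For what it's worth, the one-at-a-time checks above can be sketched as a small loop (a minimal stdlib-only script; the module names are the four dependencies from this post, and whether each reports OK depends on what is installed on the machine running it):

```python
# Try each Scrapy dependency import and report which ones fail,
# instead of testing them one by one at the interactive prompt.
results = {}
for name in ("lxml", "twisted", "OpenSSL", "zope.interface"):
    try:
        __import__(name)
        results[name] = "OK"
    except ImportError as e:
        results[name] = "FAILED (%s)" % e
    print("%s: %s" % (name, results[name]))
```
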

I then installed Scrapy 0.14.4.

I created a directory cnblog on drive D, then ran scrapy startproject cnblogSpider in it.
I then opened a new project in PyCharm, pointed it at that directory, and wrote a quick spider module saved under cnblogspider/spiders.
The file is named cnblogs_spider.py, and it contains only these few lines:

#coding:utf-8
import scrapy

class CnblogsSpider(scrapy.Spider):
    name = "cnblogs"
    allowed_domains = ["cnblogs.com"]
    start_urls = ["http://www.cnblogs.com/qiyeboy/default.html?page=1"]

    def parse(self, response):
        pass
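As an aside, the failure this spider triggers can be reproduced with plain stdlib code. In the sketch below, fake_scrapy is a hypothetical stand-in for an old Scrapy release, not the real library: any module that lacks a Spider attribute makes the class statement itself raise AttributeError, which is exactly the last line of the traceback below.

```python
import types

# Hypothetical stand-in for a module that, like old Scrapy releases,
# does not expose a top-level Spider class.
fake_scrapy = types.ModuleType("fake_scrapy")

try:
    # Evaluating the base class looks up fake_scrapy.Spider and fails
    # before the class body ever runs.
    class CnblogsSpider(fake_scrapy.Spider):
        pass
except AttributeError as e:
    message = str(e)
    print(message)
```
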

Then, in the D:\cnblogs\cnblogsSpider> directory, I ran scrapy crawl cnblogs, and it always reports this pile of errors:

D:\cnblogs\cnblogsSpider>scrapy crawl cnblogs
2017-08-02 19:16:34+0800 [scrapy] INFO: Scrapy 0.14.4 started (bot: cnblogsSpider)
2017-08-02 19:16:35+0800 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):
  File "C:\Python27\Scripts\scrapy", line 4, in <module>
    __import__('pkg_resources').run_script('Scrapy==0.14.4', 'scrapy')
  File "C:\Python27\lib\site-packages\pkg_resources\__init__.py", line 743, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "C:\Python27\lib\site-packages\pkg_resources\__init__.py", line 1498, in run_script
    exec(code, namespace, namespace)
  File "c:\python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\EGG-INFO\scripts\scrapy", line 4, in <module>
    execute()
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\commands\crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\command.py", line 34, in crawler
    self._crawler.configure()
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\crawler.py", line 36, in configure
    self.spiders = spman_cls.from_crawler(self)
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\spidermanager.py", line 37, in from_crawler
    return cls.from_settings(crawler.settings)
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\spidermanager.py", line 33, in from_settings
    return cls(settings.getlist('SPIDER_MODULES'))
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\spidermanager.py", line 23, in __init__
    for module in walk_modules(name):
  File "C:\Python27\lib\site-packages\scrapy-0.14.4-py2.7.egg\scrapy\utils\misc.py", line 65, in walk_modules
    submod = __import__(fullpath, {}, {}, [''])
  File "D:\cnblogs\cnblogsSpider\cnblogsSpider\spiders\cnblogs_spider.py", line 3, in <module>
    class CnblogsSpider(scrapy.Spider):
AttributeError: 'module' object has no attribute 'Spider'



This is giving me a headache; I have no idea where the problem is.
1 reply
zouxfbj 2017-08-03
Problem solved. It turned out the installed Scrapy version was too old. I downloaded the latest Scrapy 1.4 from the official site https://scrapy.org/download/ and reinstalled it, and the messages above no longer appear.
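The mismatch here could have been caught up front by checking whether the installed module actually exposes the attribute before building on it. Below is a hedged helper sketch (module_has is a name made up for this example); it is demonstrated with the stdlib json module so the snippet runs even on a machine where Scrapy is not installed, but the same call with ("scrapy", "Spider") would distinguish 0.14.4 from 1.4:

```python
import importlib

def module_has(module_name, attr):
    """Return True if module_name imports cleanly and exposes attr."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

print(module_has("json", "dumps"))   # an attribute that exists
print(module_has("json", "Spider"))  # the same shape as the error above
```
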
