Pitfalls encountered when crawling Lianjia with pyspider and storing the results
[Summary] This post does not walk through the crawling logic in detail; it only covers how to write the results into MongoDB.
I won't go over the code logic in detail today; the focus is only on storage.

Inserting into MongoDB:
from pyspider.libs.base_handler import *
from lxml import etree
import pymongo

class Handler(BaseHandler):
    crawl_config = {}

    def __init__(self):
        # Connect to MongoDB and keep a handle to the target collection.
        connection = pymongo.MongoClient(host='192.168.180.128', port=27017)
        client = connection['lianjia']
        self.db = client['items']

    @every(minutes=24 * 60)
    def on_start(self):
        # Queue the first 100 listing pages.
        for i in range(1, 101):
            self.crawl('https://dg.lianjia.com/ershoufang/pg{}'.format(i),
                       callback=self.index_page)

    @config(age=10 * 24 * 60 * 60)
    def index_page(self, response):
        # Collect the detail-page URLs from each listing page.
        selector = etree.HTML(response.text)
        urls = selector.xpath('//li[@class="clear LOGCLICKDATA"]/a/@href')
        for url in urls:
            self.crawl(url, callback=self.detail_page)

    @config(priority=2)
    def detail_page(self, response):
        # The body of detail_page is cut off in the source snippet.
        ...
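The snippet above ends before showing the actual insert, but the usual pyspider pattern (and a common pitfall) is that returning a dict from `detail_page` does not save anything by itself; you override `on_result` and insert into the collection created in `__init__` (`self.db`), guarding against the `None` results pyspider passes for callbacks that return nothing. The sketch below illustrates that guard with a hypothetical `save_result` helper and an in-memory stand-in for a pymongo collection, so it runs without a MongoDB server; in the real handler you would call `self.db.insert_one(result)` instead.

```python
def save_result(collection, result):
    """Insert a crawl result into a Mongo-style collection.

    Pitfall: pyspider also calls on_result with None for callbacks that
    return nothing, so skip empty results before inserting.
    """
    if not result:
        return None
    return collection.insert_one(dict(result))

class FakeCollection:
    """Minimal in-memory stand-in for a pymongo collection (illustrative only)."""
    def __init__(self):
        self.docs = []

    def insert_one(self, doc):
        self.docs.append(doc)
        return doc

coll = FakeCollection()
save_result(coll, None)  # skipped: nothing is inserted
save_result(coll, {'title': 'some listing', 'price': '320万'})
print(len(coll.docs))  # prints 1
```

In the actual `Handler`, the same guard would live in `on_result(self, result)`, with `self.db` in place of the fake collection.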
Source: maoli.blog.csdn.net; author: 刘润森!. Copyright belongs to the original author; please contact the author for reprint permission.
Original link: maoli.blog.csdn.net/article/details/88974150