Python Crawler: Scrapy's Link Extractor (LinkExtractor) Returns Link Objects
LinkExtractor
from scrapy.linkextractors import LinkExtractor
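For context, here is a minimal sketch of the most common LinkExtractor constructor parameters; the patterns and domain below are illustrative placeholders, not values from the original post:

from scrapy.linkextractors import LinkExtractor

# Illustrative placeholders only; allow/deny take regular expressions
link_extractor = LinkExtractor(
    allow=r"/tag/.*",                    # extracted URLs must match this regex
    deny=r"/login",                      # extracted URLs must not match this regex
    allow_domains=("book.douban.com",),  # only extract links to these domains
    unique=True,                         # deduplicate extracted links (the default)
)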
Link
from scrapy.link import Link
A Link object has four attributes:
url, text, fragment, nofollow
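For illustration, a Link object can also be constructed by hand, which makes the four attributes explicit (all values below are made up):

from scrapy.link import Link

# Hand-built Link; the values are illustrative, not extracted from a real page
link = Link(
    url="https://book.douban.com/tag/",  # absolute URL of the link target
    text="图书标签",                     # anchor text of the <a> element
    fragment="top",                      # the part of the href after '#'
    nofollow=False,                      # True when the link has rel="nofollow"
)
print(link.url, link.text, link.fragment, link.nofollow)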
To extract the link text as well, add the attrs parameter to the LinkExtractor arguments:
link_extractor = LinkExtractor(attrs=('href','text'))
links = link_extractor.extract_links(response)
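Each returned Link then carries the anchor text in its text attribute, so the result can be iterated directly:

for link in links:
    print(link.text, link.url)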
Usage example
import scrapy
from scrapy import cmdline
from scrapy.linkextractors import LinkExtractor

class DemoSpider(scrapy.Spider):
    name = 'spider'
    start_urls = ["https://book.douban.com/"]

    def parse(self, response):
        # allow takes a regular expression that extracted URLs must match
        link_extractor = LinkExtractor(allow=r"https://book.douban.com/tag/.*")
        links = link_extractor.extract_links(response)
        for link in links:
            print(link.text, link.url)

if __name__ == '__main__':
    cmdline.execute("scrapy crawl spider".split())
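As a point of comparison, LinkExtractor is most often paired with CrawlSpider rules, which apply the extractor to every response automatically. A minimal sketch, assuming the same Douban start page (the spider name and parse_item callback are hypothetical, not from the original post):

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class BookSpider(CrawlSpider):
    # Hypothetical spider; names and patterns are illustrative
    name = 'book_spider'
    start_urls = ["https://book.douban.com/"]

    rules = (
        # Every response is scanned with the extractor, and each
        # matching link is requested and passed to parse_item
        Rule(LinkExtractor(allow=r"https://book.douban.com/tag/.*"),
             callback='parse_item', follow=False),
    )

    def parse_item(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}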
Source: pengshiyu.blog.csdn.net; author: 彭世瑜. Copyright belongs to the original author; please contact the author for reprint permission.
Original link: pengshiyu.blog.csdn.net/article/details/80538752