Python Crawler: Scraping Images and Text (Part 01, Coding)

孙小北, posted on 2022/08/31 21:27:16
[Abstract] Implements the Python database-access and web-scraping code, and covers purchasing and setting up the related cloud database.

Creating the RDS Cloud Database

Purchasing an RDS Instance

  • In the Huawei Cloud console (already logged in), go to "Service List" -> "Databases" -> "RDS" and click "Buy DB Instance".

Creating the Database and Table

  • Click the RDS instance "rds-spider".

  • Go to "Connection Management" -> "Public Network Address", click "Bind", then "OK".

  • Once the EIP is bound, click "Log In" and sign in with username root and the password set when the RDS instance was created (a quick connectivity check from Python is sketched below).
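Before moving on, it can be worth confirming that the instance is reachable over the public address. A minimal connectivity check with pymysql, assuming the EIP bound above and the root password (the values below are placeholders, not real credentials):

import pymysql

# Placeholder connection details; use the EIP bound above and your own root password.
conn = pymysql.connect(host='124.70.15.164', port=3306,
                       user='root', password='your-rds-password', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute('SELECT VERSION()')   # any trivial query confirms connectivity
    print(cursor.fetchone())
conn.close()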

Creating the Data Table

  • To the right of the newly created database ("vmall"), click "Create Table".

  • On the table management page of the "vmall" database, click "+ Create Table", name the table "product", and keep the other parameters at their defaults.

  • Add three columns: (1) id, type int, length 11, primary key, with auto-increment enabled under the extended settings; (2) title, type varchar, length 255, nullable; (3) image, type varchar, length 255, nullable.

  • After the columns are set, click "Create Now"; a SQL preview page pops up (a sketch of the equivalent DDL follows below).
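The SQL preview should correspond, roughly, to the DDL below. This is a sketch reconstructed from the column list above (names, types, nullability, and the auto-increment primary key), not text copied from the console:

import pymysql

# Same placeholder credentials as in the connectivity check above.
conn = pymysql.connect(host='124.70.15.164', port=3306, user='root',
                       password='your-rds-password', database='vmall', charset='utf8')
ddl = """
CREATE TABLE IF NOT EXISTS product (
    id    INT(11)      NOT NULL AUTO_INCREMENT,  -- primary key, auto-increment
    title VARCHAR(255) DEFAULT NULL,             -- product title
    image VARCHAR(255) DEFAULT NULL,             -- image URL
    PRIMARY KEY (id)
)
"""
with conn.cursor() as cursor:
    cursor.execute(ddl)
conn.close()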

Inspecting the Target Page and Writing the Spider Code


Creating and Importing the Spider Project

cd Desktop
scrapy startproject vmall_spider
cd vmall_spider
scrapy genspider -t crawl vmall "vmall.com"
  • Write the spider code: in the project, under "vmall_spider" -> "spiders", open the "vmall.py" file.
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from vmall_spider.items import VmallSpiderItem


class VamllSpider(CrawlSpider):
	name = 'vamll'
	allowed_domains = ['vmall.com']
	start_urls = ['https://sale.vmall.com/huaweizone.html']
	rules = (
		Rule(LinkExtractor(allow=r'.*/product/.*'), callback='parse_item', follow=True),
	) 
	
	def parse_item(self, response):
		title=response.xpath("//div[@class='product-meta product-global']/h1/text()").get()
		price=response.xpath("//div[@class='product-price-info']/span/text()").get()
		image=response.xpath("//a[@id='product-img']/img/@src").get()
		item=VmallSpiderItem( 
			title=title,
			image=image,
		)
		print("="*30)
		print(title)
		print(image)
		print("="*30)
		yield item

The item class is defined in "vmall_spider/items.py":

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy

class VmallSpiderItem(scrapy.Item):
	title=scrapy.Field()
	image=scrapy.Field()
The storage pipeline goes in "vmall_spider/pipelines.py"; it downloads each product image and writes the record to the RDS database:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# useful for handling different item types with a single interface
import pymysql
import os
from urllib import request
class VmallSpiderPipeline:
    def __init__(self):
        dbparams = {
            'host': '124.70.15.164',         # public (EIP) address of the RDS instance
            'port': 3306,                    # RDS port
            'user': 'root',                  # RDS user
            'password': 'rIDM7g4nl5VxRUpI',  # password set when the RDS instance was created
            'database': 'vmall',             # database name
            'charset': 'utf8'
        }
        self.conn = pymysql.connect(**dbparams)
        self.cursor = self.conn.cursor()
        self._sql = None
        # Directory for downloaded images, created next to the spider package.
        self.path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'images')
        if not os.path.exists(self.path):
            os.mkdir(self.path)

    def process_item(self, item, spider):
        url = item['image']
        image_name = url.split('_')[-1]

        print("--------------------------image_name-----------------------------")
        print(image_name)
        print(url)
        # Download the product image, then insert the record into the product table.
        request.urlretrieve(url, os.path.join(self.path, image_name))
        self.cursor.execute(self.sql, (item['title'], item['image']))
        self.conn.commit()
        return item

    @property
    def sql(self):
        if not self._sql:
            # id is auto-increment, so null is passed for it; %s are pymysql placeholders.
            self._sql = """insert into product(id,title,image) values(null,%s,%s)"""
        return self._sql
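The pipeline above opens a MySQL connection when it is constructed but never releases it. A minimal optional addition (an assumption, not part of the original code) is a close_spider method on VmallSpiderPipeline, which Scrapy calls when the spider finishes:

    def close_spider(self, spider):
        # Called by Scrapy when the spider is closed; release the MySQL handles.
        self.cursor.close()
        self.conn.close()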
The project settings live in "vmall_spider/settings.py":

# Scrapy settings for vmall_spider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'vmall_spider'
SPIDER_MODULES = ['vmall_spider.spiders']
NEWSPIDER_MODULE = 'vmall_spider.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'vmall_spider (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'vmall_spider.middlewares.VmallSpiderSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'vmall_spider.middlewares.VmallSpiderDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'vmall_spider.pipelines.VmallSpiderPipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
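
For the pipeline to receive items at all, it also has to be registered in settings.py; the generated file leaves ITEM_PIPELINES commented out, and the header of pipelines.py above is a reminder of this. The class path below follows this project's layout, and 300 is the conventional priority value:

ITEM_PIPELINES = {
    'vmall_spider.pipelines.VmallSpiderPipeline': 300,
}

With that in place, the crawl can be started from the project root with "scrapy crawl vamll"; scraped titles and image URLs should land in the product table, and the image files in the images directory created by the pipeline.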

Summary

  • Implemented the Python database-access and web-scraping code, and purchased and set up the related cloud database.
  • This was my first time connecting to a database from Python; the process was a bit tedious.