Let's Browse Books on Douban!

Posted by Python爱好者 on 2021/02/02 23:57:36
【Abstract】 Douban's book catalog has always been fairly complete, and recently some friends wanted to browse IT-related books on Douban. So off we go: Douban, here I come! First, let's look at the site we want to crawl: https://www.douban.com Then we look at the computer-related books, and at books related to deep learning. OK, enough talk, let's get started! Preparation: the packages to import are listed below (pip install any you are missing...

Douban's book catalog has always been fairly complete, and recently some friends wanted to browse IT-related books on Douban. No sooner said than done: Douban, here I come!



First, let's look at the site we want to crawl:

https://www.douban.com



Now let's look at the computer-related books:




And the books related to deep learning:




OK, enough talk, let's get started!

Preparation. The packages we need to import are the following (pip install any you are missing!):

import time
import urllib.parse
import urllib.request

import numpy as np
from bs4 import BeautifulSoup
from openpyxl import Workbook

A note on why we use urllib here rather than requests: with the requests package, your IP gets blocked much more easily.
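For reference, the requests version (left commented out in the full code below) would look roughly like this; requests is a third-party package and this sketch is shown only for comparison:

import requests

source_code = requests.get(url)   # the variant the author avoids
plain_text = source_code.text     # requests decodes the response body itself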


First, one important piece of preparation: collect several request headers. Where do headers come from?

Open the browser's developer tools, select the Network tab, and pick any request in the list:


From the request headers there we can copy the User-Agent value. We need several different user-agents to dodge anti-crawling checks, so we collect a few and put them all into a header list:

hds = [{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'},
      {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11'},
      {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}]
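Later on, this list is rotated in two different ways: round-robin by page number for the list pages, and a random pick for the detail pages:

headers = hds[page_num % len(hds)]             # round-robin across pages
headers = hds[np.random.randint(0, len(hds))]  # random pick per request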


Now we can start fetching the book information.

One note: between pages we sleep for a random interval, to keep the anti-crawling measures at bay.

Let's first take a look at the URL:

https://www.douban.com/tag/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/book?start=0

The fixed parts of the URL are https://www.douban.com/tag/ and /book?start=
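The percent-encoded segment in the middle is just the tag name run through urllib.parse.quote. For example:

import urllib.parse

print(urllib.parse.quote('机器学习'))
# -> %E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0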

So let's assemble the URL:

url = 'https://www.douban.com/tag/' \
      + urllib.parse.quote(book_tag) \
      + '/book?start=' + str(page_num * 15)
print(url)


Next, we go fetch the page:

# Sleep for a random interval to ward off anti-crawling measures
time.sleep(np.random.rand() * 3)
req = urllib.request.Request(url, headers=hds[page_num % len(hds)])
source_code = urllib.request.urlopen(req).read()
plain_text = source_code.decode('utf-8')
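One optional hardening step, not in the original code: give urlopen a timeout, so a stalled connection fails fast instead of hanging the crawl indefinitely.

# hypothetical variant: give up after 10 seconds instead of blocking forever
source_code = urllib.request.urlopen(req, timeout=10).read()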


Once we have the page, we use bs4 to pull out the parts we need:

soup = BeautifulSoup(plain_text, features="lxml")
list_soup = soup.find('div', {'class': 'mod book-list'})

try_times += 1
if list_soup is None and try_times < 200:
    continue
elif list_soup is None or len(list_soup) <= 1:
    break

# Walk the matched list and pull out each book's details
for book in list_soup.findAll('dd'):
    title = book.find('a', {'class': 'title'}).string.strip()
    desc = book.find('div', {'class': 'desc'}).string.strip()
    desc_list = desc.split('/')
    book_url = book.find('a', {'class': 'title'}).get('href')

    try:
        author_info = '作者/译者: ' + '/'.join(desc_list[0:-3])
        pub_info = '出版信息: ' + '/'.join(desc_list[-3:])
        rating = book.find('span', {'class': 'rating_nums'}).string.strip()
        people_num = get_num(book_url)
        people_num = people_num.strip('人评价')
    except:
        author_info = '作者/译者: 暂无'
        pub_info = '出版信息: 暂无'
        rating = '0.0'
        people_num = '0'
        print('detail info has some error!')

    book_list.append([title, rating, people_num, author_info, pub_info])
    try_times = 0

page_num += 1
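Why desc_list[0:-3] and desc_list[-3:]? Douban's desc line lists the authors/translators first and ends with publisher, publication date, and price, so the last three fields are always the publication info. A made-up example:

desc = '张三 / 人民邮电出版社 / 2017-8 / 59.00元'   # hypothetical desc string in Douban's layout
desc_list = desc.split('/')
print('/'.join(desc_list[0:-3]))   # -> 张三  (everything before the last three fields)
print('/'.join(desc_list[-3:]))    # -> 人民邮电出版社 / 2017-8 / 59.00元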


We also want the number of people who rated each book (if you do not need this field, just comment the people_num lines out):

try:
    req = urllib.request.Request(url, headers=hds[np.random.randint(0, len(hds))])
    source_code = urllib.request.urlopen(req).read()
    plain_text = source_code.decode('utf-8')
except:
    print('http error!')
    return '0'
soup = BeautifulSoup(plain_text, features="lxml")
people_num = soup.find('div', {'class': 'rating_sum'}).findAll('span')[1].string.strip()
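One subtlety: str.strip('人评价') removes any of those three characters from both ends of the string rather than a literal suffix, which is fine here because the value always ends with exactly that text:

print('12345人评价'.strip('人评价'))  # -> 12345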


Fetching all the books for a given list of tags:

book_lists = []
book_tag_lists = ['计算机', '机器学习', 'linux', 'android', '数据库', '互联网']
for book_tag in book_tag_lists:
    book_list = book_info(book_tag)
    # sort by rating, highest first (cast to float so '10.0' beats '9.9')
    book_list = sorted(book_list, key=lambda x: float(x[1]), reverse=True)
    book_lists.append(book_list)


The last step: write the collected book information to an Excel file:


wb = Workbook(write_only=True)  # 'optimized_write' is called 'write_only' in current openpyxl
ws = []
for i in range(len(book_tag_lists)):
    ws.append(wb.create_sheet(title=book_tag_lists[i]))  # Python 3 strings need no decode()
for i in range(len(book_tag_lists)):
    ws[i].append(['序号', '书名', '评分', '评价人数', '作者', '出版社'])
    count = 1
    for bl in book_lists[i]:
        ws[i].append([count, bl[0], float(bl[1]), int(bl[2]), bl[3], bl[4]])
        count += 1
save_path = 'book_list'
for i in range(len(book_tag_lists)):
    save_path += ('-' + book_tag_lists[i])
save_path += '.xlsx'
wb.save(save_path)
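To sanity-check the output, the workbook can be read straight back with openpyxl (a quick check; the filename below assumes the default tag list used in this article):

from openpyxl import load_workbook

wb = load_workbook('book_list-计算机-机器学习-linux-android-数据库-互联网.xlsx')
print(wb.sheetnames)              # one sheet per tag
print(next(wb['计算机'].values))  # the header row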


And with that we are done. Open the generated Excel file and check the results: everything was fetched perfectly.


The full code follows; it can also be obtained from the code link at the end of this post.

import time
import urllib.parse
import urllib.request

import numpy as np
from bs4 import BeautifulSoup
from openpyxl import Workbook

# Provide several User-Agents to ward off anti-crawling checks
hds = [{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'},
      {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11'},
      {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}]


def book_info(book_tag):
    page_num = 0
    book_list = []
    try_times = 0
    while True:
        # Assemble the URL for this page (15 books per page)
        url = 'https://www.douban.com/tag/' \
              + urllib.parse.quote(book_tag) \
              + '/book?start=' + str(page_num * 15)
        print(url)
        # Sleep for a random interval to ward off anti-crawling measures
        time.sleep(np.random.rand() * 3)
        req = urllib.request.Request(url, headers=hds[page_num % len(hds)])
        source_code = urllib.request.urlopen(req).read()
        plain_text = source_code.decode('utf-8')

        # With the requests package instead, your IP gets blocked easily:
        # source_code = requests.get(url)
        # plain_text = source_code.text

        # Build the bs4 object
        soup = BeautifulSoup(plain_text, features="lxml")
        list_soup = soup.find('div', {'class': 'mod book-list'})

        try_times += 1
        if list_soup is None and try_times < 200:
            continue
        elif list_soup is None or len(list_soup) <= 1:
            break

        # Walk the matched list and pull out each book's details
        for book in list_soup.findAll('dd'):
            title = book.find('a', {'class': 'title'}).string.strip()
            desc = book.find('div', {'class': 'desc'}).string.strip()
            desc_list = desc.split('/')
            book_url = book.find('a', {'class': 'title'}).get('href')

            try:
                author_info = '作者/译者: ' + '/'.join(desc_list[0:-3])
                pub_info = '出版信息: ' + '/'.join(desc_list[-3:])
                rating = book.find('span', {'class': 'rating_nums'}).string.strip()
                people_num = get_num(book_url)
                people_num = people_num.strip('人评价')
            except:
                author_info = '作者/译者: 暂无'
                pub_info = '出版信息: 暂无'
                rating = '0.0'
                people_num = '0'
                print('detail info has some error!')

            book_list.append([title, rating, people_num, author_info, pub_info])
            try_times = 0
        page_num += 1
        print('Downloading Information From Page %d' % page_num)
    return book_list


def get_num(url):
    try:
        req = urllib.request.Request(url, headers=hds[np.random.randint(0, len(hds))])
        source_code = urllib.request.urlopen(req).read()
        plain_text = source_code.decode('utf-8')
    except:
        print('http error!')
        return '0'
    soup = BeautifulSoup(plain_text, features="lxml")
    people_num = soup.find('div', {'class': 'rating_sum'}).findAll('span')[1].string.strip()
    return people_num


def get_books(book_tag_lists):
    book_lists = []
    for book_tag in book_tag_lists:
        book_list = book_info(book_tag)
        # sort by rating, highest first
        book_list = sorted(book_list, key=lambda x: float(x[1]), reverse=True)
        book_lists.append(book_list)
    return book_lists


def print_book_lists_excel(book_lists, book_tag_lists):
    wb = Workbook(write_only=True)  # 'optimized_write' is called 'write_only' in current openpyxl
    ws = []
    for i in range(len(book_tag_lists)):
        ws.append(wb.create_sheet(title=book_tag_lists[i]))  # Python 3 strings need no decode()
    for i in range(len(book_tag_lists)):
        ws[i].append(['序号', '书名', '评分', '评价人数', '作者', '出版社'])
        count = 1
        for bl in book_lists[i]:
            ws[i].append([count, bl[0], float(bl[1]), int(bl[2]), bl[3], bl[4]])
            count += 1
    save_path = 'book_list'
    for i in range(len(book_tag_lists)):
        save_path += ('-' + book_tag_lists[i])
    save_path += '.xlsx'
    wb.save(save_path)


if __name__ == '__main__':
    book_tag_lists = ['计算机', '机器学习', 'linux', 'android', '数据库', '互联网']
    book_lists = get_books(book_tag_lists)
    print_book_lists_excel(book_lists, book_tag_lists)



Code address:

https://www.bytelang.com/o/s/c/7QXO_UAlsLU=





Source: blog.csdn.net. Author: 敲代码的灰太狼. Copyright belongs to the original author; please contact the author for permission to reprint.

Original link: blog.csdn.net/tongtongjing1765/article/details/100581946
