[Web Scraping in Practice] Fetching Weather Data for a City

Posted by zstar on 2022/08/06 00:02:18
[Abstract] Functional requirements: fetch the daily weather for the city of Jinan, Shandong. Four data points are needed: weather condition, temperature, wind direction, and wind scale. URL: http://www.weather.com.cn/weather/101120101.shtml ...

Functional Requirements

Fetch the daily weather for the city of Jinan, Shandong.
Four data points are needed: weather condition, temperature, wind direction, and wind scale.
URL: http://www.weather.com.cn/weather/101120101.shtml

Approach

The page answers a plain GET request with HTML that already contains the seven-day forecast, so the data can be extracted by parsing the page with bs4.
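
Before writing the full scraper, a quick sanity check helps confirm the markup matches this assumption. A minimal sketch (using the same selectors the scripts below rely on) simply counts the forecast entries:

import requests
from bs4 import BeautifulSoup

# Fetch the forecast page and count the <p class="wea"> nodes;
# the seven-day page should expose one per forecast day.
rep = requests.get('http://www.weather.com.cn/weather/101120101.shtml', timeout=20)
rep.encoding = 'utf-8'  # the page is UTF-8; set it explicitly to avoid garbled text
soup = BeautifulSoup(rep.text, 'html.parser')
print(len(soup.find_all('p', class_='wea')))  # expected: 7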

Feature 1: Today's Weather

import random

import requests
from bs4 import BeautifulSoup


# Fetch the page and parse it with bs4
def getHtml(url):
    # A single User-Agent gets blocked quickly, so keep several and pick one
    # at random on each request, to make the traffic look less like a bot
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36",
        "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36",
        "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
        "Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.5; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
        "Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999",
        "Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)"
    ]
    header = {'User-Agent': random.choice(user_agent_list)}
    # On timeout or connection error, retry up to three times
    reconnect = 0
    while reconnect < 3:
        try:
            with requests.get(url=url, headers=header, stream=True, timeout=20) as rep:
                # The page is UTF-8; set the encoding explicitly, otherwise
                # the Chinese text comes back garbled
                rep.encoding = 'utf-8'
                # Parse the HTML
                soup = BeautifulSoup(rep.text, 'html.parser')
                return soup
        except (requests.exceptions.RequestException, ValueError):
            reconnect += 1
    # All three attempts failed
    return None


# Extract today's weather data
def get_content(soup):
    # The page lists seven days starting from today; index [0] is today.
    # Change the index to read a later day.
    weather = soup.findAll(name="p", attrs={"class": "wea"})[0].text
    weather = weather.replace('\n', '')  # drop the embedded newline
    # Temperature works the same way; index [0] is today's high/low
    temperature = soup.findAll(name="p", attrs={"class": "tem"})[0].text
    temperature = temperature.strip()  # strip() removes surrounding whitespace
    # Wind direction, same indexing; the direction lives in the <span> title attribute
    wind_direction = soup.findAll(name="p", attrs={"class": "win"})[0].find(name="span")['title']
    # Wind scale, same indexing; the scale is the text of the <i> tag
    wind_scale = soup.findAll(name="p", attrs={"class": "win"})[0].find(name="i").text
    return weather, temperature, wind_direction, wind_scale


if __name__ == '__main__':
    url = 'http://www.weather.com.cn/weather/101120101.shtml'
    soup = getHtml(url)
    if soup is None:
        raise SystemExit('Failed to fetch the page after three attempts')
    weather, temperature, wind_direction, wind_scale = get_content(soup)
    with open(r'今日天气.txt', mode='w', encoding='utf-8') as f:
        f.write("天气:" + weather + "\n")
        f.write("气温:" + temperature + "\n")
        f.write("风向:" + wind_direction + "\n")
        f.write("风级:" + wind_scale + "\n")

  
 
Feature 2: Seven-Day Weather

Building on Feature 1, use datetime.timedelta(days=i) to derive each of the seven dates, as illustrated below.
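
As a minimal illustration of the date arithmetic (index 0 is today, matching the forecast indices parsed from the page):

import datetime

today = datetime.date.today()
for i in range(7):
    # forecast index i corresponds to today's date plus i days
    print(i, today + datetime.timedelta(days=i))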

import datetime
import os
import random

from bs4 import BeautifulSoup
import requests


# Fetch the page and parse it with bs4
def getHtml(url):
    # A single User-Agent gets blocked quickly, so keep several and pick one
    # at random on each request, to make the traffic look less like a bot
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36",
        "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36",
        "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
        "Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10.5; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
        "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
        "Mozilla/4.0 (compatible; MSIE 6.0; ) Opera/UCWEB7.0.2.37/28/999",
        "Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0; IEMobile/9.0; HTC; Titan)"
    ]
    header = {'User-Agent': random.choice(user_agent_list)}
    # On timeout or connection error, retry up to three times
    reconnect = 0
    while reconnect < 3:
        try:
            with requests.get(url=url, headers=header, stream=True, timeout=20) as rep:
                # The page is UTF-8; set the encoding explicitly, otherwise
                # the Chinese text comes back garbled
                rep.encoding = 'utf-8'
                # Parse the HTML
                soup = BeautifulSoup(rep.text, 'html.parser')
                return soup
        except (requests.exceptions.RequestException, ValueError):
            reconnect += 1
    # All three attempts failed
    return None


# Extract the weather data for day i (index 0 is today)
def get_content(soup, i):
    # The page lists seven days starting from today; index [i] selects the day
    weather = soup.findAll(name="p", attrs={"class": "wea"})[i].text
    weather = weather.replace('\n', '')  # drop the embedded newline
    # Temperature works the same way; index [i] is that day's high/low
    temperature = soup.findAll(name="p", attrs={"class": "tem"})[i].text
    temperature = temperature.strip()  # strip() removes surrounding whitespace
    # Wind direction; the direction lives in the <span> title attribute
    wind_direction = soup.findAll(name="p", attrs={"class": "win"})[i].find(name="span")['title']
    # Wind scale; the scale is the text of the <i> tag
    wind_scale = soup.findAll(name="p", attrs={"class": "win"})[i].find(name="i").text
    return weather, temperature, wind_direction, wind_scale


if __name__ == '__main__':
    file = r"./天气.txt"
    # Start from a clean slate: remove output left over from a previous run,
    # since the loop below appends to the file
    if os.path.exists(file):
        os.remove(file)
    url = 'http://www.weather.com.cn/weather/101120101.shtml'
    soup = getHtml(url)
    if soup is None:
        raise SystemExit('Failed to fetch the page after three attempts')
    for i in range(7):
        date = datetime.date.today() + datetime.timedelta(days=i)
        weather, temperature, wind_direction, wind_scale = get_content(soup, i)
        with open(file, mode='a', encoding='utf-8') as f:
            f.write("日期: " + str(date) + "\n")
            f.write("天气:" + weather + "\n")
            f.write("气温:" + temperature + "\n")
            f.write("风向:" + wind_direction + "\n")
            f.write("风级:" + wind_scale + "\n")
            f.write("\n")

  
 


Overall, nothing difficult here, but quite practical; worth bookmarking.

Source: zstar.blog.csdn.net, author zstar-_. Copyright belongs to the original author; please contact the author for permission to reprint.

Original link: zstar.blog.csdn.net/article/details/122850867
