Fengniao Forum Images -- Introduction
Let's play with something new today: the aiohttp library, which we'll use to speed up our crawler.

Installing the module follows the usual routine:

pip install aiohttp

Wait for the install to finish. If you want to go deeper, the official docs are a must: https://aiohttp.readthedocs.io/en/stable/

Now we can start writing code.

The page we're crawling this time is

https://bbs.fengniao.com/forum/forum_101_1_lastpost.html

Open the page and the page numbers are easy to find.
If you want to write an async-IO crawler with Asyncio + Aiohttp, note that you have to put `async` in front of every method that needs to run asynchronously.

Next, let's first try to fetch the source of the page above.

In the code we declare a function, fetch_img_url, which takes one parameter (the value could also be hard-coded).

The `with` context manager won't be explained again here; search for the relevant material yourself.

aiohttp.ClientSession() as session: creates a session object, and we then use that session object to open the page. A session supports several operations, such as post, get, put, and so on.

In the code, await response.text() waits for the page data to come back.

asyncio.get_event_loop creates the event loop, and run_until_complete takes care of scheduling and running the tasks. tasks can be a single coroutine, or a list.

```python
import aiohttp
import asyncio


async def fetch_img_url(num):
    url = f'https://bbs.fengniao.com/forum/forum_101_{num}_lastpost.html'  # f-string formatting
    # or simply: url = 'https://bbs.fengniao.com/forum/forum_101_1_lastpost.html'
    print(url)
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6726.400 QQBrowser/10.2.2265.400',
    }
    async with aiohttp.ClientSession() as session:
        # fetch the listing page
        async with session.get(url, headers=headers) as response:
            try:
                html = await response.text()  # page source
                print(html)
            except Exception as e:
                print("basic error")
                print(e)

# this part you can copy verbatim
loop = asyncio.get_event_loop()
tasks = asyncio.ensure_future(fetch_img_url(1))
results = loop.run_until_complete(tasks)
```
The last part of the code above can also be written as

```python
loop = asyncio.get_event_loop()
tasks = [fetch_img_url(1)]
results = loop.run_until_complete(asyncio.wait(tasks))
```
OK, if you've successfully grabbed the source code, you're only a tiny step away from the final goal. To cover several pages, turn tasks into a list comprehension:

```python
tasks = [fetch_img_url(num) for num in range(1, 10)]
```
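To see how such a task list runs concurrently end to end, here is a minimal self-contained sketch; the coroutine body is a dummy stand-in for the real aiohttp fetch, and it uses asyncio.gather rather than the asyncio.wait call shown above:

```python
import asyncio


async def fetch_img_url(num):
    # dummy stand-in for the real request: just build and return the URL
    await asyncio.sleep(0)
    return f'https://bbs.fengniao.com/forum/forum_101_{num}_lastpost.html'


async def main():
    tasks = [fetch_img_url(num) for num in range(1, 10)]
    # gather schedules all nine coroutines on the loop and keeps result order
    return await asyncio.gather(*tasks)


results = asyncio.run(main())
print(len(results))  # 9
```

asyncio.run (Python 3.7+) creates and closes the event loop for you, so the get_event_loop / run_until_complete pair is no longer needed.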
The next series of steps is a lot like the previous post: find the pattern.

https://bbs.fengniao.com/forum/forum_101_4_lastpost.html

Click an image to enter the inner page, then click an image on the inner page to reach a slideshow page, and click once more to reach the image player page.

Finally, in the source of the image player page, we find all the image links. So here's the question: how do we turn the first link above into the slideshow link???

Keep analyzing~~~~ ヾ(=・ω・=)o

How does

https://bbs.fengniao.com/forum/forum_101_4_lastpost.html

turn into a link like the one below?

https://bbs.fengniao.com/forum/pic/slide_101_10408464_89383854.html

Look at the first link again: open the F12 developer tools and inspect one of the images.

In the area marked with the yellow box in the screenshot we find the numbers we want, so all that's left is to match them out with a regular expression.

Find all the images, and extract the two groups of digits we want.
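As a concrete check of that idea, the regex below pulls the two groups of digits out of a sample href and drops them into the slide-URL template. The sample anchor is made up, but follows the link format of the listing page:

```python
import re

# hypothetical anchor in the listing page's link format
sample = '<a href="/forum/10408464_p89383854.html">thread</a>'

href_pattern = re.compile(r'href="/forum/(\d+?)_p(\d+?)\.html')
url_format = "https://bbs.fengniao.com/forum/pic/slide_101_{0}_{1}.html"

m = href_pattern.search(sample)
slide_url = url_format.format(m.group(1), m.group(2))
print(slide_url)  # https://bbs.fengniao.com/forum/pic/slide_101_10408464_89383854.html
```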
```python
import re

import aiohttp
import asyncio


async def fetch_img_url(num):
    url = f'https://bbs.fengniao.com/forum/forum_101_{num}_lastpost.html'
    print(url)
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.6726.400 QQBrowser/10.2.2265.400',
    }
    async with aiohttp.ClientSession() as session:
        # get the slideshow addresses
        async with session.get(url, headers=headers) as response:
            try:
                ###############################################
                url_format = "https://bbs.fengniao.com/forum/pic/slide_101_{0}_{1}.html"
                html = await response.text()  # page source
                # pull the two groups of digits out of every thread link
                href_pattern = re.compile(r'href="/forum/(\d+?)_p(\d+?)\.html')
                urls = [url_format.format(slide_id, page_id)
                        for slide_id, page_id in href_pattern.findall(html)]
                ##############################################
            except Exception as e:
                print("basic error")
                print(e)
```
The code is done, and we now have the URLs we want. Next, read each URL's content and match the image links we're after.
```python
import json
import re


async def fetch_img_url(num):
    # copy the code from above
    async with aiohttp.ClientSession() as session:
        # get the slideshow addresses
        async with session.get(url, headers=headers) as response:
            try:
                # copy the code from above
                ################################################################
                for img_slider in urls:
                    try:
                        async with session.get(img_slider, headers=headers) as slider:
                            slider_html = await slider.text()  # page source
                            try:
                                pic_list_pattern = re.compile(r'var picList = \[(.*)?\];')
                                pic_list = "[{}]".format(pic_list_pattern.search(slider_html).group(1))
                                pic_json = json.loads(pic_list)  # image list in hand
                                print(pic_json)
                            except Exception as e:
                                print("debugging error")
                                print(pic_list)
                                print("*" * 100)
                                print(e)
                    except Exception as e:
                        print("error fetching image list")
                        print(img_slider)
                        print(e)
                        continue
                ################################################################
                print("{} done".format(url))
            except Exception as e:
                print("basic error")
                print(e)
```
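To make the picList parsing concrete, here is the same two-step extraction run against a tiny fabricated page fragment; the JSON content is invented, and the real page carries more keys than downloadPic:

```python
import json
import re

# fabricated slide-page fragment in the shape the code expects
slider_html = ('var picList = ['
               '{"downloadPic":"https://example.com/a.jpg"},'
               '{"downloadPic":"https://example.com/b.jpg"}];')

pic_list_pattern = re.compile(r'var picList = \[(.*)?\];')
# group(1) is the bracket-less body; re-wrap it so json.loads sees a list
pic_list = "[{}]".format(pic_list_pattern.search(slider_html).group(1))
pic_json = json.loads(pic_list)  # now a normal Python list of dicts
print([img["downloadPic"] for img in pic_json])
```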
The final image JSON is in hand. Last step: download the images. Ta-da~~~~, one swift burst of operations and the images come down.
```python
async def fetch_img_url(num):
    # find the code above
    async with aiohttp.ClientSession() as session:
        # get the slideshow addresses
        async with session.get(url, headers=headers) as response:
            try:
                # find the code above
                for img_slider in urls:
                    try:
                        async with session.get(img_slider, headers=headers) as slider:
                            # find the code above
                            ##########################################################
                            for img in pic_json:
                                try:
                                    img = img["downloadPic"]
                                    async with session.get(img, headers=headers) as img_res:
                                        imgcode = await img_res.read()  # raw image bytes
                                        with open("images/{}".format(img.split('/')[-1]), 'wb') as f:
                                            f.write(imgcode)
                                except Exception as e:
                                    print("image download error")
                                    print(e)
                                    continue
                            ###############################################################
                    except Exception as e:
                        print("error fetching image list")
                        print(img_slider)
                        print(e)
                        continue
                print("{} done".format(url))
            except Exception as e:
                print("basic error")
                print(e)
```
The images will be generated quickly in the images folder you created in advance.
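The download code assumes an images directory already exists; creating it up front avoids a FileNotFoundError on the first write. This is a small addition, not in the original code:

```python
import os

# create the output directory once; exist_ok makes repeat calls harmless
os.makedirs("images", exist_ok=True)
```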
tasks can hold at most 1024 coroutines, but opening 100 is plenty: with too much concurrency, the target server can't take it.
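One way to hold concurrency at that level (a sketch, not from the original post) is an asyncio.Semaphore shared by all the coroutines, so only 100 of them do work at any moment no matter how many are scheduled:

```python
import asyncio


async def fetch_img_url(sem, num):
    async with sem:  # at most 100 coroutines pass this point at once
        await asyncio.sleep(0)  # stand-in for the real aiohttp request
        return num


async def main():
    sem = asyncio.Semaphore(100)  # the cap suggested in the text
    tasks = [fetch_img_url(sem, num) for num in range(1, 1025)]
    return await asyncio.gather(*tasks)


results = asyncio.run(main())
print(len(results))  # 1024
```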
Once all of the above runs, add a few finishing touches, such as saving into a designated folder, and you're done.
Source: https://www.2cto.com/kf/201905/806735.html