In the previous article we introduced the crawler implementation and its data-fetching functionality. In practice several issues come up: a site's robots.txt file lists URLs that are off limits to crawlers, the crawler may need proxy support, and some sites apply anti-bot controls, which is why we also design a download throttling feature for the crawler.
1. Parsing robots.txt
First, we need to parse the robots.txt file so that we avoid downloading URLs that are disallowed for crawling. Python's built-in robotparser module makes this straightforward, as the code below shows.
The robotparser module first loads the robots.txt file and then uses the can_fetch() function to determine whether a given user agent is allowed to access a page.
def get_robots(url):
    """Initialize robots parser for this domain
    """
    rp = robotparser.RobotFileParser()
    rp.set_url(urlparse.urljoin(url, '/robots.txt'))
    rp.read()
    return rp
To integrate this check into the crawler, we add it inside the crawl loop.
while crawl_queue:
    url = crawl_queue.pop()
    # check url passes robots.txt restrictions
    if rp.can_fetch(user_agent, url):
        ...
    else:
        print 'Blocked by robots.txt:', url
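As a quick standalone check, get_robots() and can_fetch() can also be exercised directly. This is only a sketch; the URL and user-agent strings are just the example values that appear later in this article.

rp = get_robots('http://example.webscraping.com')
# can_fetch() returns True if robots.txt allows this user agent to fetch the URL
print rp.can_fetch('GoodCrawler', 'http://example.webscraping.com/index')
print rp.can_fetch('BadCrawler', 'http://example.webscraping.com/index')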
2. Proxy support
Sometimes we need to access a site through a proxy. Netflix, for example, is blocked in most countries outside the United States. Supporting a proxy with urllib2 is not as easy as you might imagine (you can try the friendlier Python HTTP module requests for this instead). Below is the code for proxy support with urllib2.
proxy = …
opener = urllib2.build_opener()
proxy_params = {urlparse.urlparse(url).scheme: proxy}
opener.add_handler(urllib2.ProxyHandler(proxy_params))
response = opener.open(request)
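For comparison, here is a minimal sketch of the same idea with the requests module mentioned above. The proxy address is a placeholder assumption, and url is the page being fetched, as in the snippet above.

import requests

proxy = 'http://127.0.0.1:8080'  # placeholder proxy address; substitute a real one
proxies = {'http': proxy, 'https': proxy}
response = requests.get(url, proxies=proxies)  # url is the page to fetch
html = response.text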
Below is the new version of the download function with proxy support integrated.
def download(url, headers, proxy, num_retries, data=None):
    print 'Downloading:', url
    request = urllib2.Request(url, data, headers)
    opener = urllib2.build_opener()
    if proxy:
        proxy_params = {urlparse.urlparse(url).scheme: proxy}
        opener.add_handler(urllib2.ProxyHandler(proxy_params))
    try:
        response = opener.open(request)
        html = response.read()
        code = response.code
    except urllib2.URLError as e:
        print 'Download error:', e.reason
        html = ''
        if hasattr(e, 'code'):
            code = e.code
            if num_retries > 0 and 500 <= code < 600:
                # retry 5XX HTTP errors
                html = download(url, headers, proxy, num_retries - 1, data)
        else:
            code = None
    return html
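A minimal call looks like this; the header value is the default user agent used later in this article, and the proxy address mentioned in the comment is only a placeholder assumption.

headers = {'User-agent': 'wswp'}
# pass proxy=None to connect directly, or a placeholder address such as 'http://127.0.0.1:8080'
html = download('http://example.webscraping.com', headers, proxy=None, num_retries=1)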
3. Download throttling
If we crawl a website too quickly, we risk being banned or overloading the server. To reduce these risks, we can add a delay between downloads and so throttle the crawler. Below is a class that implements this.
class Throttle:
    """Throttle downloading by sleeping between requests to same domain
    """
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        """Delay if have accessed this domain recently
        """
        domain = urlparse.urlsplit(url).netloc
        last_accessed = self.domains.get(domain)
        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()
The Throttle class records the last time each domain was accessed; if the time elapsed since that access is shorter than the specified delay, it sleeps for the difference. We can call Throttle before every download to throttle the crawler.
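Used together with download(), the pattern looks like this (a sketch of what the final version below does):

throttle = Throttle(delay=5)
# inside the crawl loop, before each download:
throttle.wait(url)
html = download(url, headers, proxy=proxy, num_retries=num_retries)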
4. Avoiding spider traps
So far, our crawler follows every link it has not visited before. However, some sites generate page content dynamically, which can produce an unbounded number of pages. For example, if a site has an online calendar with links to the next month or year, then the page for next month will in turn link to the month after that, and so on without end. This situation is known as a spider trap.
A simple way to avoid being caught in a spider trap is to keep track of how many links were followed to reach the current page, in other words its depth. When the maximum depth is reached, the crawler no longer adds links found on that page to the queue. To implement this, we change the seen variable: instead of only recording which links have been visited, it becomes a dict that also records the depth of each page.
def link_crawler(…, max_depth=2):
    seen = {}
    …
    depth = seen[url]
    if depth != max_depth:
        for link in links:
            if link not in seen:
                seen[link] = depth + 1
                crawl_queue.append(link)
With this feature in place we can be confident that the crawl will eventually terminate. To disable it, just set max_depth to a negative number; the current depth will then never equal it.
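For intuition, this is what seen might contain after crawling two levels from a seed page; the URLs are purely illustrative.

seen = {
    'http://example.webscraping.com/': 0,        # seed page, depth 0
    'http://example.webscraping.com/index': 1,   # linked from the seed page
    'http://example.webscraping.com/view/1': 2,  # linked from /index
}
# with max_depth=2, links found on /view/1 would not be added to the queue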
Final version
import re
import urlparse
import urllib2
import time
from datetime import datetime
import robotparser
import Queue
from scrape_callback3 import ScrapeCallback


def link_crawler(seed_url, link_regex=None, delay=5, max_depth=-1, max_urls=-1, headers=None, user_agent='wswp',
                 proxy=None, num_retries=1, scrape_callback=None):
    """Crawl from the given seed URL following links matched by link_regex
    """
    # the queue of URL's that still need to be crawled
    crawl_queue = [seed_url]
    # the URL's that have been seen and at what depth
    seen = {seed_url: 0}
    # track how many URL's have been downloaded
    num_urls = 0
    rp = get_robots(seed_url)
    throttle = Throttle(delay)
    headers = headers or {}
    if user_agent:
        headers['User-agent'] = user_agent

    while crawl_queue:
        url = crawl_queue.pop()
        depth = seen[url]
        # check url passes robots.txt restrictions
        if rp.can_fetch(user_agent, url):
            throttle.wait(url)
            html = download(url, headers, proxy=proxy, num_retries=num_retries)
            links = []
            if scrape_callback:
                links.extend(scrape_callback(url, html) or [])

            if depth != max_depth:
                # can still crawl further
                if link_regex:
                    # filter for links matching our regular expression
                    links.extend(link for link in get_links(html) if re.match(link_regex, link))

                for link in links:
                    link = normalize(seed_url, link)
                    # check whether already crawled this link
                    if link not in seen:
                        seen[link] = depth + 1
                        # check link is within same domain
                        if same_domain(seed_url, link):
                            # success! add this new link to queue
                            crawl_queue.append(link)

            # check whether have reached downloaded maximum
            num_urls += 1
            if num_urls == max_urls:
                break
        else:
            print 'Blocked by robots.txt:', url


class Throttle:
    """Throttle downloading by sleeping between requests to same domain
    """
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        """Delay if have accessed this domain recently
        """
        domain = urlparse.urlsplit(url).netloc
        last_accessed = self.domains.get(domain)
        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()


def download(url, headers, proxy, num_retries, data=None):
    print 'Downloading:', url
    request = urllib2.Request(url, data, headers)
    opener = urllib2.build_opener()
    if proxy:
        proxy_params = {urlparse.urlparse(url).scheme: proxy}
        opener.add_handler(urllib2.ProxyHandler(proxy_params))
    try:
        response = opener.open(request)
        html = response.read()
        code = response.code
    except urllib2.URLError as e:
        print 'Download error:', e.reason
        html = ''
        if hasattr(e, 'code'):
            code = e.code
            if num_retries > 0 and 500 <= code < 600:
                # retry 5XX HTTP errors
                html = download(url, headers, proxy, num_retries - 1, data)
        else:
            code = None
    return html


def normalize(seed_url, link):
    """Normalize this URL by removing hash and adding domain
    """
    link, _ = urlparse.urldefrag(link)  # remove hash to avoid duplicates
    return urlparse.urljoin(seed_url, link)


def same_domain(url1, url2):
    """Return True if both URL's belong to same domain
    """
    return urlparse.urlparse(url1).netloc == urlparse.urlparse(url2).netloc


def get_robots(url):
    """Initialize robots parser for this domain
    """
    rp = robotparser.RobotFileParser()
    rp.set_url(urlparse.urljoin(url, '/robots.txt'))
    rp.read()
    return rp


def get_links(html):
    """Return a list of links from html
    """
    # a regular expression to extract all links from the webpage
    webpage_regex = re.compile('<a[^>]+href=["\'](.*?)["\']', re.IGNORECASE)
    # list of all links from the webpage
    return webpage_regex.findall(html)
if __name__ == '__main__':
    # link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, user_agent='BadCrawler')
    # link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, max_depth=1,
    #              user_agent='GoodCrawler')
    link_crawler('http://fund.eastmoney.com', r'/fund.html#os_0;isall_0;ft_;pt_1', max_depth=-1,
                 scrape_callback=ScrapeCallback())
Source: http://www.bubuko.com/infodetail-1975119.html