Life is short, I use Python.
Previous posts in this series:
Python Crawler for Beginners (1): Introduction https://www.geekdigging.com/2019/11/13/3303836941/
Python Crawler for Beginners (2): Prerequisites (1), Installing Basic Libraries https://www.geekdigging.com/2019/11/20/2586166930/
Python Crawler for Beginners (3): Prerequisites (2), Linux Basics https://www.geekdigging.com/2019/11/21/1005563697/
Python Crawler for Beginners (4): Prerequisites (3), Docker Basics https://www.geekdigging.com/2019/11/22/3679472340/
Python Crawler for Beginners (5): Prerequisites (4), Database Basics https://www.geekdigging.com/2019/11/24/334078215/
Python Crawler for Beginners (6): Prerequisites (5), Installing Crawler Frameworks https://www.geekdigging.com/2019/11/25/1881661601/
Python Crawler for Beginners (7): HTTP Basics https://www.geekdigging.com/2019/11/26/1197821400/
Python Crawler for Beginners (8): Web Page Basics https://www.geekdigging.com/2019/11/27/101847406/
Python Crawler for Beginners (9): Crawler Basics https://www.geekdigging.com/2019/11/28/1668465912/
Python Crawler for Beginners (10): Sessions and Cookies https://www.geekdigging.com/2019/12/01/2475257648/
Python Crawler for Beginners (11): Basic Usage of urllib (1) https://www.geekdigging.com/2019/12/02/2333822325/
Python Crawler for Beginners (12): Basic Usage of urllib (2) https://www.geekdigging.com/2019/12/03/819896244/
Python Crawler for Beginners (13): Basic Usage of urllib (3) https://www.geekdigging.com/2019/12/04/2992515886/
Python Crawler for Beginners (14): Basic Usage of urllib (4) https://www.geekdigging.com/2019/12/05/104488944/
Python Crawler for Beginners (15): Basic Usage of urllib (5) https://www.geekdigging.com/2019/12/07/2788855167/
Python Crawler for Beginners (16): urllib in Practice, Scraping Images from Meizitu https://www.geekdigging.com/2019/12/09/1691033431/
Python Crawler for Beginners (17): Basic Usage of Requests https://www.geekdigging.com/2019/12/10/1910005577/
Python Crawler for Beginners (18): Advanced Requests Operations https://www.geekdigging.com/2019/12/11/1468953802/
Python Crawler for Beginners (19): XPath Basics https://www.geekdigging.com/2019/12/12/3568648672/
Python Crawler for Beginners (20): Advanced XPath https://www.geekdigging.com/2019/12/13/2569867940/
Python Crawler for Beginners (21): The Beautiful Soup Parsing Library (Part 1) https://www.geekdigging.com/2019/12/15/2789385418/
Python Crawler for Beginners (22): The Beautiful Soup Parsing Library (Part 2) https://www.geekdigging.com/2019/12/16/876770087/
Python Crawler for Beginners (23): Getting Started with the pyquery Parsing Library https://www.geekdigging.com/2019/12/17/876770088/
Python Crawler for Beginners (24): 2019 Douban Movie Rankings https://www.geekdigging.com/2019/12/18/1275791678/
Python Crawler for Beginners (25): Scraping Stock Information https://www.geekdigging.com/2019/12/19/1066903974/
Python Crawler for Beginners (26): Why You Can't Afford Second-Hand Housing in Shanghai https://www.geekdigging.com/2019/12/20/788803015/
Python Crawler for Beginners (27): The Selenium Automated Testing Framework, from Beginner to Giving Up (Part 1) https://www.geekdigging.com/2019/12/22/151891020/
Python Crawler for Beginners (28): The Selenium Automated Testing Framework, from Beginner to Giving Up (Part 2) https://www.geekdigging.com/2019/12/24/1100772905/
Python Crawler for Beginners (29): Using Selenium to Fetch Product Information from a Major E-Commerce Site https://www.geekdigging.com/2019/12/25/7469407721/
In urllib, a proxy is configured with ProxyHandler, from which build_opener constructs an opener that routes requests through the proxy:

from urllib.error import URLError
from urllib.request import ProxyHandler, build_opener

# Free proxy addresses that were alive at the time of writing;
# they have most likely expired, so substitute working ones.
proxy_handler = ProxyHandler({
    'http': 'http://182.34.37.0:9999',
    'https': 'https://117.69.150.84:9999'
})
opener = build_opener(proxy_handler)

try:
    response = opener.open('https://httpbin.org/get')
    print(response.read().decode('utf-8'))
except URLError as e:
    print(e.reason)
Sample output; the origin field shows the proxy's IP rather than our own, confirming the request went through the proxy:

{
  "args": {},
  "headers": {
    "Accept-Encoding": "identity",
    "Host": "httpbin.org",
    "User-Agent": "Python-urllib/3.7"
  },
  "origin": "117.69.150.84, 117.69.150.84",
  "url": "https://httpbin.org/get"
}
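If a proxy requires authentication, urllib accepts credentials embedded directly in the proxy URL in the user:password@host:port form. A minimal sketch; the credentials and address below are placeholders, not values from this article:

```python
from urllib.request import ProxyHandler, build_opener

# Placeholder credentials and address -- substitute a real authenticated proxy.
proxy_handler = ProxyHandler({
    'http': 'http://user:password@127.0.0.1:9999',
    'https': 'http://user:password@127.0.0.1:9999',
})
opener = build_opener(proxy_handler)

# opener.open('https://httpbin.org/get') would now send traffic through the
# authenticated proxy; the call is omitted here since the address is fake.
```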
In Requests, the same thing is much simpler: pass a dict to the proxies parameter of the request method:

import requests

# Free proxy addresses from the time of writing; replace with working ones.
proxies = {
    'http': 'http://59.52.186.117:9999',
    'https': 'https://222.95.241.6:3000',
}

try:
    response = requests.get('https://httpbin.org/get', proxies=proxies)
    print(response.text)
except requests.exceptions.ConnectionError as e:
    print('Error', e.args)
Sample output; again the origin field reflects the proxy's IP:

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate",
    "Host": "httpbin.org",
    "User-Agent": "python-requests/2.22.0"
  },
  "origin": "222.95.241.6, 222.95.241.6",
  "url": "https://httpbin.org/get"
}
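Requests supports the same user:password@host:port form for authenticated proxies; it sends the credentials in a Proxy-Authorization header. A sketch with placeholder values:

```python
import requests

# Placeholder credentials and address -- substitute a real authenticated proxy.
proxies = {
    'http': 'http://user:password@127.0.0.1:3000',
    'https': 'http://user:password@127.0.0.1:3000',
}

# requests.get('https://httpbin.org/get', proxies=proxies) would route the
# request through this proxy; the call is omitted since the address is fake.
```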
For Selenium, the proxy is handed to Chrome as a startup argument via ChromeOptions:

from selenium import webdriver

proxy = '222.95.241.6:3000'
chrome_options = webdriver.ChromeOptions()
# The scheme in --proxy-server is the protocol used to talk to the proxy
# itself; plain HTTP proxies, including most free ones, use http://.
chrome_options.add_argument('--proxy-server=http://' + proxy)
# Selenium 4 renamed the keyword argument from chrome_options to options.
driver = webdriver.Chrome(options=chrome_options)
driver.get('https://httpbin.org/get')
A few sites that publish free proxy lists:

http://www.ip3366.net/
https://www.kuaidaili.com/free/
https://www.xicidaili.com/
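Free proxies from lists like these die quickly, so it pays to verify one before relying on it. A minimal liveness check; the helper name check_proxy and the use of httpbin.org as the test target are this sketch's own choices, not part of the article:

```python
import requests

def check_proxy(proxy, timeout=3):
    """Return True if `proxy` (a host:port string) can relay a GET to httpbin."""
    proxies = {
        'http': 'http://' + proxy,
        'https': 'http://' + proxy,
    }
    try:
        response = requests.get('https://httpbin.org/get',
                                proxies=proxies, timeout=timeout)
        return response.status_code == 200
    except requests.exceptions.RequestException:
        # Dead proxies surface as ProxyError / ConnectTimeout, both
        # subclasses of RequestException.
        return False

# An unreachable proxy address fails fast:
print(check_proxy('127.0.0.1:1'))
```

Running this over a scraped list and keeping only the proxies that return True gives a small pool of usable addresses.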
Source: http://www.bubuko.com/infodetail-3357141.html