Requests: 让 HTTP 服务人类¶
发行版本 v2.18.1. (安装说明)
Requests 是唯一一个非转基因的 Python HTTP 库,人类可以安全享用。
警告:非专业使用其他 HTTP 库会导致危险的副作用,包括:安全缺陷症、冗余代码症、重新发明轮子症、啃文档症、抑郁、头疼、甚至死亡。
看吧,这就是 Requests 的威力:
>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
>>> r.text
u'{"type":"User"...'
>>> r.json()
{u'private_gists': 419, u'total_private_repos': 77, ...}
Requests 允许你发送纯天然,植物饲养的 HTTP/1.1 请求,无需手工劳动。你不需要手动为 URL 添加查询字串,也不需要对 POST 数据进行表单编码。Keep-alive 和 HTTP 连接池的功能是 100% 自动化的,一切动力都来自于根植在 Requests 内部的 urllib3。
用户见证¶
Twitter、Spotify、Microsoft、Amazon、Lyft、BuzzFeed、Reddit、NSA、女王殿下的政府、Amazon、Google、Twilio、Mozilla、Heroku、PayPal、NPR、Obama for America、Transifex、Native Instruments、Washington Post、Twitter、SoundCloud、Kippt、Readability、以及若干不愿公开身份的联邦政府机构都在内部使用。
- Armin Ronacher: Requests 是一个完美的例子,它证明了通过恰到好处的抽象,API 可以写得多么优美。
- Matt DeBoard: 我要想个办法,把 @kennethreitz 写的 Python requests 模块做成纹身。一字不漏。
- Daniel Greenfeld: 感谢 @kennethreitz 的 Requests 库,刚刚用 10 行代码炸掉了 1200 行意大利面代码。今天真是爽呆了!
- Kenny Meyers: Python HTTP: 疑惑与否,都去用 Requests 吧。简单优美,而且符合 Python 风格。
功能特性¶
Requests 完全满足今日 web 的需求。
- Keep-Alive & 连接池
- 国际化域名和 URL
- 带持久 Cookie 的会话
- 浏览器式的 SSL 认证
- 自动内容解码
- 基本/摘要式的身份认证
- 优雅的 key/value Cookie
- 自动解压
- Unicode 响应体
- HTTP(S) 代理支持
- 文件分块上传
- 流下载
- 连接超时
- 分块请求
- 支持 .netrc
Requests 支持 Python 2.6—2.7 以及 3.3—3.7,而且能在 PyPy 下完美运行。
用户指南¶
这部分文档是以文字为主,从 Requests 的背景讲起,然后对 Requests 的重点功能做了逐一的介绍。
简介¶
开发哲学¶
Requests 是以 PEP 20 的箴言为中心开发的:
- Beautiful is better than ugly.(美丽优于丑陋)
- Explicit is better than implicit.(直白优于含蓄)
- Simple is better than complex.(简单优于复杂)
- Complex is better than complicated.(复杂优于繁琐)
- Readability counts.(可读性很重要)
对于 Requests 所有的贡献都应牢记这些重要的准则。
Apache2 协议¶
现在你找到的许多开源项目都是以 GPL 协议发布的。虽然 GPL 有它自己的一席之地, 但在开始你的下一个开源项目时,GPL 应该不再是你的默认选择。
项目发行于 GPL 协议之后,就不能用于任何本身没开源的商业产品中。
MIT、BSD、ISC、Apache2 许可都是优秀的替代品,它们允许你的开源软件自由应用在私有闭源软件中。
Requests 的发布许可为 Apache2 License.
Requests 协议¶
Copyright 2017 Kenneth Reitz
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
安装 Requests¶
这部分文档包含了 Requests 的安装过程,使用任何软件的第一步就是正确地安装它。
pip install requests¶
要安装 Requests,只要在你的终端中运行这个简单命令即可:
$ pip install requests
如果你没有安装 pip (啧啧),这个 Python installation guide 可以带你完成这一流程。
快速上手¶
迫不及待了吗?本页内容为如何入门 Requests 提供了很好的指引。其假设你已经安装了 Requests。如果还没有,去安装一节看看吧。
首先,确认一下:
- Requests 已安装
- Requests 是最新的
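可以用下面这个小片段确认这两点(版本号字符串因安装环境而异,示例里不假定具体版本):

```python
import requests

# 打印已安装的 Requests 版本;若导入失败,说明还没有安装
print(requests.__version__)
```

把打印出的版本和本页顶部的发行版本对比,即可知道是否需要升级。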
让我们从一些简单的示例开始吧。
发送请求¶
使用 Requests 发送网络请求非常简单。
一开始要导入 Requests 模块:
>>> import requests
然后,尝试获取某个网页。本例子中,我们来获取 Github 的公共时间线:
>>> r = requests.get('https://api.github.com/events')
现在,我们有一个名为 r 的 Response 对象。我们可以从这个对象中获取所有我们想要的信息。
Requests 简便的 API 意味着所有 HTTP 请求类型都是显而易见的。例如,你可以这样发送一个 HTTP POST 请求:
>>> r = requests.post('http://httpbin.org/post', data = {'key':'value'})
漂亮,对吧?那么其他 HTTP 请求类型:PUT,DELETE,HEAD 以及 OPTIONS 又是如何的呢?都是一样的简单:
>>> r = requests.put('http://httpbin.org/put', data = {'key':'value'})
>>> r = requests.delete('http://httpbin.org/delete')
>>> r = requests.head('http://httpbin.org/get')
>>> r = requests.options('http://httpbin.org/get')
都很不错吧,但这也仅是 Requests 的冰山一角呢。
传递 URL 参数¶
你也许经常想为 URL 的查询字符串(query string)传递某种数据。如果你是手工构建 URL,那么数据会以键/值对的形式置于 URL 中,跟在一个问号的后面,例如 httpbin.org/get?key=val。
Requests 允许你使用 params 关键字参数,以一个字符串字典来提供这些参数。举例来说,如果你想传递 key1=value1 和 key2=value2 到 httpbin.org/get,那么你可以使用如下代码:
>>> payload = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.get("http://httpbin.org/get", params=payload)
通过打印输出该 URL,你能看到 URL 已被正确编码:
>>> print(r.url)
http://httpbin.org/get?key2=value2&key1=value1
注意字典里值为 None 的键都不会被添加到 URL 的查询字符串里。
你还可以将一个列表作为值传入:
>>> payload = {'key1': 'value1', 'key2': ['value2', 'value3']}
>>> r = requests.get('http://httpbin.org/get', params=payload)
>>> print(r.url)
http://httpbin.org/get?key1=value1&key2=value2&key2=value3
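上面两条规则(None 值被忽略、列表值展开为多个同名参数)可以完全离线地验证:PreparedRequest 正是 Requests 内部负责编码 URL 的对象,下面的参数名只是随意取的示例:

```python
from requests.models import PreparedRequest

# 离线演示 Requests 的 URL 参数编码,不发送任何真实请求
p = PreparedRequest()
p.prepare_url('http://httpbin.org/get',
              {'key1': 'value1',
               'key2': ['value2', 'value3'],   # 列表值展开为多个同名参数
               'skipped': None})               # 值为 None 的键会被忽略
print(p.url)
```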
响应内容¶
我们能读取服务器响应的内容。再次以 GitHub 时间线为例:
>>> import requests
>>> r = requests.get('https://api.github.com/events')
>>> r.text
u'[{"repository":{"open_issues":0,"url":"https://github.com/...
Requests 会自动解码来自服务器的内容。大多数 unicode 字符集都能被无缝地解码。
请求发出后,Requests 会基于 HTTP 头部对响应的编码作出有根据的推测。当你访问 r.text 之时,Requests 会使用其推测的文本编码。你可以找出 Requests 使用了什么编码,并且能够使用 r.encoding 属性来改变它:
>>> r.encoding
'utf-8'
>>> r.encoding = 'ISO-8859-1'
如果你改变了编码,每当你访问 r.text,Requests 都将会使用 r.encoding 的新值。你可能希望在使用特殊逻辑计算出文本编码的情况下来修改编码,比如 HTML 和 XML 自身可以指定编码。这样的话,你应该使用 r.content 来找到编码,然后设置 r.encoding 为相应的编码。这样就能使用正确的编码解析 r.text 了。
在你需要的情况下,Requests 也可以使用定制的编码。如果你创建了自己的编码,并使用 codecs 模块进行注册,你就可以轻松地使用这个解码器名称作为 r.encoding 的值,然后由 Requests 来为你处理编码。
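r.encoding 对 r.text 的影响可以不联网地演示:下面手工构造一个 Response 并直接设置其 _content(这是内部属性,仅为演示用,正常代码不需要这样做):

```python
from requests.models import Response

# 手工构造响应体:b'caf\xe9' 是 'café' 的 Latin-1 编码
r = Response()
r._content = b'caf\xe9'
r.encoding = 'ISO-8859-1'
print(r.text)            # café

# 换一个编码后,r.text 会按新编码重新解码;
# b'\xe9' 不是合法的 UTF-8,解码时会以替换字符代替
r.encoding = 'utf-8'
print(r.text)
```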
二进制响应内容¶
对于非文本请求,你也能以字节的方式访问请求响应体:
>>> r.content
b'[{"repository":{"open_issues":0,"url":"https://github.com/...
Requests 会自动为你解码 gzip 和 deflate 传输编码的响应数据。
例如,以请求返回的二进制数据创建一张图片,你可以使用如下代码:
>>> from PIL import Image
>>> from io import BytesIO
>>> i = Image.open(BytesIO(r.content))
JSON 响应内容¶
Requests 中也有一个内置的 JSON 解码器,助你处理 JSON 数据:
>>> import requests
>>> r = requests.get('https://api.github.com/events')
>>> r.json()
[{u'repository': {u'open_issues': 0, u'url': 'https://github.com/...
如果 JSON 解码失败, r.json() 就会抛出一个异常。例如,若响应内容是 401 (Unauthorized),尝试访问 r.json() 将会抛出 ValueError: No JSON object could be decoded 异常。
需要注意的是,成功调用 r.json() 并不意味着响应的成功。有的服务器会在失败的响应中包含一个 JSON 对象(比如 HTTP 500 的错误细节),这种 JSON 会被解码返回。要检查请求是否成功,请使用 r.raise_for_status() 或者检查 r.status_code 是否和你的期望相同。
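这两种行为同样可以离线演示。下面手工构造两个响应(直接设置内部属性 _content 仅用于演示),一个是合法 JSON,一个不是:

```python
from requests.models import Response

ok = Response()
ok._content = b'{"type": "User"}'
ok.encoding = 'utf-8'
print(ok.json())          # {'type': 'User'}

bad = Response()
bad._content = b'Unauthorized'    # 不是合法的 JSON
bad.encoding = 'utf-8'
try:
    bad.json()
except ValueError as e:
    print('JSON 解码失败:', e)
```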
原始响应内容¶
在罕见的情况下,你可能想获取来自服务器的原始套接字响应,那么你可以访问 r.raw。如果你确实想这么干,那请你确保在初始请求中设置了 stream=True。具体你可以这么做:
>>> r = requests.get('https://api.github.com/events', stream=True)
>>> r.raw
<requests.packages.urllib3.response.HTTPResponse object at 0x101194810>
>>> r.raw.read(10)
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03'
但一般情况下,你应该以下面的模式将文本流保存到文件:
with open(filename, 'wb') as fd:
    for chunk in r.iter_content(chunk_size):
        fd.write(chunk)
使用 Response.iter_content 将会处理大量你直接使用 Response.raw 时不得不自行处理的内容。当流式下载时,上面是优先推荐的获取内容方式。注意 chunk_size 可以自由调整为更适合你使用场景的数值。
定制请求头¶
如果你想为请求添加 HTTP 头部,只要简单地传递一个 dict 给 headers 参数就可以了。
例如,在前一个示例中我们没有指定 content-type:
>>> url = 'https://api.github.com/some/endpoint'
>>> headers = {'user-agent': 'my-app/0.0.1'}
>>> r = requests.get(url, headers=headers)
注意: 定制 header 的优先级低于某些特定的信息源,例如:
- 如果在 .netrc 中设置了用户认证信息,使用 headers= 设置的授权就不会生效。而如果设置了 auth= 参数,.netrc 的设置就无效了。
- 如果被重定向到别的主机,授权 header 就会被删除。
- 代理授权 header 会被 URL 中提供的代理身份覆盖掉。
- 在我们能判断内容长度的情况下,header 的 Content-Length 会被改写。
更进一步讲,Requests 不会基于定制 header 的具体情况改变自己的行为。只不过在最后的请求中,所有的 header 信息都会被传递进去。
注意: 所有的 header 值必须是 string、bytestring 或者 unicode。尽管传递 unicode header 也是允许的,但不建议这样做。
更加复杂的 POST 请求¶
通常,你想要发送一些编码为表单形式的数据——非常像一个 HTML 表单。要实现这个,只需简单地传递一个字典给 data 参数。你的数据字典在发出请求时会自动编码为表单形式:
>>> payload = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.post("http://httpbin.org/post", data=payload)
>>> print(r.text)
{
...
"form": {
"key2": "value2",
"key1": "value1"
},
...
}
你还可以为 data 参数传入一个元组列表。在表单中多个元素使用同一 key 的时候,这种方式尤其有效:
>>> payload = (('key1', 'value1'), ('key1', 'value2'))
>>> r = requests.post('http://httpbin.org/post', data=payload)
>>> print(r.text)
{
...
"form": {
"key1": [
"value1",
"value2"
]
},
...
}
很多时候你想要发送的数据并非编码为表单形式的。如果你传递一个 string 而不是一个 dict,那么数据会被直接发布出去。
例如,Github API v3 接受编码为 JSON 的 POST/PATCH 数据:
>>> import json
>>> url = 'https://api.github.com/some/endpoint'
>>> payload = {'some': 'data'}
>>> r = requests.post(url, data=json.dumps(payload))
此处除了可以自行对 dict 进行编码,你还可以使用 json 参数直接传递,然后它就会被自动编码。这是 2.4.2 版的新加功能:
>>> url = 'https://api.github.com/some/endpoint'
>>> payload = {'some': 'data'}
>>> r = requests.post(url, json=payload)
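data= 与 json= 两种方式的差别,可以在不发送请求的情况下用 Request.prepare() 观察到;下面的 URL 沿用文档中假想的端点:

```python
from requests import Request

url = 'https://api.github.com/some/endpoint'   # 文档中假想的端点
payload = {'some': 'data'}

p1 = Request('POST', url, data=payload).prepare()
p2 = Request('POST', url, json=payload).prepare()

print(p1.headers['Content-Type'])   # application/x-www-form-urlencoded
print(p2.headers['Content-Type'])   # application/json
print(p1.body)                      # some=data
```

可以看到 data= 会做表单编码,而 json= 会自动序列化为 JSON 并设置相应的 Content-Type。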
POST一个多部分编码(Multipart-Encoded)的文件¶
Requests 使得上传多部分编码文件变得很简单:
>>> url = 'http://httpbin.org/post'
>>> files = {'file': open('report.xls', 'rb')}
>>> r = requests.post(url, files=files)
>>> r.text
{
...
"files": {
"file": "<censored...binary...data>"
},
...
}
你可以显式地设置文件名,文件类型和请求头:
>>> url = 'http://httpbin.org/post'
>>> files = {'file': ('report.xls', open('report.xls', 'rb'), 'application/vnd.ms-excel', {'Expires': '0'})}
>>> r = requests.post(url, files=files)
>>> r.text
{
...
"files": {
"file": "<censored...binary...data>"
},
...
}
如果你愿意,你也可以发送字符串,让它被当作文件来接收:
>>> url = 'http://httpbin.org/post'
>>> files = {'file': ('report.csv', 'some,data,to,send\nanother,row,to,send\n')}
>>> r = requests.post(url, files=files)
>>> r.text
{
...
"files": {
"file": "some,data,to,send\\nanother,row,to,send\\n"
},
...
}
如果你发送一个非常大的文件作为 multipart/form-data 请求,你可能希望将请求做成数据流。默认下 requests 不支持,但有个第三方包 requests-toolbelt 是支持的。你可以阅读 toolbelt 文档 来了解使用方法。
在一个请求中发送多文件参考 高级用法 一节。
警告
我们强烈建议你用二进制模式(binary mode)打开文件。这是因为 Requests 可能会试图为你提供 Content-Length header,在它这样做的时候,这个值会被设为文件的字节数(bytes)。如果用文本模式(text mode)打开文件,就可能会发生错误。
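multipart 请求体的生成同样可以离线观察:用 Request.prepare() 准备(但不发送)一个带 files 的请求,就能看到 boundary 和文件名是如何写入请求体的:

```python
from requests import Request

# 离线查看 multipart/form-data 请求是如何被准备的
files = {'file': ('report.csv', 'some,data,to,send\nanother,row,to,send\n')}
req = Request('POST', 'http://httpbin.org/post', files=files).prepare()

print(req.headers['Content-Type'])   # multipart/form-data; boundary=...
print(b'report.csv' in req.body)     # 文件名出现在请求体中
```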
响应状态码¶
我们可以检测响应状态码:
>>> r = requests.get('http://httpbin.org/get')
>>> r.status_code
200
为方便引用,Requests 还附带了一个内置的状态码查询对象:
>>> r.status_code == requests.codes.ok
True
如果发送了一个错误请求(一个 4XX 客户端错误,或者 5XX 服务器错误响应),我们可以通过 Response.raise_for_status() 来抛出异常:
>>> bad_r = requests.get('http://httpbin.org/status/404')
>>> bad_r.status_code
404
>>> bad_r.raise_for_status()
Traceback (most recent call last):
File "requests/models.py", line 832, in raise_for_status
raise http_error
requests.exceptions.HTTPError: 404 Client Error
但是,由于我们的例子中 r 的 status_code 是 200,当我们调用 raise_for_status() 时,得到的是:
>>> r.raise_for_status()
None
一切都挺和谐哈。
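raise_for_status() 的行为也可以不联网地验证:手工构造一个 404 响应(直接设置这些属性仅为演示),看它抛出的 HTTPError:

```python
import requests
from requests.models import Response

r = Response()
r.status_code = 404
r.reason = 'NOT FOUND'
r.url = 'http://httpbin.org/status/404'

try:
    r.raise_for_status()
except requests.exceptions.HTTPError as e:
    err = e
print(err)   # 404 Client Error: NOT FOUND for url: ...
```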
响应头¶
我们可以查看以一个 Python 字典形式展示的服务器响应头:
>>> r.headers
{
'content-encoding': 'gzip',
'transfer-encoding': 'chunked',
'connection': 'close',
'server': 'nginx/1.0.4',
'x-runtime': '148ms',
'etag': '"e1ca502697e5c9317743dc078f67693f"',
'content-type': 'application/json'
}
但是这个字典比较特殊:它是仅为 HTTP 头部而生的。根据 RFC 2616, HTTP 头部是大小写不敏感的。
因此,我们可以使用任意大写形式来访问这些响应头字段:
>>> r.headers['Content-Type']
'application/json'
>>> r.headers.get('content-type')
'application/json'
它还有一个特殊点,那就是服务器可以多次接受同一 header,每次都使用不同的值。但 Requests 会将它们合并,这样它们就可以用一个映射来表示出来,参见 RFC 7230:
A recipient MAY combine multiple header fields with the same field name into one "field-name: field-value" pair, without changing the semantics of the message, by appending each subsequent field value to the combined field value in order, separated by a comma.
接收者可以合并多个相同名称的 header 栏位,把它们合为一个 "field-name: field-value" 配对,将每个后续的栏位值依次追加到合并的栏位值中,用逗号隔开即可,这样做不会改变信息的语义。
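响应头正是用 requests.structures.CaseInsensitiveDict 表示的,其大小写不敏感的行为可以直接演示:

```python
from requests.structures import CaseInsensitiveDict

# 响应头背后的数据结构:按 RFC,对键的大小写不敏感
headers = CaseInsensitiveDict({'Content-Type': 'application/json'})

print(headers['content-type'])   # application/json
print(headers['CONTENT-TYPE'])   # application/json
print(headers.get('Content-Type'))
```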
Cookie¶
如果某个响应中包含一些 cookie,你可以快速访问它们:
>>> url = 'http://example.com/some/cookie/setting/url'
>>> r = requests.get(url)
>>> r.cookies['example_cookie_name']
'example_cookie_value'
要想发送你的 cookie 到服务器,可以使用 cookies 参数:
>>> url = 'http://httpbin.org/cookies'
>>> cookies = dict(cookies_are='working')
>>> r = requests.get(url, cookies=cookies)
>>> r.text
'{"cookies": {"cookies_are": "working"}}'
Cookie 的返回对象为 RequestsCookieJar,它的行为和字典类似,但接口更为完整,适合跨域名跨路径使用。你还可以把 Cookie Jar 传到 Requests 中:
>>> jar = requests.cookies.RequestsCookieJar()
>>> jar.set('tasty_cookie', 'yum', domain='httpbin.org', path='/cookies')
>>> jar.set('gross_cookie', 'blech', domain='httpbin.org', path='/elsewhere')
>>> url = 'http://httpbin.org/cookies'
>>> r = requests.get(url, cookies=jar)
>>> r.text
'{"cookies": {"tasty_cookie": "yum"}}'
重定向与请求历史¶
默认情况下,除了 HEAD, Requests 会自动处理所有重定向。
可以使用响应对象的 history 属性来追踪重定向。Response.history 是一个 Response 对象的列表,这些对象是为了完成请求而创建的。这个列表按照从最早到最近的请求排序。
例如,Github 将所有的 HTTP 请求重定向到 HTTPS:
>>> r = requests.get('http://github.com')
>>> r.url
'https://github.com/'
>>> r.status_code
200
>>> r.history
[<Response [301]>]
如果你使用的是 GET、OPTIONS、POST、PUT、PATCH 或者 DELETE,那么你可以通过 allow_redirects 参数禁用重定向处理:
>>> r = requests.get('http://github.com', allow_redirects=False)
>>> r.status_code
301
>>> r.history
[]
如果你使用了 HEAD,你也可以启用重定向:
>>> r = requests.head('http://github.com', allow_redirects=True)
>>> r.url
'https://github.com/'
>>> r.history
[<Response [301]>]
超时¶
你可以告诉 requests 在经过以 timeout 参数设定的秒数时间之后停止等待响应。基本上所有的生产代码都应该使用这一参数。如果不使用,你的程序可能会永远失去响应:
>>> requests.get('http://github.com', timeout=0.001)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)
注意
timeout 仅对连接过程有效,与响应体的下载无关。timeout 并不是整个下载响应的时间限制,而是如果服务器在 timeout 秒内没有应答,将会引发一个异常(更精确地说,是在 timeout 秒内没有从基础套接字上接收到任何字节的数据时)。如果没有显式指定 timeout,requests 不会超时。
错误与异常¶
遇到网络问题(如:DNS 查询失败、拒绝连接等)时,Requests 会抛出一个 ConnectionError 异常。
如果 HTTP 请求返回了不成功的状态码, Response.raise_for_status() 会抛出一个 HTTPError 异常。
若请求超时,则抛出一个 Timeout 异常。
若请求超过了设定的最大重定向次数,则会抛出一个 TooManyRedirects 异常。
所有 Requests 显式抛出的异常都继承自 requests.exceptions.RequestException。
准备好学习更多内容了吗?去 高级用法 一节看看吧。
高级用法¶
本篇文档涵盖了 Requests 的一些高级特性。
会话对象¶
会话对象让你能够跨请求保持某些参数。它也会在同一个 Session 实例发出的所有请求之间保持 cookie,期间使用 urllib3 的 connection pooling 功能。所以如果你向同一主机发送多个请求,底层的 TCP 连接将会被重用,从而带来显著的性能提升。(参见 HTTP persistent connection)
会话对象具有主要的 Requests API 的所有方法。
我们来跨请求保持一些 cookie:
s = requests.Session()
s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
r = s.get("http://httpbin.org/cookies")
print(r.text)
# '{"cookies": {"sessioncookie": "123456789"}}'
会话也可用来为请求方法提供缺省数据。这是通过为会话对象的属性提供数据来实现的:
s = requests.Session()
s.auth = ('user', 'pass')
s.headers.update({'x-test': 'true'})
# both 'x-test' and 'x-test2' are sent
s.get('http://httpbin.org/headers', headers={'x-test2': 'true'})
任何你传递给请求方法的字典都会与已设置会话层数据合并。方法层的参数覆盖会话的参数。
不过需要注意,就算使用了会话,方法级别的参数也不会被跨请求保持。下面的例子只会和第一个请求发送 cookie ,而非第二个:
s = requests.Session()
r = s.get('http://httpbin.org/cookies', cookies={'from-my': 'browser'})
print(r.text)
# '{"cookies": {"from-my": "browser"}}'
r = s.get('http://httpbin.org/cookies')
print(r.text)
# '{"cookies": {}}'
如果你要手动为会话添加 cookie,就使用 Cookie utility 函数 来操纵 Session.cookies。
会话还可以用作上下文管理器:
with requests.Session() as s:
    s.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
这样就能确保 with 区块退出后会话能被关闭,即使发生了异常也一样。
从字典参数中移除一个值
有时你会想省略字典参数中一些会话层的键。要做到这一点,你只需简单地在方法层参数中将那个键的值设置为 None,那个键就会被自动省略掉。
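这条合并规则由 requests.sessions.merge_setting 实现,可以离线验证(下面的 header 名都是演示用的):

```python
from requests.sessions import merge_setting
from requests.structures import CaseInsensitiveDict

# 会话层设置了两个 header;方法层把其中一个设为 None,并新增一个
session_headers = CaseInsensitiveDict({'x-test': 'true', 'Accept': '*/*'})
request_headers = {'x-test': None, 'x-extra': '1'}

# 方法层的值覆盖会话层,值为 None 的键被移除
merged = merge_setting(request_headers, session_headers,
                       dict_class=CaseInsensitiveDict)
print(dict(merged))
```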
包含在一个会话中的所有数据你都可以直接使用。学习更多细节请阅读 会话 API 文档。
请求与响应对象¶
任何时候进行了类似 requests.get() 的调用,你都在做两件主要的事情。其一,你在构建一个 Request 对象,
该对象将被发送到某个服务器请求或查询一些资源。其二,一旦 requests
得到一个从服务器返回的响应就会产生一个 Response
对象。该响应对象包含服务器返回的所有信息,也包含你原来创建的 Request
对象。如下是一个简单的请求,从 Wikipedia 的服务器得到一些非常重要的信息:
>>> r = requests.get('http://en.wikipedia.org/wiki/Monty_Python')
如果想访问服务器返回给我们的响应头部信息,可以这样做:
>>> r.headers
{'content-length': '56170', 'x-content-type-options': 'nosniff', 'x-cache':
'HIT from cp1006.eqiad.wmnet, MISS from cp1010.eqiad.wmnet', 'content-encoding':
'gzip', 'age': '3080', 'content-language': 'en', 'vary': 'Accept-Encoding,Cookie',
'server': 'Apache', 'last-modified': 'Wed, 13 Jun 2012 01:33:50 GMT',
'connection': 'close', 'cache-control': 'private, s-maxage=0, max-age=0,
must-revalidate', 'date': 'Thu, 14 Jun 2012 12:59:39 GMT', 'content-type':
'text/html; charset=UTF-8', 'x-cache-lookup': 'HIT from cp1006.eqiad.wmnet:3128,
MISS from cp1010.eqiad.wmnet:80'}
然而,如果想得到发送到服务器的请求的头部,我们可以简单地访问该请求,然后是该请求的头部:
>>> r.request.headers
{'Accept-Encoding': 'identity, deflate, compress, gzip',
'Accept': '*/*', 'User-Agent': 'python-requests/0.13.1'}
准备的请求 (Prepared Request)¶
当你从 API 或者会话调用中收到一个 Response 对象时,request 属性其实是使用了 PreparedRequest。有时在发送请求之前,你需要对 body 或者 header(或者别的什么东西)做一些额外处理,下面演示了一个简单的做法:
from requests import Request, Session
s = Session()
req = Request('GET', url, data=data, headers=headers)
prepped = req.prepare()
# do something with prepped.body
# do something with prepped.headers
resp = s.send(prepped,
    stream=stream,
    verify=verify,
    proxies=proxies,
    cert=cert,
    timeout=timeout
)
print(resp.status_code)
由于你没有对 Request 对象做什么特殊事情,你立即准备了它,并修改了 PreparedRequest 对象,然后把它和你本来要发送给 requests.* 或者 Session.* 的其他参数一起发送出去。
然而,上述代码会失去 Requests Session 对象的一些优势,尤其是 Session 级别的状态,例如 cookie 就不会被应用到你的请求上去。要获取一个带有状态的 PreparedRequest,请用 Session.prepare_request() 取代 Request.prepare() 的调用,如下所示:
from requests import Request, Session
s = Session()
req = Request('GET', url, data=data, headers=headers)
prepped = s.prepare_request(req)
# do something with prepped.body
# do something with prepped.headers
resp = s.send(prepped,
    stream=stream,
    verify=verify,
    proxies=proxies,
    cert=cert,
    timeout=timeout
)
print(resp.status_code)
SSL 证书验证¶
Requests 可以为 HTTPS 请求验证 SSL 证书,就像 web 浏览器一样。SSL 验证默认是开启的,如果证书验证失败,Requests 会抛出 SSLError:
>>> requests.get('https://requestb.in')
requests.exceptions.SSLError: hostname 'requestb.in' doesn't match either of '*.herokuapp.com', 'herokuapp.com'
在该域名上我没有设置 SSL,所以失败了。但 Github 设置了 SSL:
>>> requests.get('https://github.com', verify=True)
<Response [200]>
你可以为 verify 传入 CA_BUNDLE 文件的路径,或者包含可信任 CA 证书文件的文件夹路径:
>>> requests.get('https://github.com', verify='/path/to/certfile')
或者将其保持在会话中:
s = requests.Session()
s.verify = '/path/to/certfile'
注解
如果 verify 设为文件夹路径,文件夹必须通过 OpenSSL 提供的 c_rehash 工具处理。
你还可以通过 REQUESTS_CA_BUNDLE 环境变量定义可信任 CA 列表。
如果你将 verify 设置为 False,Requests 也能忽略对 SSL 证书的验证。
>>> requests.get('https://kennethreitz.org', verify=False)
<Response [200]>
默认情况下,verify 设置为 True。选项 verify 仅应用于主机证书。对于私有证书,你也可以传递一个 CA_BUNDLE 文件的路径给 verify,或者设置 REQUESTS_CA_BUNDLE 环境变量。
客户端证书¶
你也可以指定一个本地证书用作客户端证书,可以是单个文件(包含密钥和证书)或一个包含两个文件路径的元组:
>>> requests.get('https://kennethreitz.org', cert=('/path/client.cert', '/path/client.key'))
<Response [200]>
或者保持在会话中:
s = requests.Session()
s.cert = '/path/client.cert'
如果你指定了一个错误路径或一个无效的证书:
>>> requests.get('https://kennethreitz.org', cert='/wrong_path/client.pem')
SSLError: [Errno 336265225] _ssl.c:347: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib
警告
本地证书的私有 key 必须是解密状态。目前,Requests 不支持使用加密的 key。
CA 证书¶
Requests 默认附带了一套它信任的根证书,来自于 Mozilla trust store。然而它们在每次 Requests 更新时才会更新。这意味着如果你固定使用某一版本的 Requests,你的证书有可能已经 太旧了。
从 Requests 2.4.0 版之后,如果系统中装了 certifi 包,Requests 会试图使用它里边的 证书。这样用户就可以在不修改代码的情况下更新他们的可信任证书。
为了安全起见,我们建议你经常更新 certifi!
响应体内容工作流¶
默认情况下,当你进行网络请求后,响应体会立即被下载。你可以通过 stream 参数覆盖这个行为,推迟下载响应体直到访问 Response.content 属性:
tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
r = requests.get(tarball_url, stream=True)
此时仅有响应头被下载下来了,连接保持打开状态,因此允许我们根据条件获取内容:
if int(r.headers['content-length']) < TOO_LONG:
    content = r.content
    ...
你可以进一步使用 Response.iter_content 和 Response.iter_lines 方法来控制工作流,或者以 Response.raw 从底层的 urllib3.HTTPResponse 读取未解码的响应体。
如果你在请求中把 stream 设为 True,Requests 无法将连接释放回连接池,除非你消耗了所有的数据,或者调用了 Response.close。这样会带来连接效率低下的问题。如果你发现你在使用 stream=True 的同时还在部分读取请求的 body(或者完全没有读取 body),那么你就应该考虑使用 with 语句发送请求,这样可以保证请求一定会被关闭:
with requests.get('http://httpbin.org/get', stream=True) as r:
    # 在此处理响应。
保持活动状态(持久连接)¶
好消息——归功于 urllib3,同一会话内的持久连接是完全自动处理的!同一会话内你发出的任何请求都会自动复用恰当的连接!
注意:只有当所有的响应体数据被读取完毕,连接才会被释放回连接池;所以请确保将 stream 设置为 False,或读取 Response 对象的 content 属性。
流式上传¶
Requests支持流式上传,这允许你发送大的数据流或文件而无需先把它们读入内存。要使用流式上传,仅需为你的请求体提供一个类文件对象即可:
with open('massive-body', 'rb') as f:
    requests.post('http://some.url/streamed', data=f)
警告
我们强烈建议你用二进制模式(binary mode)打开文件。这是因为 Requests 可能会为你提供 header 中的 Content-Length,在这种情况下该值会被设为文件的字节数。如果你用文本模式打开文件,就可能碰到错误。
块编码请求¶
对于发出和接收的请求,Requests 也支持分块传输编码。要发送一个分块编码的请求,仅需为你的请求体提供一个生成器(或任意没有具体长度的迭代器):
def gen():
    yield 'hi'
    yield 'there'

requests.post('http://some.url/chunked', data=gen())
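生成器请求体如何触发分块编码,可以不发送请求就观察到:准备好的请求会带上 Transfer-Encoding: chunked,而不会有 Content-Length(下面的 URL 沿用文档中的假想地址):

```python
from requests import Request

def gen():
    yield b'hi'
    yield b'there'

# 只准备、不发送:生成器没有确定的长度,因此走分块传输编码
req = Request('POST', 'http://some.url/chunked', data=gen()).prepare()

print(req.headers.get('Transfer-Encoding'))   # chunked
print('Content-Length' in req.headers)
```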
对于分块的编码请求,我们最好使用 Response.iter_content() 对其数据进行迭代。在理想情况下,你的 request 会设置 stream=True,这样你就可以通过调用 iter_content 并将分块大小参数设为 None,从而进行分块的迭代。如果你要设置分块的最大体积,你可以把分块大小参数设为任意整数。
POST 多个分块编码的文件¶
你可以在一个请求中发送多个文件。例如,假设你要上传多个图像文件到一个 HTML 表单,使用一个多文件 field 叫做 "images":
<input type="file" name="images" multiple="true" required="true"/>
要实现,只要把文件设到一个元组的列表中,其中元组结构为 (form_field_name, file_info):
>>> url = 'http://httpbin.org/post'
>>> multiple_files = [
('images', ('foo.png', open('foo.png', 'rb'), 'image/png')),
('images', ('bar.png', open('bar.png', 'rb'), 'image/png'))]
>>> r = requests.post(url, files=multiple_files)
>>> r.text
{
...
'files': {'images': 'data:image/png;base64,iVBORw ....'}
'Content-Type': 'multipart/form-data; boundary=3131623adb2043caaeb5538cc7aa0b3a',
...
}
警告
我们强烈建议你用二进制模式(binary mode)打开文件。这是因为 Requests 可能会为你提供 header 中的 Content-Length,在这种情况下该值会被设为文件的字节数。如果你用文本模式打开文件,就可能碰到错误。
事件挂钩¶
Requests 有一个钩子系统,你可以用它来操控部分请求过程,或处理信号事件。
可用的钩子:
response: 从一个请求产生的响应
你可以通过传递一个 {hook_name: callback_function} 字典给 hooks 请求参数,为每个请求分配一个钩子函数:
hooks=dict(response=print_url)
callback_function 会接受一个数据块作为它的第一个参数。
def print_url(r, *args, **kwargs):
    print(r.url)
若执行你的回调函数期间发生错误,系统会给出一个警告。
若回调函数返回一个值,默认以该值替换传进来的数据。若函数未返回任何东西,也没有什么其他的影响。
我们来在运行期间打印一些请求方法的参数:
>>> requests.get('http://httpbin.org', hooks=dict(response=print_url))
http://httpbin.org
<Response [200]>
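"回调返回值会替换传入数据"这条规则由 requests.hooks.dispatch_hook 实现,可以不联网地演示(回调名和数据内容都是演示用的):

```python
from requests.hooks import dispatch_hook

# 回调返回了新值,新值会替换传入的数据
def add_tag(data, **kwargs):
    return data + ' [hooked]'

result = dispatch_hook('response', {'response': add_tag}, 'raw-data')
print(result)   # raw-data [hooked]

# 回调返回 None 时,数据保持不变
untouched = dispatch_hook('response', {'response': lambda d, **kw: None}, 'raw-data')
print(untouched)   # raw-data
```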
自定义身份验证¶
Requests 允许你使用自己指定的身份验证机制。
任何传递给请求方法的 auth 参数的可调用对象,在请求发出之前都有机会修改请求。
自定义的身份验证机制是作为 requests.auth.AuthBase 的子类来实现的,也非常容易定义。Requests 在 requests.auth 中提供了两种常见的身份验证方案: HTTPBasicAuth 和 HTTPDigestAuth。
假设我们有一个 web 服务,仅在 X-Pizza 头被设置为一个密码值的情况下才会有响应。虽然这不太可能,但就以它为例好了。
from requests.auth import AuthBase

class PizzaAuth(AuthBase):
    """Attaches HTTP Pizza Authentication to the given Request object."""
    def __init__(self, username):
        # setup any auth-related data here
        self.username = username

    def __call__(self, r):
        # modify and return the request
        r.headers['X-Pizza'] = self.username
        return r
然后就可以使用我们的 PizzaAuth 来进行网络请求:
>>> requests.get('http://pizzabin.org/admin', auth=PizzaAuth('kenneth'))
<Response [200]>
流式请求¶
使用 Response.iter_lines() 你可以很方便地对流式 API(例如 Twitter 的流式 API)进行迭代。简单地设置 stream 为 True 便可以使用 iter_lines 对响应进行迭代:
import json
import requests

r = requests.get('http://httpbin.org/stream/20', stream=True)

for line in r.iter_lines():
    # filter out keep-alive new lines
    if line:
        decoded_line = line.decode('utf-8')
        print(json.loads(decoded_line))
在 Response.iter_lines() 或 Response.iter_content() 中使用 decode_unicode=True 时,你需要提供一个回退编码方式,以防服务器没有提供默认回退编码,从而导致错误:
r = requests.get('http://httpbin.org/stream/20', stream=True)

if r.encoding is None:
    r.encoding = 'utf-8'

for line in r.iter_lines(decode_unicode=True):
    if line:
        print(json.loads(line))
警告
iter_lines 不保证重进入时的安全性。多次调用该方法会导致部分收到的数据丢失。如果你要在多处调用它,就应该使用生成的迭代器对象:
lines = r.iter_lines()
# 保存第一行以供后面使用,或者直接跳过
first_line = next(lines)
for line in lines:
print(line)
代理¶
如果需要使用代理,你可以通过为任意请求方法提供 proxies 参数来配置单个请求:
import requests
proxies = {
"http": "http://10.10.1.10:3128",
"https": "http://10.10.1.10:1080",
}
requests.get("http://example.org", proxies=proxies)
你也可以通过环境变量 HTTP_PROXY 和 HTTPS_PROXY 来配置代理。
$ export HTTP_PROXY="http://10.10.1.10:3128"
$ export HTTPS_PROXY="http://10.10.1.10:1080"
$ python
>>> import requests
>>> requests.get("http://example.org")
若你的代理需要使用HTTP Basic Auth,可以使用 http://user:password@host/ 语法:
proxies = {
"http": "http://user:pass@10.10.1.10:3128/",
}
要为某个特定的连接方式或者主机设置代理,使用 scheme://hostname 作为 key, 它会针对指定的主机和连接方式进行匹配。
proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}
注意,代理 URL 必须包含连接方式。
SOCKS¶
2.10.0 新版功能.
除了基本的 HTTP 代理,Requests 还支持 SOCKS 协议的代理。这是一个可选功能,若要使用,你需要安装第三方库。
你可以用 pip 获取依赖:
$ pip install requests[socks]
安装好依赖以后,使用 SOCKS 代理和使用 HTTP 代理一样简单:
proxies = {
'http': 'socks5://user:pass@host:port',
'https': 'socks5://user:pass@host:port'
}
合规性¶
Requests 符合所有相关的规范和 RFC,以免为用户造成不必要的困难。但这种对规范的考虑,导致一些行为对于不熟悉相关规范的人来说看似有点奇怪。
编码方式¶
当你收到一个响应时,Requests 会猜测响应的编码方式,用于在你访问 Response.text 时对响应进行解码。Requests 首先在 HTTP 头部检测是否存在指定的编码方式,如果不存在,则会使用 chardet 来尝试猜测编码方式。
只有当 HTTP 头部不存在明确指定的字符集,并且 Content-Type 头部字段包含 text 值之时,Requests 才不去猜测编码方式。在这种情况下, RFC 2616 指定默认字符集必须是 ISO-8859-1。Requests 遵从这一规范。如果你需要一种不同的编码方式,你可以手动设置 Response.encoding 属性,或使用原始的 Response.content。
HTTP动词¶
Requests 提供了几乎所有HTTP动词的功能:GET、OPTIONS、HEAD、POST、PUT、PATCH、DELETE。以下内容为使用 Requests 中的这些动词以及 Github API 提供了详细示例。
我将从最常使用的动词 GET 开始。HTTP GET 是一个幂等方法,从给定的 URL 返回一个资源。因而,当你试图从一个 web 位置获取数据之时,你应该使用这个动词。一个使用示例是尝试从 Github 上获取关于一个特定 commit 的信息。假设我们想获取 Requests 的 commit a050faf 的信息。我们可以这样去做:
>>> import requests
>>> r = requests.get('https://api.github.com/repos/requests/requests/git/commits/a050faf084662f3a352dd1a941f2c7c9f886d4ad')
我们应该确认 GitHub 是否正确响应。如果正确响应,我们想弄清响应内容是什么类型的。像这样去做:
>>> if r.status_code == requests.codes.ok:
...     print(r.headers['content-type'])
...
application/json; charset=utf-8
可见,GitHub 返回了 JSON 数据,非常好,这样就可以使用 r.json() 方法把这个返回的数据解析成 Python 对象。
>>> commit_data = r.json()
>>> print(commit_data.keys())
[u'committer', u'author', u'url', u'tree', u'sha', u'parents', u'message']
>>> print(commit_data[u'committer'])
{u'date': u'2012-05-10T11:10:50-07:00', u'email': u'me@kennethreitz.com', u'name': u'Kenneth Reitz'}
>>> print(commit_data[u'message'])
makin' history
到目前为止,一切都非常简单。嗯,我们来研究一下 GitHub 的 API。我们可以去看看文档,但如果使用 Requests 来研究也许会更有意思一点。我们可以借助 Requests 的 OPTIONS 动词来看看我们刚使用过的 url 支持哪些 HTTP 方法。
>>> verbs = requests.options(r.url)
>>> verbs.status_code
500
额,这是怎么回事?毫无帮助嘛!原来 GitHub,与许多 API 提供方一样,实际上并未实现 OPTIONS 方法。这是一个恼人的疏忽,但没关系,那我们可以使用枯燥的文档。然而,如果 GitHub 正确实现了 OPTIONS,那么服务器应该在响应头中返回允许用户使用的 HTTP 方法,例如:
>>> verbs = requests.options('http://a-good-website.com/api/cats')
>>> print(verbs.headers['allow'])
GET,HEAD,POST,OPTIONS
转而去查看文档,我们看到对于提交信息,另一个允许的方法是 POST,它会创建一个新的提交。由于我们正在使用 Requests 代码库,我们应尽可能避免对它发送笨拙的 POST。作为替代,我们来玩玩 GitHub 的 Issue 特性。
本篇文档是回应 Issue #482 而添加的。鉴于该问题已经存在,我们就以它为例。先获取它。
>>> r = requests.get('https://api.github.com/repos/kennethreitz/requests/issues/482')
>>> r.status_code
200
>>> issue = json.loads(r.text)
>>> print(issue[u'title'])
Feature any http verb in docs
>>> print(issue[u'comments'])
3
Cool,有 3 个评论。我们来看一下最后一个评论。
>>> r = requests.get(r.url + u'/comments')
>>> r.status_code
200
>>> comments = r.json()
>>> print(comments[0].keys())
[u'body', u'url', u'created_at', u'updated_at', u'user', u'id']
>>> print(comments[2][u'body'])
Probably in the "advanced" section
嗯,那看起来似乎是个愚蠢之处。我们发表个评论来告诉这个评论者他自己的愚蠢。那么,这个评论者是谁呢?
>>> print(comments[2][u'user'][u'login'])
kennethreitz
好,我们来告诉这个叫 Kenneth 的家伙,这个例子应该放在快速上手指南中。根据 GitHub API 文档,其方法是 POST 到该话题。我们来试试看。
>>> body = json.dumps({u"body": u"Sounds great! I'll get right on it!"})
>>> url = u"https://api.github.com/repos/requests/requests/issues/482/comments"
>>> r = requests.post(url=url, data=body)
>>> r.status_code
404
额,这有点古怪哈。可能我们需要验证身份。那就有点纠结了,对吧?不对。Requests 简化了多种身份验证形式的使用,包括非常常见的 Basic Auth。
>>> from requests.auth import HTTPBasicAuth
>>> auth = HTTPBasicAuth('fake@example.com', 'not_a_real_password')
>>> r = requests.post(url=url, data=body, auth=auth)
>>> r.status_code
201
>>> content = r.json()
>>> print(content[u'body'])
Sounds great! I'll get right on it.
太棒了!噢,不!我原本是想说等我一会,因为我得去喂我的猫。如果我能够编辑这条评论那就好了!幸运的是,GitHub 允许我们使用另一个 HTTP 动词 PATCH 来编辑评论。我们来试试。
>>> print(content[u"id"])
5804413
>>> body = json.dumps({u"body": u"Sounds great! I'll get right on it once I feed my cat."})
>>> url = u"https://api.github.com/repos/requests/requests/issues/comments/5804413"
>>> r = requests.patch(url=url, data=body, auth=auth)
>>> r.status_code
200
非常好。现在,我们来折磨一下这个叫 Kenneth 的家伙,我决定要让他急得团团转,也不告诉他是我在捣蛋。这意味着我想删除这条评论。GitHub 允许我们使用完全名副其实的 DELETE 方法来删除评论。我们来清除该评论。
>>> r = requests.delete(url=url, auth=auth)
>>> r.status_code
204
>>> r.headers['status']
'204 No Content'
很好。不见了。最后一件我想知道的事情是我已经使用了多少限额(ratelimit)。查查看,GitHub 在响应头部发送这个信息,因此不必下载整个网页,我将使用一个 HEAD 请求来获取响应头。
>>> r = requests.head(url=url, auth=auth)
>>> print(r.headers)
...
'x-ratelimit-remaining': '4995'
'x-ratelimit-limit': '5000'
...
很好。是时候写个 Python 程序以各种刺激的方式滥用 GitHub 的 API,还可以使用 4995 次呢。
定制动词¶
有时候你会碰到一些服务器,出于某些原因,它们允许或者要求用户使用上述 HTTP 动词之外的定制动词。比如说 WEBDAV 服务器会要求你使用 MKCOL 方法。别担心,Requests 一样可以搞定它们。你可以使用内建的 .request 方法,例如:
>>> r = requests.request('MKCOL', url, data=data)
>>> r.status_code
200 # Assuming your call was correct
这样你就可以使用服务器要求的任意方法动词了。
响应头链接字段¶
许多 HTTP API 都有响应头链接字段的特性,它们使得 API 能够更好地自我描述和自我显露。
GitHub 在 API 中为 分页 使用这些特性,例如:
>>> url = 'https://api.github.com/users/kennethreitz/repos?page=1&per_page=10'
>>> r = requests.head(url=url)
>>> r.headers['link']
'<https://api.github.com/users/kennethreitz/repos?page=2&per_page=10>; rel="next", <https://api.github.com/users/kennethreitz/repos?page=6&per_page=10>; rel="last"'
Requests 会自动解析这些响应头链接字段,并使得它们非常易于使用:
>>> r.links["next"]
{'url': 'https://api.github.com/users/kennethreitz/repos?page=2&per_page=10', 'rel': 'next'}
>>> r.links["last"]
{'url': 'https://api.github.com/users/kennethreitz/repos?page=7&per_page=10', 'rel': 'last'}
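Response.links 背后正是 requests.utils.parse_header_links 这个解析函数,可以拿一个 Link 头的字符串离线演示(下面的 Link 值复用上文的示例):

```python
from requests.utils import parse_header_links

link_header = ('<https://api.github.com/users/kennethreitz/repos?page=2&per_page=10>; rel="next", '
               '<https://api.github.com/users/kennethreitz/repos?page=6&per_page=10>; rel="last"')

# 解析为字典列表,再按 rel 建索引,和 Response.links 的形式一致
links = {item['rel']: item for item in parse_header_links(link_header)}
print(links['next']['url'])
print(links['last']['url'])
```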
传输适配器¶
从 v1.0.0 以后,Requests 的内部采用了模块化设计。部分原因是为了实现传输适配器(Transport Adapter),你可以看看关于它的最早描述。传输适配器提供了一个机制,让你可以为 HTTP 服务定义交互方法。尤其是它允许你应用服务前的配置。
Requests 自带了一个传输适配器,也就是 HTTPAdapter。这个适配器使用了强大的 urllib3,为 Requests 提供了默认的 HTTP 和 HTTPS 交互。每当 Session 被初始化,就会有适配器附着在 Session 上,其中一个供 HTTP 使用,另一个供 HTTPS 使用。
Requests 允许用户创建和使用他们自己的传输适配器,实现他们需要的特殊功能。创建好以后,传输适配器可以被加载到一个会话对象上,附带着一个说明,告诉会话适配器应该应用在哪个 web 服务上。
>>> s = requests.Session()
>>> s.mount('http://www.github.com', MyAdapter())
这个 mount 调用会注册一个传输适配器的特定实例到一个前缀上面。加载以后,任何使用该会话的 HTTP 请求,只要其 URL 是以给定的前缀开头,该传输适配器就会被使用到。
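前缀匹配的效果可以不发请求就观察到:Session.get_adapter(url) 会返回负责该 URL 的适配器。下面用一个普通的 HTTPAdapter 代替假想的 MyAdapter:

```python
import requests
from requests.adapters import HTTPAdapter

s = requests.Session()
adapter = HTTPAdapter(max_retries=3)
s.mount('http://www.github.com', adapter)

# 匹配该前缀的 URL 使用我们挂载的适配器
print(s.get_adapter('http://www.github.com/kennethreitz') is adapter)
# 其他 URL 仍然走 Session 默认挂载的适配器
print(s.get_adapter('http://example.com/') is adapter)
```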
传输适配器的众多实现细节不在本文档的覆盖范围内,不过你可以看看接下来这个简单的 SSL 用例。更多的用法,你也许该考虑为 BaseAdapter 创建子类。
示例: 指定的 SSL 版本¶
Requests 开发团队刻意指定了内部库(urllib3)的默认 SSL 版本。一般情况下这样做没有问题,不过有时你可能会需要连接到一个服务节点,而该节点使用了和默认不同的 SSL 版本。
你可以使用传输适配器解决这个问题,方法是利用 HTTPAdapter 现有的大部分实现,再加上一个 ssl_version 参数并将它传递到 urllib3 中。我们会创建一个传输适配器,用来告诉 urllib3 让它使用 SSLv3:
import ssl

from requests.adapters import HTTPAdapter
from requests.packages.urllib3.poolmanager import PoolManager

class Ssl3HttpAdapter(HTTPAdapter):
    """"Transport adapter" that allows us to use SSLv3."""

    def init_poolmanager(self, connections, maxsize, block=False):
        self.poolmanager = PoolManager(num_pools=connections,
                                       maxsize=maxsize,
                                       block=block,
                                       ssl_version=ssl.PROTOCOL_SSLv3)
阻塞和非阻塞¶
使用默认的传输适配器,Requests 不提供任何形式的非阻塞 IO。
Response.content 属性会阻塞,直到整个响应下载完成。如果你需要更多精细控制,该库的数据流功能(见 流式请求)允许你每次接受少量的一部分响应,不过这些调用依然是阻塞式的。
如果你对于阻塞式 IO 有所顾虑,还有很多项目可以供你使用,它们结合了 Requests 和 Python 的某个异步框架。典型的优秀例子是 grequests 和 requests-futures。
Header 排序¶
在某些特殊情况下你也许需要按照次序来提供 header,如果你向 headers 关键字参数传入一个 OrderedDict,就可以提供一个带排序的 header。然而,Requests 使用的默认 header 的次序会被优先选择,这意味着如果你在 headers 关键字参数中覆盖了默认 header,和关键字参数中别的 header 相比,它们也许看上去会是次序错误的。
如果这个对你来说是个问题,那么用户应该考虑在 Session 对象上面设置默认 header:将 Session.headers 设为一个定制的 OrderedDict 即可。这样就会让它成为优选的次序。
超时(timeout)¶
为防止服务器不能及时响应,大部分发至外部服务器的请求都应该带着 timeout 参数。在默认情况下,除非显式指定了 timeout 值,requests 是不会自动进行超时处理的。如果没有 timeout,你的代码可能会挂起若干分钟甚至更长时间。
连接超时指的是在你的客户端实现到远端机器端口的连接时(对应的是 connect()),Requests 会等待的秒数。一个很好的实践方法是把连接超时设为比 3 的倍数略大的一个数值,因为 TCP 数据包重传窗口 (TCP packet retransmission window) 的默认大小是 3。
一旦你的客户端连接到了服务器并且发送了 HTTP 请求,读取超时指的就是客户端等待服务器发送响应的时间。(特定地,它指的是客户端要等待服务器发送字节之间的时间。在 99.9% 的情况下,这指的是服务器发送第一个字节之前的时间)。
如果你指定了一个单一的值作为 timeout,如下所示:
r = requests.get('https://github.com', timeout=5)
这一 timeout 值将会用作 connect 和 read 二者的 timeout。如果要分别指定,就传入一个元组:
r = requests.get('https://github.com', timeout=(3.05, 27))
如果远端服务器很慢,你可以让 Requests 永远等待:传入一个 None 作为 timeout 值,然后就冲咖啡去吧。
r = requests.get('https://github.com', timeout=None)
身份认证¶
本篇文档讨论如何配合 Requests 使用多种身份认证方式。
许多 web 服务都需要身份认证,并且也有多种不同的认证类型。 以下,我们会从简单到复杂概述 Requests 中可用的几种身份认证形式。
基本身份认证¶
许多要求身份认证的 web 服务都接受 HTTP Basic Auth。这是最简单的一种身份认证,并且 Requests 对这种认证方式的支持是直接开箱即可用的。
以 HTTP Basic Auth 发送请求非常简单:
>>> from requests.auth import HTTPBasicAuth
>>> requests.get('https://api.github.com/user', auth=HTTPBasicAuth('user', 'pass'))
<Response [200]>
事实上,HTTP Basic Auth 如此常见,Requests 就提供了一种简写的使用方式:
>>> requests.get('https://api.github.com/user', auth=('user', 'pass'))
<Response [200]>
像这样在一个元组中提供认证信息,与前一个 HTTPBasicAuth 例子是完全相同的。
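Basic Auth 实际上只是给请求加上一个 Authorization 头,其内容是 Base64 编码的 user:pass。这一点可以只准备请求、不发送地验证:

```python
from requests import Request
from requests.auth import HTTPBasicAuth

# 只准备请求,不访问网络,查看生成的认证头
prepped = Request('GET', 'https://api.github.com/user',
                  auth=HTTPBasicAuth('user', 'pass')).prepare()
print(prepped.headers['Authorization'])   # Basic dXNlcjpwYXNz
```

其中 dXNlcjpwYXNz 就是 'user:pass' 的 Base64 编码。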
netrc 认证¶
如果认证方法没有收到 auth 参数,Requests 将试图从用户的 netrc 文件中获取 URL 的 hostname 需要的认证身份。netrc 文件会覆盖用 headers= 设置的原始 HTTP 身份认证头。如果找到了 hostname 对应的身份,就会以 HTTP Basic Auth 的形式发送请求。
摘要式身份认证¶
另一种非常流行的 HTTP 身份认证形式是摘要式身份认证,Requests 对它的支持也是开箱即可用的:
>>> from requests.auth import HTTPDigestAuth
>>> url = 'http://httpbin.org/digest-auth/auth/user/pass'
>>> requests.get(url, auth=HTTPDigestAuth('user', 'pass'))
<Response [200]>
OAuth 1 认证¶
OAuth 是一种常见的 Web API 认证方式。 requests-oauthlib 库可以让 Requests 用户简单地创建 OAuth 认证的请求:
>>> import requests
>>> from requests_oauthlib import OAuth1

>>> url = 'https://api.twitter.com/1.1/account/verify_credentials.json'
>>> auth = OAuth1('YOUR_APP_KEY', 'YOUR_APP_SECRET',
...               'USER_OAUTH_TOKEN', 'USER_OAUTH_TOKEN_SECRET')

>>> requests.get(url, auth=auth)
<Response [200]>
关于 OAuth 工作流程的更多信息,请参见 OAuth 官方网站。 关于 requests-oauthlib 的文档和用例,请参见 GitHub 的 requests_oauthlib 代码库。
OAuth 2 and OpenID Connect Authentication¶
The requests-oauthlib library also handles OAuth 2, which is the underlying mechanism for OpenID Connect.
See the requests-oauthlib OAuth2 documentation for details of the various OAuth 2 credential management flows.
Other Authentication¶
Requests is designed to allow other forms of authentication to be easily and quickly plugged in. Members of the open-source community frequently write authentication handlers for more complicated or less commonly-used forms of authentication. Some of the best have been brought together under the Requests organization, including:
If you want to use any of these forms of authentication, simply go to their GitHub page and follow the instructions.
New Forms of Authentication¶
If you can't find a good implementation of the form of authentication you want, you can implement it yourself. Requests makes it easy to add your own forms of authentication.
To do so, subclass AuthBase and implement the __call__() method:
>>> import requests
>>> class MyAuth(requests.auth.AuthBase):
... def __call__(self, r):
... # Implement my authentication
... return r
...
>>> url = 'http://httpbin.org/get'
>>> requests.get(url, auth=MyAuth())
<Response [200]>
When an authentication module is attached to a request, it is called during request setup. The __call__ method must therefore do whatever is required to make the authentication work. Some forms of authentication will additionally add hooks to provide further functionality.
You can find more examples under the Requests organization and in the auth.py file.
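As a slightly more concrete sketch, here is a hypothetical TokenAuth class (the class name and the bearer-token scheme are our own illustration, not part of Requests) that attaches a token header to every request:

```python
import requests

class TokenAuth(requests.auth.AuthBase):
    """Attach a bearer token to every outgoing request."""

    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        # r is the PreparedRequest about to be sent; mutate and return it.
        r.headers['Authorization'] = 'Bearer ' + self.token
        return r
```

Passing auth=TokenAuth('...') to any request method then runs __call__ on the prepared request just before it is sent.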
Community Guide¶
This part of the documentation, which is mostly prose, details the Requests ecosystem and community.
Frequently Asked Questions¶
This part of the documentation answers common questions about Requests.
Custom User-Agents?¶
Requests allows you to easily override the built-in User-Agent string by supplying different HTTP headers.
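For example, a sketch of overriding the default User-Agent (the product token 'my-app/0.0.1' is just a placeholder):

```python
import requests

# 'my-app/0.0.1' is a made-up product token; use your own.
headers = {'User-Agent': 'my-app/0.0.1'}

# Preparing the request shows the header exactly as it would be sent.
prepared = requests.Request('GET', 'http://httpbin.org/get',
                            headers=headers).prepare()
```

Passing the same headers dict directly to requests.get() works identically.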
Why not Httplib2?¶
Chris Adams gave an excellent summary on Hacker News:
httplib2 is part of why you should use Requests: despite its reputation, httplib2 is not well documented, and basic operations still take far too much code. I appreciate what httplib2 is trying to do, and that there's a ton of hard low-level annoyance in building a modern HTTP client, but really, just use Requests instead. Kenneth Reitz is a very responsible author who gets the degree to which simple things should be simple, whereas httplib2 feels more like an academic exercise than something people should use to build production systems.[1]
Disclaimer: I'm listed in the Requests AUTHORS file, but I can only claim credit for about 0.0001% of the awesomeness that is Requests.
1. http://code.google.com/p/httplib2/issues/detail?id=96 is a good example: an annoying bug that affected many people. A fix was written months earlier and worked well; I used it in a branch to process several TB of data without problems, but it took over a year to land in the main branch and even longer to reach PyPI, so projects using httplib2 waited a long time for the fix.
Python 3 Support?¶
Absolutely! Here's the list of officially supported Python platforms:
- Python 2.6
- Python 2.7
- Python 3.3
- Python 3.4
- Python 3.5
- Python 3.6
- PyPy
"hostname doesn't match" 错误是怎么回事?¶
当 SSL certificate verification 发现服务器响应的认证和它认为自己连接的主机名不匹配时,就会发生这样的错误。如果你确定服务器的 SSL 设置是正确的(例如你可以用浏览器访问页面),而且你使用的是 Python 2.6 或者 2.7,那么一个可能的解释就是你需要 Server-Name-Indication。
Server-Name-Indication 简称 SNI,是一个 SSL 的官方扩展,其中客户端会告诉服务器它连接了哪个主机名。当服务器使用虚拟主机( Virtual Hosting)时这点很重要。这样的服务器会服务多个 SSL 网站,所以它们需要能够针对客户端连接的主机名返回正确的证书。
Python 3 和 Python 2.7.9+ 的 SSL 模块包含了原生的 SNI 支持。更多关于在 Request、SNI 以及 Python < 2.7.9 的信息请参见这个 Stack Overflow 答案。
Recommended Packages and Extensions¶
Requests has a great variety of powerful and useful third-party extensions. This page provides an overview of some of the best of them.
Certifi CA Bundle¶
Certifi is a carefully curated collection of root certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. It has been extracted from the Requests project.
CacheControl¶
CacheControl is an extension that adds full HTTP caching to Requests. This makes your web requests substantially more efficient, and is well suited to situations where you make a large number of web requests.
Requests-Toolbelt¶
Requests-Toolbelt is a collection of utilities that some users of Requests may find useful, but that don't belong in Requests proper. This library is actively maintained by the Requests core team, and reflects the functionality most requested by users in the community.
Requests-OAuthlib¶
requests-oauthlib makes it possible to do the OAuth dance from Requests automatically. This is useful for the large number of websites that use OAuth for authentication. It also provides a lot of tweaks for handling non-standard OAuth providers.
Integrations¶
Python for iOS¶
Requests is built into the wonderful Python for iOS runtime!
To give it a try, simply:
import requests
Articles & Talks¶
- Python for the Web teaches how to use Python to interact with the web, using Requests.
- Daniel Greenfeld's Review of Requests
- My 'Python for Humans' talk ( audio )
- Issac Kelly's 'Consuming Web APIs' talk
- Blog post about Requests via Yum
- Russian blog post introducing Requests
- Sending JSON in Requests
Support¶
If you have questions or issues regarding Requests, there are several options for support:
StackOverflow¶
If your question doesn't contain sensitive or private information, or if you can anonymize it, ask a question on StackOverflow using the python-requests tag.
Send a Tweet¶
If your question fits within 140 characters, feel free to send a tweet to @kennethreitz, @sigmavirus24, or @lukasaoz on Twitter.
File an Issue¶
If you notice some unexpected behaviour in Requests, or would like to see support for a new feature, file an issue on GitHub.
E-mail¶
I'm more than happy to answer any personal or in-depth questions about Requests. Feel free to email requests@kennethreitz.com.
IRC¶
The official Freenode channel for Requests is #python-requests.
The core developers of Requests are on IRC throughout the day. You can find them in #python-requests as:
- kennethreitz
- lukasa
- sigmavirus24
Vulnerability Disclosure¶
If you think you have found a potential security vulnerability in requests, please email sigmavirus24 and Lukasa directly. Do not file a public issue.
Our PGP Key fingerprints are:
- 0161 BB7E B208 B5E0 4FDC 9F81 D9DA 0A04 9113 F853 (@sigmavirus24)
- 90DC AE40 FEA7 4B14 9B70 662D F25F 2144 EEC1 373D (@lukasa)
If English is not your first language, please try to describe the problem and its impact to the best of your ability. For greater detail, please use your native language and we will try our best to translate it using online services.
Please also include the code you used to find the problem and the shortest amount of code necessary to reproduce it.
Please do not disclose this to anyone else. We will retrieve a CVE identifier if necessary and give you full credit under whatever name or alias you provide. We will only request an identifier when we have a fix and can publish it in a release.
We will respect your privacy and will only publicize your involvement if you grant us permission.
Process¶
This following information discusses the process the requests project follows in response to vulnerability disclosures. If you are disclosing a vulnerability, this section of the documentation lets you know how we will respond to your disclosure.
Timeline¶
When you report an issue, one of the project members will respond to you within two days at the outside. In most cases responses will be faster, usually within 12 hours. This initial response will at the very least confirm receipt of the report.
If we were able to rapidly reproduce the issue, the initial response will also contain confirmation of the issue. If we are not, we will often ask for more information about the reproduction scenario.
Our goal is to have a fix for any vulnerability released within two weeks of the initial disclosure. This may potentially involve shipping an interim release that simply disables function while a more mature fix can be prepared, but will in the vast majority of cases mean shipping a complete release as soon as possible.
Throughout the fix process we will keep you up to speed with how the fix is progressing. Once the fix is prepared, we will notify you that we believe we have a fix. Often we will ask you to confirm the fix resolves the problem in your environment, especially if we are not confident of our reproduction scenario.
At this point, we will prepare for the release. We will obtain a CVE number if one is required, providing you with full credit for the discovery. We will also decide on a planned release date, and let you know when it is. This release date will always be on a weekday.
At this point we will reach out to our major downstream packagers to notify them of an impending security-related patch so they can make arrangements. In addition, these packagers will be provided with the intended patch ahead of time, to ensure that they are able to promptly release their downstream packages. Currently the list of people we actively contact ahead of a public release is:
- Jeremy Cline, Red Hat (@jeremycline)
- Daniele Tricoli, Debian (@eriol)
We will notify these individuals at least a week ahead of our planned release date to ensure that they have sufficient time to prepare. If you believe you should be on this list, please let one of the maintainers know at one of the email addresses at the top of this article.
On release day, we will push the patch to our public repository, along with an updated changelog that describes the issue and credits you. We will then issue a PyPI release containing the patch.
At this point, we will publicise the release. This will involve mails to mailing lists, Tweets, and all other communication mechanisms available to the core team.
We will also explicitly mention which commits contain the fix to make it easier for other distributors and users to easily patch their own versions of requests if upgrading is not an option.
Previous CVEs¶
- Fixed in 2.6.0
- CVE 2015-2296, reported by Matthew Daley of BugFuzz.
- Fixed in 2.3.0
Updates¶
If you'd like to stay up to date on the community and development of Requests, there are several options:
Release History¶
2.18.1 (2017-06-14)¶
Bugfixes
- Fix an error in the packaging whereby the *.whl contained incorrect data that regressed the fix in v2.17.3.
2.18.0 (2017-06-14)¶
Improvements
- Response is now a context manager, so can be used directly in a with statement without first having to be wrapped by contextlib.closing().
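With Response acting as a context manager, a streamed download no longer needs contextlib.closing; a sketch (function name and chunk size are our choices):

```python
import requests

def download(url, path, chunk_size=8192):
    """Stream url into path, closing the connection on exit."""
    # Response implements __enter__/__exit__, so `with` calls close() for us.
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(path, 'wb') as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```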
Bugfixes
- Resolve installation failure if multiprocessing is not available
- Resolve tests crash if multiprocessing is not able to determine the number of CPU cores
- Resolve error swallowing in utils set_environ generator
2.17.3 (2017-05-29)¶
Improvements
- Improved packages namespace identity support, for monkeypatching libraries.
2.17.2 (2017-05-29)¶
Improvements
- Improved packages namespace identity support, for monkeypatching libraries.
2.17.1 (2017-05-29)¶
Improvements
- Improved packages namespace identity support, for monkeypatching libraries.
2.16.5 (2017-05-28)¶
- Improvements to $ python -m requests.help.
2.16.4 (2017-05-27)¶
- Introduction of the $ python -m requests.help command, for debugging with maintainers!
2.16.3 (2017-05-27)¶
- Further restored the requests.packages namespace for compatibility reasons.
2.16.2 (2017-05-27)¶
- Further restored the requests.packages namespace for compatibility reasons. No code modification (noted below) should be necessary any longer.
2.16.1 (2017-05-27)¶
- Restored the requests.packages namespace for compatibility reasons.
- Bugfix for urllib3 version parsing.
Note: code that was written to import against the requests.packages namespace previously will have to import code that rests at this module level now.
For example:
from requests.packages.urllib3.poolmanager import PoolManager
Will need to be re-written to be:
from requests.packages import urllib3
urllib3.poolmanager.PoolManager
Or, even better:
from urllib3.poolmanager import PoolManager
2.16.0 (2017-05-26)¶
- Unvendor ALL the things!
2.15.1 (2017-05-26)¶
- Everyone makes mistakes.
2.15.0 (2017-05-26)¶
Improvements
- Introduction of the Response.next property, for getting the next PreparedRequest from a redirect chain (when allow_redirects=False).
- Internal refactoring of the __version__ module.
Bugfixes
- Restored once-optional parameter for requests.utils.get_environ_proxies().
2.14.2 (2017-05-10)¶
Bugfixes
- Changed a less-than to an equal-to and an or in the dependency markers to widen compatibility with older setuptools releases.
2.14.1 (2017-05-09)¶
Bugfixes
- Changed the dependency markers to widen compatibility with older pip releases.
2.14.0 (2017-05-09)¶
Improvements
- It is now possible to pass no_proxy as a key to the proxies dictionary to provide handling similar to the NO_PROXY environment variable.
- When users provide invalid paths to certificate bundle files or directories Requests now raises IOError, rather than failing at the time of the HTTPS request with a fairly inscrutable certificate validation error.
- The behavior of SessionRedirectMixin was slightly altered. resolve_redirects will now detect a redirect by calling get_redirect_target(response) instead of directly querying Response.is_redirect and Response.headers['location']. Advanced users will be able to process malformed redirects more easily.
- Changed the internal calculation of elapsed request time to have higher resolution on Windows.
- Added win_inet_pton as conditional dependency for the [socks] extra on Windows with Python 2.7.
- Changed the proxy bypass implementation on Windows: the proxy bypass check doesn't use forward and reverse DNS requests anymore.
- URLs with schemes that begin with http but are not http or https no longer have their host parts forced to lowercase.
Bugfixes
- Much improved handling of non-ASCII Location header values in redirects. Fewer UnicodeDecodeErrors are encountered on Python 2, and Python 3 now correctly understands that Latin-1 is unlikely to be the correct encoding.
- If an attempt to seek a file to find out its length fails, we now appropriately handle that by aborting our content-length calculations.
- Restricted HTTPDigestAuth to only respond to auth challenges made on 4XX responses, rather than to all auth challenges.
- Fixed some code that was firing DeprecationWarning on Python 3.6.
- The dismayed person emoticon (/o\) no longer has a big head. I'm sure this is what you were all worrying about most.
Miscellaneous
- Updated bundled urllib3 to v1.21.1.
- Updated bundled chardet to v3.0.2.
- Updated bundled idna to v2.5.
- Updated bundled certifi to 2017.4.17.
2.13.0 (2017-01-24)¶
Features
- Only load the idna library when we've determined we need it. This will save some memory for users.
Miscellaneous
- Updated bundled urllib3 to 1.20.
- Updated bundled idna to 2.2.
2.12.5 (2017-01-18)¶
Bugfixes
- Fixed an issue with JSON encoding detection, specifically detecting big-endian UTF-32 with BOM.
2.12.4 (2016-12-14)¶
Bugfixes
- Fixed regression from 2.12.2 where non-string types were rejected in the basic auth parameters. While support for this behaviour has been readded, the behaviour is deprecated and will be removed in the future.
2.12.3 (2016-12-01)¶
Bugfixes
- Fixed regression from v2.12.1 for URLs with schemes that begin with "http". These URLs have historically been processed as though they were HTTP-schemed URLs, and so have had parameters added. This was removed in v2.12.2 in an overzealous attempt to resolve problems with IDNA-encoding those URLs. This change was reverted: the other fixes for IDNA-encoding have been judged to be sufficient to return to the behaviour Requests had before v2.12.0.
2.12.2 (2016-11-30)¶
Bugfixes
- Fixed several issues with IDNA-encoding URLs that are technically invalid but which are widely accepted. Requests will now attempt to IDNA-encode a URL if it can but, if it fails, and the host contains only ASCII characters, it will be passed through optimistically. This will allow users to opt-in to using IDNA2003 themselves if they want to, and will also allow technically invalid but still common hostnames.
- Fixed an issue where URLs with leading whitespace would raise InvalidSchema errors.
- Fixed an issue where some URLs without the HTTP or HTTPS schemes would still have HTTP URL preparation applied to them.
- Fixed an issue where Unicode strings could not be used in basic auth.
- Fixed an issue encountered by some Requests plugins where constructing a Response object would cause Response.content to raise an AttributeError.
2.12.1 (2016-11-16)¶
Bugfixes
- Updated setuptools 'security' extra for the new PyOpenSSL backend in urllib3.
Miscellaneous
- Updated bundled urllib3 to 1.19.1.
2.12.0 (2016-11-15)¶
Improvements
- Updated support for internationalized domain names from IDNA2003 to IDNA2008. This updated support is required for several forms of IDNs and is mandatory for .de domains.
- Much improved heuristics for guessing content lengths: Requests will no longer read an entire StringIO into memory.
- Much improved logic for recalculating Content-Length headers for PreparedRequest objects.
- Improved tolerance for file-like objects that have no tell method but do have a seek method.
- Anything that is a subclass of Mapping is now treated like a dictionary by the data= keyword argument.
- Requests now tolerates empty passwords in proxy credentials, rather than stripping the credentials.
- If a request is made with a file-like object as the body and that request is redirected with a 307 or 308 status code, Requests will now attempt to rewind the body object so it can be replayed.
Bugfixes
- When calling response.close, the call to close will be propagated through to non-urllib3 backends.
- Fixed issue where the ALL_PROXY environment variable would be preferred over scheme-specific variables like HTTP_PROXY.
- Fixed issue where non-UTF8 reason phrases got severely mangled by falling back to decoding using ISO 8859-1 instead.
- Fixed a bug where Requests would not correctly correlate cookies set when using custom Host headers if those Host headers did not use the native string type for the platform.
Miscellaneous
- Updated bundled urllib3 to 1.19.
- Updated bundled certifi certs to 2016.09.26.
2.11.1 (2016-08-17)¶
Bugfixes
- Fixed a bug where using iter_content with decode_unicode=True for streamed bodies would raise AttributeError. This bug was introduced in 2.11.
- Strip Content-Type and Transfer-Encoding headers from the header block when following a redirect that transforms the verb from POST/PUT to GET.
2.11.0 (2016-08-08)¶
Improvements
- Added support for the ALL_PROXY environment variable.
- Reject header values that contain leading whitespace or newline characters to reduce risk of header smuggling.
Bugfixes
- Fixed occasional TypeError when attempting to decode a JSON response that occurred in an error case. Now correctly returns a ValueError.
- Requests would incorrectly ignore a non-CIDR IP address in the NO_PROXY environment variables: Requests now treats it as a specific IP.
- Fixed a bug when sending JSON data that could cause us to encounter obscure OpenSSL errors in certain network conditions (yes, really).
- Added type checks to ensure that iter_content only accepts integers and None for chunk sizes.
- Fixed issue where responses whose body had not been fully consumed would have the underlying connection closed but not returned to the connection pool, which could cause Requests to hang in situations where the HTTPAdapter had been configured to use a blocking connection pool.
Miscellaneous
- Updated bundled urllib3 to 1.16.
- Some previous releases accidentally accepted non-strings as acceptable header values. This release does not.
2.10.0 (2016-04-29)¶
New Features
- SOCKS Proxy Support! (requires PySocks; $ pip install requests[socks])
Miscellaneous
- Updated bundled urllib3 to 1.15.1.
2.9.2 (2016-04-29)¶
Improvements
- Change built-in CaseInsensitiveDict (used for headers) to use OrderedDict as its underlying datastore.
Bugfixes
- Don't use redirect_cache if allow_redirects=False
- When passed objects that throw exceptions from tell(), send them via chunked transfer encoding instead of failing.
- Raise a ProxyError for proxy related connection issues.
2.9.1 (2015-12-21)¶
Bugfixes
- Resolve regression introduced in 2.9.0 that made it impossible to send binary strings as bodies in Python 3.
- Fixed errors when calculating cookie expiration dates in certain locales.
Miscellaneous
- Updated bundled urllib3 to 1.13.1.
2.9.0 (2015-12-15)¶
Minor Improvements (Backwards compatible)
- The verify keyword argument now supports being passed a path to a directory of CA certificates, not just a single-file bundle.
- Warnings are now emitted when sending files opened in text mode.
- Added the 511 Network Authentication Required status code to the status code registry.
Bugfixes
- For file-like objects that are not seeked to the very beginning, we now send the content length for the number of bytes we will actually read, rather than the total size of the file, allowing partial file uploads.
- When uploading file-like objects, if they are empty or have no obvious content length we set Transfer-Encoding: chunked rather than Content-Length: 0.
- We correctly receive the response in buffered mode when uploading chunked bodies.
- We now handle being passed a query string as a bytestring on Python 3, by decoding it as UTF-8.
- Sessions are now closed in all cases (exceptional and not) when using the functional API rather than leaking and waiting for the garbage collector to clean them up.
- Correctly handle digest auth headers with a malformed qop directive that contains no token, by treating it the same as if no qop directive was provided at all.
- Minor performance improvements when removing specific cookies by name.
Miscellaneous
- Updated urllib3 to 1.13.
2.8.1 (2015-10-13)¶
Bugfixes
- Update certificate bundle to match certifi 2015.9.6.2's weak certificate bundle.
- Fix a bug in 2.8.0 where requests would raise ConnectTimeout instead of ConnectionError.
- When using the PreparedRequest flow, requests will now correctly respect the json parameter. Broken in 2.8.0.
- When using the PreparedRequest flow, requests will now correctly handle a Unicode-string method name on Python 2. Broken in 2.8.0.
2.8.0 (2015-10-05)¶
Minor Improvements (Backwards Compatible)
- Requests now supports per-host proxies. This allows the proxies dictionary to have entries of the form {'<scheme>://<hostname>': '<proxy>'}. Host-specific proxies will be used in preference to the previously-supported scheme-specific ones, but the previous syntax will continue to work.
- Response.raise_for_status now prints the URL that failed as part of the exception message.
- requests.utils.get_netrc_auth now takes a raise_errors kwarg, defaulting to False. When True, errors parsing .netrc files cause exceptions to be thrown.
- Change to bundled projects import logic to make it easier to unbundle requests downstream.
- Changed the default User-Agent string to avoid leaking data on Linux: now contains only the requests version.
Bugfixes
- The json parameter to post() and friends will now only be used if neither data nor files are present, consistent with the documentation.
- We now ignore empty fields in the NO_PROXY environment variable.
- Fixed problem where httplib.BadStatusLine would get raised if combining stream=True with contextlib.closing.
- Prevented bugs where we would attempt to return the same connection back to the connection pool twice when sending a Chunked body.
- Miscellaneous minor internal changes.
- Digest Auth support is now thread safe.
Updates
- Updated urllib3 to 1.12.
2.7.0 (2015-05-03)¶
This is the first release that follows our new release process. For more, see our documentation.
Bugfixes
- Updated urllib3 to 1.10.4, resolving several bugs involving chunked transfer encoding and response framing.
2.6.2 (2015-04-23)¶
Bugfixes
- Fix regression where compressed data that was sent as chunked data was not properly decompressed. (#2561)
2.6.1 (2015-04-22)¶
Bugfixes
- Remove VendorAlias import machinery introduced in v2.5.2.
- Simplify the PreparedRequest.prepare API: We no longer require the user to pass an empty list to the hooks keyword argument. (c.f. #2552)
- Resolve redirects now receives and forwards all of the original arguments to the adapter. (#2503)
- Handle UnicodeDecodeErrors when trying to deal with a unicode URL that cannot be encoded in ASCII. (#2540)
- Populate the parsed path of the URI field when performing Digest Authentication. (#2426)
- Copy a PreparedRequest's CookieJar more reliably when it is not an instance of RequestsCookieJar. (#2527)
2.6.0 (2015-03-14)¶
Bugfixes
- CVE-2015-2296: Fix handling of cookies on redirect. Previously a cookie without a host value set would use the hostname for the redirected URL exposing requests users to session fixation attacks and potentially cookie stealing. This was disclosed privately by Matthew Daley of BugFuzz. This affects all versions of requests from v2.1.0 to v2.5.3 (inclusive on both ends).
- Fix error when requests is an install_requires dependency and python setup.py test is run. (#2462)
- Fix error when urllib3 is unbundled and requests continues to use the vendored import location.
- Include fixes to urllib3's header handling.
- Requests' handling of unvendored dependencies is now more restrictive.
Features and Improvements
- Support bytearrays when passed as parameters in the files argument. (#2468)
- Avoid data duplication when creating a request with str, bytes, or bytearray input to the files argument.
2.5.3 (2015-02-24)¶
Bugfixes
- Revert changes to our vendored certificate bundle. For more context see (#2455, #2456, and http://bugs.python.org/issue23476)
2.5.2 (2015-02-23)¶
Features and Improvements
- Add sha256 fingerprint support. (shazow/urllib3#540)
- Improve the performance of headers. (shazow/urllib3#544)
Bugfixes
- Copy pip's import machinery. When downstream redistributors remove requests.packages.urllib3 the import machinery will continue to let those same symbols work. Example usage in requests' documentation and 3rd-party libraries relying on the vendored copies of urllib3 will work without having to fallback to the system urllib3.
- Attempt to quote parts of the URL on redirect if unquoting and then quoting fails. (#2356)
- Fix filename type check for multipart form-data uploads. (#2411)
- Properly handle the case where a server issuing digest authentication challenges provides both auth and auth-int qop-values. (#2408)
- Fix a socket leak. (shazow/urllib3#549)
- Fix multiple Set-Cookie headers properly. (shazow/urllib3#534)
- Disable the built-in hostname verification. (shazow/urllib3#526)
- Fix the behaviour of decoding an exhausted stream. (shazow/urllib3#535)
Security
- Pulled in an updated cacert.pem.
- Drop RC4 from the default cipher list. (shazow/urllib3#551)
2.5.1 (2014-12-23)¶
Behavioural Changes
- Only catch HTTPErrors in raise_for_status (#2382)
Bugfixes
- Handle LocationParseError from urllib3 (#2344)
- Handle file-like object filenames that are not strings (#2379)
- Unbreak HTTPDigestAuth handler. Allow new nonces to be negotiated (#2389)
2.5.0 (2014-12-01)¶
Improvements
- Allow usage of urllib3's Retry object with HTTPAdapters (#2216)
- The iter_lines method on a response now accepts a delimiter with which to split the content (#2295)
Behavioural Changes
- Add deprecation warnings to functions in requests.utils that will be removed in 3.0 (#2309)
- Sessions used by the functional API are always closed (#2326)
- Restrict requests to HTTP/1.1 and HTTP/1.0 (stop accepting HTTP/0.9) (#2323)
Bugfixes
- Only parse the URL once (#2353)
- Allow Content-Length header to always be overridden (#2332)
- Properly handle files in HTTPDigestAuth (#2333)
- Cap redirect_cache size to prevent memory abuse (#2299)
- Fix HTTPDigestAuth handling of redirects after authenticating successfully (#2253)
- Fix crash with custom method parameter to Session.request (#2317)
- Fix how Link headers are parsed using the regular expression library (#2271)
Documentation
- Add more references for interlinking (#2348)
- Update CSS for theme (#2290)
- Update width of buttons and sidebar (#2289)
- Replace references of Gittip with Gratipay (#2282)
- Add link to changelog in sidebar (#2273)
2.4.3 (2014-10-06)¶
Bugfixes
- Unicode URL improvements for Python 2.
- Re-order JSON param for backwards compat.
- Automatically defrag authentication schemes from host/pass URIs. (#2249)
2.4.2 (2014-10-05)¶
Improvements
Bugfixes
- Avoid getting stuck in a loop (#2244)
- Multiple calls to iter* fail with unhelpful error. (#2240, #2241)
Documentation
2.4.1 (2014-09-09)¶
- Now has a "security" package extras set: $ pip install requests[security]
- Requests will now use Certifi if it is available.
- Capture and re-raise urllib3 ProtocolError
- Bugfix for responses that attempt to redirect to themselves forever (wtf?).
2.4.0 (2014-08-29)¶
Behavioral Changes
- Connection: keep-alive header is now sent automatically.
Improvements
- Support for connect timeouts! Timeout now accepts a tuple (connect, read) which is used to set individual connect and read timeouts.
- Allow copying of PreparedRequests without headers/cookies.
- Updated bundled urllib3 version.
- Refactored settings loading from environment -- new Session.merge_environment_settings.
- Handle socket errors in iter_content.
2.3.0 (2014-05-16)¶
API Changes
- New Response property is_redirect, which is true when the library could have processed this response as a redirection (whether or not it actually did).
- The timeout parameter now affects requests with both stream=True and stream=False equally.
- The change in v2.0.0 to mandate explicit proxy schemes has been reverted. Proxy schemes now default to http://.
- The CaseInsensitiveDict used for HTTP headers now behaves like a normal dictionary when referenced as a string or viewed in the interpreter.
Bugfixes
- No longer expose Authorization or Proxy-Authorization headers on redirect. Fix CVE-2014-1829 and CVE-2014-1830 respectively.
- Authorization is re-evaluated each redirect.
- On redirect, pass url as native strings.
- Fall-back to autodetected encoding for JSON when Unicode detection fails.
- Headers set to None on the Session are now correctly not sent.
- Correctly honor decode_unicode even if it wasn't used earlier in the same response.
- Stop advertising compress as a supported Content-Encoding.
- The Response.history parameter is now always a list.
- Many, many urllib3 bugfixes.
2.2.1 (2014-01-23)¶
Bugfixes
- Fixes incorrect parsing of proxy credentials that contain a literal or encoded '#' character.
- Assorted urllib3 fixes.
2.2.0 (2014-01-09)¶
API Changes
- New exception: ContentDecodingError. Raised instead of urllib3 DecodeError exceptions.
Bugfixes
- Avoid many many exceptions from the buggy implementation of proxy_bypass on OS X in Python 2.6.
- Avoid crashing when attempting to get authentication credentials from ~/.netrc when running as a user without a home directory.
- Use the correct pool size for pools of connections to proxies.
- Fix iteration of CookieJar objects.
- Ensure that cookies are persisted over redirect.
- Switch back to using chardet, since it has merged with charade.
2.1.0 (2013-12-05)¶
- Updated CA Bundle, of course.
- Cookies set on individual Requests through a Session (e.g. via Session.get()) are no longer persisted to the Session.
- Clean up connections when we hit problems during chunked upload, rather than leaking them.
- Return connections to the pool when a chunked upload is successful, rather than leaking it.
- Match the HTTPbis recommendation for HTTP 301 redirects.
- Prevent hanging when using streaming uploads and Digest Auth when a 401 is received.
- Values of headers set by Requests are now always the native string type.
- Fix previously broken SNI support.
- Fix accessing HTTP proxies using proxy authentication.
- Unencode HTTP Basic usernames and passwords extracted from URLs.
- Support for IP address ranges for no_proxy environment variable
- Parse headers correctly when users override the default
Host:
header. - Avoid munging the URL in case of case-sensitive servers.
- Looser URL handling for non-HTTP/HTTPS urls.
- Accept unicode methods in Python 2.6 and 2.7.
- More resilient cookie handling.
- Make Response objects pickleable.
- Actually added MD5-sess to Digest Auth instead of pretending to like last time.
- Updated internal urllib3.
- Fixed @Lukasa's lack of taste.
2.0.1 (2013-10-24)¶
- Updated included CA Bundle with new mistrusts and automated process for the future
- Added MD5-sess to Digest Auth
- Accept per-file headers in multipart file POST messages.
- Fixed: Don't send the full URL on CONNECT messages.
- Fixed: Correctly lowercase a redirect scheme.
- Fixed: Cookies not persisted when set via functional API.
- Fixed: Translate urllib3 ProxyError into a requests ProxyError derived from ConnectionError.
- Updated internal urllib3 and chardet.
2.0.0 (2013-09-24)¶
API Changes:
- Keys in the Headers dictionary are now native strings on all Python versions, i.e. bytestrings on Python 2, unicode on Python 3.
- Proxy URLs now must have an explicit scheme. A MissingSchema exception will be raised if they don't.
- Timeouts now apply to read time if Stream=False.
- RequestException is now a subclass of IOError, not RuntimeError.
- Added new method to PreparedRequest objects: PreparedRequest.copy().
- Added new method to Session objects: Session.update_request(). This method updates a Request object with the data (e.g. cookies) stored on the Session.
- Added new method to Session objects: Session.prepare_request(). This method updates and prepares a Request object, and returns the corresponding PreparedRequest object.
- Added new method to HTTPAdapter objects: HTTPAdapter.proxy_headers(). This should not be called directly, but improves the subclass interface.
- httplib.IncompleteRead exceptions caused by incorrect chunked encoding will now raise a Requests ChunkedEncodingError instead.
- Invalid percent-escape sequences now cause a Requests InvalidURL exception to be raised.
- HTTP 208 no longer uses reason phrase "im_used". Correctly uses "already_reported".
- HTTP 226 reason added ("im_used").
Bugfixes:
- Vastly improved proxy support, including the CONNECT verb. Special thanks to the many contributors who worked towards this improvement.
- Cookies are now properly managed when 401 authentication responses are received.
- Chunked encoding fixes.
- Support for mixed case schemes.
- Better handling of streaming downloads.
- Retrieve environment proxies from more locations.
- Minor cookies fixes.
- Improved redirect behaviour.
- Improved streaming behaviour, particularly for compressed data.
- Miscellaneous small Python 3 text encoding bugs.
- .netrc no longer overrides explicit auth.
- Cookies set by hooks are now correctly persisted on Sessions.
- Fix problem with cookies that specify port numbers in their host field.
- BytesIO can be used to perform streaming uploads.
- More generous parsing of the no_proxy environment variable.
- Non-string objects can be passed in data values alongside files.
1.2.3 (2013-05-25)¶
- Simple packaging fix
1.2.2 (2013-05-23)¶
- Simple packaging fix
1.2.1 (2013-05-20)¶
- 301 and 302 redirects now change the verb to GET for all verbs, not just POST, improving browser compatibility.
- Python 3.3.2 compatibility
- Always percent-encode location headers
- Fix connection adapter matching to be most-specific first
- new argument to the default connection adapter for passing a block argument
- prevent a KeyError when there's no link headers
1.2.0 (2013-03-31)¶
- Fixed cookies on sessions and on requests
- Significantly change how hooks are dispatched - hooks now receive all the arguments specified by the user when making a request so hooks can make a secondary request with the same parameters. This is especially necessary for authentication handler authors
- certifi support was removed
- Fixed bug where using OAuth 1 with body signature_type sent no data
- Major proxy work thanks to @Lukasa including parsing of proxy authentication from the proxy url
- Fix DigestAuth handling too many 401s
- Update vendored urllib3 to include SSL bug fixes
- Allow keyword arguments to be passed to json.loads() via the Response.json() method
- Don't send Content-Length header by default on GET or HEAD requests
- Add elapsed attribute to Response objects to time how long a request took.
- Fix RequestsCookieJar
- Sessions and Adapters are now picklable, i.e., can be used with the multiprocessing library
- Update charade to version 1.0.3
The change in how hooks are dispatched will likely cause a great deal of issues.
1.1.0 (2013-01-10)¶
- CHUNKED REQUESTS
- Support for iterable response bodies
- Assume servers persist redirect params
- Allow explicit content types to be specified for file data
- Make merge_kwargs case-insensitive when looking up keys
1.0.3 (2012-12-18)¶
- Fix file upload encoding bug
- Fix cookie behavior
1.0.2 (2012-12-17)¶
- Proxy fix for HTTPAdapter.
1.0.1 (2012-12-17)¶
- Cert verification exception bug.
- Proxy fix for HTTPAdapter.
1.0.0 (2012-12-17)¶
- Massive Refactor and Simplification
- Switch to Apache 2.0 license
- Swappable Connection Adapters
- Mountable Connection Adapters
- Mutable ProcessedRequest chain
- /s/prefetch/stream
- Removal of all configuration
- Standard library logging
- Make Response.json() callable, not property.
- Usage of new charade project, which provides python 2 and 3 simultaneous chardet.
- Removal of all hooks except 'response'
- Removal of all authentication helpers (OAuth, Kerberos)
This is not a backwards compatible change.
0.14.2 (2012-10-27)¶
- Improved mime-compatible JSON handling
- Proxy fixes
- Path hack fixes
- Case-Insensitive Content-Encoding headers
- Support for CJK parameters in form posts
0.14.1 (2012-10-01)¶
- Python 3.3 Compatibility
- Simplify default accept-encoding
- Bugfixes
0.14.0 (2012-09-02)¶
- No more iter_content errors if already downloaded.
0.13.9 (2012-08-25)¶
- Fix for OAuth + POSTs
- Remove exception eating from dispatch_hook
- General bugfixes
0.13.8 (2012-08-21)¶
- Incredible Link header support :)
0.13.7 (2012-08-19)¶
- Support for (key, value) lists everywhere.
- Digest Authentication improvements.
- Ensure proxy exclusions work properly.
- Clearer UnicodeError exceptions.
- Automatic casting of URLs to strings (fURL and such)
- Bugfixes.
0.13.6 (2012-08-06)¶
- Long awaited fix for hanging connections!
0.13.5 (2012-07-27)¶
- Packaging fix
0.13.4 (2012-07-27)¶
- GSSAPI/Kerberos authentication!
- App Engine 2.7 Fixes!
- Fix leaking connections (from urllib3 update)
- OAuthlib path hack fix
- OAuthlib URL parameters fix.
0.13.3 (2012-07-12)¶
- Use simplejson if available.
- Do not hide SSLErrors behind Timeouts.
- Fixed param handling with urls containing fragments.
- Significantly improved information in User Agent.
- client certificates are ignored when verify=False
0.13.2 (2012-06-28)¶
- Zero dependencies (once again)!
- New: Response.reason
- Sign querystring parameters in OAuth 1.0
- Client certificates no longer ignored when verify=False
- Add openSUSE certificate support
0.13.1 (2012-06-07)¶
- Allow passing a file or file-like object as data.
- Allow hooks to return responses that indicate errors.
- Fix Response.text and Response.json for body-less responses.
0.13.0 (2012-05-29)¶
- Removal of Requests.async in favor of grequests
- Allow disabling of cookie persistence.
- New implementation of safe_mode
- cookies.get now supports default argument
- Session cookies not saved when Session.request is called with return_response=False
- Env: no_proxy support.
- RequestsCookieJar improvements.
- Various bug fixes.
0.12.1 (2012-05-08)¶
- New Response.json property.
- Ability to add string file uploads.
- Fix out-of-range issue with iter_lines.
- Fix iter_content default size.
- Fix POST redirects containing files.
0.12.0 (2012-05-02)¶
- EXPERIMENTAL OAUTH SUPPORT!
- Proper CookieJar-backed cookies interface with awesome dict-like interface.
- Speed fix for non-iterated content chunks.
- Move pre_request to a more usable place.
- New pre_send hook.
- Lazily encode data, params, files.
- Load system Certificate Bundle if certifi isn't available.
- Cleanups, fixes.
0.11.2 (2012-04-22)¶
- Attempt to use the OS's certificate bundle if certifi isn't available.
- Infinite digest auth redirect fix.
- Multi-part file upload improvements.
- Fix decoding of invalid %encodings in URLs.
- If there is no content in a response don't throw an error the second time that content is attempted to be read.
- Upload data on redirects.
0.11.1 (2012-03-30)¶
- POST redirects now break RFC to do what browsers do: Follow up with a GET.
- New strict_mode configuration to disable new redirect behavior.
0.11.0 (2012-03-14)¶
- Private SSL Certificate support
- Remove select.poll from Gevent monkeypatching
- Remove redundant generator for chunked transfer encoding
- Fix: Response.ok raises Timeout Exception in safe_mode
0.10.8 (2012-03-09)¶
- Generate chunked ValueError fix
- Proxy configuration by environment variables
- Simplification of iter_lines.
- New trust_env configuration for disabling system/environment hints.
- Suppress cookie errors.
0.10.7 (2012-03-07)¶
- encode_uri = False
0.10.6 (2012-02-25)¶
- Allow '=' in cookies.
0.10.5 (2012-02-25)¶
- Response body with 0 content-length fix.
- New async.imap.
- Don't fail on netrc.
0.10.4 (2012-02-20)¶
- Honor netrc.
0.10.3 (2012-02-20)¶
- HEAD requests don't follow redirects anymore.
- raise_for_status() doesn't raise for 3xx anymore.
- Make Session objects picklable.
- ValueError for invalid schema URLs.
0.10.2 (2012-01-15)¶
- Vastly improved URL quoting.
- Additional allowed cookie key values.
- Attempted fix for "Too many open files" Error
- Replace unicode errors on first pass, no need for second pass.
- Append '/' to bare-domain urls before query insertion.
- Exceptions now inherit from RuntimeError.
- Binary uploads + auth fix.
- Bugfixes.
0.10.1 (2012-01-23)¶
- PYTHON 3 SUPPORT!
- Dropped 2.5 Support. (Backwards Incompatible)
0.10.0 (2012-01-21)¶
- Response.content is now bytes-only. (Backwards Incompatible)
- New Response.text is unicode-only.
- If no Response.encoding is specified and chardet is available, Response.text will guess an encoding.
- Default to ISO-8859-1 (Western) encoding for "text" subtypes.
- Removal of decode_unicode. (Backwards Incompatible)
- New multiple-hooks system.
- New Response.register_hook for registering hooks within the pipeline.
- Response.url is now Unicode.
0.9.3 (2012-01-18)¶
- SSL verify=False bugfix (apparent on windows machines).
0.9.2 (2012-01-18)¶
- Asynchronous async.send method.
- Support for proper chunk streams with boundaries.
- session argument for Session classes.
- Print entire hook tracebacks, not just exception instance.
- Fix response.iter_lines from pending next line.
- Fix bug in HTTP-digest auth w/ URI having query strings.
- Fix in Event Hooks section.
- Urllib3 update.
0.9.1 (2012-01-06)¶
- danger_mode for automatic Response.raise_for_status()
- Response.iter_lines refactor
0.9.0 (2011-12-28)¶
- verify ssl is default.
0.8.9 (2011-12-28)¶
- Packaging fix.
0.8.8 (2011-12-28)¶
- SSL CERT VERIFICATION!
- Release of Certifi: Mozilla's cert list.
- New 'verify' argument for SSL requests.
- Urllib3 update.
0.8.7 (2011-12-24)¶
- iter_lines last-line truncation fix
- Force safe_mode for async requests
- Handle safe_mode exceptions more consistently
- Fix iteration on null responses in safe_mode
0.8.6 (2011-12-18)¶
- Socket timeout fixes.
- Proxy Authorization support.
0.8.5 (2011-12-14)¶
- Response.iter_lines!
0.8.4 (2011-12-11)¶
- Prefetch bugfix.
- Added license to installed version.
0.8.3 (2011-11-27)¶
- Converted auth system to use simpler callable objects.
- New session parameter to API methods.
- Display full URL while logging.
0.8.2 (2011-11-19)¶
- New Unicode decoding system, based on over-ridable Response.encoding.
- Proper URL slash-quote handling.
- Cookies with [, ], and _ allowed.
0.8.1 (2011-11-15)¶
- URL Request path fix
- Proxy fix.
- Timeouts fix.
0.8.0 (2011-11-13)¶
- Keep-alive support!
- Complete removal of Urllib2
- Complete removal of Poster
- Complete removal of CookieJars
- New ConnectionError raising
- Safe_mode for error catching
- prefetch parameter for request methods
- OPTION method
- Async pool size throttling
- File uploads send real names
- Vendored in urllib3
0.7.6 (2011-11-07)¶
- Digest authentication bugfix (attach query data to path)
0.7.5 (2011-11-04)¶
- Response.content = None if there was an invalid response.
- Redirection auth handling.
0.7.4 (2011-10-26)¶
- Session Hooks fix.
0.7.3 (2011-10-23)¶
- Digest Auth fix.
0.7.2 (2011-10-23)¶
- PATCH Fix.
0.7.1 (2011-10-23)¶
- Move away from urllib2 authentication handling.
- Fully Remove AuthManager, AuthObject, &c.
- New tuple-based auth system with handler callbacks.
0.7.0 (2011-10-22)¶
- Sessions are now the primary interface.
- Deprecated InvalidMethodException.
- PATCH fix.
- New config system (no more global settings).
0.6.6 (2011-10-19)¶
- Session parameter bugfix (params merging).
0.6.5 (2011-10-18)¶
- Offline (fast) test suite.
- Session dictionary argument merging.
0.6.4 (2011-10-13)¶
- Automatic decoding of unicode, based on HTTP Headers.
- New decode_unicode setting.
- Removal of r.read/close methods.
- New r.raw interface for advanced response usage.*
- Automatic expansion of parameterized headers.
0.6.3 (2011-10-13)¶
- Beautiful requests.async module, for making async requests w/ gevent.
0.6.2 (2011-10-09)¶
- GET/HEAD obeys allow_redirects=False.
0.6.1 (2011-08-20)¶
- Enhanced status codes experience \o/
- Set a maximum number of redirects (settings.max_redirects)
- Support for protocol-less redirects.
- Allow for arbitrary request types.
- Bugfixes
0.6.0 (2011-08-17)¶
- New callback hook system
- New persistent sessions object and context manager
- Transparent Dict-cookie handling
- Status code reference object
- Removed Response.cached
- Added Response.request
- All args are kwargs
- Relative redirect support
- HTTPError handling improvements
- Improved https testing
- Bugfixes
0.5.1 (2011-07-23)¶
- International Domain Name Support!
- Access headers without fetching entire body (read())
- Use lists as dicts for parameters
- Add Forced Basic Authentication
- Forced Basic is default authentication type
- python-requests.org default User-Agent header
- CaseInsensitiveDict lower-case caching
- Response.history bugfix
0.5.0 (2011-06-21)¶
- PATCH Support
- Support for Proxies
- HTTPBin Test Suite
- Redirect Fixes
- settings.verbose stream writing
- Querystrings for all methods
- URLErrors (Connection Refused, Timeout, Invalid URLs) are treated as explicitly raised: r = requests.get('hwe://blah'); r.raise_for_status()
0.4.1 (2011-05-22)¶
- Improved Redirection Handling
- New 'allow_redirects' param for following non-GET/HEAD Redirects
- Settings module refactoring
0.4.0 (2011-05-15)¶
- Response.history: list of redirected responses
- Case-Insensitive Header Dictionaries!
- Unicode URLs
0.3.4 (2011-05-14)¶
- Urllib2 HTTPAuthentication Recursion fix (Basic/Digest)
- Internal Refactor
- Bytes data upload Bugfix
0.3.3 (2011-05-12)¶
- Request timeouts
- Unicode url-encoded data
- Settings context manager and module
0.3.2 (2011-04-15)¶
- Automatic Decompression of GZip Encoded Content
- AutoAuth Support for Tupled HTTP Auth
0.3.1 (2011-04-01)¶
- Cookie Changes
- Response.read()
- Poster fix
0.3.0 (2011-02-25)¶
- Automatic Authentication API Change
- Smarter Query URL Parameterization
- Allow file uploads and POST data together
- New Authentication Manager System
- Simpler Basic HTTP System
- Supports all built-in urllib2 Auths
- Allows for custom Auth Handlers
0.2.4 (2011-02-19)¶
- Python 2.5 Support
- PyPy-c v1.4 Support
- Auto-Authentication tests
- Improved Request object constructor
0.2.3 (2011-02-15)¶
- New HTTPHandling Methods
- Response.__nonzero__ (false if bad HTTP Status)
- Response.ok (True if expected HTTP Status)
- Response.error (Logged HTTPError if bad HTTP Status)
- Response.raise_for_status() (Raises stored HTTPError)
0.2.2 (2011-02-14)¶
- Still handles request in the event of an HTTPError. (Issue #2)
- Eventlet and Gevent Monkeypatch support.
- Cookie Support (Issue #1)
0.2.1 (2011-02-14)¶
- Added file attribute to POST and PUT requests for multipart-encode file uploads.
- Added Request.url attribute for context and redirects
0.2.0 (2011-02-14)¶
- Birth!
0.0.1 (2011-02-13)¶
- Frustration
- Conception
Release Process and Rules¶
v2.6.2 新版功能.
Starting with the version to be released after v2.6.2, the following rules will govern and describe how the Requests core team produces a new release.
Major Releases¶
A major release will include breaking changes. When it is versioned, it will be versioned as vX.0.0. For example, if the previous release was v10.2.7, the next version will be v11.0.0.
Breaking changes are changes that break backwards compatibility with prior versions. If the project were to change the text attribute on a Response object to a method, that would only happen in a Major release.
Major releases may also include miscellaneous bug fixes and upgrades to vendored packages. The core developers of Requests are committed to providing a good user experience. This means we're also committed to preserving backwards compatibility as much as possible. Major releases will be infrequent and will need strong justifications before they are considered.
Minor Releases¶
A minor release will not include breaking changes but may include miscellaneous bug fixes and upgrades to vendored packages. If the previous version of Requests released was v10.2.7, a minor release would be versioned as v10.3.0.
Minor releases will be backwards compatible with releases that have the same major version number. In other words, all versions that would start with v10. should be compatible with each other.
Hotfix Releases¶
A hotfix release will only include bug fixes that were missed when the project released the previous version. If the previous version of Requests released was v10.2.7, the hotfix release would be versioned as v10.2.8.
Hotfixes will not include upgrades to vendored dependencies after v2.6.2.
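The three release types above reduce to a simple version-bumping rule. A minimal sketch in Python (the helper name next_version is illustrative, not part of Requests' tooling):

```python
def next_version(current: str, release_type: str) -> str:
    """Compute the next version string under the major/minor/hotfix rules.

    current: a version like "v10.2.7" (leading "v" expected).
    release_type: "major", "minor", or "hotfix".
    """
    major, minor, patch = (int(p) for p in current.lstrip("v").split("."))
    if release_type == "major":      # breaking changes: vX.0.0
        return f"v{major + 1}.0.0"
    if release_type == "minor":      # compatible fixes/features: vX.(Y+1).0
        return f"v{major}.{minor + 1}.0"
    if release_type == "hotfix":     # missed bug fixes only: vX.Y.(Z+1)
        return f"v{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown release type: {release_type}")

print(next_version("v10.2.7", "major"))   # → v11.0.0
print(next_version("v10.2.7", "minor"))   # → v10.3.0
print(next_version("v10.2.7", "hotfix"))  # → v10.2.8
```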
Reasoning¶
In the 2.5 and 2.6 release series, the Requests core team upgraded vendored dependencies and caused a great deal of headaches for both users and the core team. To reduce this pain, we're forming a concrete set of procedures so expectations will be properly set.
API 文档/指南¶
如果你要了解具体的函数、类、方法,这部分文档就是为你准备的。
开发接口¶
这部分文档包含了 Requests 所有的接口。对于 Requests 依赖的外部库部分,我们在这里介绍最重要的部分,并提供了规范文档的链接。
主要接口¶
Requests 所有的功能都可以通过以下 7 个方法访问。它们全部都会返回一个 Response 对象的实例。
requests.request(method, url, **kwargs)[源代码]¶
Constructs and sends a Request.
参数:
- method -- method for the new Request object.
- url -- URL for the new Request object.
- params -- (optional) Dictionary or bytes to be sent in the query string for the Request.
- data -- (optional) Dictionary or list of tuples [(key, value)] (will be form-encoded), bytes, or file-like object to send in the body of the Request.
- json -- (optional) json data to send in the body of the Request.
- headers -- (optional) Dictionary of HTTP Headers to send with the Request.
- cookies -- (optional) Dict or CookieJar object to send with the Request.
- files -- (optional) Dictionary of 'name': file-like-objects (or {'name': file-tuple}) for multipart encoding upload. file-tuple can be a 2-tuple ('filename', fileobj), 3-tuple ('filename', fileobj, 'content_type') or a 4-tuple ('filename', fileobj, 'content_type', custom_headers), where 'content-type' is a string defining the content type of the given file and custom_headers a dict-like object containing additional headers to add for the file.
- auth -- (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
- timeout (float or tuple) -- (optional) How many seconds to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
- allow_redirects (bool) -- (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to True.
- proxies -- (optional) Dictionary mapping protocol to the URL of the proxy.
- verify -- (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use. Defaults to True.
- stream -- (optional) if False, the response content will be immediately downloaded.
- cert -- (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
返回: Response object
返回类型: requests.Response
Usage:
>>> import requests
>>> req = requests.request('GET', 'http://httpbin.org/get')
>>> req
<Response [200]>
requests.head(url, **kwargs)[源代码]¶
Sends a HEAD request.
参数:
- url -- URL for the new Request object.
- **kwargs -- Optional arguments that request takes.
返回: Response object
返回类型: requests.Response
requests.post(url, data=None, json=None, **kwargs)[源代码]¶
Sends a POST request.
参数:
- url -- URL for the new Request object.
- data -- (optional) Dictionary, bytes, or file-like object to send in the body of the Request.
- json -- (optional) json data to send in the body of the Request.
- **kwargs -- Optional arguments that request takes.
返回: Response object
返回类型: requests.Response
异常¶
exception requests.RequestException(*args, **kwargs)[源代码]¶
There was an ambiguous exception that occurred while handling your request.

exception requests.ConnectTimeout(*args, **kwargs)[源代码]¶
The request timed out while trying to connect to the remote server.
Requests that produced this error are safe to retry.
请求会话¶
class requests.Session[源代码]¶
A Requests session.
Provides cookie persistence, connection-pooling, and configuration.
Basic Usage:
>>> import requests
>>> s = requests.Session()
>>> s.get('http://httpbin.org/get')
<Response [200]>
Or as a context manager:
>>> with requests.Session() as s:
...     s.get('http://httpbin.org/get')
<Response [200]>
cert = None¶
SSL client certificate default, if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
cookies = None¶
A CookieJar containing all currently outstanding cookies set on this session. By default it is a RequestsCookieJar, but may be any other cookielib.CookieJar compatible object.
delete(url, **kwargs)[源代码]¶
Sends a DELETE request. Returns Response object.
参数:
- url -- URL for the new Request object.
- **kwargs -- Optional arguments that request takes.
返回类型: requests.Response
get(url, **kwargs)[源代码]¶
Sends a GET request. Returns Response object.
参数:
- url -- URL for the new Request object.
- **kwargs -- Optional arguments that request takes.
返回类型: requests.Response
get_adapter(url)[源代码]¶
Returns the appropriate connection adapter for the given URL.
返回类型: requests.adapters.BaseAdapter

get_redirect_target(resp)¶
Receives a Response. Returns a redirect URI or None.
head(url, **kwargs)[源代码]¶
Sends a HEAD request. Returns Response object.
参数:
- url -- URL for the new Request object.
- **kwargs -- Optional arguments that request takes.
返回类型: requests.Response
headers = None¶
A case-insensitive dictionary of headers to be sent on each Request sent from this Session.

hooks = None¶
Event-handling hooks.

max_redirects = None¶
Maximum number of redirects allowed. If the request exceeds this limit, a TooManyRedirects exception is raised. This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is 30.
merge_environment_settings(url, proxies, stream, verify, cert)[源代码]¶
Check the environment and merge it with some settings.
返回类型: dict

mount(prefix, adapter)[源代码]¶
Registers a connection adapter to a prefix.
Adapters are sorted in descending order by prefix length.
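The "most specific prefix wins" behaviour described above can be sketched in a few lines of pure Python. This is a simplified stand-in for Session.mount/Session.get_adapter, not the actual implementation; the adapter values are just labels:

```python
adapters = {}  # prefix -> adapter (labels stand in for real adapter objects)

def mount(prefix, adapter):
    adapters[prefix] = adapter

def get_adapter(url):
    # Sort prefixes by descending length so the most specific match wins,
    # mirroring how Session keeps its registered adapters ordered.
    for prefix in sorted(adapters, key=len, reverse=True):
        if url.lower().startswith(prefix.lower()):
            return adapters[prefix]
    raise ValueError(f"No adapter found for {url!r}")

mount('http://', 'generic-http')
mount('http://example.com', 'example-specific')

print(get_adapter('http://example.com/path'))  # → example-specific
print(get_adapter('http://other.org/'))        # → generic-http
```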
options(url, **kwargs)[源代码]¶
Sends an OPTIONS request. Returns Response object.
参数:
- url -- URL for the new Request object.
- **kwargs -- Optional arguments that request takes.
返回类型: requests.Response
params = None¶
Dictionary of querystring data to attach to each Request. The dictionary values may be lists for representing multivalued query parameters.

post(url, data=None, json=None, **kwargs)[源代码]¶
Sends a POST request. Returns Response object.
参数:
- url -- URL for the new Request object.
- data -- (optional) Dictionary, bytes, or file-like object to send in the body of the Request.
- json -- (optional) json to send in the body of the Request.
- **kwargs -- Optional arguments that request takes.
返回类型: requests.Response
prepare_request(request)[源代码]¶
Constructs a PreparedRequest for transmission and returns it. The PreparedRequest has settings merged from the Request instance and those of the Session.
参数: request -- Request instance to prepare with this session's settings.
返回类型: requests.PreparedRequest

proxies = None¶
Dictionary mapping protocol or protocol and host to the URL of the proxy (e.g. {'http': 'foo.bar:3128', 'http://host.name': 'foo.bar:4012'}) to be used on each Request.
rebuild_auth(prepared_request, response)¶
When being redirected we may want to strip authentication from the request to avoid leaking credentials. This method intelligently removes and reapplies authentication where possible to avoid credential loss.

rebuild_method(prepared_request, response)¶
When being redirected we may want to change the method of the request based on certain specs or browser behavior.

rebuild_proxies(prepared_request, proxies)¶
This method re-evaluates the proxy configuration by considering the environment variables. If we are redirected to a URL covered by NO_PROXY, we strip the proxy configuration. Otherwise, we set missing proxy keys for this URL (in case they were stripped by a previous redirect).
This method also replaces the Proxy-Authorization header where necessary.
返回类型: dict
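The NO_PROXY stripping described above can be modelled with a small helper. This is a simplified sketch of the behaviour, not the code in requests.sessions; the function name effective_proxies and the suffix-matching rule are illustrative assumptions:

```python
from urllib.parse import urlparse

def effective_proxies(url, proxies, no_proxy=""):
    """Return the proxy mapping that should apply to `url`.

    If the URL's host matches an entry in the NO_PROXY-style list
    (exact host or parent domain), the proxy for that scheme is
    dropped; otherwise the mapping is kept as-is.
    """
    parsed = urlparse(url)
    host = parsed.hostname or ""
    exclusions = [e.strip() for e in no_proxy.split(",") if e.strip()]
    result = dict(proxies)
    if any(host == e or host.endswith("." + e) for e in exclusions):
        result.pop(parsed.scheme, None)
    return result

proxies = {"http": "http://proxy.local:3128"}
print(effective_proxies("http://internal.corp/x", proxies, no_proxy="internal.corp"))  # → {}
print(effective_proxies("http://example.com/", proxies, no_proxy="internal.corp"))
```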
request(method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=None, allow_redirects=True, proxies=None, hooks=None, stream=None, verify=None, cert=None, json=None)[源代码]¶
Constructs a Request, prepares it and sends it. Returns Response object.
参数:
- method -- method for the new Request object.
- url -- URL for the new Request object.
- params -- (optional) Dictionary or bytes to be sent in the query string for the Request.
- data -- (optional) Dictionary, bytes, or file-like object to send in the body of the Request.
- json -- (optional) json to send in the body of the Request.
- headers -- (optional) Dictionary of HTTP Headers to send with the Request.
- cookies -- (optional) Dict or CookieJar object to send with the Request.
- files -- (optional) Dictionary of 'filename': file-like-objects for multipart encoding upload.
- auth -- (optional) Auth tuple or callable to enable Basic/Digest/Custom HTTP Auth.
- timeout (float or tuple) -- (optional) How long to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
- allow_redirects (bool) -- (optional) Set to True by default.
- proxies -- (optional) Dictionary mapping protocol or protocol and hostname to the URL of the proxy.
- stream -- (optional) whether to immediately download the response content. Defaults to False.
- verify -- (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use. Defaults to True.
- cert -- (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
返回类型: requests.Response
resolve_redirects(resp, req, stream=False, timeout=None, verify=True, cert=None, proxies=None, yield_requests=False, **adapter_kwargs)¶
Receives a Response. Returns a generator of Responses or Requests.

send(request, **kwargs)[源代码]¶
Send a given PreparedRequest.
返回类型: requests.Response

stream = None¶
Stream response content default.

trust_env = None¶
Trust environment settings for proxy configuration, default authentication and similar.

verify = None¶
SSL Verification default.
低级类¶
class requests.Request(method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None)[源代码]¶
A user-created Request object.
Used to prepare a PreparedRequest, which is sent to the server.
参数:
- method -- HTTP method to use.
- url -- URL to send.
- headers -- dictionary of headers to send.
- files -- dictionary of {filename: fileobject} files to multipart upload.
- data -- the body to attach to the request. If a dictionary is provided, form-encoding will take place.
- json -- json for the body to attach to the request (if files or data is not specified).
- params -- dictionary of URL parameters to append to the URL.
- auth -- Auth handler or (user, pass) tuple.
- cookies -- dictionary or CookieJar of cookies to attach to this request.
- hooks -- dictionary of callback hooks, for internal usage.
Usage:
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> req.prepare()
<PreparedRequest [GET]>
deregister_hook(event, hook)¶
Deregister a previously registered hook. Returns True if the hook existed, False if not.

prepare()[源代码]¶
Constructs a PreparedRequest for transmission and returns it.

register_hook(event, hook)¶
Properly register a hook.
class requests.Response[源代码]¶
The Response object, which contains a server's response to an HTTP request.

apparent_encoding¶
The apparent encoding, provided by the chardet library.

close()[源代码]¶
Releases the connection back to the pool. Once this method has been called the underlying raw object must not be accessed again.
Note: Should not normally need to be called explicitly.

content¶
Content of the response, in bytes.

cookies = None¶
A CookieJar of Cookies the server sent back.
elapsed = None¶
The amount of time elapsed between sending the request and the arrival of the response (as a timedelta). This property specifically measures the time taken between sending the first byte of the request and finishing parsing the headers. It is therefore unaffected by consuming the response content or the value of the stream keyword argument.

encoding = None¶
Encoding to decode with when accessing r.text.

headers = None¶
Case-insensitive Dictionary of Response Headers. For example, headers['content-encoding'] will return the value of a 'Content-Encoding' response header.
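The case-insensitive lookup can be illustrated with a minimal mapping that lower-cases keys on the way in and out. This is a toy version of the behaviour, not requests' actual CaseInsensitiveDict (which also preserves the original key casing):

```python
class LowerCaseDict:
    """Toy case-insensitive mapping: all keys are stored lower-cased."""

    def __init__(self, data=None):
        self._store = {}
        for key, value in (data or {}).items():
            self[key] = value

    def __setitem__(self, key, value):
        self._store[key.lower()] = value

    def __getitem__(self, key):
        return self._store[key.lower()]

    def __contains__(self, key):
        return key.lower() in self._store

headers = LowerCaseDict({'Content-Encoding': 'gzip'})
print(headers['content-encoding'])    # → gzip
print('CONTENT-ENCODING' in headers)  # → True
```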
history = None¶
A list of Response objects from the history of the Request. Any redirect responses will end up here. The list is sorted from the oldest to the most recent request.

is_permanent_redirect¶
True if this Response is one of the permanent versions of redirect.

is_redirect¶
True if this Response is a well-formed HTTP redirect that could have been processed automatically (by Session.resolve_redirects).
iter_content(chunk_size=1, decode_unicode=False)[源代码]¶
Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory. This is not necessarily the length of each item returned as decoding can take place.
chunk_size must be of type int or None. A value of None will function differently depending on the value of stream. stream=True will read data as it arrives in whatever size the chunks are received. If stream=False, data is returned as a single chunk.
If decode_unicode is True, content will be decoded using the best available encoding based on the response.
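The chunking behaviour can be modelled with a generator over a file-like object. This is a simplified sketch, not the real iter_content (which also handles urllib3 streams and optional decoding):

```python
import io

def iter_chunks(fileobj, chunk_size=1):
    """Yield successive chunks of at most chunk_size bytes until EOF."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:  # empty read signals end of stream
            break
        yield chunk

body = io.BytesIO(b"hello world")
print(list(iter_chunks(body, chunk_size=4)))
# → [b'hell', b'o wo', b'rld']
```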
iter_lines(chunk_size=512, decode_unicode=None, delimiter=None)[源代码]¶
Iterates over the response data, one line at a time. When stream=True is set on the request, this avoids reading the content at once into memory for large responses.
注解
This method is not reentrant safe.
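A simplified model of line iteration over streamed chunks shows both the buffering and why it is not reentrant safe: the pending partial line lives in the local state of a single generator, so two consumers sharing it would corrupt the stream. A sketch only, not the actual implementation:

```python
def iter_lines(chunks, delimiter=b"\n"):
    """Yield complete lines from an iterable of byte chunks.

    A trailing partial line is buffered in `pending` until more data
    arrives, which is the state that makes this pattern unsafe to
    share between consumers.
    """
    pending = b""
    for chunk in chunks:
        pending += chunk
        lines = pending.split(delimiter)
        pending = lines.pop()  # last piece may be an incomplete line
        for line in lines:
            yield line
    if pending:
        yield pending

chunks = [b"alpha\nbe", b"ta\ngam", b"ma"]
print(list(iter_lines(chunks)))  # → [b'alpha', b'beta', b'gamma']
```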
json(**kwargs)[源代码]¶
Returns the json-encoded content of a response, if any.
参数: **kwargs -- Optional arguments that json.loads takes.
引发: ValueError -- If the response body does not contain valid json.
links¶
Returns the parsed header links of the response, if any.

next¶
Returns a PreparedRequest for the next request in a redirect chain, if there is one.
ok¶
Returns True if status_code is less than 400.
This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If the status code is between 200 and 400, this will return True. This is not a check to see if the response code is 200 OK.
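The check reduces to a single comparison; a sketch (the helper is illustrative, the real thing is a property on Response):

```python
def is_ok(status_code: int) -> bool:
    """Mirror Response.ok: any status below 400 counts as 'ok'."""
    return status_code < 400

print(is_ok(200))  # → True
print(is_ok(301))  # → True  (redirects count as "ok"; not a check for 200)
print(is_ok(404))  # → False
```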
raw = None¶
File-like object representation of response (for advanced usage). Use of raw requires that stream=True be set on the request.

reason = None¶
Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".

request = None¶
The PreparedRequest object to which this is a response.

status_code = None¶
Integer Code of responded HTTP Status, e.g. 404 or 200.
text¶
Content of the response, in unicode.
If Response.encoding is None, encoding will be guessed using chardet.
The encoding of the response content is determined based solely on HTTP headers, following RFC 2616 to the letter. If you can take advantage of non-HTTP knowledge to make a better guess at the encoding, you should set r.encoding appropriately before accessing this property.
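The decode logic can be sketched as: use the declared encoding when present, otherwise fall back to a guess. In this sketch the guess is hard-coded to ISO-8859-1 for simplicity; the real property consults chardet, and the helper name decode_body is an illustrative assumption:

```python
def decode_body(content: bytes, encoding=None) -> str:
    """Decode a response body, mimicking the Response.text fallback order."""
    if encoding is None:
        encoding = "ISO-8859-1"  # stand-in for a chardet-style guess
    # errors='replace' mirrors text's tolerance of undecodable bytes
    return content.decode(encoding, errors="replace")

print(decode_body("雨".encode("utf-8"), encoding="utf-8"))  # → 雨
print(decode_body(b"caf\xe9"))  # ISO-8859-1 fallback → café
```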
url = None¶
Final URL location of Response.
更低级的类¶
class requests.PreparedRequest[源代码]¶
The fully mutable PreparedRequest object, containing the exact bytes that will be sent to the server.
Generated from either a Request object or manually.
Usage:
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> r = req.prepare()
>>> r
<PreparedRequest [GET]>
>>> s = requests.Session()
>>> s.send(r)
<Response [200]>
body = None¶
Request body to send to the server.

deregister_hook(event, hook)¶
Deregister a previously registered hook. Returns True if the hook existed, False if not.

headers = None¶
Dictionary of HTTP headers.

hooks = None¶
Dictionary of callback hooks, for internal usage.

method = None¶
HTTP verb to send to the server.
path_url¶
Build the path URL to use.

prepare(method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None)[源代码]¶
Prepares the entire request with the given parameters.

prepare_cookies(cookies)[源代码]¶
Prepares the given HTTP cookie data.
This function eventually generates a Cookie header from the given cookies using cookielib. Due to cookielib's design, the header will not be regenerated if it already exists, meaning this function can only be called once for the life of the PreparedRequest object. Any subsequent calls to prepare_cookies will have no actual effect, unless the "Cookie" header is removed beforehand.

register_hook(event, hook)¶
Properly register a hook.

url = None¶
HTTP URL to send the request to.
-
-
class
requests.adapters.
BaseAdapter
[源代码]¶ The Base Transport Adapter
-
send
(request, stream=False, timeout=None, verify=True, cert=None, proxies=None)[源代码]¶ Sends PreparedRequest object. Returns Response object.
Parameters:
- request -- The PreparedRequest being sent.
- stream -- (optional) Whether to stream the request content.
- timeout (float or tuple) -- (optional) How long to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
- verify -- (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use.
- cert -- (optional) Any user-provided SSL certificate to be trusted.
- proxies -- (optional) The proxies dictionary to apply to the request.
-
-
class
requests.adapters.
HTTPAdapter
(pool_connections=10, pool_maxsize=10, max_retries=0, pool_block=False)[源代码]¶ The built-in HTTP Adapter for urllib3.
Provides a general-case interface for Requests sessions to contact HTTP and HTTPS urls by implementing the Transport Adapter interface. This class will usually be created by the
Session
class under the covers.
Parameters:
- pool_connections -- The number of urllib3 connection pools to cache.
- pool_maxsize -- The maximum number of connections to save in the pool.
- max_retries -- The maximum number of retries each connection should attempt. Note, this applies only to failed DNS lookups, socket connections and connection timeouts, never to requests where data has made it to the server. By default, Requests does not retry failed connections. If you need granular control over the conditions under which we retry a request, import urllib3's Retry class and pass that instead.
- pool_block -- Whether the connection pool should block for connections.
Usage:
>>> import requests
>>> s = requests.Session()
>>> a = requests.adapters.HTTPAdapter(max_retries=3)
>>> s.mount('http://', a)
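As the max_retries note above suggests, a urllib3 Retry object can be passed instead of an integer for granular control. A sketch (the retry values here are illustrative assumptions; no request is sent):

```python
# Mounting an HTTPAdapter configured with urllib3's Retry object.
import requests
from urllib3.util.retry import Retry

retry = Retry(total=3, backoff_factor=0.5,
              status_forcelist=[500, 502, 503, 504])
adapter = requests.adapters.HTTPAdapter(max_retries=retry)
s = requests.Session()
s.mount('https://', adapter)  # applies to every https:// URL on this session
```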
-
add_headers
(request, **kwargs)[源代码]¶ Add any headers needed by the connection. As of v2.0 this does nothing by default, but is left for overriding by users that subclass the
HTTPAdapter
.This should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters:
- request -- The PreparedRequest to add headers to.
- kwargs -- The keyword arguments from the call to send().
-
build_response
(req, resp)[源代码]¶ Builds a
Response
object from a urllib3 response. This should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters:
- req -- The PreparedRequest used to generate the response.
- resp -- The urllib3 response object.
Return type: requests.Response
-
cert_verify
(conn, url, verify, cert)[源代码]¶ Verify a SSL certificate. This method should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters:
- conn -- The urllib3 connection object associated with the cert.
- url -- The requested URL.
- verify -- Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use.
- cert -- The SSL certificate to verify.
-
close
()[源代码]¶ Disposes of any internal state.
Currently, this closes the PoolManager and any active ProxyManager, which closes any pooled connections.
-
get_connection
(url, proxies=None)[源代码]¶ Returns a urllib3 connection for the given URL. This should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters:
- url -- The URL to connect to.
- proxies -- (optional) A Requests-style dictionary of proxies used on this request.
Return type: urllib3.ConnectionPool
-
init_poolmanager
(connections, maxsize, block=False, **pool_kwargs)[源代码]¶ Initializes a urllib3 PoolManager.
This method should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters:
- connections -- The number of urllib3 connection pools to cache.
- maxsize -- The maximum number of connections to save in the pool.
- block -- Block when no free connections are available.
- pool_kwargs -- Extra keyword arguments used to initialize the Pool Manager.
-
proxy_headers
(proxy)[源代码]¶ Returns a dictionary of the headers to add to any request sent through a proxy. This works with urllib3 magic to ensure that they are correctly sent to the proxy, rather than in a tunnelled request if CONNECT is being used.
This should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters: proxy -- The url of the proxy being used for this request.
Return type: dict
-
proxy_manager_for
(proxy, **proxy_kwargs)[源代码]¶ Return urllib3 ProxyManager for the given proxy.
This method should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters:
- proxy -- The proxy to return a urllib3 ProxyManager for.
- proxy_kwargs -- Extra keyword arguments used to configure the Proxy Manager.
Returns: ProxyManager
Return type: urllib3.ProxyManager
-
request_url
(request, proxies)[源代码]¶ Obtain the url to use when making the final request.
If the message is being sent through a HTTP proxy, the full URL has to be used. Otherwise, we should only use the path portion of the URL.
This should not be called from user code, and is only exposed for use when subclassing the
HTTPAdapter
.
Parameters:
- request -- The PreparedRequest being sent.
- proxies -- A dictionary of schemes or schemes and hosts to proxy URLs.
Return type: str
-
send
(request, stream=False, timeout=None, verify=True, cert=None, proxies=None)[源代码]¶ Sends PreparedRequest object. Returns Response object.
Parameters:
- request -- The PreparedRequest being sent.
- stream -- (optional) Whether to stream the request content.
- timeout (float or tuple or urllib3 Timeout object) -- (optional) How long to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) tuple.
- verify -- (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use.
- cert -- (optional) Any user-provided SSL certificate to be trusted.
- proxies -- (optional) The proxies dictionary to apply to the request.
Return type: requests.Response
Authentication¶
-
class
requests.auth.
HTTPBasicAuth
(username, password)[源代码]¶ Attaches HTTP Basic Authentication to the given Request object.
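A minimal offline sketch of HTTPBasicAuth attaching an Authorization header to a prepared request (the httpbin.org URL is a placeholder; this is equivalent to the auth=('user', 'pass') shorthand):

```python
# The auth handler runs at prepare time and sets the Authorization header.
import requests
from requests.auth import HTTPBasicAuth

req = requests.Request('GET', 'http://httpbin.org/basic-auth/user/pass',
                       auth=HTTPBasicAuth('user', 'pass'))
p = req.prepare()
print(p.headers['Authorization'])  # 'Basic dXNlcjpwYXNz'
```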
Encodings¶
-
requests.utils.
get_encodings_from_content
(content)[源代码]¶ Returns encodings from given content string.
Parameters: content -- bytestring to extract encodings from.
Cookie¶
-
requests.utils.
add_dict_to_cookiejar
(cj, cookie_dict)[源代码]¶ Returns a CookieJar from a key/value dictionary.
Parameters:
- cj -- CookieJar to insert cookies into.
- cookie_dict -- Dict of key/values to insert into CookieJar.
Return type: CookieJar
-
requests.cookies.
cookiejar_from_dict
(cookie_dict, cookiejar=None, overwrite=True)[源代码]¶ Returns a CookieJar from a key/value dictionary.
Parameters:
- cookie_dict -- Dict of key/values to insert into CookieJar.
- cookiejar -- (optional) A cookiejar to add the cookies to.
- overwrite -- (optional) If False, will not replace cookies already in the jar with new ones.
-
class
requests.cookies.
RequestsCookieJar
(policy=None)[源代码]¶ Compatibility class; is a cookielib.CookieJar, but exposes a dict interface.
This is the CookieJar we create by default for requests and sessions that don't specify one, since some clients may expect response.cookies and session.cookies to support dict operations.
Requests does not use the dict interface internally; it's just for compatibility with external client code. All requests code should work out of the box with externally provided instances of
CookieJar
, e.g.LWPCookieJar
andFileCookieJar
.Unlike a regular CookieJar, this class is pickleable.
Warning
dictionary operations that are normally O(1) may be O(n).
Add correct Cookie: header to request (urllib2.Request object).
The Cookie2 header is also added unless policy.hide_cookie2 is true.
Clear some cookies.
Invoking this method without arguments will clear all cookies. If given a single argument, only cookies belonging to that domain will be removed. If given two arguments, cookies belonging to the specified path within that domain are removed. If given three arguments, then the cookie with the specified name, path and domain is removed.
Raises KeyError if no matching cookie exists.
Discard all expired cookies.
You probably don't need to call this method: expired cookies are never sent back to the server (provided you're using DefaultCookiePolicy), this method is called by CookieJar itself every so often, and the .save() method won't save expired cookies anyway (unless you ask otherwise by passing a true ignore_expires argument).
Discard all session cookies.
Note that the .save() method won't save session cookies anyway, unless you ask otherwise by passing a true ignore_discard argument.
Return a copy of this RequestsCookieJar.
Extract cookies from response, where allowable given the request.
Dict-like get() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains.
Warning
operation is O(n), not O(1).
Takes as an argument an optional domain and path and returns a plain old Python dict of name-value pairs of cookies that meet the requirements.
Return type: dict
Dict-like items() that returns a list of name-value tuples from the jar. Allows client-code to call
dict(RequestsCookieJar)
and get a vanilla python dict of key value pairs.参见
keys() and values().
Dict-like iteritems() that returns an iterator of name-value tuples from the jar.
See also
iterkeys() and itervalues().
Dict-like iterkeys() that returns an iterator of names of cookies from the jar.
See also
itervalues() and iteritems().
Dict-like itervalues() that returns an iterator of values of cookies from the jar.
See also
iterkeys() and iteritems().
Dict-like keys() that returns a list of names of cookies from the jar.
See also
values() and items().
Utility method to list all the domains in the jar.
Utility method to list all the paths in the jar.
Return sequence of Cookie objects extracted from response object.
Returns True if there are multiple domains in the jar. Returns False otherwise.
Return type: bool
If key is not found, d is returned if given, otherwise KeyError is raised.
Remove and return some (key, value) pair as a 2-tuple; but raise KeyError if D is empty.
Dict-like set() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains.
Set a cookie if policy says it's OK to do so.
Updates this jar with cookies from another CookieJar or dict-like object.
Dict-like values() that returns a list of values of cookies from the jar.
See also
keys() and items().
-
class
requests.cookies.
CookieConflictError
[源代码]¶ There are two cookies that meet the criteria specified in the cookie jar. Use .get and .set and include domain and path args in order to be more specific.
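A short sketch of the dict-like jar interface with the domain and path arguments described above (the example.com/example.org names are placeholders):

```python
# get() and set() take optional domain/path arguments to disambiguate
# cookies that share a name across multiple domains.
from requests.cookies import RequestsCookieJar

jar = RequestsCookieJar()
jar.set('token', 'abc', domain='example.com', path='/api')
jar.set('token', 'xyz', domain='example.org', path='/')
print(jar.get('token', domain='example.com', path='/api'))  # 'abc'
print(jar.get_dict(domain='example.org'))  # {'token': 'xyz'}
```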
Status Code Lookup¶
-
requests.
codes
¶
>>> requests.codes['temporary_redirect']
307
>>> requests.codes.teapot
418
>>> requests.codes['\o/']
200
-
class
requests.
Request
(method=None, url=None, headers=None, files=None, data=None, params=None, auth=None, cookies=None, hooks=None, json=None)[源代码] A user-created
Request
object.Used to prepare a
PreparedRequest
, which is sent to the server.
Parameters:
- method -- HTTP method to use.
- url -- URL to send.
- headers -- dictionary of headers to send.
- files -- dictionary of {filename: fileobject} files to multipart upload.
- data -- the body to attach to the request. If a dictionary is provided, form-encoding will take place.
- json -- json for the body to attach to the request (if files or data is not specified).
- params -- dictionary of URL parameters to append to the URL.
- auth -- Auth handler or (user, pass) tuple.
- cookies -- dictionary or CookieJar of cookies to attach to this request.
- hooks -- dictionary of callback hooks, for internal usage.
Usage:
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> req.prepare()
<PreparedRequest [GET]>
-
deregister_hook
(event, hook) Deregister a previously registered hook. Returns True if the hook existed, False if not.
-
prepare
()[源代码] Constructs a
PreparedRequest
for transmission and returns it.
-
register_hook
(event, hook) Properly register a hook.
-
class
requests.
Response
[源代码] The
Response
object, which contains a server's response to an HTTP request.-
apparent_encoding
The apparent encoding, provided by the chardet library.
-
close
()[源代码] Releases the connection back to the pool. Once this method has been called the underlying
raw
object must not be accessed again.Note: Should not normally need to be called explicitly.
-
content
Content of the response, in bytes.
-
cookies
= None A CookieJar of Cookies the server sent back.
-
elapsed
= None The amount of time elapsed between sending the request and the arrival of the response (as a timedelta). This property specifically measures the time taken between sending the first byte of the request and finishing parsing the headers. It is therefore unaffected by consuming the response content or the value of the
stream
keyword argument.
-
encoding
= None Encoding to decode with when accessing r.text.
-
headers
= None Case-insensitive Dictionary of Response Headers. For example,
headers['content-encoding']
will return the value of a'Content-Encoding'
response header.
-
history
= None A list of
Response
objects from the history of the Request. Any redirect responses will end up here. The list is sorted from the oldest to the most recent request.
-
is_permanent_redirect
True if this Response is one of the permanent versions of redirect.
-
is_redirect
True if this Response is a well-formed HTTP redirect that could have been processed automatically (by
Session.resolve_redirects
).
-
iter_content
(chunk_size=1, decode_unicode=False)[源代码] Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory. This is not necessarily the length of each item returned as decoding can take place.
chunk_size must be of type int or None. A value of None will function differently depending on the value of stream. stream=True will read data as it arrives in whatever size the chunks are received. If stream=False, data is returned as a single chunk.
If decode_unicode is True, content will be decoded using the best available encoding based on the response.
-
iter_lines
(chunk_size=512, decode_unicode=None, delimiter=None)[源代码] Iterates over the response data, one line at a time. When stream=True is set on the request, this avoids reading the content at once into memory for large responses.
Note
This method is not reentrant safe.
-
json
(**kwargs)[源代码] Returns the json-encoded content of a response, if any.
Parameters: **kwargs -- Optional arguments that
json.loads
takes.
Raises: ValueError -- If the response body does not contain valid json.
-
links
Returns the parsed header links of the response, if any.
-
next
Returns a PreparedRequest for the next request in a redirect chain, if there is one.
-
ok
Returns True if
status_code
is less than 400. This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If the status code is between 200 and 400, this will return True. This is not a check to see if the response code is
200 OK
.
-
raw
= None File-like object representation of response (for advanced usage). Use of
raw
requires thatstream=True
be set on the request.
-
reason
= None Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".
-
request
= None The
PreparedRequest
object to which this is a response.
-
status_code
= None Integer Code of responded HTTP Status, e.g. 404 or 200.
-
text
Content of the response, in unicode.
If Response.encoding is None, encoding will be guessed using
chardet
.The encoding of the response content is determined based solely on HTTP headers, following RFC 2616 to the letter. If you can take advantage of non-HTTP knowledge to make a better guess at the encoding, you should set
r.encoding
appropriately before accessing this property.
-
url
= None Final URL location of Response.
-
Migrating to 1.x¶
This section details the main differences between 0.x and 1.x, to ease the pain of upgrading.
API Changes¶
Response.json
is now callable and is no longer a property of the response body.
import requests

r = requests.get('https://github.com/timeline.json')
r.json()  # This call raises an exception if JSON decoding fails.
Session
API has also changed. Session objects no longer take parameters. Session
is now capitalized, but for backwards compatibility it can still be instantiated with the lowercase
session
.
s = requests.Session()  # Sessions used to take parameters
s.auth = auth
s.headers.update(headers)
r = s.get('http://httpbin.org/headers')
All request hooks have been removed except 'response'.
Authentication helpers have been broken out into separate modules. See requests-oauthlib and requests-kerberos.
The parameter for streaming requests has changed from
prefetch
to
stream
, and the logic has been inverted. In addition,
stream
is now required to read the raw response.
# In 0.x, passing prefetch=False gave the same result
r = requests.get('https://github.com/timeline.json', stream=True)
for chunk in r.iter_content(8192):
    ...
The
config
parameter has been removed from all requests methods. These options are now configured on a
Session
, for example keep-alive and the maximum number of redirects. The verbosity option should be handled by configuring logging.
import requests
import logging

# Enable debugging at http.client level (requests->urllib3->http.client)
# You will see the REQUEST, including HEADERS and DATA, and the RESPONSE
# with HEADERS but without DATA. The only thing missing is the
# response.body, which is not logged.
try:  # for Python 3
    from http.client import HTTPConnection
except ImportError:
    from httplib import HTTPConnection
HTTPConnection.debuglevel = 1

logging.basicConfig()  # Initialize logging, otherwise you won't see any requests output.
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

requests.get('http://httpbin.org/headers')
Licensing¶
One key difference unrelated to the API is that the open-source license changed from the ISC license to the Apache 2.0 license. The Apache 2.0 license ensures that contributions to Requests are also covered under the Apache 2.0 license.
Migrating to 2.x¶
Compared with the 1.0 release, there were fairly few backwards-incompatible changes, but there are still a few issues to be aware of in this major release.
For more details on the changes, including the API, the relevant GitHub issues, and some of the bug fixes, see the related posts on Cory's blog.
API Changes¶
There have been some changes to how Requests handles exceptions.
RequestException
is now a subclass of
IOError
rather than
RuntimeError
, which more accurately categorizes these errors. In addition, an invalid URL escape sequence now raises a subclass of
RequestException
rather than a
ValueError
.
requests.get('http://%zz/')  # raises requests.exceptions.InvalidURL
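The exception hierarchy change (RequestException now under IOError) can be verified directly; a minimal offline sketch:

```python
import requests

# RequestException now derives from IOError rather than RuntimeError,
# and InvalidURL is one of its subclasses.
assert issubclass(requests.exceptions.RequestException, IOError)
assert not issubclass(requests.exceptions.RequestException, RuntimeError)
assert issubclass(requests.exceptions.InvalidURL, requests.exceptions.RequestException)
print('exception hierarchy checks passed')
```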
Lastly,
httplib.IncompleteRead
exceptions caused by incorrect chunked encoding will now raise a Requests
ChunkedEncodingError
instead.
The proxy API has changed slightly. The scheme for a proxy URL is now required.
proxies = {
    "http": "10.10.1.10:3128",    # use http://10.10.1.10:3128 instead
}
# Legal in requests 1.x; in requests 2.x,
# this raises requests.exceptions.MissingSchema
requests.get("http://example.org", proxies=proxies)
Behavioural Changes¶
headers
dictionary keys are now native strings on all versions of Python; that is, bytestrings on Python 2 and unicode on Python 3. If the keys are not native strings (unicode on Python 2, or bytestrings on Python 3), they are converted to the native string type assuming UTF-8 encoding.
headers
dictionary values should always be strings. This has been the project's position since before 1.0, but it only recently (after v2.11.0) became a strict requirement. Where possible, avoid passing header values as unicode.
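A small sketch of the key conversion on Python 3 (the header name and httpbin.org URL are illustrative; the request is only prepared, never sent):

```python
# A bytestring header key is converted to a native str when the request
# is prepared, and remains case-insensitively addressable.
import requests

req = requests.Request('GET', 'http://httpbin.org/get',
                       headers={b'X-Example': 'value'})
p = req.prepare()
print('X-Example' in p.headers)  # True
```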
Contribution Guide¶
If you want to contribute to the project, this part of the documentation is for you.
Contributor's Guide¶
If you're reading this, you're probably interested in contributing to Requests. Thank you very much! Open source projects live-and-die based on the support they receive from others, and the fact that you're even considering contributing to the Requests project is very generous of you.
This document lays out guidelines and advice for contributing to this project. If you're thinking of contributing, please start by reading this document and getting a feel for how contributing to this project works. If you have any questions, feel free to reach out to either Ian Cordasco or Cory Benfield, the primary maintainers.
If you have non-technical feedback, philosophical ponderings, crazy ideas, or other general thoughts about Requests or its position within the Python ecosystem, the BDFL, Kenneth Reitz, would love to hear from you.
The guide is split into sections based on the type of contribution you're thinking of making, with a section that covers general guidelines for all contributors.
Be Cordial¶
Be cordial or be on your way. —Kenneth Reitz
Requests has one very important rule governing all forms of contribution, including reporting bugs or requesting features. This golden rule is "be cordial or be on your way".
All contributions are welcome, as long as everyone involved is treated with respect.
Get Early Feedback¶
If you are contributing, do not feel the need to sit on your contribution until it is perfectly polished and complete. It helps everyone involved for you to seek feedback as early as you possibly can. Submitting an early, unfinished version of your contribution for feedback in no way prejudices your chances of getting that contribution accepted, and can save you from putting a lot of work into a contribution that is not suitable for the project.
Contribution Suitability¶
Our project maintainers have the last word on whether or not a contribution is suitable for Requests. All contributions will be considered carefully, but from time to time, contributions will be rejected because they do not suit the current goals or needs of the project.
If your contribution is rejected, don't despair! As long as you followed these guidelines, you will have a much better chance of getting your next contribution accepted.
Code Contributions¶
Steps for Submitting Code¶
When contributing code, you'll want to follow this checklist:
- Fork the repository on GitHub.
- Run the tests to confirm they all pass on your system. If they don't, you'll need to investigate why they fail. If you're unable to diagnose this yourself, raise it as a bug report by following the guidelines in this document: Bug Reports.
- Write tests that demonstrate your bug or feature. Ensure that they fail.
- Make your change.
- Run the entire test suite again, confirming that all tests pass including the ones you just added.
- Send a GitHub Pull Request to the main repository's
master
branch. GitHub Pull Requests are the expected method of code collaboration on this project.
The following sub-sections go into more detail on some of the points above.
Code Review¶
Contributions will not be merged until they've been code reviewed. You should implement any code review feedback unless you strongly object to it. In the event that you object to the code review feedback, you should make your case clearly and calmly. If, after doing so, the feedback is judged to still apply, you must either apply the feedback or withdraw your contribution.
New Contributors¶
If you are new or relatively new to Open Source, welcome! Requests aims to be a gentle introduction to the world of Open Source. If you're concerned about how best to contribute, please consider mailing a maintainer (listed above) and asking for help.
Please also check the Get Early Feedback section.
Kenneth Reitz's Code Style™¶
The Requests codebase uses the PEP 8 code style.
In addition to the standards outlined in PEP 8, we have a few guidelines:
- Line-length can exceed 79 characters, to 100, when convenient.
- Line-length can exceed 100 characters, when doing otherwise would be terribly inconvenient.
- Always use single-quoted strings (e.g.
'#flatearth'
), unless a single-quote occurs within the string.
Additionally, one of the styles that PEP8 recommends for line continuations completely lacks all sense of taste, and is not to be permitted within the Requests codebase:
# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
var_three, var_four)
No. Just don't. Please.
Docstrings are to follow the following syntaxes:
def the_earth_is_flat():
"""NASA divided up the seas into thirty-three degrees."""
pass
def fibonacci_spiral_tool():
"""With my feet upon the ground I lose myself / between the sounds
and open wide to suck it in. / I feel it move across my skin. / I'm
reaching up and reaching out. / I'm reaching for the random or
whatever will bewilder me. / Whatever will bewilder me. / And
following our will and wind we may just go where no one's been. /
We'll ride the spiral to the end and may just go where no one's
been.
Spiral out. Keep going...
"""
pass
All functions, methods, and classes are to contain docstrings. Object data
model methods (e.g. __repr__
) are typically the exception to this rule.
Thanks for helping to make the world a better place!
Documentation Contributions¶
Documentation improvements are always welcome! The documentation files live in
the docs/
directory of the codebase. They're written in
reStructuredText, and use Sphinx to generate the full suite of
documentation.
When contributing documentation, please do your best to follow the style of the documentation files. This means a soft-limit of 79 characters wide in your text files and a semi-formal, yet friendly and approachable, prose style.
When presenting Python code, use single-quoted strings ('hello'
instead of
"hello"
).
Bug Reports¶
Bug reports are hugely important! Before you raise one, though, please check through the GitHub issues, both open and closed, to confirm that the bug hasn't been reported before. Duplicate bug reports are a huge drain on the time of other contributors, and should be avoided as much as possible.
Feature Requests¶
Requests is in a perpetual feature freeze; only the BDFL can add or approve new features. The maintainers believe that Requests is a feature-complete piece of software at this time.
One of the most important skills to have while maintaining a largely-used open source project is learning the ability to say "no" to suggested changes, while keeping an open ear and mind.
If you believe there is a feature missing, feel free to raise a feature request, but please do be aware that the overwhelming likelihood is that your feature request will not be accepted.
Development Philosophy¶
Requests is an open but opinionated library, created by an open but opinionated developer.
Management Style¶
Kenneth Reitz is the BDFL. He has final say in any decision related to the Requests project. Kenneth is responsible for the direction and form of the library. In addition to making decisions based on technical merit, he is responsible for making decisions based on the development philosophy of Requests. Only Kenneth may merge code into Requests.
Ian Cordasco and Cory Benfield are the core contributors. They are responsible for triaging bug reports, reviewing pull requests and ensuring that Kenneth is kept up to speed with developments around the library. The day-to-day managing of the project is done by the core contributors. They are responsible for making judgements about whether or not a feature request is likely to be accepted by Kenneth. They do not have the authority to change code or merge code changes, though they may change documentation. Their word is not final.
Values¶
- Simplicity is always better than functionality.
- Listen to everyone, then disregard it.
- The API is all that matters. Everything else is secondary.
- Fit the 90% use-case. Ignore the nay-sayers.
Semantic Versioning¶
For many years, the open source community has been plagued with version number dystonia. Numbers vary so greatly from project to project, they are practically meaningless.
Requests uses Semantic Versioning. This specification seeks to put an end to this madness with a small set of practical guidelines for you and your colleagues to use in your next project.
Standard Library?¶
Requests has no active plans to be included in the standard library. This decision has been discussed at length with Guido as well as numerous core developers.
Essentially, the standard library is where a library goes to die. It is appropriate for a module to be included when active development is no longer necessary.
Linux Distro Packages¶
Distributions have been made for many Linux repositories, including: Ubuntu, Debian, RHEL, and Arch.
These distributions are sometimes divergent forks, or are otherwise not kept up-to-date with the latest code and bugfixes. PyPI (and its mirrors) and GitHub are the official distribution sources; alternatives are not supported by the Requests project.
How to Help¶
Requests is under active development, and contributions are more than welcome!
- Check for open issues or open a fresh issue to start a discussion around a bug. There is a Contributor Friendly tag for issues that should be ideal for people who are not very familiar with the codebase yet.
- Fork the repository on GitHub and start making your changes to a new branch.
- Write a test which shows that the bug was fixed.
- Send a pull request and bug the maintainer until it gets merged and published. :) Make sure to add yourself to AUTHORS.
Feature Freeze¶
As of v1.0.0, Requests has now entered a feature freeze. Requests for new features and Pull Requests implementing those features will not be accepted.
Development Dependencies¶
You'll need to install py.test in order to run the Requests test suite:
$ pip install -r requirements.txt
$ py.test
platform darwin -- Python 2.7.3 -- pytest-2.3.4
collected 25 items
test_requests.py .........................
25 passed in 3.50 seconds
Runtime Environments¶
Requests currently supports the following versions of Python:
- Python 2.6
- Python 2.7
- Python 3.3
- Python 3.4
- Python 3.5
- PyPy 1.9
Google AppEngine is not officially supported, although support is available with the Requests-Toolbelt.
Are you crazy?¶
- SPDY support would be awesome. No C extensions.
Downstream Repackaging¶
If you are repackaging Requests, please note that you must also redistribute the cacerts.pem
file in order to get correct SSL functionality.
Contributors¶
Requests is written and maintained by Kenneth Reitz and various contributors:
Keepers of the Four Crystals¶
- Kenneth Reitz <me@kennethreitz.org> @kennethreitz, Keeper of the Master Crystal.
- Cory Benfield <cory@lukasa.co.uk> @lukasa
- Ian Cordasco <graffatcolmingov@gmail.com> @sigmavirus24
- Nate Prewitt <nate.prewitt@gmail.com> @nateprewitt
Urllib3¶
- Andrey Petrov <andrey.petrov@shazow.net>
Patches and Suggestions¶
- Various Pocoo Members
- Chris Adams
- Flavio Percoco Premoli
- Dj Gilcrease
- Justin Murphy
- Rob Madole
- Aram Dulyan
- Johannes Gorset
- 村山めがね (Megane Murayama)
- James Rowe
- Daniel Schauenberg
- Zbigniew Siciarz
- Daniele Tricoli 'Eriol'
- Richard Boulton
- Miguel Olivares <miguel@moliware.com>
- Alberto Paro
- Jérémy Bethmont
- 潘旭 (Xu Pan)
- Tamás Gulácsi
- Rubén Abad
- Peter Manser
- Jeremy Selier
- Jens Diemer
- Alex (@alopatin)
- Tom Hogans <tomhsx@gmail.com>
- Armin Ronacher
- Shrikant Sharat Kandula
- Mikko Ohtamaa
- Den Shabalin
- Daniel Miller <danielm@vs-networks.com>
- Alejandro Giacometti
- Rick Mak
- Johan Bergström
- Josselin Jacquard
- Travis N. Vaught
- Fredrik Möllerstrand
- Daniel Hengeveld
- Dan Head
- Bruno Renié
- David Fischer
- Joseph McCullough
- Juergen Brendel
- Juan Riaza
- Ryan Kelly
- Rolando Espinoza La fuente
- Robert Gieseke
- Idan Gazit
- Ed Summers
- Chris Van Horne
- Christopher Davis
- Ori Livneh
- Jason Emerick
- Bryan Helmig
- Jonas Obrist
- Lucian Ursu
- Tom Moertel
- Frank Kumro Jr
- Chase Sterling
- Marty Alchin
- takluyver
- Ben Toews (@mastahyeti)
- David Kemp
- Brendon Crawford
- Denis (@Telofy)
- Matt Giuca
- Adam Tauber
- Honza Javorek
- Brendan Maguire <maguire.brendan@gmail.com>
- Chris Dary
- Danver Braganza <danverbraganza@gmail.com>
- Max Countryman
- Nick Chadwick
- Jonathan Drosdeck
- Jiri Machalek
- Steve Pulec
- Michael Kelly
- Michael Newman <newmaniese@gmail.com>
- Jonty Wareing <jonty@jonty.co.uk>
- Shivaram Lingamneni
- Miguel Turner
- Rohan Jain (@crodjer)
- Justin Barber <barber.justin@gmail.com>
- Roman Haritonov (@reclosedev)
- Josh Imhoff <joshimhoff13@gmail.com>
- Arup Malakar <amalakar@gmail.com>
- Danilo Bargen (@dbrgn)
- Torsten Landschoff
- Michael Holler (@apotheos)
- Timnit Gebru
- Sarah Gonzalez
- Victoria Mo
- Leila Muhtasib
- Matthias Rahlf <matthias@webding.de>
- Jakub Roztocil <jakub@roztocil.name>
- Rhys Elsmore
- André Graf (@dergraf)
- Stephen Zhuang (@everbird)
- Martijn Pieters
- Jonatan Heyman
- David Bonner <dbonner@gmail.com> (@rascalking)
- Vinod Chandru
- Johnny Goodnow <j.goodnow29@gmail.com>
- Denis Ryzhkov <denisr@denisr.com>
- Wilfred Hughes <me@wilfred.me.uk>
- Dmitry Medvinsky <me@dmedvinsky.name>
- Bryce Boe <bbzbryce@gmail.com> (@bboe)
- Colin Dunklau <colin.dunklau@gmail.com> (@cdunklau)
- Bob Carroll <bob.carroll@alum.rit.edu> (@rcarz)
- Hugo Osvaldo Barrera <hugo@osvaldobarrera.com.ar> (@hobarrera)
- Łukasz Langa <lukasz@langa.pl>
- Dave Shawley <daveshawley@gmail.com>
- James Clarke (@jam)
- Kevin Burke <kev@inburke.com>
- Flavio Curella
- David Pursehouse <david.pursehouse@gmail.com> (@dpursehouse)
- Jon Parise (@jparise)
- Alexander Karpinsky (@homm86)
- Marc Schlaich (@schlamar)
- Park Ilsu <daftonshady@gmail.com> (@daftshady)
- Matt Spitz (@mattspitz)
- Vikram Oberoi (@voberoi)
- Can Ibanoglu <can.ibanoglu@gmail.com> (@canibanoglu)
- Thomas Weißschuh <thomas@t-8ch.de> (@t-8ch)
- Jayson Vantuyl <jayson@aggressive.ly>
- Pengfei.X <pengphy@gmail.com>
- Kamil Madac <kamil.madac@gmail.com>
- Michael Becker <mike@beckerfuffle.com> (@beckerfuffle)
- Erik Wickstrom <erik@erikwickstrom.com> (@erikwickstrom)
- Константин Подшумок (@podshumok)
- Ben Bass (@codedstructure)
- Jonathan Wong <evolutionace@gmail.com> (@ContinuousFunction)
- Martin Jul (@mjul)
- Joe Alcorn (@buttscicles)
- Syed Suhail Ahmed <ssuhail.ahmed93@gmail.com> (@syedsuhail)
- Scott Sadler (@ssadler)
- Arthur Darcet (@arthurdarcet)
- Ulrich Petri (@ulope)
- Muhammad Yasoob Ullah Khalid <yasoob.khld@gmail.com> (@yasoob)
- Paul van der Linden (@pvanderlinden)
- Colin Dickson (@colindickson)
- Smiley Barry (@smiley)
- Shagun Sodhani (@shagunsodhani)
- Robin Linderborg (@vienno)
- Brian Samek (@bsamek)
- Dmitry Dygalo (@Stranger6667)
- piotrjurkiewicz
- Jesse Shapiro <jesse@jesseshapiro.net> (@haikuginger)
- Nate Prewitt <nate.prewitt@gmail.com> (@nateprewitt)
- Maik Himstedt
- Michael Hunsinger
- Brian Bamsch <bbamsch32@gmail.com> (@bbamsch)
- Om Prakash Kumar <omprakash070@gmail.com> (@iamprakashom)
- Philipp Konrad <gardiac2002@gmail.com> (@gardiac2002)
- Hussain Tamboli <hussaintamboli18@gmail.com> (@hussaintamboli)
- Casey Davidson (@davidsoncasey)
- Andrii Soldatenko (@a_soldatenko)
- Moinuddin Quadri <moin18@gmail.com> (@moin18)
- Matt Kohl (@mattkohl)
- Jonathan Vanasco (@jvanasco)
- David Fontenot (@davidfontenot)
- Shmuel Amar (@shmuelamar)
- Gary Wu (@garywu)
- Ryan Pineo (@ryanpineo)
- Ed Morley (@edmorley)
- Matt Liu <liumatt@gmail.com> (@mlcrazy)
There are no more guides. You're on your own now.
Good luck.