Commit e430a818 authored by Sebastian Ramacher's avatar Sebastian Ramacher

New upstream version 0.9.8

parent db7b6c7a
Metadata-Version: 1.1
Name: requests-futures
Version: 0.9.7
Version: 0.9.8
Summary: Asynchronous Python HTTP for Humans.
Home-page: https://github.com/ross/requests-futures
Author: Ross McFarland
......@@ -90,6 +90,25 @@ Description: Asynchronous Python HTTP Requests for Humans
are shifted (thrown) to the future.result() call so try/except blocks should be
moved there.
Canceling queued requests (a.k.a. cleaning up after yourself)
=============================================================
If you know that you won't be needing any additional responses from futures that
haven't yet resolved, it's a good idea to cancel those requests. You can do this
by using the session as a context manager:
.. code-block:: python
from requests_futures.sessions import FuturesSession
with FuturesSession(max_workers=1) as session:
future = session.get('https://httpbin.org/get')
future2 = session.get('https://httpbin.org/delay/10')
future3 = session.get('https://httpbin.org/delay/10')
response = future.result()
In this example, the second or third request will be skipped, saving time and
resources that would otherwise be wasted.
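The cancellation mechanics described above can be sketched with the standard library alone (this is an illustrative stdlib-only sketch, not requests-futures itself): with a single worker, a future still sitting in the queue behind a slow task can be cancelled before it ever starts.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# With max_workers=1, the first task occupies the only worker, so the
# second submission stays queued and can still be cancelled.
with ThreadPoolExecutor(max_workers=1) as pool:
    slow = pool.submit(time.sleep, 0.5)   # occupies the only worker
    queued = pool.submit(time.sleep, 30)  # waits in the queue
    was_cancelled = queued.cancel()       # succeeds: it had not started yet
```

A `Future` can only be cancelled while it is still pending; once a worker has picked it up, `cancel()` returns `False`.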
Working in the Background
=========================
......@@ -117,6 +136,68 @@ Description: Asynchronous Python HTTP Requests for Humans
pprint(response.data)
Using ProcessPoolExecutor
=========================
Similar to `ThreadPoolExecutor`, it is possible to use an instance of
`ProcessPoolExecutor`. As the name suggests, the requests will be executed
concurrently in separate processes rather than threads.
.. code-block:: python
from concurrent.futures import ProcessPoolExecutor
from requests_futures.sessions import FuturesSession
session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10))
# ... use as before
.. HINT::
Using the `ProcessPoolExecutor` is useful in cases where memory
usage per request is very high (large responses) and recycling the
interpreter is required to release memory back to the OS.
A base requirement for using `ProcessPoolExecutor` is that `Session.request`,
`FuturesSession` and the (optional) `background_callback` all be picklable.
This means that only Python 3.5+ is fully supported, while Python 3.4
REQUIRES an existing `requests.Session` instance to be passed when
initializing `FuturesSession`. Python 2.X and < 3.4 are currently not
supported.
.. code-block:: python
# Using python 3.4
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession
session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
session=Session())
# ... use as before
If pickling fails, an exception is raised that points to this documentation.
.. code-block:: python
# Using python 2.7
from concurrent.futures import ProcessPoolExecutor
from requests import Session
from requests_futures.sessions import FuturesSession
session = FuturesSession(executor=ProcessPoolExecutor(max_workers=10),
session=Session())
Traceback (most recent call last):
...
RuntimeError: Cannot pickle function. Refer to documentation: https://github.com/ross/requests-futures/#using-processpoolexecutor
.. IMPORTANT::
* Python >= 3.4 required
* A session instance is required when using Python < 3.5
* If subclassing `FuturesSession`, the subclass must be importable (module global)
* If using `background_callback`, it too must be importable (module global)
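The "must be importable (module global)" rule comes down to picklability, and can be demonstrated with the standard library alone (the names below are illustrative, not part of the library's API): a module-level function pickles by its qualified name, while an anonymous function does not.

```python
from pickle import dumps, PicklingError

def module_level_callback(session, response):
    # A module-level function: pickle records it by qualified name,
    # so a worker process can re-import it.
    return response

payload = dumps(module_level_callback)  # fine: importable as a module global

try:
    # An anonymous function cannot be looked up by name, so pickling it
    # fails -- the situation the library's RuntimeError guards against.
    dumps(lambda s, r: r)
    lambda_picklable = True
except (PicklingError, AttributeError, TypeError):
    lambda_picklable = False
```

This mirrors the check the library performs before submitting to a `ProcessPoolExecutor`, which likewise catches `PickleError`/`TypeError` and re-raises a `RuntimeError` pointing at this section.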
Installation
============
......@@ -134,7 +215,6 @@ Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
......@@ -12,7 +12,7 @@ async requests HTTP library
import logging
__title__ = 'requests-futures'
__version__ = '0.9.7'
__version__ = '0.9.8'
__build__ = 0x000000
__author__ = 'Ross McFarland'
__license__ = 'Apache 2.0'
......
......@@ -19,36 +19,53 @@ releases of python.
print(response.content)
"""
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from functools import partial
from pickle import dumps, PickleError
from concurrent.futures import ThreadPoolExecutor
from requests import Session
from requests.adapters import DEFAULT_POOLSIZE, HTTPAdapter
def wrap(self, sup, background_callback, *args_, **kwargs_):
""" A global top-level is required for ProcessPoolExecutor """
resp = sup(*args_, **kwargs_)
return background_callback(self, resp) or resp
PICKLE_ERROR = ('Cannot pickle function. Refer to documentation: https://'
'github.com/ross/requests-futures/#using-processpoolexecutor')
class FuturesSession(Session):
def __init__(self, executor=None, max_workers=2, session=None, *args,
**kwargs):
def __init__(self, executor=None, max_workers=2, session=None,
adapter_kwargs=None, *args, **kwargs):
"""Creates a FuturesSession
Notes
~~~~~
* ProcessPoolExecutor is not supported b/c Response objects are
not picklable.
* `ProcessPoolExecutor` may be used with Python > 3.4;
see README for more information.
* If you provide both `executor` and `max_workers`, the latter is
ignored and provided executor is used as is.
"""
_adapter_kwargs = {}
super(FuturesSession, self).__init__(*args, **kwargs)
self._owned_executor = executor is None
if executor is None:
executor = ThreadPoolExecutor(max_workers=max_workers)
# set connection pool size equal to max_workers if needed
if max_workers > DEFAULT_POOLSIZE:
adapter_kwargs = dict(pool_connections=max_workers,
pool_maxsize=max_workers)
self.mount('https://', HTTPAdapter(**adapter_kwargs))
self.mount('http://', HTTPAdapter(**adapter_kwargs))
_adapter_kwargs.update({'pool_connections': max_workers,
'pool_maxsize': max_workers})
_adapter_kwargs.update(adapter_kwargs or {})
if _adapter_kwargs:
self.mount('https://', HTTPAdapter(**_adapter_kwargs))
self.mount('http://', HTTPAdapter(**_adapter_kwargs))
self.executor = executor
self.session = session
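The `adapter_kwargs` handling added in this hunk boils down to a dict merge, sketched here stdlib-only (the `DEFAULT_POOLSIZE` value is hardcoded to requests' documented default of 10; the function name is illustrative): pool sizing is derived from `max_workers` first, then caller-supplied adapter kwargs are layered on top, so explicit values win.

```python
DEFAULT_POOLSIZE = 10  # requests' default connection pool size, assumed here

def merged_adapter_kwargs(max_workers, adapter_kwargs=None):
    """Merge derived pool sizing with caller-supplied HTTPAdapter kwargs."""
    merged = {}
    # Grow the pool only when there are more workers than default slots.
    if max_workers > DEFAULT_POOLSIZE:
        merged.update({'pool_connections': max_workers,
                       'pool_maxsize': max_workers})
    # Caller-supplied kwargs override the derived values.
    merged.update(adapter_kwargs or {})
    return merged
```

An empty result means no adapter needs to be mounted at all, which matches the `if _adapter_kwargs:` guard above.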
......@@ -61,25 +78,30 @@ class FuturesSession(Session):
The background_callback param allows you to do some processing on the
response in the background, e.g. call resp.json() so that json parsing
happens in the background thread.
:rtype : concurrent.futures.Future
"""
if self.session:
func = sup = self.session.request
func = self.session.request
else:
func = sup = super(FuturesSession, self).request
# avoid calling super to not break pickled method
func = partial(Session.request, self)
background_callback = kwargs.pop('background_callback', None)
if background_callback:
def wrap(*args_, **kwargs_):
resp = sup(*args_, **kwargs_)
background_callback(self, resp)
return resp
func = partial(wrap, self, func, background_callback)
func = wrap
if isinstance(self.executor, ProcessPoolExecutor):
# verify function can be pickled
try:
dumps(func)
except (TypeError, PickleError):
raise RuntimeError(PICKLE_ERROR)
return self.executor.submit(func, *args, **kwargs)
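The module-level `wrap` + `functools.partial` pattern used above can be sketched with the standard library (simplified signatures -- the real code passes the session and the response; all names here are illustrative): the worker runs the request function, then the callback, and the callback's return value, if truthy, replaces the result.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def wrap_sketch(work, callback, *args, **kwargs):
    """Module-level wrapper: run the work, then the callback, in the worker."""
    result = work(*args, **kwargs)
    # Mirrors `background_callback(self, resp) or resp`: a truthy callback
    # return value replaces the original result.
    return callback(result) or result

def double(x):
    return x * 2

def plus_one(r):
    return r + 1

with ThreadPoolExecutor(max_workers=1) as pool:
    # partial() pre-binds the work function and callback, exactly as the
    # session pre-binds itself, the request method, and the callback.
    fut = pool.submit(partial(wrap_sketch, double, plus_one), 21)
value = fut.result()
```

Keeping the wrapper at module level (rather than a closure inside `request`) is what makes it picklable for `ProcessPoolExecutor`.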
def __enter__(self):
return self
def close(self):
super(FuturesSession, self).close()
if self._owned_executor:
self.executor.shutdown()
def __exit__(self, type, value, traceback):
self.executor.shutdown()
......@@ -48,7 +48,6 @@ setup(
'Programming Language :: Python',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
......
......@@ -3,11 +3,18 @@
"""Tests for Requests."""
from concurrent.futures import Future
from requests import Response, session
from concurrent.futures import Future, ProcessPoolExecutor
from os import environ
from sys import version_info
try:
from sys import pypy_version_info
except ImportError:
pypy_version_info = None
from unittest import TestCase, main, skipIf
from requests import Response, session
from requests.adapters import DEFAULT_POOLSIZE
from requests_futures.sessions import FuturesSession
from unittest import TestCase, main
HTTPBIN = environ.get('HTTPBIN_URL', 'http://httpbin.org/')
......@@ -79,6 +86,21 @@ class RequestsTestCase(TestCase):
max_workers=5)
self.assertEqual(session.executor._max_workers, 10)
def test_adapter_kwargs(self):
""" Tests the `adapter_kwargs` shortcut. """
from concurrent.futures import ThreadPoolExecutor
session = FuturesSession()
self.assertFalse(session.get_adapter('http://')._pool_block)
session = FuturesSession(max_workers=DEFAULT_POOLSIZE + 1,
adapter_kwargs={'pool_block': True})
adapter = session.get_adapter('http://')
self.assertTrue(adapter._pool_block)
self.assertEqual(adapter._pool_connections, DEFAULT_POOLSIZE + 1)
self.assertEqual(adapter._pool_maxsize, DEFAULT_POOLSIZE + 1)
session = FuturesSession(executor=ThreadPoolExecutor(max_workers=10),
adapter_kwargs={'pool_connections': 20})
self.assertEqual(session.get_adapter('http://')._pool_connections, 20)
def test_redirect(self):
""" Tests for the ability to cleanly handle redirects. """
sess = FuturesSession()
......@@ -117,5 +139,148 @@ class RequestsTestCase(TestCase):
self.assertTrue(passout._exit_called)
# << test process pool executor >>
# see discussion https://github.com/ross/requests-futures/issues/11
def global_cb_modify_response(s, r):
""" add the parsed json data to the response """
assert s, FuturesSession
assert r, Response
r.data = r.json()
r.__attrs__.append('data') # required for pickling new attribute
def global_cb_return_result(s, r):
""" simply return parsed json data """
assert s, FuturesSession
assert r, Response
return r.json()
def global_rasing_cb(s, r):
raise Exception('boom')
# pickling instance method supported only from here
unsupported_platform = version_info < (3, 4) and not pypy_version_info
session_required = version_info < (3, 5,) and not pypy_version_info
@skipIf(unsupported_platform, 'not supported in python < 3.4')
class RequestsProcessPoolTestCase(TestCase):
def setUp(self):
self.proc_executor = ProcessPoolExecutor(max_workers=2)
self.session = session()
@skipIf(session_required, 'not supported in python < 3.5')
def test_futures_session(self):
self._assert_futures_session()
@skipIf(not session_required, 'fully supported on python >= 3.5')
def test_exception_raised(self):
with self.assertRaises(RuntimeError):
self._assert_futures_session()
def test_futures_existing_session(self):
self.session.headers['Foo'] = 'bar'
self._assert_futures_session(session=self.session)
def _assert_futures_session(self, session=None):
# basic futures get
if session:
sess = FuturesSession(executor=self.proc_executor, session=session)
else:
sess = FuturesSession(executor=self.proc_executor)
future = sess.get(httpbin('get'))
self.assertIsInstance(future, Future)
resp = future.result()
self.assertIsInstance(resp, Response)
self.assertEqual(200, resp.status_code)
# non-200, 404
future = sess.get(httpbin('status/404'))
resp = future.result()
self.assertEqual(404, resp.status_code)
future = sess.get(httpbin('get'),
background_callback=global_cb_modify_response)
# this should block until complete
resp = future.result()
if session:
self.assertEqual(resp.json()['headers']['Foo'], 'bar')
self.assertEqual(200, resp.status_code)
# make sure the callback was invoked
self.assertTrue(hasattr(resp, 'data'))
future = sess.get(httpbin('get'),
background_callback=global_cb_return_result)
# this should block until complete
resp = future.result()
# make sure the callback was invoked
self.assertIsInstance(resp, dict)
future = sess.get(httpbin('get'), background_callback=global_rasing_cb)
with self.assertRaises(Exception) as cm:
resp = future.result()
self.assertEqual('boom', cm.exception.args[0])
# Tests for the ability to cleanly handle redirects
future = sess.get(httpbin('redirect-to?url=get'))
self.assertIsInstance(future, Future)
resp = future.result()
self.assertIsInstance(resp, Response)
self.assertEqual(200, resp.status_code)
future = sess.get(httpbin('redirect-to?url=status/404'))
resp = future.result()
self.assertEqual(404, resp.status_code)
@skipIf(session_required, 'not supported in python < 3.5')
def test_context(self):
self._assert_context()
def test_context_with_session(self):
self._assert_context(session=self.session)
def _assert_context(self, session=None):
if session:
helper_instance = TopLevelContextHelper(executor=self.proc_executor,
session=self.session)
else:
helper_instance = TopLevelContextHelper(executor=self.proc_executor)
passout = None
with helper_instance as sess:
passout = sess
future = sess.get(httpbin('get'))
self.assertIsInstance(future, Future)
resp = future.result()
self.assertIsInstance(resp, Response)
self.assertEqual(200, resp.status_code)
self.assertTrue(passout._exit_called)
class TopLevelContextHelper(FuturesSession):
def __init__(self, *args, **kwargs):
super(TopLevelContextHelper, self).__init__(
*args, **kwargs)
self._exit_called = False
def __exit__(self, *args, **kwargs):
self._exit_called = True
return super(TopLevelContextHelper, self).__exit__(
*args, **kwargs)
@skipIf(not unsupported_platform, 'Exception raised when unsupported')
class ProcessPoolExceptionRaisedTestCase(TestCase):
def test_exception_raised(self):
executor = ProcessPoolExecutor(max_workers=2)
sess = FuturesSession(executor=executor, session=session())
with self.assertRaises(RuntimeError):
sess.get(httpbin('get'))
if __name__ == '__main__':
main()