Commit a66f315a authored by Michael Fladischer's avatar Michael Fladischer

Import celery_3.1.20.orig.tar.gz

parent ecfab0e5
@@ -195,3 +195,5 @@ Piotr Maślanka, 2015/08/24
Gerald Manipon, 2015/10/19
Krzysztof Bujniewicz, 2015/10/21
Sukrit Khera, 2015/10/26
Dave Smith, 2015/10/27
Dennis Brakhane, 2015/10/30
@@ -8,6 +8,112 @@ This document contains change notes for bugfix releases in the 3.1.x series
(Cipater), please see :ref:`whatsnew-3.1` for an overview of what's
new in Celery 3.1.
.. _version-3.1.20:

3.1.20
======
- **Requirements**
- Now depends on :ref:`Kombu 3.0.33 <kombu:version-3.0.33>`.
- Now depends on :mod:`billiard` 3.3.0.22.
Includes binary wheels for Microsoft Windows x86 and x86_64!
- **Task**: Error emails now use the ``utf-8`` charset by default (Issue #2737).
- **Task**: Retry now forwards original message headers (Issue #3017).
- **Worker**: Bootsteps can now hook into ``on_node_join``/``leave``/``lost``.
See :ref:`extending-consumer-gossip` for an example.
- **Events**: Fixed handling of DST timezones (Issue #2983).
- **Results**: Redis backend stopped respecting certain settings.
Contributed by Jeremy Llewellyn.
- **Results**: Database backend now properly supports JSON exceptions
(Issue #2441).
- **Results**: Redis ``new_join`` did not properly call task errbacks on chord
error (Issue #2796).
- **Results**: Restores Redis compatibility with redis-py < 2.10.0
(Issue #2903).
- **Results**: Fixed rare issue with chord error handling (Issue #2409).
- **Tasks**: Using queue-name values in :setting:`CELERY_ROUTES` now works
again (Issue #2987).
- **General**: Result backend password now sanitized in report output
(Issue #2812, Issue #2004).
- **Configuration**: Now gives helpful error message when the result backend
configuration points to a module, and not a class (Issue #2945).
- **Results**: Exceptions sent by JSON serialized workers are now properly
handled by pickle configured workers.
- **Programs**: ``celery control autoscale`` now works (Issue #2950).
- **Programs**: ``celery beat --detached`` now runs after fork callbacks.
- **General**: Fix for LRU cache implementation on Python 3.5 (Issue #2897).
Contributed by Dennis Brakhane.
Python 3.5's ``OrderedDict`` does not allow mutation while it is being
iterated over. This breaks "update" if it is called with a dict
larger than the maximum size.
This commit changes the code to a version that does not iterate over
the dict, and should also be a little bit faster.
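The fix described above can be sketched as a size-bounded cache whose ``update`` never iterates over the dict while deleting from it. This is a simplified stand-in, not the real ``celery.utils.functional.LRUCache``:

```python
from collections import OrderedDict


class LRUCache(object):
    """Minimal sketch of a size-bounded cache (illustrative only)."""

    def __init__(self, limit=None):
        self.limit = limit
        self.data = OrderedDict()

    def update(self, *args, **kwargs):
        data, limit = self.data, self.limit
        data.update(*args, **kwargs)
        if limit and len(data) > limit:
            # Pop the oldest entries one at a time instead of iterating
            # over the dict while mutating it, which raises RuntimeError
            # on Python 3.5's OrderedDict.
            for _ in range(len(data) - limit):
                data.popitem(last=False)
```

With ``limit=2``, updating with a three-key dict keeps only the two most recently inserted keys.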
- **Init scripts**: The beat init script now properly reports service as down
when no pid file can be found.
Fix contributed by Eric Zarowny.
- **Beat**: Added cleaning of corrupted scheduler files for some storage
backend errors (Issue #2985).
Fix contributed by Aleksandr Kuznetsov.
- **Beat**: Now syncs the schedule even if the schedule is empty.
Fix contributed by Colin McIntosh.
- **Supervisord**: Set higher process priority in supervisord example.
Contributed by George Tantiras.
- **Documentation**: Includes improvements by:
Bryson
Caleb Mingle
Christopher Martin
Dieter Adriaenssens
Jason Veatch
Jeremy Cline
Juan Rossi
Kevin Harvey
Kevin McCarthy
Kirill Pavlov
Marco Buttu
Mayflower
Mher Movsisyan
Michael Floering
michael-k
Nathaniel Varona
Rudy Attias
Ryan Luckie
Steven Parker
squfrans
Tadej Janež
TakesxiSximada
Tom S
.. _version-3.1.19:

3.1.19
======
...
Copyright (c) 2015 Ask Solem & contributors. All rights reserved.
Copyright (c) 2012-2014 GoPivotal, Inc. All rights reserved.
Copyright (c) 2009, 2010, 2011, 2012 Ask Solem, and individual contributors. All rights reserved.
Celery is licensed under The BSD License (3 Clause, also known as
the new BSD license). The license is an OSI approved Open Source
@@ -39,9 +40,9 @@ Documentation License
The documentation portion of Celery (the rendered contents of the
"docs" directory of a software distribution or checkout) is supplied
under the "Creative Commons Attribution-ShareAlike 4.0
International" (CC BY-SA 4.0) License as described by
http://creativecommons.org/licenses/by-sa/4.0/
Footnotes
=========
...
Metadata-Version: 1.1
Name: celery
Version: 3.1.20
Summary: Distributed Task Queue
Home-page: http://celeryproject.org
Author: Ask Solem
@@ -12,7 +12,7 @@ Description: =================================
.. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
:Version: 3.1.20 (Cipater)
:Web: http://celeryproject.org/
:Download: http://pypi.python.org/pypi/celery/
:Source: http://github.com/celery/celery/
...
@@ -4,7 +4,7 @@
.. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
:Version: 3.1.20 (Cipater)
:Web: http://celeryproject.org/
:Download: http://pypi.python.org/pypi/celery/
:Source: http://github.com/celery/celery/
...
Metadata-Version: 1.1
Name: celery
Version: 3.1.20
Summary: Distributed Task Queue
Home-page: http://celeryproject.org
Author: Ask Solem
@@ -12,7 +12,7 @@ Description: =================================
.. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
:Version: 3.1.20 (Cipater)
:Web: http://celeryproject.org/
:Download: http://pypi.python.org/pypi/celery/
:Source: http://github.com/celery/celery/
...
pytz>dev
billiard>=3.3.0.22,<3.4
kombu>=3.0.33,<3.1
[zookeeper]
kazoo>=1.3.1
...
# -*- coding: utf-8 -*-
"""Distributed Task Queue"""
# :copyright: (c) 2015 Ask Solem and individual contributors.
#                 All rights reserved.
# :copyright: (c) 2012-2014 GoPivotal, Inc., All rights reserved.
# :copyright: (c) 2009 - 2012 Ask Solem and individual contributors,
#                 All rights reserved.
# :license: BSD (3 Clause), see LICENSE for more details.
from __future__ import absolute_import
@@ -17,7 +19,7 @@ version_info_t = namedtuple(
)
SERIES = 'Cipater'
VERSION = version_info_t(3, 1, 20, '', '')
__version__ = '{0.major}.{0.minor}.{0.micro}{0.releaselevel}'.format(VERSION)
__author__ = 'Ask Solem'
__contact__ = 'ask@celeryproject.org'
...
@@ -26,7 +26,7 @@ def add_backend_cleanup_task(app):
    backend.

    If the configured backend requires periodic cleanup this task is also
    automatically configured to run every day at 4am (requires
    :program:`celery beat` to be running).

    """
...
@@ -282,6 +282,15 @@ class Control(object):
        """
        return self.broadcast('pool_shrink', {'n': n}, destination, **kwargs)

    def autoscale(self, max, min, destination=None, **kwargs):
        """Change worker(s) autoscale setting.

        Supports the same arguments as :meth:`broadcast`.

        """
        return self.broadcast(
            'autoscale', {'max': max, 'min': min}, destination, **kwargs)

    def broadcast(self, command, arguments=None, destination=None,
                  connection=None, reply=False, timeout=1, limit=None,
                  callback=None, channel=None, **extra_kwargs):
...
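The new ``autoscale`` method above is a thin wrapper over ``broadcast``. The delegation can be sketched with a stub that records messages instead of publishing them (``ControlSketch`` and its recording behavior are illustrative, not Celery's wire format):

```python
class ControlSketch(object):
    """Stub standing in for celery.app.control.Control (illustrative only)."""

    def __init__(self):
        self.sent = []

    def broadcast(self, command, arguments=None, destination=None, **kwargs):
        # The real method publishes to the worker broadcast exchange;
        # here we simply record what would be sent.
        self.sent.append((command, arguments, destination))
        return arguments

    def autoscale(self, max, min, destination=None, **kwargs):
        """Change worker(s) autoscale setting, mirroring the hunk above."""
        return self.broadcast(
            'autoscale', {'max': max, 'min': min}, destination, **kwargs)
```

This is also what makes ``celery control autoscale`` work again (Issue #2950): the command-line program ultimately calls this method.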
@@ -31,6 +31,8 @@ class MapRoute(object):
            return dict(self.map[task])
        except KeyError:
            pass
        except ValueError:
            return {'queue': self.map[task]}

class Router(object):
...
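The new ``ValueError`` branch is what restores bare queue-name values in :setting:`CELERY_ROUTES` (Issue #2987): ``dict()`` raises ``ValueError`` for a plain string, so the route falls back to ``{'queue': <name>}``. A self-contained sketch (the real class lives in ``celery.app.routes``; this simplified version returns ``None`` for unknown tasks):

```python
class MapRouteSketch(object):
    """Simplified MapRoute for illustration."""

    def __init__(self, map):
        self.map = map

    def route_for_task(self, task, *args, **kwargs):
        try:
            return dict(self.map[task])     # value is a dict of route options
        except KeyError:
            return None                     # unknown task: no opinion
        except ValueError:
            # A bare string is not a valid dict initializer, so treat
            # the value as a queue name (the behavior restored here).
            return {'queue': self.map[task]}
```

For example, ``MapRouteSketch({'tasks.add': 'math'}).route_for_task('tasks.add')`` yields ``{'queue': 'math'}``.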
@@ -527,13 +527,18 @@ class Task(object):
            if an error occurs while executing the task.

        :keyword producer: :class:`~@amqp.TaskProducer` instance to use.
        :keyword add_to_parent: If set to True (default) and the task
            is applied while executing another task, then the result
            will be appended to the parent task's ``request.children``
            attribute.  Trailing can also be disabled by default using the
            :attr:`trail` attribute
        :keyword publisher: Deprecated alias to ``producer``.
        :keyword headers: Message headers to be sent in the
            task (a :class:`dict`)

        :rtype :class:`celery.result.AsyncResult`: if
            :setting:`CELERY_ALWAYS_EAGER` is not set, otherwise
            :class:`celery.result.EagerResult`.
@@ -575,6 +580,7 @@ class Task(object):
            'soft_time_limit': limit_soft,
            'time_limit': limit_hard,
            'reply_to': request.reply_to,
            'headers': request.headers,
        }
        options.update(
            {'queue': queue} if queue else (request.delivery_info or {})
...
@@ -15,6 +15,8 @@ import re
from collections import Mapping
from types import ModuleType

from kombu.utils.url import maybe_sanitize_url

from celery.datastructures import ConfigurationView
from celery.five import items, string_t, values
from celery.platforms import pyimplementation
@@ -184,9 +186,12 @@ def filter_hidden_settings(conf):
        if isinstance(key, string_t):
            if HIDDEN_SETTINGS.search(key):
                return mask
            elif 'BROKER_URL' in key.upper():
                from kombu import Connection
                return Connection(value).as_uri(mask=mask)
            elif key.upper() in ('CELERY_RESULT_BACKEND', 'CELERY_BACKEND'):
                return maybe_sanitize_url(value, mask=mask)
        return value

    return dict((k, maybe_censor(k, v)) for k, v in items(conf))
@@ -216,7 +221,8 @@ def bugreport(app):
        py_v=_platform.python_version(),
        driver_v=driver_v,
        transport=transport,
        results=maybe_sanitize_url(
            app.conf.CELERY_RESULT_BACKEND or 'disabled'),
        human_settings=app.conf.humanize(),
        loader=qualname(app.loader.__class__),
    )
...
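``maybe_sanitize_url`` comes from Kombu; roughly, it masks the password component of a URL so backend credentials never end up in bug-report output (Issue #2812, Issue #2004). A rough stand-in using only the standard library (this is not Kombu's actual implementation, and the function name here is made up for the example):

```python
from urllib.parse import urlsplit


def sanitize_url(url, mask='**'):
    """Mask the password in ``url`` if it has one (illustrative stand-in)."""
    parts = urlsplit(url)
    if not parts.password:
        return url  # nothing secret to hide
    netloc = '{0}:{1}@{2}'.format(parts.username or '', mask, parts.hostname)
    if parts.port:
        netloc = '{0}:{1}'.format(netloc, parts.port)
    # SplitResult is a namedtuple, so _replace rebuilds it cleanly.
    return parts._replace(netloc=netloc).geturl()
```

For example, a Redis result-backend URL with a password renders as ``redis://user:**@localhost:6379/0`` in the report.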
@@ -22,6 +22,7 @@ from functools import partial
from billiard import current_process
from kombu.utils.encoding import safe_str
from kombu.utils.url import maybe_sanitize_url

from celery import VERSION_BANNER, platforms, signals
from celery.app import trace
@@ -227,7 +228,9 @@ class Worker(WorkController):
            hostname=safe_str(self.hostname),
            version=VERSION_BANNER,
            conninfo=self.app.connection().as_uri(),
            results=maybe_sanitize_url(
                self.app.conf.CELERY_RESULT_BACKEND or 'disabled',
            ),
            concurrency=concurrency,
            platform=safe_str(_platform.platform()),
            events=events,
...
@@ -9,7 +9,9 @@
from __future__ import absolute_import

import sys
import types

from celery.exceptions import ImproperlyConfigured
from celery.local import Proxy
from celery._state import current_app
from celery.five import reraise
@@ -44,10 +46,14 @@ def get_backend_cls(backend=None, loader=None):
    loader = loader or current_app.loader
    aliases = dict(BACKEND_ALIASES, **loader.override_backends)
    try:
        cls = symbol_by_name(backend, aliases)
    except ValueError as exc:
        reraise(ImproperlyConfigured, ImproperlyConfigured(
            UNKNOWN_BACKEND.format(backend, exc)), sys.exc_info()[2])
    if isinstance(cls, types.ModuleType):
        raise ImproperlyConfigured(UNKNOWN_BACKEND.format(
            backend, 'is a Python module, not a backend class.'))
    return cls

def get_backend_by_url(backend=None, loader=None):
...
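This is the helpful error message for Issue #2945: if the configured result backend resolves to a module rather than a class, the user now gets ``ImproperlyConfigured`` instead of an obscure failure later. The module-vs-class check can be sketched without celery installed (``resolve_backend`` is a simplified stand-in; the real code resolves aliases through ``symbol_by_name``, which also accepts dotted paths):

```python
import types
from importlib import import_module


class ImproperlyConfigured(Exception):
    pass


def resolve_backend(name):
    """Resolve 'module:attr' to an object; reject bare modules."""
    module_name, _, attr = name.partition(':')
    obj = import_module(module_name)
    if attr:
        obj = getattr(obj, attr)
    if isinstance(obj, types.ModuleType):
        # A module is not a backend class: fail loudly and early.
        raise ImproperlyConfigured(
            'Unknown backend {0!r}: is a Python module, '
            'not a backend class.'.format(name))
    return obj
```

So ``resolve_backend('collections:OrderedDict')`` returns a class, while ``resolve_backend('collections')`` raises.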
@@ -116,7 +116,7 @@ class BaseBackend(object):
                status=states.SUCCESS, request=request)

    def mark_as_failure(self, task_id, exc, traceback=None, request=None):
        """Mark task as executed with failure. Stores the exception."""
        return self.store_result(task_id, exc, status=states.FAILURE,
                                 traceback=traceback, request=request)
@@ -166,11 +166,11 @@ class BaseBackend(object):
    def exception_to_python(self, exc):
        """Convert serialized exception to Python exception."""
        if exc:
            if not isinstance(exc, BaseException):
                exc = create_exception_cls(
                    from_utf8(exc['exc_type']), __name__)(exc['exc_message'])
            if self.serializer in EXCEPTION_ABLE_CODECS:
                exc = get_pickled_exception(exc)
        return exc

    def prepare_value(self, result):
@@ -241,6 +241,8 @@ class BaseBackend(object):
        return self.persistent if p is None else p

    def encode_result(self, result, status):
        if isinstance(result, ExceptionInfo):
            result = result.exception
        if status in self.EXCEPTION_STATES and isinstance(result, Exception):
            return self.prepare_exception(result)
        else:
...
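The reordering above is what makes exceptions sent by JSON-serialized workers readable by pickle-configured workers: a JSON worker stores the exception as a plain dict, so the dict must be rebuilt into an exception instance *before* the pickle path runs. A minimal sketch of that conversion (the real code uses ``create_exception_cls`` and ``get_pickled_exception``; here the class is rebuilt inline):

```python
def exception_to_python(exc):
    """Rebuild a Python exception from its serialized form (simplified)."""
    if exc and not isinstance(exc, BaseException):
        # JSON-serialized exceptions arrive as dicts with
        # 'exc_type' and 'exc_message' keys.
        cls = type(str(exc['exc_type']), (Exception,), {})
        exc = cls(exc['exc_message'])
    return exc
```

Already-deserialized exceptions (and ``None``) pass through unchanged.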
@@ -140,7 +140,7 @@ class DatabaseBackend(BaseBackend):
            task = Task(task_id)
            task.status = states.PENDING
            task.result = None
        return self.meta_from_decoded(task.to_dict())

    @retry
    def _save_group(self, group_id, result):
...
@@ -36,12 +36,6 @@ else:  # pragma: no cover
__all__ = ['MongoBackend']

class Bunch(object):

    def __init__(self, **kw):
        self.__dict__.update(kw)

class MongoBackend(BaseBackend):
    host = 'localhost'
    port = 27017
...
@@ -63,6 +63,7 @@ class RedisBackend(KeyValueStoreBackend):
        conf = self.app.conf
        if self.redis is None:
            raise ImproperlyConfigured(REDIS_MISSING)
        self._client_capabilities = self._detect_client_capabilities()

        # For compatibility with the old REDIS_* configuration keys.
        def _get(key):
@@ -227,31 +228,41 @@ class RedisBackend(KeyValueStoreBackend):
            except Exception as exc:
                error('Chord callback for %r raised: %r',
                      request.group, exc, exc_info=1)
                return self.chord_error_from_stack(
                    callback,
                    ChordError('Callback error: {0!r}'.format(exc)),
                )
        except ChordError as exc:
            error('Chord %r raised: %r', request.group, exc, exc_info=1)
            return self.chord_error_from_stack(callback, exc)
        except Exception as exc:
            error('Chord %r raised: %r', request.group, exc, exc_info=1)
            return self.chord_error_from_stack(
                callback, ChordError('Join error: {0!r}'.format(exc)),
            )

    def _detect_client_capabilities(self, socket_connect_timeout=False):
        if self.redis.VERSION < (2, 4, 4):
            raise ImproperlyConfigured(
                'Redis backend requires redis-py versions 2.4.4 or later. '
                'You have {0.__version__}'.format(redis))
        if self.redis.VERSION >= (2, 10):
            socket_connect_timeout = True
        return {'socket_connect_timeout': socket_connect_timeout}

    def _create_client(self, socket_timeout=None, socket_connect_timeout=None,
                       **params):
        return self._new_redis_client(
            socket_timeout=socket_timeout and float(socket_timeout),
            socket_connect_timeout=socket_connect_timeout and float(
                socket_connect_timeout), **params
        )

    def _new_redis_client(self, **params):
        if not self._client_capabilities['socket_connect_timeout']:
            params.pop('socket_connect_timeout', None)
        return self.redis.Redis(connection_pool=self.ConnectionPool(**params))

    @property
    def ConnectionPool(self):
        if self._ConnectionPool is None:
...
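The restored redis-py < 2.10.0 compatibility (Issue #2903) boils down to feature detection by version tuple: ``socket_connect_timeout`` only exists in redis-py 2.10 and later, so it must be dropped from the connection-pool kwargs on older clients. The gate can be sketched without redis installed (function names and the ``RuntimeError`` here are illustrative):

```python
def detect_client_capabilities(version, socket_connect_timeout=False):
    """Version-gate sketch mirroring _detect_client_capabilities above."""
    if version < (2, 4, 4):
        raise RuntimeError(
            'Redis backend requires redis-py versions 2.4.4 or later.')
    if version >= (2, 10):
        socket_connect_timeout = True
    return {'socket_connect_timeout': socket_connect_timeout}


def pool_params(capabilities, **params):
    # Drop the kwarg that older redis-py clients do not understand.
    if not capabilities['socket_connect_timeout']:
        params.pop('socket_connect_timeout', None)
    return params
```

With a 2.9.x client, ``socket_connect_timeout`` silently disappears from the pool parameters instead of crashing the connection setup.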
@@ -374,6 +374,12 @@ class PersistentScheduler(Scheduler):
    def setup_schedule(self):
        try:
            self._store = self._open_schedule()
            # In some cases there may be different errors from a storage
            # backend for corrupted files.  Example - DBPageNotFoundError
            # exception from bsddb.  In such case the file will be
            # successfully opened but the error will be raised on first key
            # retrieving.
            self._store.keys()
        except Exception as exc:
            self._store = self._destroy_open_corrupted_schedule(exc)
@@ -476,6 +482,8 @@ class Service(object):
                debug('beat: Waking up %s.',
                      humanize_seconds(interval, prefix='in '))
                time.sleep(interval)
                if self.scheduler.should_sync():
                    self.scheduler._do_sync()
        except (KeyboardInterrupt, SystemExit):
            self._is_shutdown.set()
        finally:
...
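The ``self._store.keys()`` probe above forces storages that only fail on first access (e.g. a bsddb ``DBPageNotFoundError``) to raise inside the ``try`` block, so the corrupted schedule file gets rebuilt (Issue #2985). The pattern in isolation (``CorruptStore``, ``setup_schedule``, and ``rebuild`` are made up for this sketch):

```python
class CorruptStore(object):
    """Stand-in for a shelve whose corruption only surfaces on access."""

    def keys(self):
        raise IOError('page not found')


def setup_schedule(open_store, rebuild):
    try:
        store = open_store()
        # Probe: some storage backends open corrupted files without
        # error and only fail on the first key retrieval.
        store.keys()
    except Exception:
        store = rebuild()
    return store
```

A healthy store is returned untouched; a corrupted one is replaced by whatever ``rebuild`` produces.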
@@ -41,7 +41,8 @@ def detach(path, argv, logfile=None, pidfile=None, uid=None,
           gid=None, umask=None, working_directory=None, fake=False, app=None,
           executable=None):
    fake = 1 if C_FAKEFORK else fake
    with detached(logfile, pidfile, uid, gid, umask, working_directory, fake,
                  after_forkers=False):
        try:
            if executable is not None:
                path = executable
...
@@ -404,8 +404,10 @@ class AsynPool(_pool.Pool):
        # as processes are recycled, or found lost elsewhere.
        self._fileno_to_outq[proc.outqR_fd] = proc
        self._fileno_to_synq[proc.synqW_fd] = proc
        self.on_soft_timeout = self.on_hard_timeout = None
        if self._timeout_handler:
            self.on_soft_timeout = self._timeout_handler.on_soft_timeout
            self.on_hard_timeout = self._timeout_handler.on_hard_timeout

    def _event_process_exit(self, hub, fd):
        # This method is called whenever the process sentinel is readable.
...
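The change above guards against a pool configured without time limits, where ``_timeout_handler`` is ``None``: the callbacks now default to ``None`` instead of raising ``AttributeError`` on attribute access. A minimal reproduction of the pattern (``PoolSketch`` is invented for illustration):

```python
class PoolSketch(object):
    """Illustrates the None-guard for the optional timeout handler."""

    def __init__(self, timeout_handler=None):
        self._timeout_handler = timeout_handler
        # Default both callbacks to None so a pool without time limits
        # does not crash with AttributeError on NoneType.
        self.on_soft_timeout = self.on_hard_timeout = None
        if self._timeout_handler:
            self.on_soft_timeout = self._timeout_handler.on_soft_timeout
            self.on_hard_timeout = self._timeout_handler.on_hard_timeout
```

Constructing ``PoolSketch()`` with no handler leaves both callbacks as ``None``; passing a handler wires them through as before.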
@@ -17,7 +17,7 @@ Experimental task class that buffers messages and processes them as a list.
**Simple Example**

A click counter that flushes the buffer every 100 messages, and every
10 seconds.  Does not do anything with the data, but can easily be modified
to store it in a database.

.. code-block:: python
...
@@ -132,13 +132,23 @@ class Rdb(Pdb):
    def say(self, m):
        print(m, file=self.out)

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self._close_session()
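These two methods let ``Rdb`` be used as a context manager, so the remote-debugger session is closed even when the body raises. The generic shape of the protocol, with invented session bookkeeping standing in for ``_close_session``'s real work:

```python
class SessionSketch(object):
    """Generic context-manager shape mirroring Rdb.__enter__/__exit__."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        # Return self so `with SessionSketch() as s:` binds the session.
        return self

    def __exit__(self, *exc_info):
        # Called on normal exit *and* on exception; exc_info carries
        # (type, value, traceback) when the body raised.
        self._close_session()

    def _close_session(self):
        self.closed = True
```

After ``with SessionSketch() as s: ...`` the session is closed whether or not the block raised, which is exactly the guarantee the ``Rdb`` change adds.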