Commit 4d660f8f authored by Rebecca N. Palmer

New upstream version 1.0.3+dfsg

parent 16317e59
......@@ -10,7 +10,7 @@ clone_folder: C:\projects\theano
environment:
BINSTAR_TOKEN:
secure: 58KqJcKtfCBVCuIzpnkLm4XZLQqKq95Hs8Ly20HWaMSla67nusrp3y4sy6XzZOBQ
secure: Z4ZN29hd1UKw4qUwSlpFk+58Ssa+DfIKSGhN3Wr5uOAsP3dCXrNDl5+ipVdzADFn
CONDA_LOC: "C:\\Miniconda-x64"
MKL_THREADING_LAYER: GNU
......
......@@ -17,6 +17,7 @@ abalkin <abalkin@enlnt.com> Alexander Belopolsky <abalkin@enlnt.com>
abalkin <abalkin@enlnt.com> Alexander Belopolsky <a@enlnt.com>
Adam Becker <junkkhaotik@gmail.com> khaotik <aruhanb@gmail.com>
Adam Becker <junkkhaotik@gmail.com> khaotik <junkkhaotik@gmail.com>
Adrian Seyboldt <aseyboldt@gmail.com> aseyboldt <aseyboldt@gmail.com>
Aleksandar Botev <botevmg@gmail.com> botev <botevmg@gmail.com>
Alex Lamb <alex6200@gmail.com> AlexLamb <alex6200@gmail.com>
Alex Lamb <alex6200@gmail.com> DeathMonster666 <alex6200@gmail.com>
......@@ -242,6 +243,7 @@ Steven Bocco <stevenbocco@gmail.com> Seton Steven Bocco <boccoset@leto15.iro.umo
Steven Bocco <stevenbocco@gmail.com> Seton Steven Bocco <boccoset@leto51.iro.umontreal.ca>
Steven Pigeon <pigeon@iro.umontreal.ca> steven-pigeon <pigeon@iro.umontreal.ca>
Thomas George <tfjgeorge@gmail.com> Thomas George <georgeth@helios1.helios>
Thomas Wiecki <thomas.wiecki@gmail.com> twiecki <thomas.wiecki@gmail.com>
Valentin Bisson <valentin.bisson@umontreal.ca> onze <onzeonline@gmail.com>
Xavier Bouthillier <xavier.bouthillier@gmail.com> Xavier Bouthillier <xavier.bouthillier@umontreal.ca>
Xavier Bouthillier <xavier.bouthillier@gmail.com> Xavier Bouthillier/ <xavier.bouthillier@gmail.com>
......
......@@ -5,6 +5,39 @@
Old Release Notes
=================
Theano 1.0.2 (23rd of May, 2018)
====================================
This is a maintenance release of Theano, version ``1.0.2``, with no
new features, but some important bug fixes.
We recommend that everybody update to this version.
Highlights (since 1.0.1):
- Theano should work under PyPy now (this is experimental).
- Update for cuDNN 7.1 RNN API changes.
- Fix for a crash related to mixed dtypes with cuDNN convolutions.
- MAGMA should work in more cases without manual config.
- Handle reductions with non-default accumulator dtype better on the GPU.
- Improvements to the test suite so that it fails less often due to
random chance.
A total of 6 people contributed to this release since ``1.0.1``:
- Frederic Bastien
- Steven Bocco
- Jon Haygood
- Arnaud Bergeron
- Jordan Melendez
- Desiree Vogt-Lee
- Garming Sam
- Pascal Lamblin
- Vincent Dumoulin
- Glexin
- Simon Lefrancois
Theano 1.0.1 (6th of December, 2017)
====================================
......
......@@ -2,34 +2,24 @@
Release Notes
=============
Theano 1.0.2 (23rd of May, 2018)
====================================
Theano 1.0.3 (20th of September 2018)
=====================================
This is a maintenance release of Theano, version ``1.0.2``, with no
This is a maintenance release of Theano, version ``1.0.3``, with no
new features, but some important bug fixes.
We recommend that everybody update to this version.
Highlights (since 1.0.1):
Highlights (since 1.0.2):
- Theano should work under PyPy now (this is experimental).
- Update for cuDNN 7.1 RNN API changes.
- Fix for a crash related to mixed dtypes with cuDNN convolutions.
- MAGMA should work in more cases without manual config.
- Handle reductions with non-default accumulator dtype better on the GPU.
- Improvements to the test suite so that it fails less often due to
random chance.
- Theano is now compatible with Python 3.7
- Broadcasting for sparse dot products works correctly
- Subtensor grads do not return int anymore
A total of 6 people contributed to this release since ``1.0.1``:
A total of 5 people contributed to this release since ``1.0.2``:
- Frederic Bastien
- Steven Bocco
- Jon Haygood
- Arnaud Bergeron
- Jordan Melendez
- Desiree Vogt-Lee
- Garming Sam
- Pascal Lamblin
- Vincent Dumoulin
- Glexin
- Simon Lefrancois
- Dmitry Mottl
- Adrian Seyboldt
- Thomas Wiecki
......@@ -567,7 +567,7 @@ import theano and print the config variable, as in:
String value: ``'None'``, ``'all'``, ``'0.3'``, ``'0.4'``, ``'0.4.1'``,
``'0.5'``, ``'0.6'``, ``'0.7'``, ``'0.8'``, ``'0.8.1'``, ``'0.8.2'``,
``'0.9'``, ``'0.10'``, ``'1.0'``, ``'1.0.1'``, ``'1.0.2'``
``'0.9'``, ``'0.10'``, ``'1.0'``, ``'1.0.1'``, ``'1.0.2'``, ``'1.0.3'``
Default: ``'0.9'``
......
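For context, this flag is normally set through THEANO_FLAGS before Theano is imported. A minimal sketch (the flag name comes from the hunk above; the access pattern is ordinary Theano config usage):

import os
# must be set before theano is imported
os.environ['THEANO_FLAGS'] = 'warn.ignore_bug_before=1.0.3'
import theano
print(theano.config.warn.ignore_bug_before)  # -> '1.0.3'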
......@@ -71,6 +71,5 @@ Further readings
../extending/graphstructures
loading_and_saving
aliasing
python-memory-management
multi_cores
faq_tutorial
......@@ -12,5 +12,3 @@ tutorials/exercises if you need to learn it or only need a refresher:
* `Dive into Python <http://diveintopython.net/>`__
* `Google Python Class <https://developers.google.com/edu/python/>`__
* `Enthought Python course <https://training.enthought.com/?utm_source=academic&utm_medium=email&utm_campaign=EToD-Launch#/courses>`__ (free for academics)
We have a tutorial on how :ref:`Python manages its memory <python-memory-management>`.
......@@ -23,9 +23,9 @@ def get_keywords():
# setup.py/versioneer.py will grep for the variable names, so they must
# each be defined on a line of their own. _version.py will just call
# get_keywords().
git_refnames = " (HEAD -> master, tag: rel-1.0.2)"
git_full = "3b51141a46affe9505f0e3f283020820b2c0251e"
git_date = "2018-05-23 10:05:50 -0400"
git_refnames = " (tag: rel-1.0.3)"
git_full = "65fefc3acbdbc498e09ea6c6fa8143e2b14dd9e8"
git_date = "2018-09-17 13:05:43 -0400"
keywords = {"refnames": git_refnames, "full": git_full, "date": git_date}
return keywords
......
......@@ -105,6 +105,11 @@ class OpFromGraph(gof.Op):
:class:`Variable <theano.gof.Variable>`. Each list element corresponds
to a specific output of R_op, length of list must be equal to number of outputs.
connection_pattern : list of list
If not ``None``, this will be used as the connection_pattern
for this op.
name : string, optional
A name for debugging purposes
......@@ -248,6 +253,7 @@ class OpFromGraph(gof.Op):
lop_overrides='default',
grad_overrides='default',
rop_overrides='default',
connection_pattern=None,
name=None, **kwargs
):
if not isinstance(outputs, list):
......@@ -298,6 +304,8 @@ class OpFromGraph(gof.Op):
self._lop_type = 'lop'
self.set_rop_overrides(rop_overrides)
self._connection_pattern = connection_pattern
if name is not None:
assert isinstance(name, str), 'name must be None or string object'
self.name = name
......@@ -637,6 +645,9 @@ class OpFromGraph(gof.Op):
Return connection pattern of subfgraph defined by inputs and outputs.
"""
if self._connection_pattern is not None:
return self._connection_pattern
inp_len = len(self.local_inputs)
out_len = len(self.local_outputs)
cpmat_self = io_connection_pattern(
......
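The docstring hunk above does not spell out the expected shape of connection_pattern; it follows Theano's usual convention of one inner list per input, with one boolean per output. A minimal sketch for an op with inputs [x, y] and one output (values taken from the test below):

connection_pattern = [[True],    # x contributes to output 0
                      [False]]   # y does not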
......@@ -266,6 +266,37 @@ class T_OpFromGraph(unittest_tools.InferShapeTester):
# TODO list override case
@test_params
def test_connection_pattern_override(self, cls_ofg):
x, y = T.vectors('xy')
def f1(x, y):
del x
# but we know how to backpropagate for x for some reasons
# and we don't care about the gradient wrt y.
return y + T.round(y)
def f1_back(inputs, output_gradients):
return [
output_gradients[0],
theano.gradient.disconnected_type()]
op = cls_ofg(
inputs=[x, y],
outputs=[f1(x, y)],
grad_overrides=f1_back,
connection_pattern=[[True], [False]], # This is new
on_unused_input='ignore') # This is new
c = op(x, y)
g1 = theano.grad(c.sum(), x)
out = g1.eval({
x: np.ones((5,), dtype=np.float32),
y: np.ones((5,), dtype=np.float32)})
assert np.allclose(out, [1.] * 5)
@test_params
def test_nested(self, cls_ofg):
x, y = T.vectors('xy')
......
......@@ -752,7 +752,7 @@ AddConfigVar('warn.ignore_bug_before',
"[warn] flags."),
EnumStr('0.9', 'None', 'all', '0.3', '0.4', '0.4.1', '0.5', '0.6',
'0.7', '0.8', '0.8.1', '0.8.2', '0.9', '0.10', '1.0',
'1.0.1', '1.0.2',
'1.0.1', '1.0.2', '1.0.3',
allow_override=False),
in_c_key=False)
......
......@@ -7,6 +7,7 @@ import os
import socket # only used for gethostname()
import time
import logging
from six import PY3
from contextlib import contextmanager
......@@ -271,9 +272,14 @@ def lock(tmp_dir, timeout=notset, min_wait=None, max_wait=None, verbosity=1):
nb_wait += 1
time.sleep(random.uniform(min_wait, max_wait))
if PY3:
exception = FileExistsError # noqa
else:
exception = OSError
try:
os.mkdir(tmp_dir)
except OSError:
except exception:
# Error while creating the directory: someone else
# must have tried at the exact same time.
nb_error += 1
......
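On Python 3, os.mkdir raises FileExistsError (a subclass of OSError) when the directory already exists, so the hunk above catches the narrower class there while keeping plain OSError on Python 2. A standalone sketch of the same pattern (the lock path is hypothetical):

import os
import sys

exception = FileExistsError if sys.version_info[0] >= 3 else OSError
try:
    os.mkdir('/tmp/some_lock_dir')  # hypothetical lock directory
except exception:
    # someone else created it at the same time; treat the lock as taken
    pass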
......@@ -1118,7 +1118,8 @@ def local_gpua_advanced_incsubtensor1(op, context_name, inputs, outputs):
set_instead_of_inc = op.set_instead_of_inc
if (x.ndim == 1 and y.ndim == 0 and
config.deterministic == 'default'):
config.deterministic == 'default' and
x.dtype not in ('int8', 'int16')):
x = x.dimshuffle(0, 'x')
y = y.dimshuffle('x', 'x')
ret = GpuAdvancedIncSubtensor1_dev20(
......@@ -1126,7 +1127,8 @@ def local_gpua_advanced_incsubtensor1(op, context_name, inputs, outputs):
ret = GpuDimShuffle(ret.type.broadcastable, [0])(ret)
return ret
elif (x.ndim != 2 or y.ndim != 2 or
config.deterministic == 'more'):
config.deterministic == 'more' or
x.dtype in ('int8', 'int16')):
return GpuAdvancedIncSubtensor1(
set_instead_of_inc=set_instead_of_inc)
else:
......
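The guard above keeps int8/int16 inputs off the GpuAdvancedIncSubtensor1_dev20 kernel and routes them to the generic op instead. For reference, a minimal sketch of the vector/scalar operation being dispatched (ordinary Theano usage, mirroring the test further down):

import numpy as np
import theano
import theano.tensor as T

x = T.vector('x')
z = T.inc_subtensor(x[[0, 0, 2]], 1.0)  # advanced inc_subtensor1, repeated index
f = theano.function([x], z)
# f(np.zeros(3, dtype=theano.config.floatX)) -> [2., 0., 1.]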
......@@ -2936,3 +2936,14 @@ def test_conv_guess_once_with_dtypes():
f_pseudo_half_config()
f_float_config()
f_double_config()
def test_opt_f16_prec32():
inputs = T.TensorType('float16', (False,) * 4)()
filters = T.TensorType('float16', (False,) * 4)()
conv = T.nnet.conv2d(inputs, filters)
gfilt = theano.grad(conv.sum(), filters)
# If this compiles we are good
theano.function([inputs, filters], [conv, gfilt], mode=mode_with_gpu)
......@@ -155,7 +155,8 @@ def test_advinc_subtensor1_vector_scalar():
shp = (3,)
for dtype1, dtype2 in [('float32', 'int8'), ('float32', 'float64'),
('float16', 'int8'), ('float16', 'float64'),
('float16', 'float16')]:
('float16', 'float16'), ('int8', 'int8'),
('int16', 'int16')]:
shared = gpuarray_shared_constructor
xval = np.arange(np.prod(shp), dtype=dtype1).reshape(shp) + 1
yval = np.asarray(10, dtype=dtype2)
......
......@@ -163,7 +163,7 @@ def debugprint(obj, depth=-1, print_type=False,
topo = obj.toposort()
order.extend([topo for item in obj.outputs])
elif isinstance(obj, (integer_types, float, np.ndarray)):
print(obj)
print(obj, file=_file)
elif isinstance(obj, (theano.In, theano.Out)):
results_to_print.append(obj.variable)
profile_list.append(None)
......
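The one-line fix above makes the scalar/ndarray branch honor the caller-supplied file object instead of always writing to stdout. A hedged sketch of the behavior this restores (passing file='str' to get the output back as a string is part of debugprint's documented interface):

import theano

out = theano.printing.debugprint(5, file='str')  # previously leaked to stdout
print(out)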
......@@ -102,7 +102,7 @@ def scan(fn,
* ...
* all time slices of the last sequence
* all past slices of the first output
* all past slices of the second otuput
* all past slices of the second output
* ...
* all past slices of the last output
* all other arguments (the list given as `non_sequences` to
......
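The docstring above fixes a typo in the description of the argument order that scan passes to fn: sequence slices first, then past output slices, then everything else. A minimal running-sum sketch of that ordering (ordinary Theano scan usage):

import theano
import theano.tensor as T

seq = T.vector('seq')
out0 = T.scalar('out0')

def step(s_t, prev):
    # s_t: current slice of the sequence; prev: past slice of the output
    return s_t + prev

result, updates = theano.scan(step, sequences=[seq], outputs_info=[out0])
f = theano.function([seq, out0], result)
# f([1., 2., 3.], 0.) -> [1., 3., 6.]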
......@@ -4008,28 +4008,34 @@ class Dot(gof.op.Op):
"sparse variable as inputs, but the inputs are "
"%s (%s) and %s (%s)." % (x, x.type, y, y.type))
if not x_is_sparse_var:
if x_is_sparse_var:
broadcast_x = (False,) * x.ndim
else:
x = tensor.as_tensor_variable(x)
broadcast_x = x.type.broadcastable
assert y.format in ["csr", "csc"]
if x.ndim not in (1, 2):
raise TypeError(
'theano.sparse.Dot: input 0 (0-indexed) must have ndim of '
'1 or 2, %d given.' % x.ndim)
if not y_is_sparse_var:
if y_is_sparse_var:
broadcast_y = (False,) * y.ndim
else:
y = tensor.as_tensor_variable(y)
broadcast_y = y.type.broadcastable
assert x.format in ["csr", "csc"]
if y.ndim not in (1, 2):
raise TypeError(
'theano.sparse.Dot: input 1 (1-indexed) must have ndim of '
'1 or 2, %d given.' % y.ndim)
if y.ndim == 1 or x.ndim == 1:
bz = (False,)
else:
bz = (False, False)
if len(broadcast_y) == 2:
broadcast_out = broadcast_x[:-1] + broadcast_y[1:]
elif len(broadcast_y) == 1:
broadcast_out = broadcast_x[:-1]
return gof.Apply(self, [x, y], [tensor.tensor(dtype=dtype_out,
broadcastable=bz)])
broadcastable=broadcast_out)])
def perform(self, node, inputs, out):
x, y = inputs
......
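A worked instance of the broadcastable rule the hunk above introduces, for the matrix-times-matrix case (plain Python, no Theano needed):

broadcast_x = (True, False)    # x is a row vector, e.g. tensor.vector()[None, :]
broadcast_y = (False, False)   # y is a general matrix
broadcast_out = broadcast_x[:-1] + broadcast_y[1:]
assert broadcast_out == (True, False)  # the output keeps x's row broadcast flag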
......@@ -464,6 +464,23 @@ class SparseInferShapeTester(utt.InferShapeTester):
config.floatX, 3))],
Dot)
def test_dot_broadcast(self):
for x, y in [
(SparseType('csr', 'float32')(), tensor.vector()[:, None]),
(SparseType('csr', 'float32')(), tensor.vector()[None, :]),
(SparseType('csr', 'float32')(), tensor.matrix()),
(tensor.vector()[:, None], SparseType('csr', 'float32')()),
(tensor.vector()[None, :], SparseType('csr', 'float32')()),
(tensor.matrix(), SparseType('csr', 'float32')())]:
sparse_out = theano.dot(x, y)
if isinstance(x, sparse.SparseVariable):
x = tensor.matrix()
if isinstance(y, sparse.SparseVariable):
y = tensor.matrix()
dense_out = tensor.dot(x, y)
assert dense_out.broadcastable == sparse_out.broadcastable
def test_structured_dot(self):
x = SparseType('csc', dtype=config.floatX)()
y = SparseType('csc', dtype=config.floatX)()
......
......@@ -6325,7 +6325,16 @@ def add_calculate(num, denum, aslist=False, out_type=None):
zero = theano._asarray(0, dtype=out_type.dtype)
# zero = 0.0 if out_type is None else theano._asarray(0,
# dtype=out_type.dtype)
v = reduce(np.add, num, zero) - reduce(np.add, denum, zero)
if out_type and out_type.dtype == 'bool':
if len(denum) == 0:
# NumPy 1.14 do not accept to do "bool - bool"
v = reduce(np.add, num, zero)
else:
raise Exception(
"bool subtraction not supported. This should not happen as"
" an earlier error should have been raised")
else:
v = reduce(np.add, num, zero) - reduce(np.add, denum, zero)
if aslist:
if np.all(v == 0):
return []
......
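The new branch exists because NumPy 1.13+ rejects the - operator on booleans while still allowing +. A quick demonstration:

import numpy as np

a = np.array([True])
b = np.array([True])
print(a + b)   # [ True] -- boolean add still works
try:
    a - b      # TypeError on NumPy >= 1.13
except TypeError as e:
    print(e)   # NumPy suggests logical_xor or the ^ operator instead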
......@@ -1761,7 +1761,14 @@ class AdvancedSubtensor1(Op):
rval1 = [sparse_module_ref.construct_sparse_from_list(x, gz,
ilist)]
else:
rval1 = [advanced_inc_subtensor1(x.zeros_like(), gz, ilist)]
if x.dtype in theano.tensor.discrete_dtypes:
# The output dtype is the same as x
gx = x.zeros_like(dtype=theano.config.floatX)
elif x.dtype in theano.tensor.complex_dtypes:
raise NotImplementedError("No support for complex grad yet")
else:
gx = x.zeros_like()
rval1 = [advanced_inc_subtensor1(gx, gz, ilist)]
return rval1 + [DisconnectedType()()] * (len(inputs) - 1)
def R_op(self, inputs, eval_points):
......@@ -2238,9 +2245,15 @@ class AdvancedSubtensor(BaseAdvancedSubtensor):
def grad(self, inputs, grads):
gz, = grads
x = inputs[0]
if x.dtype in theano.tensor.discrete_dtypes:
# The output dtype is the same as x
gx = x.zeros_like(dtype=theano.config.floatX)
elif x.dtype in theano.tensor.complex_dtypes:
raise NotImplementedError("No support for complex grad yet")
else:
gx = x.zeros_like()
rest = inputs[1:]
return [advanced_inc_subtensor(theano.tensor.zeros_like(x), gz,
*rest)] + \
return [advanced_inc_subtensor(gx, gz, *rest)] + \
[DisconnectedType()()] * len(rest)
advanced_subtensor = AdvancedSubtensor()
......@@ -2258,9 +2271,15 @@ class AdvancedBooleanSubtensor(BaseAdvancedSubtensor):
def grad(self, inputs, grads):
gz, = grads
x = inputs[0]
if x.dtype in theano.tensor.discrete_dtypes:
# The output dtype is the same as x
gx = x.zeros_like(dtype=theano.config.floatX)
elif x.dtype in theano.tensor.complex_dtypes:
raise NotImplementedError("No support for complex grad yet")
else:
gx = x.zeros_like()
rest = inputs[1:]
return [advanced_boolean_inc_subtensor(theano.tensor.zeros_like(x), gz,
*rest)] + \
return [advanced_boolean_inc_subtensor(gx, gz, *rest)] + \
[DisconnectedType()()] * len(rest)
advanced_boolean_subtensor = AdvancedBooleanSubtensor()
......
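All three grad methods in this hunk apply the same dtype rule when building the zero gradient buffer. Extracted as a standalone helper for clarity (the helper name is illustrative, not part of Theano's API):

import theano

def grad_zeros_like(x):
    if x.dtype in theano.tensor.discrete_dtypes:
        # integer inputs get a floatX gradient buffer, not an integer one
        return x.zeros_like(dtype=theano.config.floatX)
    if x.dtype in theano.tensor.complex_dtypes:
        raise NotImplementedError("No support for complex grad yet")
    return x.zeros_like()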
......@@ -266,6 +266,11 @@ def test_add_canonizer_problem0():
f = function([label], r)
f(3)
# This was crashing in the past.
c0 = theano.tensor.constant([True])
c1 = theano.tensor.constant([True])
theano.function([], c0 + c1)
class test_greedy_distribute(unittest.TestCase):
def test_main(self):
......
......@@ -2,7 +2,7 @@ from __future__ import absolute_import, print_function, division
from theano._version import get_versions
FALLBACK_VERSION = "1.0.2+unknown"
FALLBACK_VERSION = "1.0.3+unknown"
info = get_versions()
if info['error'] is not None:
......