Commit c1261340 authored by Stephen Sinclair

New upstream version 1.0.7

parent a3453999
COPYRIGHT
Copyright (c) 2016 - 2018, the respective contributors.
All rights reserved.
Each contributor holds copyright over their respective contributions.
The project versioning (Git) records all such contribution source information.
The initial code of this repository came from https://github.com/keras-team/keras
(the Keras repository), hence, for author information regarding commits
that occurred earlier than the first commit in the present repository,
please see the original Keras repository.
LICENSE
The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
include LICENSE
include README.md
include CONTRIBUTING.md
graft tests
Metadata-Version: 2.1
Name: Keras_Applications
Version: 1.0.6
Version: 1.0.7
Summary: Reference implementations of popular deep learning models
Home-page: https://github.com/keras-team/keras-applications
Author: Keras Team
License: MIT
Download-URL: https://github.com/keras-team/keras-applications/tarball/1.0.6
Download-URL: https://github.com/keras-team/keras-applications/tarball/1.0.7
Description:
Keras Applications is the `applications` module of
the Keras deep learning library.
......
......@@ -33,7 +33,14 @@ The input size used was 224x224 for all models except NASNetLarge (331x331), Inc
|----------------------------------------------------------------|-------------|-------------|-------------|--------|--------|---------------------------------------------|
| [VGG16](keras_applications/vgg16.py) | 28.732 | 9.950 | 8.834 | 138.4M | 14.7M | [[paper]](https://arxiv.org/abs/1409.1556) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py) |
| [VGG19](keras_applications/vgg19.py) | 28.744 | 10.012 | 8.774 | 143.7M | 20.0M | [[paper]](https://arxiv.org/abs/1409.1556) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/vgg.py) |
| [ResNet50](keras_applications/resnet50.py) | 25.072 | 7.940 | 6.828 | 25.6M | 23.6M | [[paper]](https://arxiv.org/abs/1512.03385) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v1.py) |
| [ResNet50](keras_applications/resnet50.py) | 25.072 | 7.940 | 6.828 | 25.6M | 23.6M | [[paper]](https://arxiv.org/abs/1512.03385) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v1.py) [[torch]](https://github.com/facebook/fb.resnet.torch/blob/master/models/resnet.lua) [[caffe]](https://github.com/KaimingHe/deep-residual-networks/blob/master/prototxt/ResNet-50-deploy.prototxt) |
| [ResNet101](keras_applications/resnet.py) | 23.580 | 7.214 | 6.092 | 44.7M | 42.7M | [[paper]](https://arxiv.org/abs/1512.03385) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v1.py) [[torch]](https://github.com/facebook/fb.resnet.torch/blob/master/models/resnet.lua) [[caffe]](https://github.com/KaimingHe/deep-residual-networks/blob/master/prototxt/ResNet-101-deploy.prototxt) |
| [ResNet152](keras_applications/resnet.py) | 23.396 | 6.882 | 5.908 | 60.4M | 58.4M | [[paper]](https://arxiv.org/abs/1512.03385) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v1.py) [[torch]](https://github.com/facebook/fb.resnet.torch/blob/master/models/resnet.lua) [[caffe]](https://github.com/KaimingHe/deep-residual-networks/blob/master/prototxt/ResNet-152-deploy.prototxt) |
| [ResNet50V2](keras_applications/resnet_v2.py) | 24.040 | 6.966 | 5.896 | 25.6M | 23.6M | [[paper]](https://arxiv.org/abs/1603.05027) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py) [[torch]](https://github.com/facebook/fb.resnet.torch/blob/master/models/preresnet.lua) |
| [ResNet101V2](keras_applications/resnet_v2.py) | 22.766 | 6.184 | 5.158 | 44.7M | 42.6M | [[paper]](https://arxiv.org/abs/1603.05027) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py) [[torch]](https://github.com/facebook/fb.resnet.torch/blob/master/models/preresnet.lua) |
| [ResNet152V2](keras_applications/resnet_v2.py) | 21.968 | 5.838 | 4.900 | 60.4M | 58.3M | [[paper]](https://arxiv.org/abs/1603.05027) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/resnet_v2.py) [[torch]](https://github.com/facebook/fb.resnet.torch/blob/master/models/preresnet.lua) |
| [ResNeXt50](keras_applications/resnext.py) | 22.260 | 6.190 | 5.410 | 25.1M | 23.0M | [[paper]](https://arxiv.org/abs/1611.05431) [[torch]](https://github.com/facebookresearch/ResNeXt/blob/master/models/resnext.lua) |
| [ResNeXt101](keras_applications/resnext.py) | 21.270 | 5.706 | 4.842 | 44.3M | 42.3M | [[paper]](https://arxiv.org/abs/1611.05431) [[torch]](https://github.com/facebookresearch/ResNeXt/blob/master/models/resnext.lua) |
| [InceptionV3](keras_applications/inception_v3.py) | 22.102 | 6.280 | 5.038 | 23.9M | 21.8M | [[paper]](https://arxiv.org/abs/1512.00567) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v3.py) |
| [InceptionResNetV2](keras_applications/inception_resnet_v2.py) | 19.744 | 4.748 | 3.962 | 55.9M | 54.3M | [[paper]](https://arxiv.org/abs/1602.07261) [[tf-models]](https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_resnet_v2.py) |
| [Xception](keras_applications/xception.py) | 20.994 | 5.548 | 4.738 | 22.9M | 20.9M | [[paper]](https://arxiv.org/abs/1610.02357) |
......
......@@ -92,4 +92,19 @@ def correct_pad(backend, inputs, kernel_size):
return ((correct[0] - adjust[0], correct[0]),
(correct[1] - adjust[1], correct[1]))
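The `correct_pad` return shown above computes the asymmetric zero-padding placed before a strided convolution so that even- and odd-sized inputs downsample consistently. A minimal pure-Python sketch of the same arithmetic (hypothetical name `correct_pad_sketch`; assumes a 2-tuple `input_size` and a square kernel, matching only the branch visible in the diff):

```python
def correct_pad_sketch(input_size, kernel_size):
    """Return ((top, bottom), (left, right)) zero-padding for a
    stride-2 convolution, mirroring the logic shown above."""
    # Even input dimensions need one less pixel of leading padding.
    adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
    correct = (kernel_size // 2, kernel_size // 2)
    return ((correct[0] - adjust[0], correct[0]),
            (correct[1] - adjust[1], correct[1]))
```

For a 224x224 input and a 3x3 kernel this yields `((0, 1), (0, 1))`: one trailing pixel of padding per spatial axis.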
__version__ = '1.0.6'
__version__ = '1.0.7'
from . import vgg16
from . import vgg19
from . import resnet50
from . import inception_v3
from . import inception_resnet_v2
from . import xception
from . import mobilenet
from . import mobilenet_v2
from . import densenet
from . import nasnet
from . import resnet
from . import resnet_v2
from . import resnext
......@@ -155,10 +155,10 @@ def DenseNet(blocks,
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
......
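The `pooling='avg'` behaviour documented above can be illustrated with a tiny pure-Python sketch of global average pooling (hypothetical helper, channels_last layout assumed): it collapses the spatial dimensions of the last block's output into one value per channel, which is why the model output becomes a 2D `(batch, channels)` tensor.

```python
def global_avg_pool(feature_map):
    # feature_map: H x W x C nested lists for a single image.
    h, w = len(feature_map), len(feature_map[0])
    channels = len(feature_map[0][0])
    # Average every spatial position, per channel.
    return [sum(feature_map[i][j][c]
                for i in range(h) for j in range(w)) / (h * w)
            for c in range(channels)]
```

`pooling='max'` is the same reduction with `max` in place of the mean; `pooling=None` skips the reduction and keeps the 4D tensor.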
......@@ -286,7 +286,7 @@ def _obtain_input_shape(input_shape,
if weights == 'imagenet' and require_flatten:
if input_shape is not None:
if input_shape != default_shape:
raise ValueError('When setting`include_top=True` '
raise ValueError('When setting `include_top=True` '
'and loading `imagenet` weights, '
'`input_shape` should be ' +
str(default_shape) + '.')
......
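The guard in the hunk above can be sketched in isolation (hypothetical `check_input_shape`; only the branch shown in the diff is reproduced):

```python
def check_input_shape(input_shape, default_shape=(224, 224, 3),
                      weights='imagenet', require_flatten=True):
    # With include_top=True (require_flatten) and imagenet weights,
    # a user-supplied shape must match the default exactly, because
    # the fully-connected top fixes the flattened feature size.
    if weights == 'imagenet' and require_flatten:
        if input_shape is not None and input_shape != default_shape:
            raise ValueError('When setting `include_top=True` '
                             'and loading `imagenet` weights, '
                             '`input_shape` should be '
                             + str(default_shape) + '.')
    return input_shape if input_shape is not None else default_shape
```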
......@@ -205,10 +205,10 @@ def InceptionResNetV2(include_top=True,
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the last convolutional layer.
the 4D tensor output of the last convolutional block.
- `'avg'` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `'max'` means that global max pooling will be applied.
classes: optional number of classes to classify images
......@@ -241,7 +241,7 @@ def InceptionResNetV2(include_top=True,
default_size=299,
min_size=75,
data_format=backend.image_data_format(),
require_flatten=False,
require_flatten=include_top,
weights=weights)
if input_tensor is None:
......
......@@ -113,10 +113,10 @@ def InceptionV3(include_top=True,
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
......@@ -150,7 +150,7 @@ def InceptionV3(include_top=True,
default_size=299,
min_size=75,
data_format=backend.image_data_format(),
require_flatten=False,
require_flatten=include_top,
weights=weights)
if input_tensor is None:
......@@ -175,7 +175,7 @@ def InceptionV3(include_top=True,
x = conv2d_bn(x, 192, 3, 3, padding='valid')
x = layers.MaxPooling2D((3, 3), strides=(2, 2))(x)
# mixed 0, 1, 2: 35 x 35 x 256
# mixed 0: 35 x 35 x 256
branch1x1 = conv2d_bn(x, 64, 1, 1)
branch5x5 = conv2d_bn(x, 48, 1, 1)
......@@ -194,7 +194,7 @@ def InceptionV3(include_top=True,
axis=channel_axis,
name='mixed0')
# mixed 1: 35 x 35 x 256
# mixed 1: 35 x 35 x 288
branch1x1 = conv2d_bn(x, 64, 1, 1)
branch5x5 = conv2d_bn(x, 48, 1, 1)
......@@ -213,7 +213,7 @@ def InceptionV3(include_top=True,
axis=channel_axis,
name='mixed1')
# mixed 2: 35 x 35 x 256
# mixed 2: 35 x 35 x 288
branch1x1 = conv2d_bn(x, 64, 1, 1)
branch5x5 = conv2d_bn(x, 48, 1, 1)
......
......@@ -105,15 +105,16 @@ def MobileNet(input_shape=None,
It should have exactly 3 input channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
alpha: controls the width of the network.
alpha: controls the width of the network. This is known as the
width multiplier in the MobileNet paper.
- If `alpha` < 1.0, proportionally decreases the number
of filters in each layer.
- If `alpha` > 1.0, proportionally increases the number
of filters in each layer.
- If `alpha` = 1, default number of filters from the paper
are used at each layer.
depth_multiplier: depth multiplier for depthwise convolution
(also called the resolution multiplier)
depth_multiplier: depth multiplier for depthwise convolution. This
is called the resolution multiplier in the MobileNet paper.
dropout: dropout rate
include_top: whether to include the fully-connected
layer at the top of the network.
......@@ -127,10 +128,10 @@ def MobileNet(input_shape=None,
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
......
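The `alpha` width multiplier described in the docstring above scales the per-layer filter counts. A sketch of the arithmetic (hypothetical helper, mirroring the `int(filters * alpha)` scaling the MobileNet family applies per layer):

```python
def scale_filters(filters, alpha):
    # alpha < 1 thins the network, alpha > 1 widens it,
    # alpha == 1 keeps the paper's default filter counts.
    return int(filters * alpha)
```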
......@@ -82,6 +82,7 @@ import numpy as np
from . import correct_pad
from . import get_submodules_from_kwargs
from . import imagenet_utils
from .imagenet_utils import decode_predictions
from .imagenet_utils import _obtain_input_shape
......@@ -98,19 +99,13 @@ keras_utils = None
def preprocess_input(x, **kwargs):
"""Preprocesses a numpy array encoding a batch of images.
This function applies the "Inception" preprocessing which converts
the RGB values from [0, 255] to [-1, 1]. Note that this preprocessing
function is different from `imagenet_utils.preprocess_input()`.
# Arguments
x: a 4D numpy array consisting of RGB values within [0, 255].
# Returns
Preprocessed array.
"""
x /= 128.
x -= 1.
return x.astype(np.float32)
return imagenet_utils.preprocess_input(x, mode='tf', **kwargs)
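The replacement delegates to `imagenet_utils.preprocess_input(x, mode='tf')`, which maps RGB values from [0, 255] to [-1, 1] by scaling with 127.5, instead of the old in-place `/128.` then `-1.`. A per-pixel sketch of the 'tf' mode:

```python
def preprocess_tf_pixel(value):
    # 'tf' mode: [0, 255] -> [-1, 1]
    return value / 127.5 - 1.0
```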
# This function is taken from the original tf repo.
......@@ -131,7 +126,6 @@ def _make_divisible(v, divisor, min_value=None):
def MobileNetV2(input_shape=None,
alpha=1.0,
depth_multiplier=1,
include_top=True,
weights='imagenet',
input_tensor=None,
......@@ -152,15 +146,14 @@ def MobileNetV2(input_shape=None,
do not match then we will throw an error.
E.g. `(160, 160, 3)` would be one valid value.
alpha: controls the width of the network. This is known as the
width multiplier in the MobileNetV2 paper.
width multiplier in the MobileNetV2 paper, but the name is kept for
consistency with MobileNetV1 in Keras.
- If `alpha` < 1.0, proportionally decreases the number
of filters in each layer.
- If `alpha` > 1.0, proportionally increases the number
of filters in each layer.
- If `alpha` = 1, default number of filters from the paper
are used at each layer.
depth_multiplier: depth multiplier for depthwise convolution
(also called the resolution multiplier)
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
......@@ -173,10 +166,10 @@ def MobileNetV2(input_shape=None,
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
......@@ -190,8 +183,8 @@ def MobileNetV2(input_shape=None,
# Raises
ValueError: in case of invalid argument for `weights`,
or invalid input shape or invalid depth_multiplier, alpha,
rows when weights='imagenet'
or invalid input shape or invalid alpha, rows when
weights='imagenet'
"""
global backend, layers, models, keras_utils
backend, layers, models, keras_utils = get_submodules_from_kwargs(kwargs)
......@@ -291,10 +284,6 @@ def MobileNetV2(input_shape=None,
cols = input_shape[col_axis]
if weights == 'imagenet':
if depth_multiplier != 1:
raise ValueError('If imagenet weights are being loaded, '
'depth multiplier must be 1')
if alpha not in [0.35, 0.50, 0.75, 1.0, 1.3, 1.4]:
raise ValueError('If imagenet weights are being loaded, '
'alpha can be one of `0.35`, `0.50`, `0.75`, '
......
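The pretrained-weights guard above (the message is truncated in this hunk) only accepts the `alpha` values for which imagenet checkpoints were released. A sketch of that validation (hypothetical `check_alpha`):

```python
VALID_ALPHAS = (0.35, 0.50, 0.75, 1.0, 1.3, 1.4)

def check_alpha(alpha, weights='imagenet'):
    # Only these width multipliers have released imagenet weights.
    if weights == 'imagenet' and alpha not in VALID_ALPHAS:
        raise ValueError('If imagenet weights are being loaded, '
                         'alpha must be one of %s.' % (VALID_ALPHAS,))
    return alpha
```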
......@@ -114,10 +114,10 @@ def NASNet(input_shape=None,
when `include_top` is `False`.
- `None` means that the output of the model
will be the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a
2D tensor.
- `max` means that global max pooling will
......@@ -164,7 +164,7 @@ def NASNet(input_shape=None,
default_size=default_size,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
require_flatten=True,
weights=weights)
if backend.image_data_format() != 'channels_last':
......@@ -190,10 +190,10 @@ def NASNet(input_shape=None,
else:
img_input = input_tensor
if penultimate_filters % 24 != 0:
if penultimate_filters % (24 * (filter_multiplier ** 2)) != 0:
raise ValueError(
'For NASNet-A models, the value of `penultimate_filters` '
'needs to be divisible by 24. Current value: %d' %
'For NASNet-A models, the `penultimate_filters` must be a multiple '
'of 24 * (`filter_multiplier` ** 2). Current value: %d' %
penultimate_filters)
channel_dim = 1 if backend.image_data_format() == 'channels_first' else -1
......
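The relaxed NASNet check above requires `penultimate_filters` to be a multiple of `24 * (filter_multiplier ** 2)` rather than plain 24. A sketch of that validation (hypothetical helper; NASNetLarge's defaults of 4032 filters and multiplier 2 satisfy it, since 4032 = 42 * 96):

```python
def check_penultimate_filters(penultimate_filters, filter_multiplier=2):
    divisor = 24 * (filter_multiplier ** 2)
    if penultimate_filters % divisor != 0:
        raise ValueError(
            'For NASNet-A models, `penultimate_filters` must be a '
            'multiple of %d. Current value: %d' %
            (divisor, penultimate_filters))
    return penultimate_filters
```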
"""ResNet models for Keras.
# Reference paper
- [Deep Residual Learning for Image Recognition]
(https://arxiv.org/abs/1512.03385) (CVPR 2016 Best Paper Award)
# Reference implementations
- [TensorNets]
(https://github.com/taehoonlee/tensornets/blob/master/tensornets/resnets.py)
- [Caffe ResNet]
(https://github.com/KaimingHe/deep-residual-networks/tree/master/prototxt)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from . import imagenet_utils
from .imagenet_utils import decode_predictions
from .resnet_common import ResNet50
from .resnet_common import ResNet101
from .resnet_common import ResNet152
def preprocess_input(x, **kwargs):
"""Preprocesses a numpy array encoding a batch of images.
# Arguments
x: a 4D numpy array consisting of RGB values within [0, 255].
data_format: data format of the image tensor.
# Returns
Preprocessed array.
"""
return imagenet_utils.preprocess_input(x, mode='caffe', **kwargs)
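`mode='caffe'` converts RGB to BGR and zero-centers each channel with the standard ImageNet channel means, without any scaling. A per-pixel sketch (hypothetical helper; the mean values are the widely used ImageNet BGR means):

```python
def preprocess_caffe_pixel(rgb):
    # RGB -> BGR, then subtract the ImageNet channel means.
    r, g, b = rgb
    return (b - 103.939, g - 116.779, r - 123.68)
```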
......@@ -165,16 +165,16 @@ def ResNet50(include_top=True,
has to be `(224, 224, 3)` (with `channels_last` data format)
or `(3, 224, 224)` (with `channels_first` data format).
It should have exactly 3 input channels,
and width and height should be no smaller than 197.
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
......
"""ResNetV2 models for Keras.
# Reference paper
- [Identity Mappings in Deep Residual Networks]
(https://arxiv.org/abs/1603.05027) (ECCV 2016)
# Reference implementations
- [TensorNets]
(https://github.com/taehoonlee/tensornets/blob/master/tensornets/resnets.py)
- [Torch ResNetV2]
(https://github.com/facebook/fb.resnet.torch/blob/master/models/preresnet.lua)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from . import imagenet_utils
from .imagenet_utils import decode_predictions
from .resnet_common import ResNet50V2
from .resnet_common import ResNet101V2
from .resnet_common import ResNet152V2
def preprocess_input(x, **kwargs):
"""Preprocesses a numpy array encoding a batch of images.
# Arguments
x: a 4D numpy array consisting of RGB values within [0, 255].
data_format: data format of the image tensor.
# Returns
Preprocessed array.
"""
return imagenet_utils.preprocess_input(x, mode='tf', **kwargs)
"""ResNeXt models for Keras.
# Reference paper
- [Aggregated Residual Transformations for Deep Neural Networks]
(https://arxiv.org/abs/1611.05431) (CVPR 2017)
# Reference implementations
- [TensorNets]
(https://github.com/taehoonlee/tensornets/blob/master/tensornets/resnets.py)
- [Torch ResNeXt]
(https://github.com/facebookresearch/ResNeXt/blob/master/models/resnext.lua)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from . import imagenet_utils
from .imagenet_utils import decode_predictions
from .resnet_common import ResNeXt50
from .resnet_common import ResNeXt101
def preprocess_input(x, **kwargs):
"""Preprocesses a numpy array encoding a batch of images.
# Arguments
x: a 4D numpy array consisting of RGB values within [0, 255].
data_format: data format of the image tensor.
# Returns
Preprocessed array.
"""
return imagenet_utils.preprocess_input(x, mode='torch', **kwargs)
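`mode='torch'` scales pixel values to [0, 1] and then standardizes each channel with the ImageNet per-channel mean and standard deviation. A per-pixel sketch (hypothetical helper; the constants are the standard ImageNet normalization values):

```python
TORCH_MEAN = (0.485, 0.456, 0.406)
TORCH_STD = (0.229, 0.224, 0.225)

def preprocess_torch_pixel(rgb):
    # Scale [0, 255] -> [0, 1], then standardize per channel.
    return tuple((v / 255.0 - m) / s
                 for v, m, s in zip(rgb, TORCH_MEAN, TORCH_STD))
```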
......@@ -61,10 +61,10 @@ def VGG16(include_top=True,
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
......
......@@ -61,10 +61,10 @@ def VGG19(include_top=True,
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
......
......@@ -72,10 +72,10 @@ def Xception(include_top=True,
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
......@@ -126,7 +126,7 @@ def Xception(include_top=True,
default_size=299,
min_size=71,
data_format=backend.image_data_format(),
require_flatten=False,
require_flatten=include_top,
weights=weights)
if input_tensor is None:
......
......@@ -21,13 +21,13 @@ and is distributed under the MIT license.
'''
setup(name='Keras_Applications',
version='1.0.6',
version='1.0.7',
description='Reference implementations of popular deep learning models',
long_description=long_description,
author='Keras Team',
url='https://github.com/keras-team/keras-applications',
download_url='https://github.com/keras-team/'
'keras-applications/tarball/1.0.6',
'keras-applications/tarball/1.0.7',
license='MIT',
install_requires=['numpy>=1.9.1',
'h5py'],
......
import pytest
import random
import six
import numpy as np
import keras_applications
from keras.applications import densenet
from keras.applications import inception_resnet_v2
from keras.applications import inception_v3
from keras.applications import mobilenet
try:
from keras.applications import mobilenet_v2
except ImportError:
from keras.applications import mobilenetv2 as mobilenet_v2
from keras.applications import nasnet
from keras.applications import resnet50
from keras.applications import vgg16
from keras.applications import vgg19
from keras.applications import xception
from keras.preprocessing import image
from keras import backend
from keras import layers
from keras import models
from keras import utils
from multiprocessing import Process, Queue
def keras_modules_injection(base_fun):
def wrapper(*args, **kwargs):
if hasattr(keras_applications, 'get_submodules_from_kwargs'):
kwargs['backend'] = backend
kwargs['layers'] = layers
kwargs['models'] = models
kwargs['utils'] = utils
return base_fun(*args, **kwargs)
return wrapper
for (name, module) in [('resnet', keras_applications.resnet),
('resnet_v2', keras_applications.resnet_v2),
('resnext', keras_applications.resnext)]:
module.decode_predictions = keras_modules_injection(module.decode_predictions)
module.preprocess_input = keras_modules_injection(module.preprocess_input)
for app in dir(module):
if app[0].isupper():
setattr(module, app, keras_modules_injection(getattr(module, app)))
setattr(keras_applications, name, module)
RESNET_LIST = [keras_applications.resnet.ResNet50,
keras_applications.resnet.ResNet101,
keras_applications.resnet.ResNet152]
RESNETV2_LIST = [keras_applications.resnet_v2.ResNet50V2,
keras_applications.resnet_v2.ResNet101V2,
keras_applications.resnet_v2.ResNet152V2]
RESNEXT_LIST = [keras_applications.resnext.ResNeXt50,
keras_applications.resnext.ResNeXt101]
MOBILENET_LIST = [(mobilenet.MobileNet, mobilenet, 1024),
(mobilenet_v2.MobileNetV2, mobilenet_v2, 1280)]
DENSENET_LIST = [(densenet.DenseNet121, 1024),
(densenet.DenseNet169, 1664),
(densenet.DenseNet201, 1920)]
NASNET_LIST = [(nasnet.NASNetMobile, 1056),
(nasnet.NASNetLarge, 4032)]
def keras_test(func):
"""Function wrapper to clean up after TensorFlow tests.
# Arguments
func: test function to clean up after.
# Returns
A function wrapping the input function.
"""
@six.wraps(func)
def wrapper(*args, **kwargs):
output = func(*args, **kwargs)
if backend.backend() == 'tensorflow' or backend.backend() == 'cntk':
backend.clear_session()
return output
return wrapper
def _get_elephant(target_size):
# Models without a Flatten step accept variable-size inputs
# even when loading ImageNet weights (the convolutional layers
# are size-agnostic), so the target size may be None;
# in that case, default to 299x299.
if target_size[0] is None:
target_size = (299, 299)
img = image.load_img('tests/data/elephant.jpg',
target_size=tuple(target_size))
x = image.img_to_array(img)
return np.expand_dims(x, axis=0)
def _get_output_shape(model_fn, preprocess_input=None):
if backend.backend() == 'cntk':
# Create model in a subprocess so that
# the memory consumed by InceptionResNetV2 will be
# released back to the system after this test
# (to deal with OOM error on CNTK backend).
# TODO: remove the use of multiprocessing from these tests
# once a memory clearing mechanism
# is implemented in the CNTK backend.
def target(queue):
model = model_fn()
if preprocess_input is None:
queue.put(model.output_shape)
else:
x = _get_elephant(model.input_shape[1:3])
x = preprocess_input(x)
queue.put((model.output_shape, model.predict(x)))
queue = Queue()
p = Process(target=target, args=(queue,))
p.start()
p.join()
# An error in the subprocess won't propagate to the main
# process, so verify that model creation succeeded by checking
# that an output shape was put into the queue
assert not queue.empty(), 'Model creation failed.'
return queue.get_nowait()
else:
model = model_fn()
if preprocess_input is None:
return model.output_shape
else:
x = _get_elephant(model.input_shape[1:3])
x = preprocess_input(x)
return (model.output_shape, model.predict(x))
@keras_test
def _test_application_basic(app, last_dim=1000, module=None):
if module is None:
output_shape = _get_output_shape(lambda: app(weights=None))
assert output_shape == (None, None, None, last_dim)
else:
output_shape, preds = _get_output_shape(
lambda: app(weights='imagenet'), module.preprocess_input)
assert output_shape == (None, last_dim)
names = [p[1] for p in module.decode_predictions(preds)[0]]
# Test correct label is in top 3 (weak correctness test).
assert 'African_elephant' in names[:3]
@keras_test
def _test_application_notop(app, last_dim):
output_shape = _get_output_shape(
lambda: app(weights=None, include_top=False))
assert output_shape == (None, None, None, last_dim)
@keras_test
def _test_application_variable_input_channels(app, last_dim):
if backend.image_data_format() == 'channels_first':
input_shape = (1, None, None)
else:
input_shape = (None, None, 1)
output_shape = _get_output_shape(
lambda: app(weights=None, include_top=False, input_shape=input_shape))
assert output_shape == (None, None, None, last_dim)
if backend.image_data_format() == 'channels_first':
input_shape = (4, None, None)
else:
input_shape = (None, None, 4)
output_shape = _get_output_shape(
lambda: app(weights=None, include_top=False, input_shape=input_shape))
assert output_shape == (None, None, None, last_dim)
@keras_test
def _test_app_pooling(app, last_dim):
output_shape = _get_output_shape(
lambda: app(weights=None,
include_top=False,
pooling=random.choice(['avg', 'max'])))
assert output_shape == (None, last_dim)