Commit febef656 authored by Bas Couwenberg's avatar Bas Couwenberg

New upstream version 1.4.0

parent 55c87362
......@@ -11,7 +11,7 @@ addons:
env:
global:
- DEPENDS="numpy>=1.9.0 cython>=0.21 setuptools>=18.0"
- DEPENDS="numpy>=1.9.0 cython>=0.21 setuptools>=18.0 cftime"
- NO_NET=1
- MPI=0
......@@ -27,13 +27,13 @@ matrix:
# Absolute minimum dependencies.
- python: 2.7
env:
- DEPENDS="numpy==1.9.0 cython==0.21 ordereddict==1.1 setuptools==18.0"
- DEPENDS="numpy==1.9.0 cython==0.21 ordereddict==1.1 setuptools==18.0 cftime"
# test MPI
- python: 2.7
env:
- MPI=1
- CC=mpicc
- DEPENDS="numpy>=1.9.0 cython>=0.21 setuptools>=18.0 mpi4py>=1.3.1"
- DEPENDS="numpy>=1.9.0 cython>=0.21 setuptools>=18.0 mpi4py>=1.3.1 cftime"
- NETCDF_VERSION=4.4.1.1
- NETCDF_DIR=$HOME
- PATH=${NETCDF_DIR}/bin:${PATH} # pick up nc-config here
......@@ -48,6 +48,7 @@ notifications:
email: false
before_install:
- pip install Cython # workaround for pip bug
- pip install $DEPENDS
install:
......
......@@ -15,18 +15,6 @@ OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
parts of pyiso8601 are included in netcdftime under the following license:
Copyright (c) 2007 Michael Twomey
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
......
version 1.4.0 (not yet released)
================================
* fixed bug in detection of CDF5 library support in setup.py (pull request
#736, issue #713).
* fixed reading of variables with zero-length dimensions in NETCDF3_CLASSIC
files (issue #743).
* allow integer-like objects in VLEN slices (not just python ints, issue
#526, pull request #757).
* treating _FillValue as a valid_min/valid_max was too surprising, despite
the fact that the netcdf docs 'attribute best practices' suggest that
clients should do this. Revert this change from issue #576 (issue #761).
* remove netcdftime, since it is now a separate package. date2num, num2date
and date2index are still importable from netCDF4.
* fix 'Unreachable code' cython warning (issue #767).
* Change behavior of string attributes so that nc.stringatt = ['foo','bar']
produces a vlen string array attribute in NETCDF4, instead of concatenating
into a single string ('foobar'). In NETCDF3/NETCDF4_CLASSIC, an IOError
is now raised, instead of writing 'foobar'. Issue #770.
* fix loading of enum type names (issue #775).
* make sure missing_value applies only to scaled short integers if
auto-scaling is on (issue #777).
* automatically create views of compound types with character arrays as
numpy strings (issue #773). Can be disabled using
'set_auto_chartostring(False)'. Numpy structured
array dtypes with 'SN' string subtypes can now be used to
define netcdf compound types (they get converted to ('S1',N)
character array types automatically).
* always return masked array by default, even if there are no
masked values (too surprising to get ndarray or MaskedArray depending
on slice, issue #785).
* treat valid_min/valid_max/_FillValue/missing_value as unsigned
integers if _Unsigned is set (to mimic behaviour of netcdf-java).
Conversion to unsigned type now occurs before masking and scale/offset
operation. Issue #794.
version 1.3.1 (tag v1.3.1rel)
=============================
* add parallel IO capabilities. netcdf-c and hdf5 must be compiled with MPI
......
......@@ -13,9 +13,8 @@ include examples/*ipynb
include examples/README.md
include test/*py
include test/*nc
include netcdftime/__init__.py
include netcdftime/_netcdftime.pyx
include netCDF4/__init__.py
include netCDF4/_netCDF4.pyx
include netCDF4/utils.py
include include/netCDF4.pxi
include include/mpi-compat.h
Metadata-Version: 1.1
Name: netCDF4
Version: 1.3.1
Version: 1.4.0
Author: Jeff Whitaker
Author-email: jeffrey s whitaker at noaa gov
Home-page: https://github.com/Unidata/netcdf4-python
......
# netcdf4-python
# [netcdf4-python](http://unidata.github.io/netcdf4-python)
[Python](http://python.org)/[numpy](http://numpy.org) interface to the netCDF [C library](https://github.com/Unidata/netcdf-c).
[![Linux Build Status](https://travis-ci.org/Unidata/netcdf4-python.svg?branch=master)](https://travis-ci.org/Unidata/netcdf4-python)
......@@ -8,7 +8,33 @@
## News
For details on the latest updates, see the [Changelog](https://github.com/Unidata/netcdf4-python/blob/master/Changelog).
11/01/2017: Version 1.3.1 released. Parallel IO support with MPI!
??/??/2018: Version [1.4.0](https://pypi.python.org/pypi/netCDF4/1.4.0) released. The netcdftime package is no longer
included; it is now a separate [package](https://pypi.python.org/pypi/cftime) dependency. In addition to several
bug fixes, there are a few important changes to the default behaviour to note:
* Slicing a netCDF variable will now always return a masked array by default, even if there are no
masked values. Previously the result depended on the slice, which was too surprising.
If auto-masking is turned off (with `set_auto_mask(False)`) a numpy array will always
be returned.
* `_FillValue` is no longer treated as a valid_min/valid_max. This was too surprising, despite
the fact that the netcdf docs [attribute best practices](https://www.unidata.ucar.edu/software/netcdf/docs/attribute_conventions.html) suggest that
clients should do this if `valid_min`, `valid_max` and `valid_range` are not set.
* Changed behavior of string attributes so that `nc.stringatt = ['foo','bar']`
produces a vlen string array attribute in NETCDF4, instead of concatenating
into a single string (`foobar`). In NETCDF3/NETCDF4_CLASSIC, an IOError
is now raised, instead of writing `foobar`.
* Retrieved compound-type variable data is now returned with character array elements converted to
numpy strings ([issue #773](https://github.com/Unidata/netcdf4-python/issues/773)).
This works for assignment as well. Can be disabled using
`set_auto_chartostring(False)`. Numpy structured
array dtypes with `'SN'` string subtypes can now be used to
define netcdf compound types in `createCompoundType` (they get converted to `('S1',N)`
character array types automatically).
* `valid_min`, `valid_max`, `_FillValue` and `missing_value` are now treated as unsigned
integers if `_Unsigned` variable attribute is set (to mimic behaviour of netcdf-java).
Conversion to unsigned type now occurs before masking and scale/offset
operation ([issue #794](https://github.com/Unidata/netcdf4-python/issues/794)).
11/01/2017: Version [1.3.1](https://pypi.python.org/pypi/netCDF4/1.3.1) released. Parallel IO support with MPI!
Requires that netcdf-c and hdf5 be built with MPI support, and [mpi4py](http://mpi4py.readthedocs.io/en/stable).
To open a file for parallel access in a program running in an MPI environment
using mpi4py, just use `parallel=True` when creating
......
* create a release branch ('vX.Y.Zrel'). In the release branch...
* make sure version number in setup.py and netCDF4/_netCDF4.pyx are up to date
* make sure version number in PKG-INFO, setup.py and netCDF4/_netCDF4.pyx are up to date
(in _netCDF4.pyx, change 'Version' in first line of docstring at top of file,
and __version__ variable). If netcdftime module has any updates,
increment __version__ in netcdftime/_netcdftime.pyx. Update version number in
PKG_INFO.
and __version__ variable).
* update Changelog and README.md as needed.
* commit and push all of the above changes.
* install the module (python setup.py install), then run 'sh create_docs.sh'
......
environment:
# SDK v7.0 MSVC Express 2008's SetEnv.cmd script will fail if the
# /E:ON and /V:ON options are not enabled in the batch script interpreter
# See: http://stackoverflow.com/a/13751649/163740
CMD_IN_ENV: "cmd /E:ON /V:ON /C obvci_appveyor_python_build_env.cmd"
matrix:
- TARGET_ARCH: x64
CONDA_NPY: 111
CONDA_PY: 27
CONDA_INSTALL_LOCN: C:\\Miniconda-x64
- TARGET_ARCH: x64
CONDA_NPY: 114
CONDA_PY: 27
CONDA_INSTALL_LOCN: C:\\Miniconda-x64
- TARGET_ARCH: x64
CONDA_NPY: 111
CONDA_PY: 36
CONDA_INSTALL_LOCN: C:\\Miniconda35-x64
CONDA_INSTALL_LOCN: C:\\Miniconda36-x64
- TARGET_ARCH: x64
CONDA_NPY: 114
CONDA_PY: 36
CONDA_INSTALL_LOCN: C:\\Miniconda36-x64
# We always use a 64-bit machine, but can build x86 distributions
# with the TARGET_ARCH variable.
platform:
- x64
......@@ -33,24 +35,25 @@ install:
throw "There are newer queued builds for this pull request, failing early." }
# Add path, activate `conda` and update conda.
- cmd: set "PATH=%CONDA_INSTALL_LOCN%\\Scripts;%CONDA_INSTALL_LOCN%\\Library\\bin;%PATH%"
- cmd: set PYTHONUNBUFFERED=1
- cmd: call %CONDA_INSTALL_LOCN%\Scripts\activate.bat
# for obvci_appveyor_python_build_env.cmd
- cmd: conda update --all --yes
- cmd: conda install anaconda-client=1.6.3 --yes
- cmd: conda install -c conda-forge --yes obvious-ci
# for msinttypes and newer stuff
- cmd: conda config --prepend channels conda-forge
- cmd: conda config --set show_channel_urls yes
- cmd: conda config --set always_yes true
# For building conda packages
- cmd: conda install --yes conda-build jinja2 anaconda-client
# this is now the downloaded conda...
- cmd: conda info -a
- cmd: conda config --set always_yes yes --set changeps1 no --set show_channel_urls true --set auto_update_conda false
- cmd: conda update conda
# We need to pin conda until https://github.com/conda/conda/issues/6556 is fixed.
- cmd: conda config --system --add pinned_packages defaults::conda
- cmd: conda config --add channels conda-forge --force
# Install conda-build.
- cmd: conda install conda-build
- cmd: set PYTHONUNBUFFERED=1
- cmd: conda install vs2008_express_vc_python_patch
- cmd: call setup_x64.bat
- cmd: conda info --all
# Skip .NET project specific build phase.
build: off
test_script:
- "%CMD_IN_ENV% conda build conda.recipe --quiet"
- conda build conda.recipe
......@@ -8,4 +8,4 @@ echo netCDF4_libdir = %LIBRARY_LIB% >> %SITECFG%
echo netCDF4_incdir = %LIBRARY_INC% >> %SITECFG%
"%PYTHON%" setup.py install --single-version-externally-managed --record record.txt
if errorlevel 1 exit 1
if errorlevel 1 exit 1
\ No newline at end of file
#!/bin/bash
SETUPCFG=$SRC_DIR\setup.cfg
echo "[options]" > $SETUPCFG
echo "[directories]" >> $SETUPCFG
echo "netCDF4_dir = $PREFIX" >> $SETUPCFG
${PYTHON} setup.py install --single-version-externally-managed --record record.txt
......@@ -21,21 +21,24 @@ requirements:
- cython
- numpy x.x
- msinttypes # [win and py<35]
- hdf5 1.8.17|1.8.17.*
- libnetcdf 4.4.*
- hdf5
- libnetcdf
- cftime
run:
- python
- setuptools
- numpy x.x
- hdf5 1.8.17|1.8.17.*
- libnetcdf 4.4.*
- hdf5
- libnetcdf
- cython
- cftime
test:
source_files:
- test
imports:
- netCDF4
- netcdftime
- cftime
commands:
- ncinfo -h
- nc4tonc3 -h
......
......@@ -162,18 +162,17 @@ datac2.real = datain['real']
datac2.imag = datain['imag']
print(datac.dtype,datac)
print(datac2.dtype,datac2)
# more complex compound type example.
from netCDF4 import chartostring, stringtoarr
f = Dataset('compound_example.nc','w') # create a new dataset.
# create an unlimited dimension call 'station'
f.createDimension('station',None)
# define a compound data type (can contain arrays, or nested compound types).
NUMCHARS = 80 # number of characters to use in fixed-length strings.
winddtype = numpy.dtype([('speed','f4'),('direction','i4')])
statdtype = numpy.dtype([('latitude', 'f4'), ('longitude', 'f4'),
('surface_wind',winddtype),
('temp_sounding','f4',10),('press_sounding','i4',10),
('location_name','S1',NUMCHARS)])
('location_name','S12')])
# use these data type definitions to create compound data types
# using the createCompoundType Dataset method.
# create a compound type for vector wind which will be nested inside
......@@ -182,12 +181,12 @@ wind_data_t = f.createCompoundType(winddtype,'wind_data')
# now that wind_data_t is defined, create the station data type.
station_data_t = f.createCompoundType(statdtype,'station_data')
# create nested compound data types to hold the units variable attribute.
winddtype_units = numpy.dtype([('speed','S1',NUMCHARS),('direction','S1',NUMCHARS)])
statdtype_units = numpy.dtype([('latitude', 'S1',NUMCHARS), ('longitude', 'S1',NUMCHARS),
winddtype_units = numpy.dtype([('speed','S12'),('direction','S12')])
statdtype_units = numpy.dtype([('latitude', 'S12'), ('longitude', 'S12'),
('surface_wind',winddtype_units),
('temp_sounding','S1',NUMCHARS),
('location_name','S1',NUMCHARS),
('press_sounding','S1',NUMCHARS)])
('temp_sounding','S12'),
('location_name','S12'),
('press_sounding','S12')])
# create the wind_data_units type first, since it will be nested inside
# the station_data_units data type.
wind_data_units_t = f.createCompoundType(winddtype_units,'wind_data_units')
......@@ -196,35 +195,33 @@ f.createCompoundType(statdtype_units,'station_data_units')
# create a variable of type 'station_data_t'
statdat = f.createVariable('station_obs', station_data_t, ('station',))
# create a numpy structured array, assign data to it.
data = numpy.empty(1,station_data_t)
data = numpy.empty(1,statdtype)
data['latitude'] = 40.
data['longitude'] = -105.
data['surface_wind']['speed'] = 12.5
data['surface_wind']['direction'] = 270
data['temp_sounding'] = (280.3,272.,270.,269.,266.,258.,254.1,250.,245.5,240.)
data['press_sounding'] = range(800,300,-50)
# variable-length string datatypes are not supported inside compound types, so
# to store strings in a compound data type, each string must be
# stored as a fixed-size (in this case 80) array of characters.
data['location_name'] = stringtoarr('Boulder, Colorado, USA',NUMCHARS)
data['location_name'] = 'Boulder, CO'
# assign structured array to variable slice.
statdat[0] = data
# or just assign a tuple of values to variable slice
# (will automatically be converted to a structured array).
statdat[1] = (40.78,-73.99,(-12.5,90),
statdat[1] = numpy.array((40.78,-73.99,(-12.5,90),
(290.2,282.5,279.,277.9,276.,266.,264.1,260.,255.5,243.),
range(900,400,-50),stringtoarr('New York, New York, USA',NUMCHARS))
range(900,400,-50),'New York, NY'),data.dtype)
print(f.cmptypes)
windunits = numpy.empty(1,winddtype_units)
stationobs_units = numpy.empty(1,statdtype_units)
windunits['speed'] = stringtoarr('m/s',NUMCHARS)
windunits['direction'] = stringtoarr('degrees',NUMCHARS)
stationobs_units['latitude'] = stringtoarr('degrees north',NUMCHARS)
stationobs_units['longitude'] = stringtoarr('degrees west',NUMCHARS)
windunits['speed'] = 'm/s'
windunits['direction'] = 'degrees'
stationobs_units['latitude'] = 'degrees N'
stationobs_units['longitude'] = 'degrees W'
stationobs_units['surface_wind'] = windunits
stationobs_units['location_name'] = stringtoarr('None', NUMCHARS)
stationobs_units['temp_sounding'] = stringtoarr('Kelvin',NUMCHARS)
stationobs_units['press_sounding'] = stringtoarr('hPa',NUMCHARS)
stationobs_units['location_name'] = 'None'
stationobs_units['temp_sounding'] = 'Kelvin'
stationobs_units['press_sounding'] = 'hPa'
print(stationobs_units.dtype)
statdat.units = stationobs_units
# close and reopen the file.
f.close()
......@@ -234,22 +231,7 @@ statdat = f.variables['station_obs']
print(statdat)
# print out data in variable.
print('data in a variable of compound type:')
print('----')
for data in statdat[:]:
for name in statdat.dtype.names:
if data[name].dtype.kind == 'S': # a string
# convert array of characters back to a string for display.
units = chartostring(statdat.units[name])
print(name,': value =',chartostring(data[name]),\
': units=',units)
elif data[name].dtype.kind == 'V': # a nested compound type
units_list = [chartostring(s) for s in tuple(statdat.units[name])]
print(name,data[name].dtype.names,': value=',data[name],': units=',\
units_list)
else: # a numeric type.
units = chartostring(statdat.units[name])
print(name,': value=',data[name],': units=',units)
print('----')
print(statdat[:])
f.close()
f = Dataset('tst_vlen.nc','w')
......@@ -306,3 +288,35 @@ print(cloud_var)
print(cloud_var.datatype.enum_dict)
print(cloud_var[:])
f.close()
# dealing with strings
from netCDF4 import stringtochar
nc = Dataset('stringtest.nc','w',format='NETCDF4_CLASSIC')
nc.createDimension('nchars',3)
nc.createDimension('nstrings',None)
v = nc.createVariable('strings','S1',('nstrings','nchars'))
datain = numpy.array(['foo','bar'],dtype='S3')
v[:] = stringtochar(datain) # manual conversion to char array
print(v[:]) # data returned as char array
v._Encoding = 'ascii' # this enables automatic conversion
v[:] = datain # conversion to char array done internally
print(v[:]) # data returned in numpy string array
nc.close()
# strings in compound types
nc = Dataset('compoundstring_example.nc','w')
dtype = numpy.dtype([('observation', 'f4'),
('station_name','S12')])
station_data_t = nc.createCompoundType(dtype,'station_data')
nc.createDimension('station',None)
statdat = nc.createVariable('station_obs', station_data_t, ('station',))
data = numpy.empty(2,station_data_t.dtype_view)
data['observation'][:] = (123.,3.14)
data['station_name'][:] = ('Boulder','New York')
print(statdat.dtype) # strings actually stored as character arrays
statdat[:] = data # strings converted to character arrays internally
print(statdat[:]) # character arrays converted back to strings
print(statdat[:].dtype)
statdat.set_auto_chartostring(False) # turn off auto-conversion
statdat[:] = data.view(station_data_t.dtype)
print(statdat[:]) # now structured array with char array subtype is returned
nc.close()
......@@ -190,12 +190,13 @@ def _StartCountStride(elem, shape, dimensions=None, grp=None, datashape=None,\
else: # Convert single index to sequence
elem = [elem]
hasEllipsis=False
for e in elem:
if type(e)==type(Ellipsis):
if hasEllipsis:
raise IndexError("At most one ellipsis allowed in a slicing expression")
hasEllipsis=True
# ensure there is at most 1 ellipsis
# we cannot use elem.count(Ellipsis), since with fancy indexing that would
# evaluate np.array() == Ellipsis, which gives ValueError: The truth value of an
# array with more than one element is ambiguous. Use a.any() or a.all()
if sum(1 for e in elem if e is Ellipsis) > 1:
raise IndexError("At most one ellipsis allowed in a slicing expression")
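The identity-based count used above can be demonstrated in isolation. This is a standalone sketch (the tuple `elem` below is a made-up slicing expression, not taken from the library): `e is Ellipsis` avoids the `==` comparison that `count()` would perform, which is problematic when an element is a numpy array.

```python
import numpy as np

# a slicing tuple mixing an ellipsis with a fancy-index array
elem = (Ellipsis, np.array([1, 2, 3]), slice(None))

# identity checks sidestep elementwise == against the numpy array
n_ellipsis = sum(1 for e in elem if e is Ellipsis)
print(n_ellipsis)  # 1

if n_ellipsis > 1:
    raise IndexError("At most one ellipsis allowed in a slicing expression")
```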
# replace boolean arrays with sequences of integers.
newElem = []
IndexErrorMsg=\
......@@ -526,9 +527,9 @@ def ncinfo():
print(f.dimensions[dim])
else:
if var is None and dim is None:
print(getgrp(f,group))
print(_getgrp(f,group))
else:
g = getgrp(f,group)
g = _getgrp(f,group)
if var is not None:
print(g.variables[var])
if dim is not None:
......@@ -743,11 +744,18 @@ def _nc3tonc4(filename3,filename4,unpackshort=True,
if dounpackshort:
if not quiet: sys.stdout.write('unpacking short integers to floats ...\n')
sys.stdout.write('')
# is there a missing value?
if hasattr(ncvar, '_FillValue'):
FillValue = ncvar._FillValue
fillvalue3 = ncvar._FillValue
elif hasattr(ncvar, 'missing_value'):
fillvalue3 = ncvar.missing_value
else:
FillValue = None
var = ncfile4.createVariable(varname,datatype,ncvar.dimensions, fill_value=FillValue, least_significant_digit=lsd,zlib=zlib,complevel=complevel,shuffle=shuffle,fletcher32=fletcher32)
fillvalue3 = None
if fillvalue3 is not None:
fillvalue4 = fillvalue3 if not dounpackshort else mval
else:
fillvalue4 = None
var = ncfile4.createVariable(varname,datatype,ncvar.dimensions, fill_value=fillvalue4, least_significant_digit=lsd,zlib=zlib,complevel=complevel,shuffle=shuffle,fletcher32=fletcher32)
# fill variable attributes.
attdict = ncvar.__dict__
if '_FillValue' in attdict: del attdict['_FillValue']
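The fill-value selection in this hunk can be traced with a self-contained sketch. `Var` and the values of `mval` and `dounpackshort` below are hypothetical stand-ins for the netCDF3 variable and the converter's settings; the branching mirrors the diff above.

```python
class Var:
    """Hypothetical stand-in for a netCDF3 variable with attributes."""
    _FillValue = -999

ncvar = Var()
dounpackshort = True   # shorts are being unpacked to floats
mval = 1.e30           # value used for missing data after unpacking

# prefer _FillValue, fall back to missing_value, else no fill value
if hasattr(ncvar, '_FillValue'):
    fillvalue3 = ncvar._FillValue
elif hasattr(ncvar, 'missing_value'):
    fillvalue3 = ncvar.missing_value
else:
    fillvalue3 = None

# when shorts are unpacked, the netCDF4 fill value becomes mval
if fillvalue3 is not None:
    fillvalue4 = fillvalue3 if not dounpackshort else mval
else:
    fillvalue4 = None
print(fillvalue4)  # 1e+30
```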
......@@ -756,15 +764,8 @@ def _nc3tonc4(filename3,filename4,unpackshort=True,
if dounpackshort and 'scale_factor' in attdict:
del attdict['scale_factor']
if dounpackshort and 'missing_value' in attdict:
attdict['missing_value']=mval
attdict['missing_value'] = fillvalue4
var.setncatts(attdict)
#for attname in ncvar.ncattrs():
# if attname == '_FillValue': continue
# if dounpackshort and attname in ['add_offset','scale_factor']: continue
# if dounpackshort and attname == 'missing_value':
# setattr(var,attname,mval)
# else:
# setattr(var,attname,getattr(ncvar,attname))
# fill variables with data.
if hasunlimdim: # has an unlim dim, loop over unlim dim index.
# range to copy
......@@ -774,32 +775,11 @@ def _nc3tonc4(filename3,filename4,unpackshort=True,
for n in range(start, stop, step):
nmax = n+nchunk
if nmax > istop: nmax=istop
idata = ncvar[n:nmax]
if dounpackshort:
tmpdata = (ncvar.scale_factor*idata.astype('f')+ncvar.add_offset).astype('f')
if hasattr(ncvar,'missing_value'):
tmpdata = np.where(idata == ncvar.missing_value, mval, tmpdata)
else:
tmpdata = idata
var[n-istart:nmax-istart] = tmpdata
var[n-istart:nmax-istart] = ncvar[n:nmax]
else:
idata = ncvar[:]
if dounpackshort:
tmpdata = (ncvar.scale_factor*idata.astype('f')+ncvar.add_offset).astype('f')
if hasattr(ncvar,'missing_value'):
tmpdata = np.where(idata == ncvar.missing_value, mval, tmpdata)
else:
tmpdata = idata
var[0:len(unlimdim)] = tmpdata
var[0:len(unlimdim)] = ncvar[:]
else: # no unlim dim or 1-d variable, just copy all data at once.
idata = ncvar[:]
if dounpackshort:
tmpdata = (ncvar.scale_factor*idata.astype('f')+ncvar.add_offset).astype('f')
if hasattr(ncvar,'missing_value'):
tmpdata = np.where(idata == ncvar.missing_value, mval, tmpdata)
else:
tmpdata = idata
var[:] = tmpdata
var[:] = ncvar[:]
ncfile4.sync() # flush data to disk
# close files.
ncfile3.close()
......
from ._netcdftime import utime, JulianDayFromDate, DateFromJulianDay
from ._netcdftime import _parse_date, date2index, time2index
from ._netcdftime import DatetimeProlepticGregorian as datetime
from ._netcdftime import DatetimeNoLeap, DatetimeAllLeap, Datetime360Day, DatetimeJulian, \
DatetimeGregorian, DatetimeProlepticGregorian
from ._netcdftime import microsec_units, millisec_units, \
sec_units, hr_units, day_units, min_units
from ._netcdftime import __version__
......@@ -4,7 +4,7 @@ from setuptools import setup, Extension
from distutils.dist import Distribution
setuptools_extra_kwargs = {
"install_requires": ["numpy>=1.7"],
"install_requires": ["numpy>=1.7","cftime"],
"setup_requires": ['setuptools>=18.0', "cython>=0.19"],
"entry_points": {
'console_scripts': [
......@@ -78,10 +78,9 @@ def check_api(inc_dirs):
ncmetapath = os.path.join(d,'netcdf_meta.h')
if os.path.exists(ncmetapath):
has_cdf5 = False
for line in open(ncmetapath):
if line.startswith('#define NC_HAS_CDF5'):
has_cdf5 = True
has_cdf5_format = bool(int(line.split()[2]))
break
return has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
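The `netcdf_meta.h` scan in this hunk works by splitting each `#define` line on whitespace and reading the third token as a 0/1 flag. A standalone sketch (the `header` string here is made up; the real `check_api` reads the header file from the include directories):

```python
# hypothetical header text standing in for netcdf_meta.h
header = """#define NC_HAS_NC2 1
#define NC_HAS_CDF5 1
#define NC_HAS_PARALLEL 0
"""

has_cdf5_format = False
for line in header.splitlines():
    # tokens: '#define', 'NC_HAS_CDF5', '1' -> the third is the flag
    if line.startswith('#define NC_HAS_CDF5'):
        has_cdf5_format = bool(int(line.split()[2]))
        break
print(has_cdf5_format)  # True
```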
......@@ -463,13 +462,12 @@ netcdf_lib_version = getnetcdfvers(lib_dirs)
if netcdf_lib_version is None:
sys.stdout.write('unable to detect netcdf library version\n')
else:
netcdf_lib_version = str(netcdf_lib_version)
sys.stdout.write('using netcdf library version %s\n' % netcdf_lib_version)
cmdclass = {}
netcdf4_src_root = osp.join('netCDF4', '_netCDF4')
netcdf4_src_c = netcdf4_src_root + '.c'
netcdftime_src_root = osp.join('netcdftime', '_netcdftime')
netcdftime_src_c = netcdftime_src_root + '.c'
if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
sys.stdout.write('using Cython to compile netCDF4.pyx...\n')
# remove _netCDF4.c file if it exists, so cython will recompile _netCDF4.pyx.
......@@ -478,12 +476,15 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
if len(sys.argv) >= 2:
if os.path.exists(netcdf4_src_c):
os.remove(netcdf4_src_c)
# same for _netcdftime.c
if os.path.exists(netcdftime_src_c):
os.remove(netcdftime_src_c)
# this determines whether renameGroup and filepath methods will work.
has_rename_grp, has_nc_inq_path, has_nc_inq_format_extended, \
has_cdf5_format, has_nc_open_mem, has_nc_par = check_api(inc_dirs)
# for netcdf 4.4.x CDF5 format is always enabled.
if netcdf_lib_version is not None and\
(netcdf_lib_version > "4.4" and netcdf_lib_version < "4.5"):
has_cdf5_format = True
# disable parallel support if mpi4py not available.
try:
import mpi4py
except ImportError:
......@@ -546,15 +547,13 @@ if 'sdist' not in sys.argv[1:] and 'clean' not in sys.argv[1:]:
libraries=libs,
library_dirs=lib_dirs,
include_dirs=inc_dirs + ['include'],
runtime_library_dirs=runtime_lib_dirs),
Extension('netcdftime._netcdftime',
[netcdftime_src_root + '.pyx'])]
runtime_library_dirs=runtime_lib_dirs)]
else:
ext_modules = None
setup(name="netCDF4",
cmdclass=cmdclass,
version="1.3.1",
version="1.4.0",
long_description="netCDF version 4 has many features not found in earlier versions of the library, such as hierarchical groups, zlib compression, multiple unlimited dimensions, and new data types. It is implemented on top of HDF5. This module implements most of the new features, and can read and write netCDF files compatible with older versions of the library. The API is modelled after Scientific.IO.NetCDF, and should be familiar to users of that module.\n\nThis project is hosted on a `GitHub repository <https://github.com/Unidata/netcdf4-python>`_ where you may access the most up-to-date source.",
author="Jeff Whitaker",
author_email="jeffrey.s.whitaker@noaa.gov",
......@@ -578,6 +577,6 @@ setup(name="netCDF4",
"Topic :: Software Development :: Libraries :: Python Modules",
"Topic :: System :: Archiving :: Compression",
"Operating System :: OS Independent"],
packages=['netcdftime', 'netCDF4'],
packages=['netCDF4'],
ext_modules=ext_modules,
**setuptools_extra_kwargs)
import glob, os, sys, unittest
import glob, os, sys, unittest, struct
from netCDF4 import getlibversion,__hdf5libversion__,__netcdf4libversion__,__version__
from netCDF4 import __has_cdf5_format__, __has_nc_inq_path__, __has_nc_par__
......@@ -21,7 +21,7 @@ if __netcdf4libversion__ < '4.2.1' or __has_nc_par__:
if not __has_nc_inq_path__:
test_files.remove('tst_filepath.py')
sys.stdout.write('not running tst_filepath.py ...\n')
if not __has_cdf5_format__:
if not __has_cdf5_format__ or struct.calcsize("P") < 8:
test_files.remove('tst_cdf5.py')
sys.stdout.write('not running tst_cdf5.py ...\n')
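The `struct.calcsize("P")` guard added in this hunk is a portable way to detect a 64-bit interpreter: `"P"` is the format code for a C pointer, so the result is 8 bytes on 64-bit Python and 4 on 32-bit, and the CDF5 test is skipped on 32-bit builds. A minimal sketch:

```python
import struct

# size of a C pointer in bytes: 8 on 64-bit Python, 4 on 32-bit
pointer_size = struct.calcsize("P")
is_64bit = pointer_size >= 8
print(pointer_size, is_64bit)
```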
......
......@@ -30,6 +30,16 @@ class Test_Unsigned(unittest.TestCase):
data2 = f['soil_moisture'][:]
assert(data1.mask.sum() == data2.mask.sum())
f.close()
# issue 794
# test that valid_min/valid_max/_FillValue are
# treated as unsigned integers.
f=netCDF4.Dataset('20171025_2056.Cloud_Top_Height.nc')
data = f['HT'][:]
assert(data.mask.sum() == 57432)
assert(int(data.max()) == 15430)
assert(int(data.min()) == 0)
assert(data.dtype == np.float32)
f.close()
if __name__ == '__main__':
unittest.main()
......@@ -31,8 +31,11 @@ INTATT = 1
FLOATATT = math.pi
SEQATT = NP.arange(10)
STRINGSEQATT = ['mary ','','had ','a ','little ','lamb',]
#ATTDICT = {'stratt':STRATT,'floatatt':FLOATATT,'seqatt':SEQATT,
# 'stringseqatt':''.join(STRINGSEQATT), # changed in issue #770
# 'emptystratt':EMPTYSTRATT,'intatt':INTATT}
ATTDICT = {'stratt':STRATT,'floatatt':FLOATATT,'seqatt':SEQATT,
'stringseqatt':''.join(STRINGSEQATT),
'stringseqatt':STRINGSEQATT,
'emptystratt':EMPTYSTRATT,'intatt':INTATT}
class VariablesTestCase(unittest.TestCase):
......@@ -126,7 +129,8 @@ class VariablesTestCase(unittest.TestCase):
assert f.stratt == STRATT
assert f.emptystratt == EMPTYSTRATT
assert f.seqatt.tolist() == SEQATT.tolist()
assert f.stringseqatt == ''.join(STRINGSEQATT)
#assert f.stringseqatt == ''.join(STRINGSEQATT) # issue 770
assert f.stringseqatt == STRINGSEQATT
assert f.stringseqatt_array == STRINGSEQATT
assert f.getncattr('file_format') == 'netcdf4_format'
# variable attributes.
......@@ -141,7 +145,8 @@ class VariablesTestCase(unittest.TestCase):
assert v.floatatt == FLOATATT
assert v.stratt == STRATT
assert v.seqatt.tolist() == SEQATT.tolist()
assert v.stringseqatt == ''.join(STRINGSEQATT)
#assert v.stringseqatt == ''.join(STRINGSEQATT) # issue 770
assert v.stringseqatt == STRINGSEQATT
assert v.stringseqatt_array == STRINGSEQATT
assert v.getncattr('ndim') == 'three'
assert v.getncattr('foo') == 1
......@@ -178,7 +183,8 @@ class VariablesTestCase(unittest.TestCase):
assert g.stratt == STRATT
assert g.emptystratt == EMPTYSTRATT
assert g.seqatt.tolist() == SEQATT.tolist()
assert g.stringseqatt == ''.join(STRINGSEQATT)
#assert g.stringseqatt == ''.join(STRINGSEQATT) # issue 770
assert g.stringseqatt == STRINGSEQATT
assert g.stringseqatt_array == STRINGSEQATT
for key,val in ATTDICT.items():
if type(val) == NP.ndarray:
......@@ -190,7 +196,8 @@ class VariablesTestCase(unittest.TestCase):
assert v1.stratt == STRATT
assert v1.emptystratt == EMPTYSTRATT
assert v1.seqatt.tolist() == SEQATT.tolist()
assert v1.stringseqatt == ''.join(STRINGSEQATT)
#assert v1.stringseqatt == ''.join(STRINGSEQATT) # issue 770
assert v1.stringseqatt == STRINGSEQATT
assert v1.stringseqatt_array == STRINGSEQATT
assert getattr(v1,'nonexistantatt',None) == None
f.close()
......
......@@ -2,7 +2,7 @@ import sys
import unittest
import os
import tempfile
from netCDF4 import Dataset, CompoundType, chartostring, stringtoarr
from netCDF4 import Dataset, CompoundType
import numpy as np
from numpy.testing import assert_array_equal, assert_array_almost_equal
......@@ -16,15 +16,13 @@ GROUP_NAME = 'forecasts'