Commit e9c9952b authored by Ole Streicher's avatar Ole Streicher

Import Upstream version 0.6.1

[run]
concurrency = multiprocessing
source = pynpoint
# Python
# Distribution
# Testing
# Sphinx
# Mac
language: python
dist: xenial
python:
  - 2.7
  - 3.6
  - 3.7
install:
  - pip install -r requirements.txt
  - pip install pytest-cov
  - pip install coveralls
  - pip install sphinx
  - pip install sphinx_rtd_theme
script:
  - make docs
  - make test
after_success:
  - coveralls
notifications:
  webhooks:
  email: false
* @tomasstolker
include LICENSE
include Makefile
include README.rst
include requirements.txt
include tox.ini
include docs/*
include docs/_images/*
graft tests
.PHONY: help clean clean-build clean-python clean-test test test-all coverage docs

help:
	@echo "pypi - submit package to the PyPI server"
	@echo "docs - generate Sphinx documentation"
	@echo "test - run test cases"
	@echo "coverage - check code coverage"
	@echo "clean - remove all artifacts"
	@echo "clean-build - remove build artifacts"
	@echo "clean-python - remove Python artifacts"
	@echo "clean-test - remove test artifacts"

pypi:
	python setup.py sdist bdist_wheel
	twine check dist/*
	twine upload dist/*

docs:
	rm -f docs/pynpoint.core.rst
	rm -f docs/pynpoint.readwrite.rst
	rm -f docs/pynpoint.processing.rst
	rm -f docs/pynpoint.util.rst
	sphinx-apidoc -o docs/ pynpoint
	$(MAKE) -C docs clean
	$(MAKE) -C docs html

test:
	pytest --cov=pynpoint

coverage:
	coverage run --rcfile .coveragerc -m py.test
	coverage combine
	coverage report -m
	coverage html

clean: clean-build clean-python clean-test

clean-build:
	rm -rf dist/
	rm -rf build/
	rm -rf htmlcov/
	rm -rf .eggs/
	rm -rf docs/_build

clean-python:
	find . -name '*.pyc' -exec rm -f {} +
	find . -name '*.pyo' -exec rm -f {} +
	find . -name '*~' -exec rm -f {} +
	find . -name '__pycache__' -exec rm -rf {} +

clean-test:
	rm -f coverage.xml
	rm -f .coverage
	rm -f .coverage.*
	rm -rf .tox/
	rm -rf pynpoint.egg-info/
	rm -f junit-docs-ci.xml
	rm -f junit-py27.xml
	rm -f junit-py36.xml
	rm -f junit-py37.xml
	rm -rf .pytest_cache/
**Python package for processing and analysis of high-contrast imaging data**
PynPoint is an end-to-end pipeline for the data reduction and analysis of high-contrast imaging data of planetary and substellar companions, as well as circumstellar disks in scattered light.
The pipeline has a modular architecture with a central data storage in which all results are stored by the processing modules. These modules have specific tasks such as the subtraction of the thermal background emission, frame selection, centering, PSF subtraction, and photometric and astrometric measurements. The tags from the central data storage can be written to FITS, HDF5, and text files with the available IO modules.
PynPoint is under continuous development and the latest implementations can be pulled from the Github repository. Bug reports, requests for new features, and contributions in the form of new functionalities and pipeline modules are highly appreciated. Instructions for writing modules are provided in the documentation. Bug reports and functionality requests can be submitted by creating an `issue <>`_ on the Github page.
An end-to-end example of a `SPHERE/ZIMPOL <>`_ H-alpha data set of the accreting M dwarf companion of HD 142527 can be downloaded `here <>`_.
Documentation can be found at ` <>`_, including installation instructions, details on the architecture of PynPoint, an end-to-end example for data obtained with dithering, and a description of all the pipeline modules and their input parameters.
Mailing list
Please subscribe to the `mailing list <>`_ if you want to be informed about new functionalities, pipeline modules, releases, and other PynPoint related news.
If you use PynPoint in your publication then please cite `Stolker et al. (2019) <>`_. Please also cite `Amara & Quanz (2012) <>`_ as the origin of PynPoint, which initially focused on the use of principal component analysis (PCA) as a PSF subtraction method. If you specifically use the PCA-based background subtraction module or the wavelet-based speckle suppression module, please give credit to `Hunziker et al. (2018) <>`_ or `Bonse, Quanz & Amara (2018) <>`_, respectively.
Copyright 2014-2018 Tomas Stolker, Markus Bonse, Sascha Quanz, Adam Amara, and contributors.
PynPoint is free software and distributed under the GNU General Public License v3. See the LICENSE file for the terms and conditions.
The PynPoint logo was designed by `Atlas Infographics <>`_ and is `available <>`_ for use in presentations.
# Minimal makefile for Sphinx documentation
# You can set these variables from the command line.
SPHINXBUILD = sphinx-build
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help . "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ . "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.. _about:
.. _team:
Development Team
* Tomas Stolker <>
* Markus Bonse <>
* Sascha Quanz <>
* Adam Amara <>
.. _contributing:
If you encounter errors or problems when using PynPoint then please contact Tomas Stolker (ETH Zurich). Bug reports and functionality requests can be provided by creating an |issue| on the Github page. We also welcome active help with bug fixing and the development of new functionalities and processing modules, which can be done by creating a |pull|.
.. |issue| raw:: html
<a href="" target="_blank">issue</a>
.. |pull| raw:: html
<a href="" target="_blank">pull request</a>
.. _attribution:
If you use PynPoint in your publication then please cite `Stolker et al. (2019) <>`_. Please also cite `Amara & Quanz (2012) <>`_ as the origin of PynPoint, which initially focused on the use of principal component analysis (PCA) as a PSF subtraction method. If you specifically use the PCA-based background subtraction module or the wavelet-based speckle suppression module, please give credit to `Hunziker et al. (2018) <>`_ or `Bonse, Quanz & Amara (2018) <>`_, respectively.
.. _acknowledgements:
We would like to thank several people who provided contributions and helped test the package before its release:
* Anna Boehle (ETH Zurich)
* Alexander Bohn (Leiden University)
* Gabriele Cugno (ETH Zurich)
* Silvan Hunziker (ETH Zurich)
The PynPoint logo was designed by `Atlas Infographics <>`_ and is available `here <>`_.
.. _architecture:
PynPoint has evolved from a PSF subtraction toolkit into an end-to-end pipeline for high-contrast imaging data obtained in pupil-stabilized mode. The architecture of PynPoint was redesigned in v0.3.0 with the goal to create a generic, modular, and open-source data reduction pipeline, which is extendable to new data processing techniques and data types in the future. An overview of the available IO and processing modules is provided in the :ref:`pynpoint-package` section.
The actual pipeline and the processing functionalities are implemented in different subpackages. Therefore, it is possible to extend the processing functionalities of the pipeline without changing its core.
The UML class diagram below illustrates the pipeline architecture of PynPoint:
.. image:: _images/uml.png
:width: 100%
The diagram shows that the architecture is subdivided into three components:
* Data management
* Pipeline modules for reading, writing, and processing of data
* The actual pipeline
.. _database:
Central Database
The new architecture of PynPoint separates the data management from the data reduction steps for the following reasons:
1. Raw datasets can be very large, in particular in the 3--5 μm wavelength regime, which challenges the processing on a computer with a small amount of memory (RAM). A central database is used to store the data on a computer's hard drive.
2. Some data is used in different steps of the pipeline. A central database makes it easy to access that data without making a copy.
3. The central data storage on the hard drive will remain updated after each step. Therefore, processing steps that already finished remain unaffected if an error occurs or the data reduction is interrupted by the user.
Understanding the central data storage classes is important if you plan to write your own Pipeline modules (see :ref:`writing`). When running the pipeline, it is enough to understand the concept of database tags.
As already encountered in the :ref:`end-to-end` section, each pipeline module has input and/or output tags. A tag is a label of a specific dataset in the central database. A module with ``image_in_tag=im_arr`` will look for a stack of input images in the central database under the tag name `im_arr`. Similarly, a module with ``image_out_tag=im_arr_processed`` will write a stack of processed images to the central database under the tag `im_arr_processed`. Note that input tags never change the data in the database.
Access to the data storage occurs through instances of :class:`PynPoint.Core.DataIO.Port`, which allow pipeline modules to read data from and write data to the central database.
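The tag mechanism can be illustrated with a short sketch. Note that this is a simplified stand-in for illustration only: a plain Python dict plays the role of the HDF5 database, and the ``InputPort``/``OutputPort`` classes below are hypothetical, not the actual classes from ``pynpoint.core.dataio``:

```python
# Simplified illustration of database tags: a dict stands in for the
# central database, and ports access datasets by their tag name.

class OutputPort:
    def __init__(self, tag, database):
        self.tag = tag
        self.database = database

    def set_all(self, data):
        # Write a dataset to the central database under this port's tag.
        self.database[self.tag] = data

class InputPort:
    def __init__(self, tag, database):
        self.tag = tag
        self.database = database

    def get_all(self):
        # Read the dataset stored under this port's tag (never modifies it).
        return self.database[self.tag]

database = {}

# A reading step stores images under the tag "im_arr"...
OutputPort("im_arr", database).set_all([[1.0, 2.0], [3.0, 4.0]])

# ...and a module with image_in_tag="im_arr" finds them under the same tag.
images = InputPort("im_arr", database).get_all()
print(images)  # [[1.0, 2.0], [3.0, 4.0]]
```

Two ports with the same tag therefore always point at the same dataset, which is how modules pass results to each other without copying data.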
.. _modules:
Central configuration
A central configuration file has to be stored in the ``working_place_in`` with the name ``PynPoint_config.ini``. The file is created with default values in case it does not exist when the pipeline is initiated. The values of the configuration file are stored in a separate group of the central database each time the pipeline is initiated.
The file contains two different sections of configuration parameters. The ``header`` section is used to link attributes in PynPoint with header values in the FITS files that will be imported into the database. For example, some of the pipeline modules require values for the dithering position. These attributes are stored as ``DITHER_X`` and ``DITHER_Y`` in the central database and are for example provided by the ``ESO SEQ CUMOFFSETX`` and ``ESO SEQ CUMOFFSETY`` values in the FITS header. Setting ``DITHER_X: ESO SEQ CUMOFFSETX`` in the ``header`` section of the configuration file makes sure that the relevant FITS header values are imported when :class:`PynPoint.IOmodules.FitsReading.FitsReadingModule` is executed. Therefore, FITS files have to be imported again if values in the ``header`` section are changed. Values can be set to ``None`` since ``header`` values are only required for some of the pipeline modules.
The second section of the configuration values contains the central settings that are used by the pipeline modules. These values are stored in the ``settings`` section of the configuration file. The pixel scale can be provided in arcsec per pixel (e.g. ``PIXSCALE: 0.027``), the number of images that will be simultaneously loaded into the memory (e.g. ``MEMORY: 1000``), and the number of cores that are used for pipeline modules that have multiprocessing capabilities (e.g. ``CPU: 8``) such as :class:`pynpoint.processing.PSFSubtractionPCA.PcaPsfSubtractionModule`, :class:`pynpoint.processing.FluxAndPosition.MCMCsamplingModule`, and :class:`pynpoint.processing.TimeDenoising.WaveletTimeDenoisingModule`.
Note that some of the pipeline modules also provide multithreading support, which by default runs on all available CPUs. The multithreading can be controlled from the command line by setting the ``OMP_NUM_THREADS`` environment variable::
$ export OMP_NUM_THREADS=8
In this case a maximum of 8 threads is used. So, if a module provides both multiprocessing and multithreading support, then the total number of cores that are used is equal to the product of the ``CPU`` value in the configuration file and ``OMP_NUM_THREADS`` from the command line.
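As a small illustration of this product rule (the values below are arbitrary examples, not recommendations):

```python
import os

# Example values: CPU would come from the configuration file, while
# OMP_NUM_THREADS is set in the shell before starting the pipeline.
os.environ["OMP_NUM_THREADS"] = "8"

cpu = 8  # "CPU: 8" in the settings section of PynPoint_config.ini
threads = int(os.environ["OMP_NUM_THREADS"])

# A module with both multiprocessing and multithreading support
# uses up to cpu * threads cores in total.
total_cores = cpu * threads
print(total_cores)  # 64
```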
A complete example of the configuration file looks like::

   [header]
   DITHER_X: ESO SEQ CUMOFFSETX
   DITHER_Y: ESO SEQ CUMOFFSETY

   [settings]
   PIXSCALE: 0.027
   MEMORY: 1000
   CPU: 8
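This key/value syntax can be read with Python's standard ``configparser`` module. The snippet below is only a sketch of how such a file could be parsed; the pipeline itself reads the configuration internally, so this is for illustration only (the section and key names follow the description above):

```python
from configparser import ConfigParser

# Example PynPoint_config.ini content with the two sections
# described above (header mappings and central settings).
config_text = """
[header]
DITHER_X: ESO SEQ CUMOFFSETX
DITHER_Y: ESO SEQ CUMOFFSETY

[settings]
PIXSCALE: 0.027
MEMORY: 1000
CPU: 8
"""

config = ConfigParser()
config.read_string(config_text)

# ConfigParser accepts ':' as a key/value delimiter by default.
pixscale = config.getfloat("settings", "PIXSCALE")
memory = config.getint("settings", "MEMORY")
cpu = config.getint("settings", "CPU")

print(pixscale, memory, cpu)  # 0.027 1000 8
```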
A pipeline module has a specific task that is appended to the internal queue of pipeline tasks. A module can read and write data tags from and to the central database through dedicated input and output connections. As an illustration, this is the input and output structure of the :class:`pynpoint.processing.PSFSubtractionPCA.PSFSubtractionModule`:
.. image:: _images/module.jpg
:width: 70%
:align: center
The module requires two input tags (blue), which means that two internal input ports are used to access data from the central database. The first port imports the science images and the second port imports the reference images that are used to calculate the PSF model using principal component analysis (PCA). In this case, both input tags can have the same name and therefore point to the same data set.
The module parameters are listed in the center of the illustration, which includes the number of principal components and the additional derotation that is applied.
The output tags (red) are required to set up the internal output ports which store the results of the PSF subtraction (e.g., mean and variance of the residuals) to the central database.
In order to create a valid pipeline, one should check that the required input tags are linked to data that was previously created by a pipeline module. In other words, there needs to be a previous module with the same tag as output.
There are three types of pipeline modules:
1. :class:`pynpoint.core.processing.ReadingModule` - A module with only output tags/ports, used to read data into the central database.
2. :class:`pynpoint.core.processing.WritingModule` - A module with only input tags/ports, used to export data from the central database.
3. :class:`pynpoint.core.processing.ProcessingModule` - A module with both input and output tags/ports, used for processing of the data.
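The distinction between the three module types can be sketched as follows. This is a simplified illustration with a dict standing in for the central database; the classes below are hypothetical stand-ins, not the actual PynPoint base classes:

```python
# Minimal stand-ins for the three module types; a dict plays the role
# of the central database and tags are its keys.

class ReadingModule:
    """Only output tags: imports external data into the database."""
    def __init__(self, output_tag):
        self.output_tag = output_tag

    def run(self, database):
        # e.g. images read from FITS files
        database[self.output_tag] = [1.0, 2.0, 3.0]

class ProcessingModule:
    """Both input and output tags: processes data already in the database."""
    def __init__(self, input_tag, output_tag):
        self.input_tag = input_tag
        self.output_tag = output_tag

    def run(self, database):
        # A placeholder operation standing in for a real processing step.
        database[self.output_tag] = [2.0 * x for x in database[self.input_tag]]

class WritingModule:
    """Only input tags: exports data from the database."""
    def __init__(self, input_tag):
        self.input_tag = input_tag

    def run(self, database):
        return database[self.input_tag]  # e.g. written to a FITS file

database = {}
ReadingModule("im_arr").run(database)
ProcessingModule("im_arr", "im_proc").run(database)
result = WritingModule("im_proc").run(database)
print(result)  # [2.0, 4.0, 6.0]
```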
.. _pipeline:
The :class:`pynpoint.core.pypeline` module is the central component which manages the order and execution of the different pipeline modules. Each ``Pypeline`` instance has a ``working_place_in`` path, which is where the central database and configuration file are stored, an ``input_place_in`` path, which is the default data location for reading modules, and an ``output_place_in`` path, which is the default output path where the data will be saved by the writing modules: ::
    pipeline = Pypeline(working_place_in="/path/to/working_place",
                        input_place_in="/path/to/input_place",
                        output_place_in="/path/to/output_place")
A pipeline module is appended to the queue of modules as: ::

    pipeline.add_module(module)
And can be removed from the queue with the following ``Pypeline`` method: ::

    pipeline.remove_module("module_name")
The names and order of the pipeline modules are listed with: ::

    pipeline.get_module_names()
Running all modules attached to the pipeline is achieved with: ::

    pipeline.run()
Or a single module is executed as: ::

    pipeline.run_module("module_name")
Both run methods will check if the pipeline has valid input and output tags.
An instance of ``Pypeline`` can be used to directly access data from the central database. See the :ref:`hdf5-files` section for more information.
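The queue management and tag validation described above can be sketched in a few lines of plain Python. This is a conceptual stand-in, not the actual ``Pypeline`` implementation, and the ``SketchPipeline`` class and its tag bookkeeping are illustrative only:

```python
# Conceptual sketch of a pipeline that queues named modules and checks,
# before running, that every input tag was produced by an earlier module.

class SketchPipeline:
    def __init__(self):
        self.modules = []  # ordered queue of (name, input_tags, output_tags)

    def add_module(self, name, input_tags=(), output_tags=()):
        self.modules.append((name, tuple(input_tags), tuple(output_tags)))

    def remove_module(self, name):
        self.modules = [m for m in self.modules if m[0] != name]

    def get_module_names(self):
        return [m[0] for m in self.modules]

    def validate(self):
        # Every input tag must match an output tag of a previous module.
        available = set()
        for name, in_tags, out_tags in self.modules:
            for tag in in_tags:
                if tag not in available:
                    raise ValueError(f"{name}: no module provides tag '{tag}'")
            available.update(out_tags)

pipeline = SketchPipeline()
pipeline.add_module("read", output_tags=["im_arr"])
pipeline.add_module("pca", input_tags=["im_arr"], output_tags=["residuals"])
pipeline.add_module("write", input_tags=["residuals"])

print(pipeline.get_module_names())  # ['read', 'pca', 'write']
pipeline.validate()  # passes: every input tag is produced earlier
```

A pipeline where a module consumes a tag that no earlier module produced would fail this check, which is the kind of validation the run methods perform before executing anything.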
# -*- coding: utf-8 -*-
# Configuration file for the Sphinx documentation builder.
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
# causes error on rtd
# import pynpoint
# -- Project information -----------------------------------------------------
project = 'PynPoint'
copyright = '2014-2019, Tomas Stolker, Markus Bonse, Sascha Quanz, and Adam Amara'
author = 'Tomas Stolker, Markus Bonse, Sascha Quanz, and Adam Amara'
# The short X.Y version
with open('../pynpoint/') as initfile:
for line in initfile:
if '__version__' in line:
version = line.split("'")[1]
# The full version, including alpha/beta/rc tags
release = '0.6.1'
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
]
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
html_show_copyright = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {'collapse_navigation': False,
'display_version': False,
'sticky_navigation': True,
'prev_next_buttons_location': 'bottom',
'navigation_depth': 5,
'logo_only': True}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_images']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
# html_sidebars = {}
html_logo = '_images/logo.png'
# html_favicon = '_images/logo.jpg'
html_search_language = 'en'
html_context = {'display_github': True,
'github_user': 'PynPoint',
'github_repo': 'PynPoint',
'github_version': 'master/docs/'}
autoclass_content = 'both'
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'PynPointdoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
# Latex figure (float) alignment
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'PynPoint.tex', 'PynPoint Documentation',
     author, 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'pynpoint', 'PynPoint Documentation',
     [author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'PynPoint', 'PynPoint Documentation',
     author, 'PynPoint', 'One line description of project.',
     'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
# epub_identifier = ''
# A unique identification for the text.
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
.. _index:
PynPoint is a Python package for processing and analysis of high-contrast imaging data of faint companions and circumstellar disks. The package has been developed at the |ipa| of ETH Zurich in a collaboration between the |spf| and the |cosmo|.
.. figure:: _images/eso.jpg
:width: 100 %
Credit: ESO/L. Calçada
.. |ipa| raw:: html
<a href="" target="_blank">Institute of Particle Physics and Astrophysics</a>
.. |spf| raw:: html
<a href="" target="_blank">Star and Planet Formation Group</a>
.. |cosmo| raw:: html
<a href="" target="_blank">Cosmology Research Group</a>
.. _contents:
User Guide
.. toctree::
:maxdepth: 2
API Documentation
.. toctree::
:maxdepth: 2
Mailing List
.. toctree::
:maxdepth: 2
.. toctree::
:maxdepth: 2
.. _mailing:
Mailing List
The PynPoint mailing list is used to announce releases, new functionalities, pipeline modules, and other updates. The mailing list can be joined by sending a blank email to
The mailing list can be consulted for suggestions and questions about PynPoint by sending an email to
Further information about the mailing list can be found on the |mailing|.
.. |mailing| raw:: html
<a href="" target="_blank">web interface</a>
.. _api:
.. toctree::
:maxdepth: 4
pynpoint.core package
pynpoint.core.attributes module
.. automodule:: pynpoint.core.attributes
pynpoint.core.dataio module
.. automodule:: pynpoint.core.dataio