Rtree.egg-info/
*.pyc
docs/build
build/
dist/
*.idx
*.dat
cache:
- pip
- apt
language: python
matrix:
include:
- python: "2.7"
- python: "3.3"
- python: "3.4"
- python: "3.5"
- python: "3.6"
- python: "3.7"
sudo: required
dist: xenial
addons:
apt:
packages:
- libspatialindex-dev
install:
- pip install -e .
script:
- python -m pytest --doctest-modules rtree tests/test_*
Metadata-Version: 2.1
Name: Rtree
Version: 0.9.1
Summary: R-Tree spatial index for Python GIS
Home-page: https://github.com/Toblerity/rtree
Author: Sean Gillies
Author-email: sean.gillies@gmail.com
Maintainer: Howard Butler
Maintainer-email: howard@hobu.co
License: MIT
Description: Rtree: Spatial indexing for Python
------------------------------------------------------------------------------
`Rtree`_ is a `ctypes`_ Python wrapper of `libspatialindex`_ that provides a
number of advanced spatial indexing features for the spatially curious Python
user. These features include:
* Nearest neighbor search
* Intersection search
* Multi-dimensional indexes
* Clustered indexes (store Python pickles directly with index entries)
* Bulk loading
* Deletion
* Disk serialization
* Custom storage implementation (to implement spatial indexing in ZODB, for example)
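To make these query types concrete, here is a tiny pure-Python sketch of what intersection and nearest-neighbor queries compute (a naive linear scan for illustration only; Rtree answers the same queries via libspatialindex without scanning every box, and its nearest query uses true box distance rather than the box-center distance used here for simplicity):

```python
# Naive reference semantics for the queries an R-tree accelerates.
# Boxes are (minx, miny, maxx, maxy); a real R-tree avoids the O(n) scan.

def boxes_intersect(a, b):
    """True if two axis-aligned boxes overlap (touching edges count)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def intersection(boxes, query):
    """IDs of all boxes overlapping the query window."""
    return [i for i, box in enumerate(boxes) if boxes_intersect(box, query)]

def nearest(boxes, point, num_results=1):
    """IDs of the num_results boxes whose centers lie closest to point."""
    def center_dist2(box):
        cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
        return (cx - point[0]) ** 2 + (cy - point[1]) ** 2
    order = sorted(range(len(boxes)), key=lambda i: center_dist2(boxes[i]))
    return order[:num_results]

boxes = [(0, 0, 1, 1), (2, 2, 3, 3), (0.5, 0.5, 1.5, 1.5)]
print(intersection(boxes, (0.9, 0.9, 2.1, 2.1)))  # all three boxes overlap the window
print(nearest(boxes, (2.5, 2.5)))                 # box 1 is centered on the point
```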
Documentation and Website
..............................................................................
https://rtree.readthedocs.io/en/latest/
Requirements
..............................................................................
* `libspatialindex`_ 1.8.5+.
Download
..............................................................................
* PyPI http://pypi.python.org/pypi/Rtree/
* Windows binaries http://www.lfd.uci.edu/~gohlke/pythonlibs/#rtree
Development
..............................................................................
* https://github.com/Toblerity/Rtree
.. _`R-trees`: http://en.wikipedia.org/wiki/R-tree
.. _`ctypes`: http://docs.python.org/library/ctypes.html
.. _`libspatialindex`: http://libspatialindex.github.com
.. _`Rtree`: http://toblerity.github.com/rtree/
Keywords: gis spatial index r-tree
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: C
Classifier: Programming Language :: C++
Classifier: Programming Language :: Python
Classifier: Topic :: Scientific/Engineering :: GIS
Classifier: Topic :: Database
Provides-Extra: test
Provides-Extra: all
pr:
branches:
include:
- master
jobs:
- template: ./ci/azp/linux.yml
- template: ./ci/azp/win.yml
- template: ./ci/azp/osx.yml
jobs:
- job:
displayName: ubuntu-16.04
pool:
vmImage: 'ubuntu-16.04'
strategy:
matrix:
Python36_185:
python.version: '3.6'
sidx.version: '1.8.5'
Python36_193:
python.version: '3.6'
sidx.version: '1.9.3'
Python37:
python.version: '3.7'
sidx.version: '1.9.3'
Python38:
python.version: '3.8'
sidx.version: '1.9.3'
steps:
- bash: echo "##vso[task.prependpath]$CONDA/bin"
displayName: Add conda to PATH
- bash: conda create --yes --quiet --name rtree
displayName: Create Anaconda environment
- bash: |
source activate rtree
conda install --yes --quiet --name rtree python=$PYTHON_VERSION libspatialindex=$SIDX_VERSION
displayName: Install Anaconda packages
- bash: |
source activate rtree
pip install pytest numpy
python -m pytest --doctest-modules rtree tests/test_*
displayName: pytest
# -*- mode: yaml -*-
jobs:
- job:
displayName: macOS-10.13
pool:
vmImage: 'macOS-10.13'
strategy:
matrix:
Python36_185:
python.version: '3.6'
sidx.version: '1.8.5'
Python36_193:
python.version: '3.6'
sidx.version: '1.9.3'
Python37:
python.version: '3.7'
sidx.version: '1.9.3'
Python38:
python.version: '3.8'
sidx.version: '1.9.3'
steps:
- script: |
echo "Removing homebrew from Azure to avoid conflicts."
curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall > ~/uninstall_homebrew
chmod +x ~/uninstall_homebrew
~/uninstall_homebrew -fq
rm ~/uninstall_homebrew
displayName: Remove homebrew
- bash: |
echo "##vso[task.prependpath]$CONDA/bin"
sudo chown -R $USER $CONDA
displayName: Add conda to PATH
- bash: conda create --yes --quiet --name rtree
displayName: Create Anaconda environment
- bash: |
source activate rtree
conda install --yes --quiet --name rtree python=$PYTHON_VERSION libspatialindex=$SIDX_VERSION
displayName: Install Anaconda packages
- bash: |
source activate rtree
pip install pytest numpy
python -m pytest --doctest-modules rtree tests/test_*
displayName: pytest
# -*- mode: yaml -*-
jobs:
- job:
displayName: vs2017-win2016
pool:
vmImage: 'vs2017-win2016'
strategy:
matrix:
Python36_185:
python.version: '3.6'
sidx.version: '1.8.5'
Python36_193:
python.version: '3.6'
sidx.version: '1.9.3'
Python37:
python.version: '3.7'
sidx.version: '1.9.3'
Python38:
python.version: '3.8'
sidx.version: '1.9.3'
steps:
- powershell: Write-Host "##vso[task.prependpath]$env:CONDA\Scripts"
displayName: Add conda to PATH
- script: conda create --yes --quiet --name rtree
displayName: Create Anaconda environment
- script: |
call activate rtree
conda install --yes --quiet --name rtree python=%PYTHON_VERSION% libspatialindex=%SIDX_VERSION%
displayName: Install Anaconda packages
- script: |
call activate rtree
pip install pytest numpy
python -m pytest --doctest-modules rtree tests
displayName: pytest
python-rtree (0.9.2-1) unstable; urgency=medium
* New upstream release.
* Update watch file to use GitHub releases.
* Drop Name field from upstream metadata.
-- Bas Couwenberg <sebastic@debian.org> Tue, 10 Dec 2019 05:50:34 +0100
python-rtree (0.9.1+ds-1) unstable; urgency=medium
@@ -3,18 +3,26 @@
Changes
..............................................................................
0.9.2: 2019-12-09
=================
- Refactored tests to be based on unittest https://github.com/Toblerity/rtree/pull/129
- Update libspatialindex library loading code to adapt previous behavior https://github.com/Toblerity/rtree/pull/128
- Empty data streams throw exceptions and do not partially construct indexes https://github.com/Toblerity/rtree/pull/127
0.9.0: 2019-11-24
=================
- Add Index.GetResultSetOffset()
- Add Index.contains() method for object and id (requires libspatialindex 1.9.3+) #116
- Add Index.Flush() #107
- Add TPRTree index support (thanks @sdhiscocks #117 )
- Return container sizes without returning objects #90
- Add set_result_limit and set_result_offset for Index paging 44ad21aecd3f7b49314b9be12f3334d8bae7e827
Bug fixes:
- Better exceptions in cases where stream functions throw #80
- Migrated CI platform to Azure Pipelines https://dev.azure.com/hobuinc/rtree/_build?definitionId=5
- Minor test enhancements and fixups. Both libspatialindex 1.8.5 and libspatialindex 1.9.3 are tested with CI
@@ -45,13 +53,13 @@ Changes
- Number of results for :py:meth:`~rtree.index.Index.nearest` defaults to 1.
- libsidx C library of 0.5.0 removed; its functionality is now included in libspatialindex
- objects="raw" in :py:meth:`~rtree.index.Index.intersection` to return the object sent in (for speed).
- :py:meth:`~rtree.index.Index.count` method to return the intersection count without the overhead
of returning a list (thanks Leonard Norrgård).
- Improved bulk loading performance
- Supposedly no memory leaks :)
- Many other performance tweaks (see docs).
- Bulk loader supports interleaved coordinates
- Leaf queries. You can return the box and ids of the leaf nodes of the index.
Useful for visualization, etc.
- Many more docstrings, sphinx docs, etc
@@ -70,9 +78,9 @@ available as a result of this refactoring.
* bulk loading of indexes at instantiation time
* ability to quickly return the bounds of the entire index
* ability to return the bounds of index entries
* much better Windows support
* libspatialindex 1.4.0 required.
0.4.3: 2009-06-05
=================
- Fix reference counting leak #181
@@ -99,7 +107,7 @@ available as a result of this refactoring.
- Reraise index query errors as Python exceptions.
- Improved persistence.
0.2:
==================
- Link spatialindex system library.
name: _rtree
channels:
- defaults
- conda-forge
dependencies:
- python>=3.5
- libspatialindex
python:
version: 3
pip_install: true
conda:
file: environment.yml
@@ -2,4 +2,4 @@ from .index import Rtree
from .core import rt
__version__ = '0.9.2'
@@ -76,35 +76,71 @@ def free_error_msg_ptr(result, func, cargs):
rt.Index_Free(p)
return retvalue
def _load_library(dllname, loadfunction, dllpaths=('', )):
"""Load a DLL via ctypes load function. Return None on failure.
Try loading the DLL from the current package directory first,
then from the Windows DLL search path.
"""
try:
dllpaths = (os.path.abspath(os.path.dirname(__file__)),
) + dllpaths
except NameError:
pass # no __file__ attribute on PyPy and some frozen distributions
for path in dllpaths:
if path:
# temporarily add the path to the PATH environment variable
# so Windows can find additional DLL dependencies.
try:
oldenv = os.environ['PATH']
os.environ['PATH'] = path + ';' + oldenv
except KeyError:
oldenv = None
try:
return loadfunction(os.path.join(path, dllname))
except (WindowsError, OSError):
pass
finally:
if path and oldenv is not None:
os.environ['PATH'] = oldenv
return None
if os.name == 'nt':
base_name = 'spatialindex_c'
if '64' in platform.architecture()[0]:
arch = '64'
else:
arch = '32'
lib_name = '%s-%s.dll' % (base_name, arch)
if 'SPATIALINDEX_C_LIBRARY' in os.environ:
lib_path, lib_name = os.path.split(os.environ['SPATIALINDEX_C_LIBRARY'])
rt = _load_library(lib_name, ctypes.cdll.LoadLibrary, (lib_path,))
elif 'conda' in sys.version:
lib_path = os.path.join(sys.prefix, "Library", "bin")
rt = _load_library(lib_name, ctypes.cdll.LoadLibrary, (lib_path,))
else:
rt = _load_library(lib_name, ctypes.cdll.LoadLibrary)
if not rt:
raise OSError("could not find or load %s" % lib_name)
elif os.name == 'posix':
if 'SPATIALINDEX_C_LIBRARY' in os.environ:
lib_name = os.environ['SPATIALINDEX_C_LIBRARY']
rt = ctypes.CDLL(lib_name)
elif 'conda' in sys.version:
lib_path = os.path.join(sys.prefix, "lib")
lib_name = find_library('spatialindex_c')
rt = _load_library(lib_name, ctypes.cdll.LoadLibrary, (lib_path,))
else:
lib_name = find_library('spatialindex_c')
rt = ctypes.CDLL(lib_name)
if not rt:
raise OSError("Could not load libspatialindex_c library")
else:
raise RTreeError('Unsupported OS "%s"' % os.name)
@@ -290,11 +290,7 @@ class Index(object):
if stream and self.properties.type == RT_RTree:
self._exception = None
self.handle = self._create_idx_from_stream(stream)
if self._exception:
raise self._exception
else:
@@ -1171,6 +1167,9 @@ class Item(object):
self.bounds = _get_bounds(
self.handle, core.rt.IndexItem_GetBounds, False)
def __gt__(self, other):
return self.id > other.id
@property
def bbox(self):
"""Returns the bounding box of the index entry"""
#!/usr/bin/env python
from rtree import index
import ogr
def quick_create_layer_def(lyr, field_list):
# Each field is a tuple of (name, type, width, precision)
# Any of type, width and precision can be skipped. Default type is string.
for field in field_list:
name = field[0]
if len(field) > 1:
type = field[1]
else:
type = ogr.OFTString
field_defn = ogr.FieldDefn(name, type)
if len(field) > 2:
field_defn.SetWidth(int(field[2]))
if len(field) > 3:
field_defn.SetPrecision(int(field[3]))
lyr.CreateField(field_defn)
field_defn.Destroy()
import sys
shape_drv = ogr.GetDriverByName('ESRI Shapefile')
shapefile_name = sys.argv[1].split('.')[0]
shape_ds = shape_drv.CreateDataSource(shapefile_name)
leaf_block_lyr = shape_ds.CreateLayer('leaf', geom_type=ogr.wkbPolygon)
point_block_lyr = shape_ds.CreateLayer('point', geom_type=ogr.wkbPolygon)
point_lyr = shape_ds.CreateLayer('points', geom_type=ogr.wkbPoint)
quick_create_layer_def(
leaf_block_lyr,
[
('BLK_ID', ogr.OFTInteger),
('COUNT', ogr.OFTInteger),
])
quick_create_layer_def(
point_block_lyr,
[
('BLK_ID', ogr.OFTInteger),
('COUNT', ogr.OFTInteger),
])
quick_create_layer_def(
point_lyr,
[
('ID', ogr.OFTInteger),
('BLK_ID', ogr.OFTInteger),
])
p = index.Property()
p.filename = sys.argv[1]
p.overwrite = False
p.storage = index.RT_Disk
idx = index.Index(sys.argv[1])
leaves = idx.leaves()
# leaves[0] == (0L, [2L, 92L, 51L, 55L, 26L], [-132.41727847799999,
# -96.717721818399994, -132.41727847799999, -96.717721818399994])
from liblas import file
f = file.File(sys.argv[1])
def area(minx, miny, maxx, maxy):
width = abs(maxx - minx)
height = abs(maxy - miny)
return width*height
def get_bounds(leaf_ids, lasfile, block_id):
# read the first point and set the bounds to that
p = lasfile.read(leaf_ids[0])
minx, maxx = p.x, p.x
miny, maxy = p.y, p.y
print(len(leaf_ids))
print(leaf_ids[0:10])
for p_id in leaf_ids:
p = lasfile.read(p_id)
minx = min(minx, p.x)
maxx = max(maxx, p.x)
miny = min(miny, p.y)
maxy = max(maxy, p.y)
feature = ogr.Feature(feature_def=point_lyr.GetLayerDefn())
g = ogr.CreateGeometryFromWkt('POINT (%.8f %.8f)' % (p.x, p.y))
feature.SetGeometry(g)
feature.SetField('ID', p_id)
feature.SetField('BLK_ID', block_id)
result = point_lyr.CreateFeature(feature)
del result
return (minx, miny, maxx, maxy)
def make_poly(minx, miny, maxx, maxy):
wkt = 'POLYGON ((%.8f %.8f, %.8f %.8f, %.8f %.8f, %.8f %.8f, %.8f %.8f))'\
% (minx, miny, maxx, miny, maxx, maxy, minx, maxy, minx, miny)
shp = ogr.CreateGeometryFromWkt(wkt)
return shp
def make_feature(lyr, geom, id, count):
feature = ogr.Feature(feature_def=lyr.GetLayerDefn())
feature.SetGeometry(geom)
feature.SetField('BLK_ID', id)
feature.SetField('COUNT', count)
result = lyr.CreateFeature(feature)
del result
t = 0
for leaf in leaves:
id = leaf[0]
ids = leaf[1]
count = len(ids)
# import pdb;pdb.set_trace()
if len(leaf[2]) == 4:
minx, miny, maxx, maxy = leaf[2]
else:
minx, miny, maxx, maxy, minz, maxz = leaf[2]
if id == 186:
print(leaf[2])
print(leaf[2])
leaf = make_poly(minx, miny, maxx, maxy)
print('leaf: ' + str([minx, miny, maxx, maxy]))
pminx, pminy, pmaxx, pmaxy = get_bounds(ids, f, id)
point = make_poly(pminx, pminy, pmaxx, pmaxy)
print('point: ' + str([pminx, pminy, pmaxx, pmaxy]))
print('point bounds: ' +
str([point.GetArea(), area(pminx, pminy, pmaxx, pmaxy)]))
print('leaf bounds: ' +
str([leaf.GetArea(), area(minx, miny, maxx, maxy)]))
print('leaf - point: ' + str([abs(point.GetArea() - leaf.GetArea())]))
print([minx, miny, maxx, maxy])
# if shp2.GetArea() != shp.GetArea():
# import pdb;pdb.set_trace()
# sys.exit(1)
make_feature(leaf_block_lyr, leaf, id, count)
make_feature(point_block_lyr, point, id, count)
t += 1
# if t ==2:
# break
leaf_block_lyr.SyncToDisk()
point_lyr.SyncToDisk()
shape_ds.Destroy()
[egg_info]
tag_build =
tag_date = 0
import numpy as np
import rtree
import time
def random_tree_stream(points_count, include_object):
properties = rtree.index.Property()
properties.dimension = 3
points_random = np.random.random((points_count,3,3))
points_bounds = np.column_stack((points_random.min(axis=1),
points_random.max(axis=1)))
    # Respect include_object: rtree's stream loader expects
    # (id, bounds, obj) triples, so pass None when no object is wanted
    if include_object:
        stacked = zip(np.arange(points_count),
                      points_bounds,
                      np.arange(points_count))
    else:
        stacked = ((i, bounds, None) for i, bounds in
                   zip(np.arange(points_count), points_bounds))
tic = time.time()
tree = rtree.index.Index(stacked,
properties = properties)
toc = time.time()
print('creation, objects:', include_object, '\tstream method: ', toc-tic)
return tree
def random_tree_insert(points_count, include_object):
properties = rtree.index.Property()
properties.dimension = 3
points_random = np.random.random((points_count,3,3))
points_bounds = np.column_stack((points_random.min(axis=1),
points_random.max(axis=1)))
tree = rtree.index.Index(properties = properties)
if include_object:
stacked = zip(np.arange(points_count),
points_bounds,
np.arange(points_count))
else:
stacked = zip(np.arange(points_count),
points_bounds)
tic = time.time()
for arg in stacked:
tree.insert(*arg)
toc = time.time()
print('creation, objects:', include_object, '\tinsert method: ', toc - tic)
return tree
def check_tree(tree, count):
# tid should intersect every box,
# as our random boxes are all inside [0,0,0,1,1,1]
tic = time.time()
tid = list(tree.intersection([-1,-1,-1,2,2,2]))
toc = time.time()
ok = (np.unique(tid) - np.arange(count) == 0).all()
print('intersection, id method: ', toc - tic, '\t query ok:', ok)
tic = time.time()
tid = [i.object for i in tree.intersection([-1,-1,-1,2,2,2], objects=True)]
toc = time.time()
ok = (np.unique(tid) - np.arange(count) == 0).all()
print('intersection, object method:', toc - tic, '\t query ok:', ok)
if __name__ == '__main__':
count = 10000
print('\nChecking stream loading\n---------------')
tree = random_tree_stream(count, False)
tree = random_tree_stream(count, True)
check_tree(tree, count)
print('\nChecking insert loading\n---------------')
tree = random_tree_insert(count, False)
tree = random_tree_insert(count, True)
check_tree(tree, count)
Bounding Box Checking
=====================
See http://trac.gispython.org/projects/PCL/ticket/127.
Adding with bogus bounds
------------------------
>>> import rtree
>>> index = rtree.Rtree()
>>> index.add(1, (0.0, 0.0, -1.0, 1.0)) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
RTreeError: Coordinates must not have minimums more than maximums
>>> index.intersection((0.0, 0.0, -1.0, 1.0)) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
RTreeError: Coordinates must not have minimums more than maximums
Adding with invalid bounds argument should raise an exception
>>> index.add(1, 1) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: Bounds must be a sequence
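The validation rule these doctests exercise can be sketched in pure Python (a hypothetical helper for illustration only; Rtree performs the actual check inside its ctypes core, not with this function):

```python
def check_bounds(bounds):
    """Illustrative bounds check: an interleaved (mins..., maxs...)
    sequence must have an even length, and no minimum may exceed the
    corresponding maximum. Hypothetical helper, not Rtree's own code."""
    try:
        coords = [float(c) for c in bounds]
    except TypeError:
        raise TypeError("Bounds must be a sequence")
    dims, rem = divmod(len(coords), 2)
    if rem:
        raise ValueError("Bounds must have an even number of coordinates")
    for d in range(dims):
        if coords[d] > coords[d + dims]:
            raise ValueError(
                "Coordinates must not have minimums more than maximums")
    return coords

print(check_bounds((0.0, 0.0, 1.0, 1.0)))  # a valid box passes through unchanged
```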
import numpy as np
import pytest
import rtree.index
def test_container():
container = rtree.index.RtreeContainer()
objects = list()
# Insert
boxes15 = np.genfromtxt('boxes_15x15.data')
for coordinates in boxes15:
objects.append(object())
container.insert(objects[-1], coordinates)
# Contains and length
assert all(obj in container for obj in objects)
assert len(container) == len(boxes15)
# Delete
for obj, coordinates in zip(objects, boxes15[:5]):
container.delete(obj, coordinates)
assert all(obj in container for obj in objects[5:])
assert all(obj not in container for obj in objects[:5])
assert len(container) == len(boxes15) - 5
# Delete already deleted object
with pytest.raises(IndexError):
container.delete(objects[0], boxes15[0])
# Insert duplicate object, at different location
container.insert(objects[5], boxes15[0])
assert objects[5] in container
# And then delete it, but check object still present
container.delete(objects[5], boxes15[0])
assert objects[5] in container
# Intersection
obj = objects[10]
results = container.intersection(boxes15[10])
assert obj in results
# Intersection with bbox
obj = objects[10]
results = container.intersection(boxes15[10], bbox=True)
result = [result for result in results if result.object is obj][0]
assert np.array_equal(result.bbox, boxes15[10])
# Nearest
obj = objects[8]
results = container.nearest(boxes15[8])
assert obj in results
# Nearest with bbox
obj = objects[8]
results = container.nearest(boxes15[8], bbox=True)
result = [result for result in results if result.object is obj][0]
assert np.array_equal(result.bbox, boxes15[8])
# Test iter method
assert objects[12] in set(container)
Shows how to create a custom storage backend.
Derive your custom storage from rtree.index.CustomStorage and override the methods
shown in this example.
You can also derive from rtree.index.CustomStorageBase to get at the raw C buffers
if you need the extra speed and want to avoid translating from/to Python strings.
The essential methods are loadByteArray, storeByteArray and deleteByteArray. The
rtree library calls them whenever it needs to access the data in any way.
Example storage which maps the page (ids) to the page data.
>>> from rtree.index import Rtree, CustomStorage, Property
>>> class DictStorage(CustomStorage):
... """ A simple storage which saves the pages in a python dictionary """
... def __init__(self):
... CustomStorage.__init__( self )
... self.clear()
...
... def create(self, returnError):
... """ Called when the storage is created on the C side """
...
... def destroy(self, returnError):
... """ Called when the storage is destroyed on the C side """
...
... def clear(self):
... """ Clear all our data """
... self.dict = {}
...
... def loadByteArray(self, page, returnError):
... """ Returns the data for page or returns an error """
... try:
... return self.dict[page]
... except KeyError:
... returnError.contents.value = self.InvalidPageError
...
... def storeByteArray(self, page, data, returnError):
... """ Stores the data for page """
... if page == self.NewPage:
... newPageId = len(self.dict)
... self.dict[newPageId] = data
... return newPageId
... else:
... if page not in self.dict:
... returnError.value = self.InvalidPageError
... return 0
... self.dict[page] = data
... return page
...
... def deleteByteArray(self, page, returnError):
... """ Deletes a page """
... try:
... del self.dict[page]
... except KeyError:
... returnError.contents.value = self.InvalidPageError
...
... hasData = property( lambda self: bool(self.dict) )
... """ Returns true if we contains some data """
Now let's test drive our custom storage.
First let's define the basic properties we will use for all rtrees:
>>> settings = Property()
>>> settings.writethrough = True
>>> settings.buffering_capacity = 1
Notice that there is a small in-memory buffer by default. We effectively disable
it here so our storage directly receives any load/store/delete calls.
This is not necessary in general and can hamper performance; we just use it here
for illustrative and testing purposes.
Let's start with a basic test:
Create the storage and hook it up with a new rtree:
>>> storage = DictStorage()
>>> r = Rtree( storage, properties = settings )
Interestingly enough, if we take a look at the contents of our storage now, we
can see the Rtree has already written two pages to it: one for the header and
one for the index.
>>> state1 = storage.dict.copy()
>>> list(state1.keys())
[0, 1]
Let's add an item:
>>> r.add(123, (0, 0, 1, 1))
Make sure the data in the storage before and after the addition of the new item
is different:
>>> state2 = storage.dict.copy()
>>> state1 != state2
True
Now perform a few queries and assure the tree is still valid:
>>> item = list(r.nearest((0, 0), 1, objects=True))[0]
>>> int(item.id)
123
>>> r.valid()
True
Check if the stored data is a byte string
>>> isinstance(list(storage.dict.values())[0], bytes)
True
Delete an item
>>> r.delete(123, (0, 0, 1, 1))
>>> r.valid()
True
Just for reference show how to flush the internal buffers (e.g. when
properties.buffering_capacity is > 1)
>>> r.clearBuffer()
>>> r.valid()
True
Let's get rid of the tree, we're done with it
>>> del r
Show how to empty the storage
>>> storage.clear()
>>> storage.hasData
False
>>> del storage
Ok, let's create another small test. This time we'll test reopening our custom
storage. This is useful for persistent storages.
First create a storage and put some data into it:
>>> storage = DictStorage()
>>> r1 = Rtree( storage, properties = settings, overwrite = True )
>>> r1.add(555, (2, 2))
>>> del r1
>>> storage.hasData
True
Then reopen the storage with a new tree and see if the data is still there
>>> r2 = Rtree( storage, properties = settings, overwrite = False )
>>> r2.count( (0,0,10,10) ) == 1
True
>>> del r2