This release will be the last to support Python 3.5; the next version of Zarr will require Python 3.6+.
- `DirectoryStore` now uses `os.scandir`, which should make listing large stores faster, #563
- Remove a few remaining Python 2-isms. By Poruri Sai Rahul; #393.
- Fix minor bug in N5Store. By @gsakkis, #550.
- Improve error message in Jupyter when trying to use `.tree()` without `ipytree` installed. By Zain Patel; #537
- Add typing information to many of the core functions. #589
- Explicitly close stores during testing. By Elliott Sales de Andrade; #442
- Many of the convenience functions to emit errors (e.g. `err_*` from `zarr.errors`) have been replaced by `ValueError` subclasses. The corresponding `err_*` functions have been removed. #590, #614
- Improve consistency of terminology regarding arrays and datasets in the documentation. By Josh Moore; #571.
- Added support for generic URL opening via `fsspec`, where the URLs have the form “protocol://[server]/path” or can be chained URLs with “::” separators. The additional argument `storage_options` is passed to the backend; see the `fsspec` docs. By Martin Durant; #546
- Added support for fetching multiple items via the `getitems` method of a store, if it exists. This allows concurrent fetching of data blocks from stores that implement it; presently HTTP, S3 and GCS. Currently this only applies to reading. By Martin Durant; #606
- Add key normalization option for `N5Store`. By James Bourbeau; #459.
- Add `recurse` keyword argument to `Group.array_keys` and `Group.arrays` methods. By James Bourbeau; #458.
- Use uniform chunking for all dimensions when specifying `chunks` as an integer. Also adds support for specifying `-1` to chunk across an entire dimension. By James Bourbeau; #456.
- Rename `DictStore` to `MemoryStore`. By James Bourbeau; #455.
- Rewrite `.tree()` pretty representation to use `ipytree`. Allows it to work in both the Jupyter Notebook and JupyterLab. By John Kirkham; #450.
- Do not rename Blosc parameters in n5 backend and add blocksize parameter, compatible with n5-blosc. By @axtimwalde, #485.
- Fix `DirectoryStore` to create files with more permissive permissions. By Eduardo Gonzalez and James Bourbeau; #493
- Use `math.ceil` for scalars. By John Kirkham; #500.
- Ensure contiguous data using `astype`. By John Kirkham; #513.
- Refactor out `_tofile`/`_fromfile` from `DirectoryStore`. By John Kirkham; #503.
- Add `__enter__`/`__exit__` methods to `ZipStore` for `h5py.File` compatibility. By Chris Barnes; #509.
- Fix hyperlink in `README.md`. By Anderson Banihirwe; #531.
- Replace “nuimber” with “number”. By John Kirkham; #512.
- Fix azure link rendering in tutorial. By James Bourbeau; #507.
- Updated `README` file to be more detailed. By Zain Patel; #495.
- Import blosc from numcodecs in tutorial. By James Bourbeau; #491.
- Adds logo to docs. By James Bourbeau; #462.
- Fix N5 link in tutorial. By James Bourbeau; #480.
- Fix typo in code snippet. By Joe Jevnik; #461.
- Fix URLs to point to zarr-python. By John Kirkham; #453.
- Add documentation build to CI. By James Bourbeau; #516.
- Use `ensure_ndarray` in a few more places. By John Kirkham; #506.
- Support Python 3.8. By John Kirkham; #499.
- Require Numcodecs 0.6.4+ to use text handling functionality from it. By John Kirkham; #497.
- Updates tests to use `pytest.importorskip`. By James Bourbeau; #492
- Removed support for Python 2. By @jhamman; #393, #470.
- Upgrade dependencies in the test matrices and resolve a compatibility issue with testing against the Azure Storage Emulator. By @alimanfoo; #468, #467.
- Use `unittest.mock` on Python 3. By Elliott Sales de Andrade; #426.
- Drop `decode` from `ConsolidatedMetadataStore`. By John Kirkham; #452.
- New storage backend, backed by Azure Blob Storage: class `zarr.storage.ABSStore`. All data is stored as block blobs. By Shikhar Goenka, Tim Crone and Zain Patel; #345.
- Add “consolidated” metadata as an experimental feature: use `zarr.convenience.consolidate_metadata()` to copy all metadata from the various metadata keys within a dataset hierarchy under a single key, and `zarr.convenience.open_consolidated()` to use this single key. This can greatly cut down the number of calls to the storage backend, and so remove a lot of overhead for reading remote data. By Martin Durant, Alistair Miles, Ryan Abernathey; #268, #332, #338.
- Support has been added for structured arrays with sub-array shape and/or nested fields. By Tarik Onalan, #111, #296.
- Adds the SQLite-backed `zarr.storage.SQLiteStore` class, enabling an SQLite database to be used as the backing store for an array or group. By John Kirkham, #368, #365.
- Efficient iteration over arrays by decompressing chunkwise. By Jerome Kelleher, #398, #399.
- Adds the Redis-backed `zarr.storage.RedisStore` class, enabling a Redis database to be used as the backing store for an array or group. By Joe Hamman, #299, #372.
- Adds the MongoDB-backed `zarr.storage.MongoDBStore` class, enabling a MongoDB database to be used as the backing store for an array or group. By Noah D Brenowitz, Joe Hamman, #299, #372, #401.
- New storage class for N5 containers. The `zarr.n5.N5Store` has been added, which uses `zarr.storage.NestedDirectoryStore` to support reading and writing from and to N5 containers. By Jan Funke and John Kirkham.
- The implementation of the `zarr.storage.DirectoryStore` class has been modified to ensure that writes are atomic and there are no race conditions where a chunk might appear transiently missing during a write operation. By sbalmer, #327, #263.
- Avoid raising in `__setitem__` when file already exists. By Justin Swaney, #272, #318.
- The required version of the Numcodecs package has been upgraded to 0.6.2, which has enabled some code simplification and fixes a failing test involving msgpack encoding. By John Kirkham, #361, #360, #352, #355, #324.
- Failing tests related to pickling/unpickling have been fixed. By Ryan Williams, #273, #308.
- Corrects handling of `timedelta64` in various compressors (by John Kirkham; #344).
- Ensure `DictStore` contains only `bytes` to facilitate comparisons and protect against writes. By John Kirkham, #350.
- Test and fix an issue (w.r.t. fill values) when storing complex data to `Array`. By John Kirkham, #363.
- Always use a `tuple` when indexing a NumPy `ndarray`. By John Kirkham, #376.
- Ensure when `Array` uses a `dict`-based chunk store that it only contains `bytes` to facilitate comparisons and protect against writes. Drop the copy for the no filter/compressor case as this handles that case. By John Kirkham, #359.
- Simplify directory creation and removal in `DirectoryStore.rename`. By John Kirkham, #249.
- CI and test environments have been upgraded to include Python 3.7, drop Python 3.4, and upgrade all pinned package requirements. Alistair Miles, #308.
- Start using pyup.io to maintain dependencies. Alistair Miles, #326.
- Configure flake8 line limit generally. John Kirkham, #335.
- Add missing coverage pragmas. John Kirkham, #343, #355.
- Fix missing backslash in docs. John Kirkham, #254, #353.
- Include tests for stores’ `popitem` and `pop` methods. By John Kirkham, #378, #380.
- Include tests for different compressors, endianness, and attributes. By John Kirkham, #378, #380.
- Test validity of stores’ contents. By John Kirkham, #359, #408.
- Advanced indexing. The `Array` class has several new methods and properties that enable a selection of items in an array to be retrieved or updated. See the Advanced indexing tutorial section for more information. There is also a notebook with extended examples and performance benchmarks. #78, #89, #112, #172.
- New package for compressor and filter codecs. The classes previously defined in the `zarr.codecs` module have been factored out into a separate package called Numcodecs. The Numcodecs package also includes several new codec classes not previously available in Zarr, including compressor codecs for Zstd and LZ4. This change is backwards-compatible with existing code, as all codec classes defined by Numcodecs are imported into the `zarr.codecs` namespace. However, it is recommended to import codecs from the new package; see the tutorial sections on Compressors and Filters for examples. With contributions by John Kirkham; #74, #102, #120, #123, #139.
- New storage class for DBM-style databases. The `zarr.storage.DBMStore` class enables any DBM-style database, such as gdbm, ndbm or Berkeley DB, to be used as the backing store for an array or group. See the tutorial section on Storage alternatives for some examples. #133, #186.
- New storage class for LMDB databases. The `zarr.storage.LMDBStore` class enables an LMDB “Lightning” database to be used as the backing store for an array or group. #192.
- New storage class using a nested directory structure for chunk files. The `zarr.storage.NestedDirectoryStore` has been added, which is similar to the existing `zarr.storage.DirectoryStore` class but nests chunk files for multidimensional arrays into sub-directories. #155, #177.
- New tree() method for printing hierarchies. The `Group` class has a new `zarr.hierarchy.Group.tree()` method which enables a tree representation of a group hierarchy to be printed. Also provides an interactive tree representation when used within a Jupyter notebook. See the Array and group diagnostics tutorial section for examples. By John Kirkham; #82, #140, #184.
- Visitor API. The `Group` class now implements the h5py visitor API; see docs for the `zarr.hierarchy.Group.visit()`, `zarr.hierarchy.Group.visititems()` and `zarr.hierarchy.Group.visitvalues()` methods. By John Kirkham, #92, #122.
- Viewing an array as a different dtype. The `Array` class has a new `zarr.core.Array.astype()` method, which is a convenience that enables an array to be viewed as a different dtype. By John Kirkham, #94, #96.
- New open(), save(), load() convenience functions. The function `zarr.convenience.open()` provides a convenient way to open a persistent array or group, using either a `DirectoryStore` or a `ZipStore` as the backing store. The functions `zarr.convenience.save()` and `zarr.convenience.load()` are also available and provide a convenient way to save an entire NumPy array to disk and load back into memory later. See the tutorial section Persistent arrays for examples. #104, #105, #141, #181.
- IPython completions. The `Group` class now implements `_ipython_key_completions_()`, which enables tab-completion for group members to be used in any IPython interactive environment. #170.
- New info property; changes to __repr__. The `Group` and `Array` classes have a new `info` property which can be used to print diagnostic information, including compression ratio where available. See the tutorial section on Array and group diagnostics for examples. The string representation (`__repr__`) of these classes has been simplified to ensure it is cheap and quick to compute in all circumstances. #83, #115, #132, #148.
- Chunk options. When creating an array, `chunks=False` can be specified, which will result in an array with a single chunk only. Alternatively, `chunks=True` will trigger an automatic chunk shape guess. See Chunk optimizations for more on the `chunks` parameter. #106, #107, #183.
- Zero-dimensional arrays are now supported; by Prakhar Goel, #154, #161.
- Arrays with one or more zero-length dimensions are now fully supported; by Prakhar Goel, #150, #154, #160.
- The .zattrs key is now optional and will only be created when the first custom attribute is set; #121, #200.
- New Group.move() method supports moving a sub-group or array to a different location within the same hierarchy. By John Kirkham, #191, #193, #196.
- ZipStore is now thread-safe; #194, #192.
- New Array.hexdigest() method computes an `Array`’s hash with `hashlib`. By John Kirkham, #98, #203.
- Improved support for object arrays. In previous versions of Zarr, creating an array with `dtype=object` was possible but could under certain circumstances lead to unexpected errors and/or segmentation faults. To make it easier to properly configure an object array, a new `object_codec` parameter has been added to array creation functions. See the tutorial section on Object arrays for more information and examples. Also, runtime checks have been added in both Zarr and Numcodecs so that segmentation faults are no longer possible, even with a badly configured array. This API change is backwards compatible, and previous code that created an object array and provided an object codec via the `filters` parameter will continue to work; however, a warning will be raised to encourage use of the `object_codec` parameter. #208, #212.
- Added support for datetime64 and timedelta64 data types; #85, #215.
- Array and group attributes are now cached by default to improve performance with slow stores, e.g., stores accessing data via the network; #220, #218, #204.
- New LRUStoreCache class. The class `zarr.storage.LRUStoreCache` has been added and provides a means to locally cache data in memory from a store that may be slow, e.g., a store that retrieves data from a remote server via the network; #223.
- New copy functions. The new functions `zarr.convenience.copy()` and `zarr.convenience.copy_all()` provide a way to copy groups and/or arrays between HDF5 and Zarr, or between two Zarr groups. The `zarr.convenience.copy_store()` function provides a more efficient way to copy data directly between two Zarr stores. #87, #113, #137, #217.
- Fixed bug where the `read_only` keyword argument was ignored when creating an array; #151, #179.
- Fixed bugs when using a `ZipStore` opened in ‘w’ mode; #158, #182.
- Fill values can now be provided for fixed-length string arrays; #165, #176.
- Fixed a bug where the number of chunks initialized could be counted incorrectly; #97, #174.
- Fixed a bug related to the use of an ellipsis (…) in indexing statements; #93, #168, #172.
- Fixed a bug preventing use of other integer types for indexing; #143, #147.
- Some changes have been made to the Zarr storage specification version 2 document to clarify ambiguities and add some missing information. These changes do not break compatibility with any of the material as previously implemented, and so the changes have been made in-place in the document without incrementing the document version number. See the section on Changes in the specification document for more information.
- A new Advanced indexing section has been added to the tutorial.
- A new String arrays section has been added to the tutorial (#135, #175).
- The Chunk optimizations tutorial section has been reorganised and updated.
- The Persistent arrays and Storage alternatives tutorial sections have been updated with new examples (#100, #101, #103).
- A new tutorial section on Pickle support has been added (#91).
- A new tutorial section on Datetimes and timedeltas has been added.
- A new tutorial section on Array and group diagnostics has been added.
- The tutorial sections on Parallel computing and synchronization and Configuring Blosc have been updated to provide information about how to avoid program hangs when using the Blosc compressor with multiple processes (#199, #201).
- A data fixture has been included in the test suite to ensure data format compatibility is maintained; #83, #146.
- The test suite has been migrated from nosetests to pytest; #189, #225.
- Various continuous integration updates and improvements; #118, #124, #125, #126, #109, #114, #171.
- Bump numcodecs dependency to 0.5.3, completely remove nose dependency, #237.
- Fix compatibility issues with NumPy 1.14 regarding fill values for structured arrays, #222, #238, #239.
- Various minor improvements, including: `Group` objects support member access via dot notation (`__getattr__`); fixed metadata caching for `Array.shape` property and derivatives; added `Array.ndim` property; fixed `Array.__array__` method arguments; fixed bug in pickling `Array` state; fixed bug in pickling `ThreadSynchronizer`.
- Group objects now support member deletion via the `del` statement (#65).
- Added `zarr.storage.TempStore` class for convenience to provide storage via a temporary directory (#59).
- Fixed performance issues with `ZipStore` class (#66).
- The Blosc extension has been modified to return bytes instead of array objects from compress and decompress function calls. This should improve compatibility and also provides a small performance increase for compressing high compression ratio data (#55).
- Added `overwrite` keyword argument to array and group creation methods on the `zarr.hierarchy.Group` class (#71).
- Added `cache_metadata` keyword argument to array creation methods.
- The functions `zarr.creation.open_array()` and `zarr.hierarchy.open_group()` now accept any store as first argument (#56).
The bundled Blosc library has been upgraded to version 1.11.1.
To accommodate support for hierarchies and filters, the Zarr metadata format has been modified. See the Zarr storage specification version 2 for more information. To migrate an array stored using Zarr version 1.x, use the `zarr.storage.migrate_1to2()` function.
The bundled Blosc library has been upgraded to version 1.11.0.
- The bundled Blosc library has been upgraded to version 1.10.0. The ‘zstd’ internal compression library is now available within Blosc. See the tutorial section on Compressors for an example.
- When using the Blosc compressor, the default internal compression library is now ‘lz4’.
- The default number of internal threads for the Blosc compressor has been increased to a maximum of 8 (previously 4).
- Added convenience functions
This release includes a complete re-organization of the code base. The major version number has been bumped to indicate that there have been backwards-incompatible changes to the API and the on-disk storage format. However, Zarr is still in an early stage of development, so please do not take the version number as an indicator of maturity.
The main motivation for re-organizing the code was to create an
abstraction layer between the core array logic and data storage (#21).
In this release, any
object that implements the
MutableMapping interface can be used as
an array store. See the tutorial sections on Persistent arrays
and Storage alternatives, the Zarr storage specification version 1, and the
zarr.storage module documentation for more information.
Please note also that the file organization and file name conventions
used when storing a Zarr array in a directory on the file system have
changed. Persistent Zarr arrays created using previous versions of the
software will not be compatible with this version. See the
zarr.storage API docs and the Zarr storage specification version 1 for more information.
An abstraction layer has also been created between the core array
logic and the code for compressing and decompressing array
chunks. This release still bundles the c-blosc library and uses Blosc
as the default compressor, however other compressors including zlib,
BZ2 and LZMA are also now supported via the Python standard
library. New compressors can also be dynamically registered for use
with Zarr. See the tutorial sections on Compressors and
Configuring Blosc, the Zarr storage specification version 1, and the
zarr.compressors module documentation for more information.
The synchronization code has also been refactored to create a layer of
abstraction, enabling Zarr arrays to be used in parallel computations
with a number of alternative synchronization methods. For more
information see the tutorial section on Parallel computing and synchronization and the
zarr.sync module documentation.
Changes to the Blosc extension
NumPy is no longer a build dependency for the `zarr.blosc` extension, so setup.py will run even if NumPy is not already
installed, and should automatically install NumPy as a runtime
dependency. Manual installation of NumPy prior to installing Zarr is
still recommended, however, as the automatic installation of NumPy may
fail or be sub-optimal on some platforms.
Some optimizations have been made within the `zarr.blosc` extension to avoid unnecessary memory copies, giving a ~10-20%
performance improvement for multi-threaded compression operations.
The `zarr.blosc` extension now automatically detects whether it
is running within a single-threaded or multi-threaded program and
adapts its internal behaviour accordingly (#27). There is no need for
the user to make any API calls to switch Blosc between contextual and
non-contextual (global lock) mode. See also the tutorial section on Configuring Blosc.
The internal code for managing chunks has been rewritten to be more efficient. Now no state is maintained for chunks outside of the array store, meaning that chunks do not carry any extra memory overhead not accounted for by the store. This negates the need for the “lazy” option present in the previous release, and this has been removed.
The memory layout within chunks can now be set as either “C” (row-major) or “F” (column-major), which can help to provide better compression for some data (#7). See the tutorial section on Chunk memory layout for more information.
A bug has been fixed within the
machinery for slicing arrays, to properly handle getting and setting