Group Objects

Creating and using groups

Groups are the container mechanism by which HDF5 files are organized. From a Python perspective, they operate somewhat like dictionaries. In this case the “keys” are the names of group members, and the “values” are the members themselves (Group and Dataset objects).

Group objects also contain most of the machinery which makes HDF5 useful. The File object does double duty as the HDF5 root group, and serves as your entry point into the file:

>>> f = h5py.File('foo.hdf5','w')
>>> f.name
'/'
>>> f.keys()
[]

New groups are easy to create:

>>> grp = f.create_group("bar")
>>> grp.name
'/bar'
>>> subgrp = grp.create_group("baz")
>>> subgrp.name
'/bar/baz'

Datasets are also created by a Group method:

>>> dset = subgrp.create_dataset("MyDS", (100,100), dtype='i')
>>> dset.name
'/bar/baz/MyDS'

Accessing objects

Groups implement a subset of the Python dictionary convention. They have methods like keys() and values(), and support iteration. Most importantly, they support the indexing syntax and raise standard exceptions:

>>> myds = subgrp["MyDS"]
>>> missing = subgrp["missing"]
KeyError: "Name doesn't exist (Symbol table: Object not found)"

Objects can be deleted from the file using the standard syntax:

>>> del subgrp["MyDS"]

Group objects implement the following subset of the Python “mapping” interface (a short usage sketch follows the list):

  • Container syntax: if name in group
  • Iteration; yields member names: for name in group
  • Length: len(group)
  • keys()
  • values()
  • items()
  • iterkeys()
  • itervalues()
  • iteritems()
  • __setitem__()
  • __getitem__()
  • __delitem__()
  • get()
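
For example, continuing from the file built in the section above, a minimal sketch of these operations (the output shown is what you would expect for that layout):

>>> 'bar' in f                    # container syntax
True
>>> len(f)                        # number of members in the root group
1
>>> for name in f:                # iteration yields member names
...     print(name)
bar
>>> f['bar'].keys()
['baz']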

Reference

class h5py.Group(parent_object, name, create=False, _rawid=None)

Represents an HDF5 group.

It’s recommended to use the Group/File method create_group to create these objects, rather than trying to create them yourself.

Groups implement a basic dictionary-style interface, supporting __getitem__, __setitem__, __len__, __contains__, keys(), values() and others.

They also contain the necessary methods for creating new groups and datasets. Group attributes can be accessed via <group>.attrs.

Group methods

__setitem__(name, obj)

Add an object to the group. The name must not already be in use.

The action taken depends on the type of object assigned (a short sketch follows this list):

Named HDF5 object (Dataset, Group, Datatype)
A hard link is created at “name” which points to the given object.
SoftLink or ExternalLink
Create the corresponding link.
Numpy ndarray
The array is converted to a dataset object, with default settings (contiguous storage, etc.).
Numpy dtype
Commit a copy of the datatype as a named datatype in the file.
Anything else
Attempt to convert it to an ndarray and store it. Scalar values are stored as scalar datasets. Raise ValueError if we can’t understand the resulting array dtype.
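
A hedged sketch of each assignment form, continuing with the grp and dset objects from the earlier examples (the member names on the left are illustrative only):

>>> import numpy as np
>>> grp['hard'] = dset                        # named object: hard link to MyDS
>>> grp['soft'] = h5py.SoftLink('/bar/baz')   # SoftLink: soft link to that path
>>> grp['array'] = np.arange(10)              # ndarray: stored as a new dataset
>>> grp['dtype'] = np.dtype('f8')             # dtype: committed as a named datatype
>>> grp['scalar'] = 42.0                      # anything else: scalar dataset
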
__getitem__(name)
Open an object attached to this group.
create_group(name)
Create and return a subgroup. Fails if the group already exists.
create_dataset(name, *args, **kwds)

Create and return a new dataset. Fails if “name” already exists.

create_dataset(name, shape, [dtype=<Numpy dtype>], **kwds)
create_dataset(name, data=<Numpy array>, **kwds)

The default dtype is ‘=f4’ (single-precision float).

Additional keywords (“*” is default; a combined sketch follows this list):

chunks
Tuple of chunk dimensions or None*
maxshape
None* or a tuple giving the maximum dataset size. An element of None indicates an unlimited dimension. The dataset can be expanded by calling resize().
compression
Compression strategy; None*, ‘gzip’, ‘szip’ or ‘lzf’. An integer is interpreted as a gzip level.
compression_opts
Optional compression settings; for gzip, this may be an int. For szip, it should be a 2-tuple (‘ec’|’nn’, int(0-32)).
shuffle
Use the shuffle filter (improves the compression ratio for gzip and LZF). True/False*.
fletcher32
Enable error-detection. True/False*.
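
A sketch combining several of these keywords (the shapes, chunk sizes and compression settings are illustrative, not recommendations):

>>> import numpy as np
>>> comp = grp.create_dataset('compressed', shape=(1000, 1000), dtype='f8',
...                           chunks=(100, 100),       # explicit chunk shape
...                           maxshape=(None, 1000),   # first axis is resizable
...                           compression='gzip',
...                           compression_opts=4,      # gzip level
...                           shuffle=True,
...                           fletcher32=True)
>>> raw = comp.parent.create_dataset('from_data', data=np.arange(100))  # shape/dtype taken from data
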
require_group(name)
Check whether a group exists, and create it if not. Raises TypeError if an incompatible object already exists.
require_dataset(name, shape, dtype, exact=False, **kwds)

Open a dataset, or create it if it doesn’t exist.

Checks if a dataset with compatible shape and dtype exists, and creates one if it doesn’t. Raises TypeError if an incompatible dataset (or group) already exists.

By default, datatypes are compared for loss-of-precision only. To require an exact match, set keyword “exact” to True. Shapes are always compared exactly.

Keyword arguments are only used when creating a new dataset; they are ignored if a dataset with matching shape and dtype already exists. See create_dataset for a list of legal keywords.
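
A short sketch of the require_* pattern (the names are illustrative); repeated calls are safe as long as the existing object is compatible:

>>> g = f.require_group('results')            # created on the first call
>>> g = f.require_group('results')            # simply returned on later calls
>>> d = f.require_dataset('results/scores', shape=(100,), dtype='f4')
>>> d = f.require_dataset('results/scores', shape=(100,), dtype='f4')
>>> # f.require_group('results/scores') would raise TypeError: a dataset is already there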

copy(source, dest, name=None)

Copy an object or group (Requires HDF5 1.8).

The source can be a path, Group, Dataset, or Datatype object. The destination can be either a path or a Group object. The source and destination need not be in the same file.

If the source is a Group object, all objects contained in that group will be copied recursively.

When the destination is a Group object, by default the target will be created in that group with its current name (basename of obj.name). You can override that by setting “name” to a string.

Example:

>>> f = File('myfile.hdf5')
>>> f.listnames()
['MyGroup']
>>> f.copy('MyGroup', 'MyCopy')
>>> f.listnames()
['MyGroup', 'MyCopy']
visit(func)

Recursively visit all names in this group and subgroups (HDF5 1.8).

You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:

func(<member name>) => <None or return value>

Returning None continues iteration; returning anything else stops iteration and immediately returns that value from the visit method. No particular order of iteration within groups is guaranteed.

Example:

>>> # List the entire contents of the file
>>> f = File("foo.hdf5")
>>> list_of_names = []
>>> f.visit(list_of_names.append)
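
Because a non-None return value stops the walk, visit can also be used as a simple search. A hedged sketch, again assuming the layout built in the earlier examples:

>>> def find_myds(name):
...     if name.endswith('MyDS'):
...         return name          # non-None: stop and return this value
...
>>> f.visit(find_myds)
'bar/baz/MyDS'
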
visititems(func)

Recursively visit names and objects in this group (HDF5 1.8).

You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:

func(<member name>, <object>) => <None or return value>

Returning None continues iteration; returning anything else stops iteration and immediately returns that value from visititems. No particular order of iteration within groups is guaranteed.

Example:

>>> # Get a list of all datasets in the file
>>> mylist = []
>>> def func(name, obj):
...     if isinstance(obj, Dataset):
...         mylist.append(name)
...
>>> f = File('foo.hdf5')
>>> f.visititems(func)

Dictionary-like methods

keys()
Get a list containing member names
values()
Get a list containing member objects
items()
Get a list of tuples containing (name, object) pairs
iterkeys()
Get an iterator over member names
itervalues()
Get an iterator over member objects
iteritems()
Get an iterator over (name, object) pairs
get(name, default=None, getclass=False, getlink=False)

Retrieve item “name”, or “default” if it’s not in this group.

getclass
If True, returns the class of object (Group, Dataset, etc.) instead of the object itself.
getlink
If True, return SoftLink and ExternalLink instances instead of the objects they point to.
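
A minimal sketch of get and its options (the missing name and default value are illustrative; 'soft' is the illustrative soft link created earlier):

>>> f.get('nonexistent') is None          # absent names return the default
True
>>> f.get('nonexistent', default=0)
0
>>> cls = f.get('bar', getclass=True)     # the Group class, not the group itself
>>> lnk = grp.get('soft', getlink=True)   # the SoftLink object, not its target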

Properties common to all HDF5 objects:

file
Return a File instance associated with this object
parent

Return the parent group of this object.

This is always equivalent to file[posixpath.dirname(obj.name)].

name
Name of this object in the HDF5 file. Not necessarily unique.
id
Low-level identifier appropriate for this object
ref
An (opaque) HDF5 reference to this object
attrs
Provides access to HDF5 attributes. See AttributeManager.
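
A short sketch tying the common properties together, using the dset object from the earlier examples (the attribute name and value are illustrative):

>>> dset.name
'/bar/baz/MyDS'
>>> dset.parent.name
'/bar/baz'
>>> dset.file.name                    # the File object doubles as the root group
'/'
>>> dset.attrs['units'] = 'counts'    # attributes behave like a small dictionary
>>> dset.attrs['units']
'counts'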