C++ API Reference (Extras)#

Operator overloading#

The following optional include directive imports the special value self.

#include <nanobind/operators.h>

The underlying type exposes various overloaded C++ operators that enable a shorthand notation for binding operators to Python. See the operator overloading example in the main documentation for details.

class detail::self_t#

This is an internal class that should be accessed through the singleton self value.

It supports the overloaded operators listed below. Depending on whether self is the left or right argument of a binary operation, the binding will map to different Python methods as shown below.

C++ operator              Python method (left or right)

operator-                 __sub__, __rsub__
operator+                 __add__, __radd__
operator*                 __mul__, __rmul__
operator/                 __truediv__, __rtruediv__
operator%                 __mod__, __rmod__
operator<<                __lshift__, __rlshift__
operator>>                __rshift__, __rrshift__
operator&                 __and__, __rand__
operator^                 __xor__, __rxor__
operator|                 __or__, __ror__
operator>                 __gt__, __lt__
operator>=                __ge__, __le__
operator<                 __lt__, __gt__
operator<=                __le__, __ge__
operator==                __eq__
operator!=                __ne__
operator+=                __iadd__
operator-=                __isub__
operator*=                __imul__
operator/=                __itruediv__
operator%=                __imod__
operator<<=               __ilshift__
operator>>=               __irshift__
operator&=                __iand__
operator^=                __ixor__
operator|=                __ior__
operator- (unary)         __neg__
operator+ (unary)         __pos__
operator~ (unary)         __invert__
operator! (unary)         __bool__ (with extra negation)
nb::abs(..)               __abs__
nb::hash(..)              __hash__

detail::self_t self#
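
For illustration, here is a minimal sketch of how self is typically used when binding a class with overloaded operators (the MyInt type and module name are placeholders, not part of the API):

#include <nanobind/nanobind.h>
#include <nanobind/operators.h>

namespace nb = nanobind;

// Hypothetical value type with overloaded C++ operators
struct MyInt {
    int value;
    MyInt(int value) : value(value) { }
    MyInt operator+(const MyInt &o) const { return MyInt(value + o.value); }
    MyInt &operator+=(const MyInt &o) { value += o.value; return *this; }
    bool operator==(const MyInt &o) const { return value == o.value; }
};

NB_MODULE(my_ext, m) {
    nb::class_<MyInt>(m, "MyInt")
        .def(nb::init<int>())
        .def(nb::self + nb::self)    // __add__ / __radd__
        .def(nb::self += nb::self)   // __iadd__
        .def(nb::self == nb::self);  // __eq__
}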

Trampolines#

The following macros implement trampolines that forward virtual function calls to Python. They require an additional include directive:

#include <nanobind/trampoline.h>

See the section on trampolines for further detail.

NB_TRAMPOLINE(base, size)#

Install a trampoline in an alias class to enable dispatching C++ virtual function calls to a Python implementation. Refer to the documentation on trampolines to see how this macro can be used.

NB_OVERRIDE(func, ...)#

Dispatch the call to a Python method named "func" if it is overloaded on the Python side, and forward the function arguments specified in the variable length argument .... Otherwise, call the C++ implementation func in the base class.

Refer to the documentation on trampolines to see how this macro can be used.

NB_OVERRIDE_PURE(func, ...)#

Dispatch the call to a Python method named "func" if it is overloaded on the Python side, and forward the function arguments specified in the variable length argument .... Otherwise, raise an exception. This macro should be used when the C++ function is pure virtual.

Refer to the documentation on trampolines to see how this macro can be used.

NB_OVERRIDE_NAME(name, func, ...)#

Dispatch the call to a Python method named name if it is overloaded on the Python side, and forward the function arguments specified in the variable length argument .... Otherwise, call the C++ function func in the base class.

This function differs from NB_OVERRIDE() in that C++ and Python functions can be named differently (e.g., operator+ and __add__). Refer to the documentation on trampolines to see how this macro can be used.

NB_OVERRIDE_PURE_NAME(name, func, ...)#

Dispatch the call to a Python method named name if it is overloaded on the Python side, and forward the function arguments specified in the variable length argument .... Otherwise, raise an exception. This macro should be used when the C++ function is pure virtual.

This function differs from NB_OVERRIDE_PURE() in that C++ and Python functions can be named differently (e.g., operator+ and __add__). Although the C++ base implementation cannot be called, its name is still important since nanobind uses it to infer the return value type. Refer to the documentation on trampolines to see how this macro can be used.
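
Taken together, a typical trampoline might look like the following sketch (loosely modeled on the trampoline example in the main documentation; the Animal class and module name are placeholders):

#include <string>

#include <nanobind/nanobind.h>
#include <nanobind/trampoline.h>
#include <nanobind/stl/string.h>

namespace nb = nanobind;

struct Animal {
    virtual ~Animal() = default;
    virtual std::string name() const { return "Animal"; }
    virtual std::string sound() const = 0;
};

// Alias (trampoline) class that forwards virtual calls to Python
struct PyAnimal : Animal {
    NB_TRAMPOLINE(Animal, 2);  // base class, number of overridable slots

    std::string name() const override {
        NB_OVERRIDE(name);       // falls back to Animal::name()
    }

    std::string sound() const override {
        NB_OVERRIDE_PURE(sound); // raises an error if not overridden in Python
    }
};

NB_MODULE(my_ext, m) {
    nb::class_<Animal, PyAnimal>(m, "Animal")
        .def(nb::init<>())
        .def("name", &Animal::name)
        .def("sound", &Animal::sound);
}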

STL vector bindings#

The following function can be used to expose std::vector<...> variants in Python. It is not part of the core nanobind API and requires an additional include directive:

#include <nanobind/stl/bind_vector.h>
template<typename Vector, typename ...Args>
class_<Vector> bind_vector(handle scope, const char *name, Args&&... args)#

Bind the STL vector-derived type Vector to the identifier name and place it in scope (e.g., a module_). The variable argument list can be used to pass a docstring and other class binding annotations.

The type includes the following methods resembling list:

Signature                                      Documentation

__init__(self)                                 Default constructor
__init__(self, arg: Vector)                    Copy constructor
__init__(self, arg: typing.Sequence)           Construct from another sequence type
__len__(self) -> int                           Return the number of elements
__repr__(self) -> str                          Generate a string representation
__contains__(self, arg: Value)                 Check if the vector contains arg
__eq__(self, arg: Vector)                      Check if the vector is equal to arg
__ne__(self, arg: Vector)                      Check if the vector is not equal to arg
__bool__(self) -> bool                         Check whether the vector is empty
__iter__(self) -> iterator                     Instantiate an iterator to traverse the elements
__getitem__(self, arg: int) -> Value           Return an element from the list (supports negative indexing)
__setitem__(self, arg0: int, arg1: Value)      Assign an element in the list (supports negative indexing)
__delitem__(self, arg: int)                    Delete an item from the list (supports negative indexing)
__getitem__(self, arg: slice) -> Vector        Slice-based getter
__setitem__(self, arg0: slice, arg1: Value)    Slice-based assignment
__delitem__(self, arg: slice)                  Slice-based deletion
clear(self)                                    Remove all items from the list
append(self, arg: Value)                       Append a list item
insert(self, arg0: int, arg1: Value)           Insert a list item (supports negative indexing)
pop(self, index: int = -1)                     Pop an element at position index (the end by default)
extend(self, arg: Vector)                      Extend self by appending elements from arg
count(self, arg: Value)                        Count the number of times that arg is contained in the vector
remove(self, arg: Value)                       Remove all occurrences of arg

In contrast to std::vector<...>, all bound functions perform range checks to avoid undefined behavior. When the type underlying the vector is not comparable or copy-assignable, some of these functions will not be generated.

The binding operation is a no-op if the vector type has already been registered with nanobind.
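
A minimal usage sketch (the module and type names below are placeholders):

#include <vector>

#include <nanobind/nanobind.h>
#include <nanobind/stl/bind_vector.h>

namespace nb = nanobind;

NB_MODULE(my_ext, m) {
    // Expose std::vector<int> as a list-like Python type named "IntVector"
    nb::bind_vector<std::vector<int>>(m, "IntVector");
}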

Warning

While this function creates a type resembling a Python list, it has a major caveat: the item accessor __getitem__ copies the accessed element by default (the end of this warning section explains how this copy can be avoided).

Consequently, writes to elements may not propagate in the expected way. Consider the following C++ bindings:

struct A {
    int value;
};

nb::class_<A>(m, "A")
    .def(nb::init<int>())
    .def_rw("value", &A::value);

nb::bind_vector<std::vector<A>>(m, "VecA");

On the Python end, they yield the following surprising behavior:

from my_ext import A, VecA

va = VecA()
va.append(A(123))
va[0].value = 456
assert va[0].value == 456 # <-- assertion fails!

To actually modify va, another write is needed.

v = va[0]
v.value = 456
va[0] = v

This may seem like a strange design, so it is worth explaining why the implementation works in this way.

The key issue is that any particular value (e.g., va[0]) lies within a memory buffer managed by the std::vector. It is not safe for nanobind to refer to objects within this buffer using their absolute or relative memory address. For example, inserting an element at position 0 will rearrange the buffer’s contents and shift all subsequent A instances. If nanobind A objects could be “views” into the std::vector, then an insertion would cause the contents of unrelated A Python objects to change unexpectedly. Insertion may also require reallocation of the buffer, invalidating all current addresses, and this could lead to undefined behavior (use-after-free) if nanobind did not make a copy.

There are three situations in which the surprising behavior is avoided:

  1. If the modification of the array is performed using in-place operations like

    v[i] += 5
    

    In-place operators automatically perform an array assignment, causing the issue to disappear. This means that if you work with a vector type like std::vector<int> or std::vector<std::string> with an immutable element type like int or str on the Python end, it will behave completely naturally in Python.

  2. If the array contains STL shared pointers (e.g., std::vector<std::shared_ptr<T>>), the added indirection and ownership tracking removes the need for extra copies.

  3. If the array contains pointers to reference-counted objects (e.g., std::vector<ref<T>> via the ref wrapper) and T uses the intrusive reference counting approach explained here, the added indirection and ownership tracking removes the need for extra copies.

You should never use this class to bind pointer-valued vectors std::vector<T*> when T does not use intrusive reference counting. Some kind of ownership tracking (points 2 and 3 of the above list) is needed in this case.

STL map bindings#

The following function can be used to expose std::map<...> or std::unordered_map<...> variants in Python. It is not part of the core nanobind API and requires an additional include directive:

#include <nanobind/stl/bind_map.h>
template<typename Map, typename ...Args>
class_<Map> bind_map(handle scope, const char *name, Args&&... args)#

Bind the STL map-derived type Map (ordered or unordered) to the identifier name and place it in scope (e.g., a module_). The variable argument list can be used to pass a docstring and other class binding annotations.

The type includes the following methods resembling dict:

Signature                                      Documentation

__init__(self)                                 Default constructor
__init__(self, arg: Map)                       Copy constructor
__init__(self, arg: dict)                      Construct from a Python dictionary
__len__(self) -> int                           Return the number of elements
__repr__(self) -> str                          Generate a string representation
__contains__(self, arg: Key)                   Check if the map contains arg
__eq__(self, arg: Map)                         Check if the map is equal to arg
__ne__(self, arg: Map)                         Check if the map is not equal to arg
__bool__(self) -> bool                         Check whether the map is empty
__iter__(self) -> iterator                     Instantiate an iterator to traverse the set of map keys
__getitem__(self, arg: Key) -> Value           Return an element from the map
__setitem__(self, arg0: Key, arg1: Value)      Assign an element in the map
__delitem__(self, arg: Key)                    Delete an item from the map
clear(self)                                    Remove all items from the map
update(self, arg: Map)                         Update the map with elements from arg
keys(self) -> Map.KeyView                      Returns an iterable view of the map’s keys
values(self) -> Map.ValueView                  Returns an iterable view of the map’s values
items(self) -> Map.ItemView                    Returns an iterable view of the map’s items

The binding operation is a no-op if the map type has already been registered with nanobind.

The binding routine ideally expects the involved types to be:

  • copy-constructible

  • copy-assignable

  • equality-comparable

If not all of these properties are available, then a subset of the above methods will be omitted. Please refer to bind_map.h for details on the logic.
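
A minimal usage sketch (the module and type names below are placeholders):

#include <map>
#include <string>

#include <nanobind/nanobind.h>
#include <nanobind/stl/bind_map.h>

namespace nb = nanobind;

NB_MODULE(my_ext, m) {
    // Expose std::map<std::string, int> as a dict-like Python type
    nb::bind_map<std::map<std::string, int>>(m, "StringIntMap");
}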

Warning

While this function creates a type resembling a Python dict, it has a major caveat: the item accessor __getitem__ copies the accessed element by default.

Please refer to the STL vector bindings for a discussion of the problem and possible solutions. Everything applies equally to the map case.

Unique pointer deleter#

The following deleter should be used to gain maximal flexibility in combination with std::unique_ptr<..>. It requires the following additional include directive:

#include <nanobind/stl/unique_ptr.h>

See the two documentation sections on unique pointers for further detail (#1, #2).

template<typename T>
struct deleter#
deleter() = default#

Create a deleter that destroys the object using a delete expression.

deleter(handle h)#

Create a deleter that destroys the object by reducing the Python reference count.

bool owned_by_python() const#

Check if the object is owned by Python.

bool owned_by_cpp() const#

Check if the object is owned by C++.

void operator()(void *p) noexcept#

Destroy the object at address p.
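
The following sketch shows how the deleter is typically used in function signatures so that ownership can be transferred in both directions (the Data type and module name are placeholders):

#include <memory>

#include <nanobind/nanobind.h>
#include <nanobind/stl/unique_ptr.h>

namespace nb = nanobind;

struct Data { int value = 0; };

// Returning std::unique_ptr<Data, nb::deleter<Data>> allows nanobind to
// hand ownership back and forth without committing to a fixed deletion scheme.
std::unique_ptr<Data, nb::deleter<Data>> create() {
    return std::unique_ptr<Data, nb::deleter<Data>>(new Data{123});
}

void consume(std::unique_ptr<Data, nb::deleter<Data>> d) {
    // 'd' destroys or releases the instance when it goes out of scope
}

NB_MODULE(my_ext, m) {
    nb::class_<Data>(m, "Data")
        .def_rw("value", &Data::value);
    m.def("create", &create);
    m.def("consume", &consume);
}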

Iterator bindings#

The following functions can be used to expose existing C++ iterators in Python. They are not part of the core nanobind API and require an additional include directive:

#include <nanobind/make_iterator.h>
template<rv_policy Policy = rv_policy::reference_internal, typename Iterator, typename ...Extra>
auto make_iterator(handle scope, const char *name, Iterator &&first, Iterator &&last, Extra&&... extra)#

Create a Python iterator wrapping the C++ iterator represented by the range [first, last). The Extra parameter can be used to pass additional function binding annotations.

This function lazily creates a new Python iterator type identified by name, which is stored in the given scope. Usually, some kind of keep_alive annotation is needed to tie the lifetime of the parent container to that of the iterator.

The return value is a typed iterator (iterator wrapped using typed), whose template parameter is given by the type of *first.

Here is an example of what this might look like for an STL vector:

using IntVec = std::vector<int>;

nb::class_<IntVec>(m, "IntVec")
   .def("__iter__",
        [](const IntVec &v) {
            return nb::make_iterator(nb::type<IntVec>(), "iterator",
                                     v.begin(), v.end());
        }, nb::keep_alive<0, 1>());
template<rv_policy Policy = rv_policy::reference_internal, typename Type, typename ...Extra>
auto make_iterator(handle scope, const char *name, Type &value, Extra&&... extra)#

This convenience wrapper calls the above make_iterator() variant with first and last set to std::begin(value) and std::end(value), respectively.

template<rv_policy Policy = rv_policy::reference_internal, typename Iterator, typename ...Extra>
iterator make_key_iterator(handle scope, const char *name, Iterator &&first, Iterator &&last, Extra&&... extra)#

make_iterator() specialization for C++ iterators that return key-value pairs. make_key_iterator() returns the first pair element to iterate over keys.

The return value is a typed iterator (iterator wrapped using typed), whose template parameter is given by the type of (*first).first.

template<rv_policy Policy = rv_policy::reference_internal, typename Iterator, typename ...Extra>
iterator make_value_iterator(handle scope, const char *name, Iterator &&first, Iterator &&last, Extra&&... extra)#

make_iterator() specialization for C++ iterators that return key-value pairs. make_value_iterator() returns the second pair element to iterate over values.

The return value is a typed iterator (iterator wrapped using typed), whose template parameter is given by the type of (*first).second.
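
A sketch of how these helpers might be used to expose the keys and values of a map-like container (the class and method names are placeholders):

#include <map>
#include <string>

#include <nanobind/nanobind.h>
#include <nanobind/make_iterator.h>
#include <nanobind/stl/string.h>

namespace nb = nanobind;

using StrMap = std::map<std::string, int>;

void bind_strmap(nb::module_ &m) {
    nb::class_<StrMap>(m, "StrMap")
        .def(nb::init<>())
        .def("keys",
             [](const StrMap &map) {
                 return nb::make_key_iterator(nb::type<StrMap>(), "key_iterator",
                                              map.begin(), map.end());
             }, nb::keep_alive<0, 1>())
        .def("values",
             [](const StrMap &map) {
                 return nb::make_value_iterator(nb::type<StrMap>(), "value_iterator",
                                                map.begin(), map.end());
             }, nb::keep_alive<0, 1>());
}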

N-dimensional array type#

The following type can be used to exchange n-dimensional arrays with frameworks like NumPy, PyTorch, TensorFlow, JAX, and others. It requires an additional include directive:

#include <nanobind/ndarray.h>

Detailed documentation including example code is provided in a separate section.

bool ndarray_check(handle h) noexcept#

Test whether the Python object represents an ndarray.

Objects with a __dlpack__ attribute or objects that implement the buffer protocol are considered as ndarray objects. In addition, arrays from NumPy, PyTorch, TensorFlow and XLA are also regarded as ndarrays.

template<typename ...Args>
class ndarray#
ndarray() = default#

Create an invalid array.

template<typename ...Args2>
explicit ndarray(const ndarray<Args2...> &other)#

Reinterpreting constructor that wraps an existing nd-array (parameterized by Args2) into a new ndarray (parameterized by Args). No copy or conversion is made.

Dropping parameters is always safe. For example, a function that returns different array types could call it to convert ndarray<T> to ndarray<>. When adding constraints, the constructor is only safe to use following a runtime check to ensure that the newly created array actually possesses the advertised properties.

ndarray(const ndarray&)#

Copy constructor. Increases the reference count of the referenced array.

ndarray(ndarray&&)#

Move constructor. Steals the referenced array without changing reference counts.

~ndarray()#

Decreases the reference count of the referenced array and potentially destroys it.

ndarray &operator=(const ndarray&)#

Copy assignment operator. Increases the reference count of the referenced array. Decreases the reference count of the previously referenced array and potentially destroys it.

ndarray &operator=(ndarray&&)#

Move assignment operator. Steals the referenced array without changing reference counts. Decreases the reference count of the previously referenced array and potentially destroys it.

ndarray(void *data, size_t ndim, const size_t *shape, handle owner = nanobind::handle(), const int64_t *strides = nullptr, dlpack::dtype dtype = nanobind::dtype<Scalar>(), int32_t device_type = device::cpu::value, int32_t device_id = 0)#

Create an array wrapping an existing memory allocation. The following parameters can be specified:

  • data: pointer address of the memory region. When the ndarray is parameterized by a constant scalar type to indicate read-only access, a const pointer must be passed instead.

  • ndim: the number of dimensions.

  • shape: specifies the size along each axis. The referenced array must have ndim entries.

  • owner: if provided, the array will hold a reference to this object until it is destructed.

  • strides is optional; a value of nullptr implies C-style strides.

  • dtype describes the data type (floating point, signed/unsigned integer) and bit depth.

  • The device_type and device_id indicate the device and address space associated with the pointer value.

ndarray(void *data, const std::initializer_list<size_t> shape, handle owner = nanobind::handle(), std::initializer_list<int64_t> strides = {}, dlpack::dtype dtype = nanobind::dtype<Scalar>(), int32_t device_type = device::cpu::value, int32_t device_id = 0)#

Alternative form of the above constructor, which accepts the shape and strides arguments using a std::initializer_list. It automatically infers the value of ndim based on the size of shape.
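
A sketch of the initializer-list constructor in action, loosely based on the pattern from the separate ndarray documentation section (the variable names are placeholders):

float *data = new float[8] { 1, 2, 3, 4, 5, 6, 7, 8 };

// Owner object that frees the allocation once the array is garbage collected
nb::capsule owner(data, [](void *p) noexcept {
    delete[] (float *) p;
});

// Wrap 'data' as a 2x4 array; ndim is inferred from the shape list
nb::ndarray<nb::numpy, float> arr(
    /* data = */ data,
    /* shape = */ { 2, 4 },
    /* owner = */ owner);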

dlpack::dtype dtype() const#

Return the data type underlying the array

size_t ndim() const#

Return the number of dimensions.

size_t size() const#

Return the size of the array (i.e. the product of all dimensions).

size_t itemsize() const#

Return the size of a single array element in bytes. The returned value is rounded up to the next full byte in case of bit-level representations (query dtype::bits for bit-level granularity).

size_t nbytes() const#

Return the size of the entire array in bytes. The returned value is rounded up to the next full byte in case of bit-level representations.

size_t shape(size_t i) const#

Return the size of dimension i.

int64_t stride(size_t i) const#

Return the stride (in number of elements) of dimension i.

const int64_t *shape_ptr() const#

Return a pointer to the shape array. Note that the return type is const int64_t*, which may be unexpected as the scalar version shape() casts its result to a size_t.

This is a consequence of the DLPack tensor representation that uses signed 64-bit integers for all of these fields.

const int64_t *stride_ptr() const#

Return pointer to the stride array.

bool is_valid() const#

Check whether the array is in a valid state.

int32_t device_type() const#

ID denoting the type of device hosting the array. This will match the value field of a device class, such as device::cpu::value or device::cuda::value.

int32_t device_id() const#

In a multi-device/GPU setup, this function returns the ID of the device storing the array.

const Scalar *data() const#

Return a const pointer to the array data.

Scalar *data()#

Return a mutable pointer to the array data. Only enabled when Scalar is not itself const.

template<typename ...Extra>
auto view()#

Returns an nd-array view that is optimized for fast array access on the CPU. You may optionally specify additional ndarray constraints via the Extra parameter (though a runtime check should first be performed to ensure that the array possesses these properties).

The returned view provides the operations data(), ndim(), shape(), stride(), and operator() following the conventions of the ndarray type.

template<typename ...Ts>
auto &operator()(Ts... indices)#

Return a mutable reference to the element stored at the provided index/indices. sizeof...(Ts) must match ndim().

This accessor is only available when the scalar type and array dimension were specified as template parameters.
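
A sketch of a bound function that uses a constrained array together with view() for efficient element access (the function name is a placeholder):

void fill_iota(nb::ndarray<float, nb::ndim<2>, nb::c_contig, nb::device::cpu> arg) {
    auto v = arg.view();  // CPU-optimized view with static dimension/type info

    for (size_t i = 0; i < v.shape(0); ++i)
        for (size_t j = 0; j < v.shape(1); ++j)
            v(i, j) = (float) (i * v.shape(1) + j);
}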

Data types#

Nanobind uses the DLPack ABI to represent metadata describing n-dimensional arrays (even when they are exchanged using the buffer protocol). Consequently, the set of possible dtypes is more restricted than that of other nd-array libraries (e.g., NumPy). Relevant data structures are located in the nanobind::dlpack sub-namespace.

enum class dlpack::dtype_code : uint8_t#

This enumeration characterizes the elementary array data type regardless of bit depth.

enumerator Int = 0#

Signed integer format

enumerator UInt = 1#

Unsigned integer format

enumerator Float = 2#

IEEE-754 floating point format

enumerator Bfloat = 4#

“Brain” floating point format

enumerator Complex = 5#

Complex numbers parameterized by real and imaginary component

struct dlpack::dtype#

Represents the data type underlying an n-dimensional array. Use the dtype<T>() function to return a populated instance of this data structure given a scalar C++ arithmetic type.

uint8_t code = 0;#

This field must contain the value of one of the dlpack::dtype_code enumerants.

uint8_t bits = 0;#

Number of bits per entry (e.g., 32 for a C++ single precision float)

uint16_t lanes = 0;#

Number of SIMD lanes (typically 1)

template<typename T>
dlpack::dtype dtype()#

Returns a populated instance of the dlpack::dtype structure given a scalar C++ arithmetic type.

Array annotations#

The ndarray<..> class admits optional template parameters. They constrain the type of array arguments that may be passed to a function.

The following are supported:

Data type#

The data type of the underlying scalar element. The following are supported.

  • [u]int8_t up to [u]int64_t and other variations (unsigned long long, etc.)

  • float, double

  • bool

Annotate the data type with const to indicate a read-only array. Note that only the buffer protocol/NumPy interface considers const-ness at the moment; data exchange with other array libraries will ignore this annotation.

When the data type is unspecified (e.g., to accept arbitrary input arrays), the ro annotation can instead be used to denote read-only access:

class ro#

Indicate read-only access (use only when no data type is specified.)

nanobind does not support non-standard types as documented in the section on dtype limitations.

Shape#

template<ssize_t... Is>
class shape#

Require the array to have sizeof...(Is) dimensions. Each entry of Is specifies a fixed size constraint for that specific dimension. An entry equal to -1 indicates that any size should be accepted for this dimension.

(An alias named nb::any representing -1 was removed in nanobind 2).

template<size_t N>
class ndim#

Alternative to the above that only constrains the array dimension. nb::ndim<2> is equivalent to nb::shape<-1, -1>.

Contiguity#

class c_contig#

Request that the array storage uses a C-contiguous representation.

class f_contig#

Request that the array storage uses a F (Fortran)-contiguous representation.

class any_contig#

Don’t place any demands on array contiguity (the default).

Device type#

class device#

The following helper classes can be used to constrain the device and address space of an array. Each class has a static constexpr int32_t value field that will then match up with ndarray::device_type().

class cpu#

CPU heap memory

class cuda#

NVIDIA CUDA device memory

class cuda_host#

NVIDIA CUDA host-pinned memory

class cuda_managed#

NVIDIA CUDA managed memory

class vulkan#

Vulkan device memory

class metal#

Apple Metal device memory

class rocm#

AMD ROCm device memory

class rocm_host#

AMD ROCm host memory

class oneapi#

Intel OneAPI device memory

Framework#

Framework annotations cause nb::ndarray objects to convert into an equivalent representation in one of the following frameworks:

class numpy#
class tensorflow#
class pytorch#
class jax#
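
For example, the following hypothetical function combines several of these annotations; only read-only, C-contiguous CPU arrays with three columns are accepted (other inputs are rejected or, where possible, implicitly converted):

void process(nb::ndarray<const float, nb::shape<-1, 3>,
                         nb::c_contig, nb::device::cpu> points) {
    // points.data() can safely be traversed with C-style strides here
}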

Eigen convenience type aliases#

The following helper type aliases require an additional include directive:

#include <nanobind/eigen/dense.h>
using DStride = Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic>#

This type alias refers to an Eigen stride object that is sufficiently flexible that it can easily accept NumPy arrays and array slices.

template<typename T>
using DRef = Eigen::Ref<T, 0, DStride>#

This templated type alias creates an Eigen::Ref<..> with flexible strides for zero-copy data exchange between Eigen and NumPy.

template<typename T>
using DMap = Eigen::Map<T, 0, DStride>#

This templated type alias creates an Eigen::Map<..> with flexible strides for zero-copy data exchange between Eigen and NumPy.
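
A sketch of a function binding that uses DRef for zero-copy access to NumPy data (the function and module names are placeholders):

#include <nanobind/nanobind.h>
#include <nanobind/eigen/dense.h>

namespace nb = nanobind;

// Accepts NumPy arrays and array slices without copying, whatever their strides
double column_sum(nb::DRef<const Eigen::MatrixXd> mat, Eigen::Index col) {
    return mat.col(col).sum();
}

NB_MODULE(my_ext, m) {
    m.def("column_sum", &column_sum);
}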

Timestamp and duration conversions#

nanobind supports bidirectional conversions of timestamps and durations between their standard representations in Python (datetime.datetime, datetime.timedelta) and in C++ (std::chrono::time_point, std::chrono::duration). A few unidirectional conversions from other Python types to these C++ types are also provided and explained below.

These type casters require an additional include directive:

#include <nanobind/stl/chrono.h>

An overview of clocks in C++11#

The C++11 standard defines three different clocks, and users can define their own. Each std::chrono::time_point is defined relative to a particular clock. When using the chrono type caster, you must be aware that only std::chrono::system_clock is guaranteed to convert to a Python datetime object; other clocks may convert to timedelta if they don’t represent calendar time.

The first clock defined by the standard is std::chrono::system_clock. This clock measures the current date and time, much like the Python time.time() function. It can change abruptly due to administrative actions, daylight savings time transitions, or synchronization with an external time server. That makes this clock a poor choice for timing purposes, but a good choice for wall-clock time.

The second clock defined by the standard is std::chrono::steady_clock. This clock ticks at a steady rate and is never adjusted, like time.monotonic() in Python. That makes it excellent for timing purposes, but the value in this clock does not correspond to the current date and time. Often this clock will measure the amount of time your system has been powered on. This clock will never be the same clock as the system clock, because the system clock can change but steady clocks cannot.

The third clock defined in the standard is std::chrono::high_resolution_clock. This clock is the clock that has the highest resolution out of all the clocks in the system. It is normally an alias for either system_clock or steady_clock, but can be its own independent clock. Due to this uncertainty, conversions of time measured on the high_resolution_clock to Python produce platform-dependent types: you’ll get a datetime if high_resolution_clock is an alias for system_clock on your system, or a timedelta value otherwise.

Provided conversions#

The C++ types described in this section may be instantiated with any precision. Conversions to a less-precise type will round towards zero. Since Python’s built-in date and time objects support only microsecond precision, any precision beyond that on the C++ side will be lost when converting to Python.

C++ to Python

  • std::chrono::system_clock::time_point → datetime.datetime

    A system clock time will be converted to a Python datetime instance. The result describes a time in the local timezone, but does not have any timezone information attached to it (it is a naive datetime object).

  • std::chrono::duration → datetime.timedelta

    A duration will be converted to a Python timedelta. Any precision beyond microseconds is lost by rounding towards zero.

  • std::chrono::[other_clock]::time_point → datetime.timedelta

    A time on any clock except the system clock will be converted to a Python timedelta, which measures the number of seconds between the clock’s epoch and the time point of interest.

Python to C++

  • datetime.datetime or datetime.date or datetime.time → std::chrono::system_clock::time_point

    A Python date, time, or datetime object can be converted into a system clock timepoint. A time with no date information is treated as that time on January 1, 1970. A date with no time information is treated as midnight on that date. Any timezone information is ignored.

  • datetime.timedelta → std::chrono::duration

    A Python time delta object can be converted into a duration that describes the same number of seconds (modulo precision limitations).

  • datetime.timedelta → std::chrono::[other_clock]::time_point

    A Python time delta object can be converted into a timepoint on a clock other than the system clock. The resulting timepoint will be that many seconds after the target clock’s epoch time.

  • float → std::chrono::duration

    A floating-point value can be converted into a duration. The input is treated as a number of seconds, and fractional seconds are supported to the extent representable.

  • float → std::chrono::[other_clock]::time_point

    A floating-point value can be converted into a timepoint on a clock other than the system clock. The input is treated as a number of seconds, and fractional seconds are supported to the extent representable. The resulting timepoint will be that many seconds after the target clock’s epoch time.
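
The following sketch shows bound functions that rely on these conversions (the function and module names are placeholders):

#include <chrono>

#include <nanobind/nanobind.h>
#include <nanobind/stl/chrono.h>

namespace nb = nanobind;

NB_MODULE(my_ext, m) {
    // Accepts a datetime.datetime, returns a datetime.datetime one week later
    m.def("add_week", [](std::chrono::system_clock::time_point t) {
        return t + std::chrono::hours(24 * 7);
    });

    // Accepts a datetime.timedelta (or a float number of seconds),
    // returns the length in whole milliseconds
    m.def("to_milliseconds", [](std::chrono::duration<double> d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    });
}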

Evaluating Python expressions from strings#

The following functions can be used to evaluate Python functions and expressions. They require an additional include directive:

#include <nanobind/eval.h>

Detailed documentation including example code is provided in a separate section.

enum class eval_mode#

This enumeration specifies how the content of a string should be interpreted. Used in Py_CompileString().

enumerator eval_expr = Py_eval_input#

Evaluate a string containing an isolated expression

enumerator eval_single_statement = Py_single_input#

Evaluate a string containing a single statement. Returns None.

enumerator eval_statements = Py_file_input#

Evaluate a string containing a sequence of statements. Returns None.

template<eval_mode start = eval_expr, size_t N>
object eval(const char (&s)[N], handle global = handle(), handle local = handle())#

Evaluate the given Python code in the given global/local scopes, and return the value.

inline void exec(const str &expr, handle global = handle(), handle local = handle())#

Execute the given Python code in the given global/local scopes.
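
A minimal sketch of both functions in use (the function and variable names are placeholders):

#include <nanobind/nanobind.h>
#include <nanobind/eval.h>

namespace nb = nanobind;

void eval_example() {
    // Use the __main__ module's dictionary as the global scope
    nb::object scope = nb::module_::import_("__main__").attr("__dict__");

    // Evaluate an isolated expression and convert the result to C++
    int three = nb::cast<int>(nb::eval("1 + 2", scope));

    // Execute a sequence of statements in the same scope, then read back 'x'
    nb::exec("x = 40\nx += 2", scope);
    int x = nb::cast<int>(nb::eval("x", scope));

    (void) three; (void) x;
}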

Intrusive reference counting helpers#

The following functions and classes can be used to augment user-provided classes with intrusive reference counting that greatly simplifies shared ownership in larger C++/Python binding projects.

This functionality requires the following include directives:

#include <nanobind/intrusive/counter.h>
#include <nanobind/intrusive/ref.h>

These headers reference several functions, whose implementation must be provided. You can do so by including the following file from a single .cpp file of your project:

#include <nanobind/intrusive/counter.inl>

The functionality in these files consists of the following classes and functions:

class intrusive_counter#

Simple atomic reference counter that can optionally switch over to Python-based reference counting.

The various copy/move assignment/constructors intentionally don’t transfer the reference count. This is so that the contents of classes containing an intrusive_counter can be copied/moved without disturbing the reference counts of the associated instances.

intrusive_counter() noexcept = default#

Initialize with a reference count of zero.

intrusive_counter(const intrusive_counter &o)#

Copy constructor, which produces a zero-initialized counter. Does not copy the reference count from o.

intrusive_counter(intrusive_counter &&o)#

Move constructor, which produces a zero-initialized counter. Does not copy the reference count from o.

intrusive_counter &operator=(const intrusive_counter &o)#

Copy assignment operator. Does not copy the reference count from o.

intrusive_counter &operator=(intrusive_counter &&o)#

Move assignment operator. Does not copy the reference count from o.

void inc_ref() const noexcept#

Increase the reference count. When the counter references an object managed by Python, the operation calls Py_INCREF() to increase the reference count of the Python object instead.

The inc_ref() top-level function encapsulates this logic for subclasses of intrusive_base.

bool dec_ref() const noexcept#

Decrease the reference count. When the counter references an object managed by Python, the operation calls Py_DECREF() to decrease the reference count of the Python object instead.

When the C++-managed reference count reaches zero, the operation returns true to signal to the caller that it should use a delete expression to destroy the instance.

The dec_ref() top-level function encapsulates this logic for subclasses of intrusive_base.

void set_self_py(PyObject *self)#

Set the Python object associated with this instance. This operation is usually called by nanobind when ownership is transferred to the Python side.

Any references from prior calls to intrusive_counter::inc_ref() are converted into Python references by calling Py_INCREF() repeatedly.

PyObject *self_py()#

Return the Python object associated with this instance (or nullptr).

class intrusive_base#

Simple polymorphic base class for an intrusively reference-counted object hierarchy. The member functions expose corresponding functionality of intrusive_counter.

void inc_ref() const noexcept#

See intrusive_counter::inc_ref().

bool dec_ref() const noexcept#

See intrusive_counter::dec_ref().

void set_self_py(PyObject *self)#

See intrusive_counter::set_self_py().

PyObject *self_py()#

See intrusive_counter::self_py().

void intrusive_init(void (*intrusive_inc_ref_py)(PyObject*) noexcept, void (*intrusive_dec_ref_py)(PyObject*) noexcept)#

Function to register reference counting hooks with the intrusive reference counter class. This allows its implementation to not depend on Python.

You would usually call this function as follows from the initialization routine of a Python extension:

NB_MODULE(my_ext, m) {
    nb::intrusive_init(
        [](PyObject * o) noexcept {
            nb::gil_scoped_acquire guard;
            Py_INCREF(o);
        },
        [](PyObject * o) noexcept {
            nb::gil_scoped_acquire guard;
            Py_DECREF(o);
        });

    // ...
}
inline void inc_ref(intrusive_base *o) noexcept#

Reference counting helper function that calls o->inc_ref() if o is not equal to nullptr.

inline void dec_ref(intrusive_base *o) noexcept#

Reference counting helper function that calls o->dec_ref() if o is not equal to nullptr and delete o when the reference count reaches zero.

template<typename T>
class ref#

RAII scoped reference counting helper class

ref<T> is a simple RAII wrapper class that encapsulates a pointer to an instance with intrusive reference counting.

It takes care of increasing and decreasing the reference count as needed and deleting the instance when the count reaches zero.

For this to work, compatible functions inc_ref() and dec_ref() must be defined before including the file nanobind/intrusive/ref.h. Default implementations for subclasses of the type intrusive_base are already provided as part of the file counter.h.

ref() = default#

Create a null reference

ref(T *ptr)#

Create a reference from a pointer. Increases the reference count of the object (if not nullptr).

ref(const ref &r)#

Copy a reference. Increase the reference count of the object (if not nullptr).

ref(ref &&r) noexcept#

Move a reference. Object reference counts are unaffected by this operation.

~ref()#

Destroy a reference. Decreases the reference count of the object (if not nullptr).

ref &operator=(ref &&r) noexcept#

Move-assign another reference into this one.

ref &operator=(const ref &r)#

Copy-assign another reference into this one.

ref &operator=(const T *ptr)#

Overwrite this reference with a pointer to another object

void reset()#

Clear the reference and reduce the reference count of the object (if not nullptr)

bool operator==(const ref &r) const#

Compare this reference with another reference (pointer equality)

bool operator!=(const ref &r) const#

Compare this reference with another reference (pointer inequality)

bool operator==(const T *ptr) const#

Compare this reference with another object (pointer equality)

bool operator!=(const T *ptr) const#

Compare this reference with another object (pointer inequality)

T *operator->()#

Access the object referenced by this reference

const T *operator->() const#

Access the object referenced by this reference (const version)

T &operator*()#

Return a C++ reference to the referenced object

const T &operator*() const#

Return a C++ reference to the referenced object (const version)

T *get()#

Return a C++ pointer to the referenced object

const T *get() const#

Return a C++ pointer to the referenced object (const version)
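
To tie these pieces together, the following sketch shows an intrusively reference-counted class managed through ref<T>. The Object class is a placeholder; exposing it to Python additionally requires the nb::intrusive_ptr class binding annotation and the intrusive_init() call shown above, as described in the main documentation on intrusive reference counting.

#include <nanobind/nanobind.h>
#include <nanobind/intrusive/counter.h>
#include <nanobind/intrusive/ref.h>

namespace nb = nanobind;

// Hypothetical class deriving from intrusive_base to inherit the counter
class Object : public nb::intrusive_base {
public:
    int value = 0;
};

// ref<Object> increases the count on construction and decreases it on
// destruction; the instance is deleted when the count reaches zero.
nb::ref<Object> make_object() {
    nb::ref<Object> o = new Object();
    o->value = 42;
    return o;
}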

Typing#

The following functions for typing-related functionality require an additional include directive:

#include <nanobind/typing.h>
template<typename ...Args>
object type_var(Args&&... args)#

Create a type variable (i.e., an instance of typing.TypeVar). All arguments of the original Python construction are supported, e.g.:

m.attr("T") = nb::type_var("T",
                           "contravariant"_a = true,
                           "covariant"_a = false,
                           "bound"_a = nb::type<MyClass>());
template<typename ...Args>
object type_var_tuple(Args&&... args)#

Analogous to type_var(), create a type variable tuple (i.e., an instance of typing.TypeVarTuple).

object any_type()#

Convenience wrapper, which returns typing.Any.