From 226e8ddc4ae8da9b893364aef53fdf87d641f65a Mon Sep 17 00:00:00 2001 From: AntoinePrv Date: Mon, 29 Nov 2021 14:53:12 -0500 Subject: [PATCH] Refer to xtensor lib in italic --- docs/source/adaptor.rst | 6 ++-- docs/source/api/iterator_index.rst | 2 +- docs/source/build-options.rst | 14 +++++----- docs/source/builder.rst | 4 +-- docs/source/changelog.rst | 20 ++++++------- docs/source/closure-semantics.rst | 8 +++--- docs/source/container.rst | 8 +++--- docs/source/dev-build-options.rst | 18 ++++++------ docs/source/developer/assignment.rst | 8 +++--- docs/source/developer/concepts.rst | 6 ++-- docs/source/developer/expression_tree.rst | 14 +++++----- .../developer/implementation_classes.rst | 8 +++--- .../source/developer/iterating_expression.rst | 6 ++-- docs/source/developer/xtensor_internals.rst | 4 +-- docs/source/expression.rst | 4 +-- docs/source/external-structures.rst | 18 ++++++------ docs/source/file_loading.rst | 4 +-- docs/source/getting_started.rst | 10 +++---- docs/source/index.rst | 12 ++++---- docs/source/installation.rst | 12 ++++---- docs/source/missing.rst | 2 +- docs/source/operator.rst | 28 +++++++++---------- docs/source/pitfall.rst | 4 +-- docs/source/quickref/basic.rst | 2 +- docs/source/quickref/builder.rst | 4 +-- docs/source/quickref/math.rst | 2 +- docs/source/quickref/operator.rst | 2 +- docs/source/rank.rst | 2 +- docs/source/related.rst | 18 ++++++------ docs/source/scalar.rst | 2 +- docs/source/view.rst | 12 ++++---- 31 files changed, 132 insertions(+), 132 deletions(-) diff --git a/docs/source/adaptor.rst b/docs/source/adaptor.rst index 6f679ddce..c4bd153ac 100644 --- a/docs/source/adaptor.rst +++ b/docs/source/adaptor.rst @@ -7,14 +7,14 @@ Adapting 1-D containers ======================= -`xtensor` can adapt one-dimensional containers in place, and provide them a tensor interface. +*xtensor* can adapt one-dimensional containers in place, and provide them a tensor interface. Only random access containers can be adapted. 
Adapting std::vector -------------------- The following example shows how to bring an ``std::vector`` into the expression system of -`xtensor`: +*xtensor*: .. code:: @@ -44,7 +44,7 @@ the corresponding value in ``v``: Adapting C-style arrays ----------------------- -`xtensor` provides two ways for adapting a C-style array; the first one does not take the +*xtensor* provides two ways for adapting a C-style array; the first one does not take the ownership of the array: .. code:: diff --git a/docs/source/api/iterator_index.rst b/docs/source/api/iterator_index.rst index 6fb8186f6..7640a740a 100644 --- a/docs/source/api/iterator_index.rst +++ b/docs/source/api/iterator_index.rst @@ -7,7 +7,7 @@ Iterators ========= -In addition to the iterators defined in the different types of expressions, ``xtensor`` provides +In addition to the iterators defined in the different types of expressions, *xtensor* provides classes that allow to iterate over slices of an expression along a specified axis. .. toctree:: diff --git a/docs/source/build-options.rst b/docs/source/build-options.rst index 44b4022e0..b076c422b 100644 --- a/docs/source/build-options.rst +++ b/docs/source/build-options.rst @@ -12,15 +12,15 @@ Build and configuration Configuration ------------- -`xtensor` can be configured via macros which must be defined *before* including +*xtensor* can be configured via macros which must be defined *before* including any of its headers. This can be achieved the following ways: - either define them in the CMakeLists of your project, with ``target_compile_definitions`` cmake command. - or create a header where you define all the macros you want and then include the headers you - need. Then include this header whenever you need `xtensor` in your project. + need. Then include this header whenever you need *xtensor* in your project. 
-The following macros are already defined in `xtensor` but can be overwritten: +The following macros are already defined in *xtensor* but can be overwritten: - ``XTENSOR_DEFAULT_DATA_CONTAINER(T, A)``: defines the type used as the default data container for tensors and arrays. ``T`` is the ``value_type`` of the container and ``A`` its ``allocator_type``. @@ -35,8 +35,8 @@ The following macros are already defined in `xtensor` but can be overwritten: The following macros are helpers for debugging, they are not defined by default: -- ``XTENSOR_ENABLE_ASSERT``: enables assertions in `xtensor`, such as bound check. -- ``XTENSOR_ENABLE_CHECK_DIMENSION``: enables the dimensions check in `xtensor`. Note that this option should not be turned +- ``XTENSOR_ENABLE_ASSERT``: enables assertions in *xtensor*, such as bound check. +- ``XTENSOR_ENABLE_CHECK_DIMENSION``: enables the dimensions check in *xtensor*. Note that this option should not be turned on if you expect ``operator()`` to perform broadcasting. .. _external-dependencies: @@ -47,14 +47,14 @@ External dependencies The last group of macros is for using external libraries to achieve maximum performance (see next section for additional requirements): -- ``XTENSOR_USE_XSIMD``: enables SIMD acceleration in `xtensor`. This requires that you have xsimd_ installed +- ``XTENSOR_USE_XSIMD``: enables SIMD acceleration in *xtensor*. This requires that you have xsimd_ installed on your system. - ``XTENSOR_USE_TBB``: enables parallel assignment loop. This requires that you have tbb_ installed on your system. - ``XTENSOR_DISABLE_EXCEPTIONS``: disables c++ exceptions. - ``XTENSOR_USE_OPENMP``: enables parallel assignment loop using OpenMP. This requires that OpenMP is available on your system. 
-Defining these macros in the CMakeLists of your project before searching for `xtensor` will trigger automatic finding +Defining these macros in the CMakeLists of your project before searching for *xtensor* will trigger automatic finding of dependencies, so you don't have to include the ``find_package(xsimd)`` and ``find_package(TBB)`` commands in your CMakeLists: diff --git a/docs/source/builder.rst b/docs/source/builder.rst index dd071e9fa..6d02e7a30 100644 --- a/docs/source/builder.rst +++ b/docs/source/builder.rst @@ -7,8 +7,8 @@ Expression builders =================== -`xtensor` provides functions to ease the build of common N-dimensional expressions. The expressions -returned by these functions implement the laziness of `xtensor`, that is, they don't hold any value. +*xtensor* provides functions to ease the build of common N-dimensional expressions. The expressions +returned by these functions implement the laziness of *xtensor*, that is, they don't hold any value. Values are computed upon request. 
Ones and zeros diff --git a/docs/source/changelog.rst b/docs/source/changelog.rst index 6d5700cad..a13c4e23b 100644 --- a/docs/source/changelog.rst +++ b/docs/source/changelog.rst @@ -544,7 +544,7 @@ Other changes `#1888 `_ - Fixed ``reshape`` return `#1886 `_ -- Enabled ``add_subdirectory`` for ``xsimd`` +- Enabled ``add_subdirectory`` for *xsimd* `#1889 `_ - Support ``ddof`` argument for ``xt::variance`` `#1893 `_ @@ -827,7 +827,7 @@ Other changes `#1556 `_ - Fixed ``real``, ``imag``, and ``functor_view`` `#1554 `_ -- Allows to include ``xsimd`` without defining ``XTENSOR_USE_XSIMD`` +- Allows to include *xsimd* without defining ``XTENSOR_USE_XSIMD`` `#1548 `_ - Fixed ``argsort`` in column major `#1547 `_ @@ -863,7 +863,7 @@ Other changes `#1497 `_ - Removed unused capture `#1499 `_ -- Upgraded to ``xtl`` 0.6.2 +- Upgraded to *xtl* 0.6.2 `#1502 `_ - Added missing methods in ``xshared_expression`` `#1503 `_ @@ -908,7 +908,7 @@ Breaking changes `#1389 `_ - Removed deprecated type ``slice_vector`` `#1459 `_ -- Upgraded to ``xtl`` 0.6.1 +- Upgraded to *xtl* 0.6.1 `#1468 `_ - Added ``keep_dims`` option to reducers `#1474 `_ @@ -1080,7 +1080,7 @@ Other changes `#1339 `_. - Prevent embiguity with `xsimd::reduce` `#1343 `_. -- Require `xtl` 0.5.3 +- Require *xtl* 0.5.3 `#1346 `_. - Use concepts instead of SFINAE `#1347 `_. @@ -1330,7 +1330,7 @@ Other changes `#1074 `_. - Clean documentation for views `#1131 `_. -- Build with ``xsimd`` on Windows fixed +- Build with *xsimd* on Windows fixed `#1127 `_. - Implement ``mime_bundle_repr`` for ``xmasked_view`` `#1132 `_. @@ -2013,7 +2013,7 @@ Breaking changes `#482 `_. - Change ``edge_items`` print option to ``edgeitems`` for better numpy consistency `#489 `_. -- xtensor now depends on ``xtl`` version `~0.3.3` +- *xtensor* now depends on *xtl* version `~0.3.3` `#508 `_. 
New features @@ -2063,13 +2063,13 @@ Other changes Breaking changes ~~~~~~~~~~~~~~~~ -- ``xtensor`` now depends on ``xtl`` version `0.2.x` +- *xtensor* now depends on *xtl* version `0.2.x` `#421 `_. New features ~~~~~~~~~~~~ -- ``xtensor`` has an optional dependency on ``xsimd`` for enabling simd acceleration +- *xtensor* has an optional dependency on *xsimd* for enabling simd acceleration `#426 `_. - All expressions have an additional safe access function (``at``) @@ -2082,7 +2082,7 @@ New features correctly defined `#446 `_. -- expressions tags added so ``xtensor`` expression system can be extended +- expressions tags added so *xtensor* expression system can be extended `#447 `_. Other changes diff --git a/docs/source/closure-semantics.rst b/docs/source/closure-semantics.rst index aa6f2ddda..068537672 100644 --- a/docs/source/closure-semantics.rst +++ b/docs/source/closure-semantics.rst @@ -9,7 +9,7 @@ Closure semantics ================= -The ``xtensor`` library is a tensor expression library implementing numpy-style broadcasting and universal functions but in a lazy fashion. +The *xtensor* library is a tensor expression library implementing numpy-style broadcasting and universal functions but in a lazy fashion. If ``x`` and ``y`` are two tensor expressions with compatible shapes, the result of ``x + y`` is not a tensor but an expression that does not hold any value. Values of ``x + y`` are computed upon access or when the result is assigned to a container such as :cpp:type:`xt::xtensor` or @@ -19,7 +19,7 @@ In order to be able to perform the differed computation of ``x + y``, the return copies of the members ``x`` and ``y``, depending on how arguments were passed to ``operator+``. The actual types held by the expressions are the **closure types**. 
-The concept of closure type is key in the implementation of ``xtensor`` and appears in all the expressions defined in xtensor, and the utility functions and metafunctions complement the tools of the standard library for the move semantics. +The concept of closure type is key in the implementation of *xtensor* and appears in all the expressions defined in xtensor, and the utility functions and metafunctions complement the tools of the standard library for the move semantics. Basic rules for determining closure types ----------------------------------------- @@ -78,7 +78,7 @@ Using this mechanism, we were able to Closure types and scalar wrappers --------------------------------- -A requirement for ``xtensor`` is the ability to mix scalars and tensors in tensor expressions. In order to do so, +A requirement for *xtensor* is the ability to mix scalars and tensors in tensor expressions. In order to do so, scalar values are wrapped into the ``xscalar`` wrapper, which is a cheap 0-D tensor expression holding a single scalar value. @@ -209,7 +209,7 @@ utility to achieve this: } Note: writing a lambda is just sugar for writing a functor. -Also, using `auto x` as the function argument enables automatic `xsimd` acceleration. +Also, using ``auto x`` as the function argument enables automatic *xsimd* acceleration. As the data flow through the lambda is entirely transparent to the compiler, using this construct is generally faster than using ``xshared_expressions``. 
The usage of ``xshared_expression`` also diff --git a/docs/source/container.rst b/docs/source/container.rst index 7b12d0005..fc9799203 100644 --- a/docs/source/container.rst +++ b/docs/source/container.rst @@ -10,7 +10,7 @@ Arrays and tensors Internal memory layout ---------------------- -A multi-dimensional array of `xtensor` consists of a contiguous one-dimensional buffer combined with an indexing scheme that maps +A multi-dimensional array of *xtensor* consists of a contiguous one-dimensional buffer combined with an indexing scheme that maps unsigned integers to the location of an element in the buffer. The range in which the indices can vary is specified by the `shape` of the array. @@ -21,7 +21,7 @@ The scheme used to map indices into a location in the buffer is a strided indexi - the row-major layout (or C layout) is a strided index scheme where the strides grow from right to left - the column-major layout (or Fortran layout) is a strided index scheme where the strides grow from left to right -`xtensor` provides a :cpp:enum:`xt::layout_type` enum that helps to specify the layout used by multidimensional arrays. +*xtensor* provides a :cpp:enum:`xt::layout_type` enum that helps to specify the layout used by multidimensional arrays. This enum can be used in two ways: - at compile time, as a template argument. The value :cpp:enumerator:`xt::layout_type::dynamic` allows specifying any @@ -174,11 +174,11 @@ Instead, it has to be assigned to a temporary variable before being copied into A typical case where this happens is when the destination container is involved in the expression and has to be resized. This phenomenon is known as *aliasing*. -To prevent this, `xtensor` assigns the expression to a temporary variable before copying it. +To prevent this, *xtensor* assigns the expression to a temporary variable before copying it. In the case of :cpp:type:`xt::xarray`, this results in an extra dynamic memory allocation and copy. 
However, if the left-hand side is not involved in the expression being assigned, no temporary variable should be required. -`xtensor` cannot detect such cases automatically and applies the "temporary variable rule" by default. +*xtensor* cannot detect such cases automatically and applies the "temporary variable rule" by default. A mechanism is provided to forcibly prevent usage of a temporary variable: .. code:: diff --git a/docs/source/dev-build-options.rst b/docs/source/dev-build-options.rst index f53b9bc7d..67fbab9a6 100644 --- a/docs/source/dev-build-options.rst +++ b/docs/source/dev-build-options.rst @@ -10,15 +10,15 @@ Build and configuration Build ----- -``xtensor`` build supports the following options: +*xtensor* build supports the following options: - ``BUILD_TESTS``: enables the ``xtest`` and ``xbenchmark`` targets (see below). - ``DOWNLOAD_GTEST``: downloads ``gtest`` and builds it locally instead of using a binary installation. - ``GTEST_SRC_DIR``: indicates where to find the ``gtest`` sources instead of downloading them. -- ``XTENSOR_ENABLE_ASSERT``: activates the assertions in ``xtensor``. -- ``XTENSOR_CHECK_DIMENSION``: turns on ``XTENSOR_ENABLE_ASSERT`` and activates dimension checks in ``xtensor``. +- ``XTENSOR_ENABLE_ASSERT``: activates the assertions in *xtensor*. +- ``XTENSOR_CHECK_DIMENSION``: turns on ``XTENSOR_ENABLE_ASSERT`` and activates dimension checks in *xtensor*. Note that the dimensions check should not be activated if you expect ``operator()`` to perform broadcasting. -- ``XTENSOR_USE_XSIMD``: enables simd acceleration in ``xtensor``. This requires that you have xsimd_ installed +- ``XTENSOR_USE_XSIMD``: enables simd acceleration in *xtensor*. This requires that you have xsimd_ installed on your system. - ``XTENSOR_USE_TBB``: enables parallel assignment loop. This requires that you have you have tbb_ installed on your system. 
@@ -35,7 +35,7 @@ If the ``BUILD_TESTS`` option is enabled, the following targets are available: - xtest: builds an run the test suite. - xbenchmark: builds and runs the benchmarks. -For instance, building the test suite of ``xtensor`` with assertions enabled: +For instance, building the test suite of *xtensor* with assertions enabled: .. code:: @@ -44,7 +44,7 @@ For instance, building the test suite of ``xtensor`` with assertions enabled: cmake -DBUILD_TESTS=ON -DXTENSOR_ENABLE_ASSERT=ON ../ make xtest -Building the test suite of ``xtensor`` where the sources of ``gtest`` are +Building the test suite of *xtensor* where the sources of ``gtest`` are located in e.g. ``/usr/share/gtest``: .. code:: @@ -59,13 +59,13 @@ located in e.g. ``/usr/share/gtest``: Configuration ------------- -``xtensor`` can be configured via macros, which must be defined *before* +*xtensor* can be configured via macros, which must be defined *before* including any of its header. Here is a list of available macros: - ``XTENSOR_ENABLE_ASSERT``: enables assertions in xtensor, such as bound check. -- ``XTENSOR_ENABLE_CHECK_DIMENSION``: enables the dimensions check in ``xtensor``. Note that this option should not be turned +- ``XTENSOR_ENABLE_CHECK_DIMENSION``: enables the dimensions check in *xtensor*. Note that this option should not be turned on if you expect ``operator()`` to perform broadcasting. -- ``XTENSOR_USE_XSIMD``: enables SIMD acceleration in ``xtensor``. This requires that you have xsimd_ installed +- ``XTENSOR_USE_XSIMD``: enables SIMD acceleration in *xtensor*. This requires that you have xsimd_ installed on your system. - ``XTENSOR_USE_TBB``: enables parallel assignment loop. This requires that you have you have tbb_ installed on your system. 
diff --git a/docs/source/developer/assignment.rst b/docs/source/developer/assignment.rst index 0da6eaf79..3ab1d3a3d 100644 --- a/docs/source/developer/assignment.rst +++ b/docs/source/developer/assignment.rst @@ -10,7 +10,7 @@ Assignment ========== In this section, we consider the class :cpp:type:`xt::xarray` and its semantic bases (``xcontainer_semantic`` and -``xsemantic_base``) to illustrate how the assignment works. `xtensor` provides different mechanics of +``xsemantic_base``) to illustrate how the assignment works. *xtensor* provides different mechanics of assignment depending on the type of expression. Extended copy semantic @@ -159,7 +159,7 @@ tag: // ... }; -`xtensor` provides specializations for ``xtensor_expression_tag`` and ``xoptional_expression_tag``. +*xtensor* provides specializations for ``xtensor_expression_tag`` and ``xoptional_expression_tag``. When implementing a new function type whose API is unrelated to the one of ``xfunction_base``, the ``xexpression_assigner`` should be specialized so that the assignment relies on this specific API. @@ -172,10 +172,10 @@ during the resize phase, is the nature of the assignment: trivial or not. The as trivial when the memory layout of the lhs and rhs are such that assignment can be done by iterating over a 1-D sequence on both sides. In that case, two options are possible: -- if ``xtensor`` is compiled with the optional ``xsimd`` dependency, and if the layout and the +- if *xtensor* is compiled with the optional *xsimd* dependency, and if the layout and the ``value_type`` of each expression allows it, the assignment is a vectorized index-based loop operating on the expression buffers. -- if the ``xsimd`` assignment is not possible (for any reason), an iterator-based loop operating +- if the *xsimd* assignment is not possible (for any reason), an iterator-based loop operating on the expresion buffers is used instead. These methods are implemented in specializations of the ``trivial_assigner`` class. 
diff --git a/docs/source/developer/concepts.rst b/docs/source/developer/concepts.rst index dc20401c7..30219da8a 100644 --- a/docs/source/developer/concepts.rst +++ b/docs/source/developer/concepts.rst @@ -9,7 +9,7 @@ Concepts ======== -`xtensor`'s core is built upon key concepts captured in interfaces that are put together in derived +*xtensor*'s core is built upon key concepts captured in interfaces that are put together in derived classes through CRTP (`Curiously Recurring Template Pattern `_) and multiple inheritance. Interfaces and classes that model expressions implement *value semantic*. CRTP and value semantic @@ -89,13 +89,13 @@ you to iterate over a N-dimensional expression in row-major order or column-majo const_reverse_iterator crend() const noexcept; This template parameter is defaulted to ``XTENSOR_DEFAULT_TRAVERSAL`` (see :ref:`configuration-label`), so -that `xtensor` expressions can be used in generic code such as: +that *xtensor* expressions can be used in generic code such as: .. code:: std::copy(a.cbegin(), a.cend(), b.begin()); -where ``a`` and ``b`` can be arbitrary types (from `xtensor`, the STL or any external library) +where ``a`` and ``b`` can be arbitrary types (from *xtensor*, the STL or any external library) supporting standard iteration. ``xiterable`` inherits from ``xconst_iterable`` and provides non-const counterpart of methods diff --git a/docs/source/developer/expression_tree.rst b/docs/source/developer/expression_tree.rst index a5e6c2eb9..a42ef80f1 100644 --- a/docs/source/developer/expression_tree.rst +++ b/docs/source/developer/expression_tree.rst @@ -7,14 +7,14 @@ Expression tree =============== -Most of the expressions in `xtensor` are lazy-evaluated, they do not hold any value, the values are computed upon -access or when the expression is assigned to a container. 
This means that `xtensor` needs somehow to keep track of +Most of the expressions in *xtensor* are lazy-evaluated, they do not hold any value, the values are computed upon +access or when the expression is assigned to a container. This means that *xtensor* needs somehow to keep track of the expression tree. xfunction ~~~~~~~~~ -A node in the expression tree may be represented by different classes in `xtensor`; here we focus on basic arithmetic +A node in the expression tree may be represented by different classes in *xtensor*; here we focus on basic arithmetic operations and mathematical functions, which are represented by an instance of ``xfunction``. This is a template class whose parameters are: @@ -105,7 +105,7 @@ This latter is responsible for setting the remaining template parameters of ``xf } The first line computes the ``expression_tag`` of the expression. This tag is used for selecting the right class -class modeling a function. In `xtensor`, two tags are provided, with the following mapping: +class modeling a function. In *xtensor*, two tags are provided, with the following mapping: - ``xtensor_expression_tag`` -> ``xfunction`` - ``xoptional_expression_tag`` -> ``xfunction`` @@ -114,7 +114,7 @@ In the case of ``xfunction``, the tag is also used to select a mixin base class Any expression may define a tag as its ``expression_tag`` inner type. If not, ``xtensor_expression_tag`` is used by default. Tags have different priorities so that a resulting tag can be computed for expressions involving different tag types. As we -will see in the next section, this system of tags and mapping make it easy to plug new functions types in `xtensor` and have +will see in the next section, this system of tags and mapping make it easy to plug new functions types in *xtensor* and have them working with all the mathematical functions already implemented. 
The function class mapped to the expression tag is retrieved in the third line of ``make_xfunction``, that is: @@ -135,7 +135,7 @@ Once all the types are known, ``make_xfunction`` can instantiate the right funct Plugging new function types ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -As mentioned in the section above, one can define a new function class and have it used by `xtensor`'s expression system. Let's +As mentioned in the section above, one can define a new function class and have it used by *xtensor*'s expression system. Let's illustrate this with an hypothetical ``xmapped_function`` class, which provides additional mapping access operators. The first thing to do is to define a new tag: @@ -170,7 +170,7 @@ This is done by specializing the ``expression_tag_and`` metafunction available i The second specialization simply forwards to the first one so we don't duplicate code. Note that when plugging your own function class, these specializations can be skipped if the new function class (and its corresponding tag) is not compatible, -and thus not supposed to be mixed, with the function classes provided by `xtensor`. +and thus not supposed to be mixed, with the function classes provided by *xtensor*. The last requirement is to specialize the ``select_xfunction_expression`` metafunction, as it is shown below: diff --git a/docs/source/developer/implementation_classes.rst b/docs/source/developer/implementation_classes.rst index 8fc03d227..32f97d80f 100644 --- a/docs/source/developer/implementation_classes.rst +++ b/docs/source/developer/implementation_classes.rst @@ -10,7 +10,7 @@ Implementation classes Requirements ~~~~~~~~~~~~ -An implementation class in `xtensor` is a final class that models a specific +An implementation class in *xtensor* is a final class that models a specific kind of expression. 
It must inherit (either directly or indirectly) from :cpp:type:`xt::xexpression` and define (or inherit from classes that define) the following types: @@ -112,7 +112,7 @@ methods, and inherits from a semantic class to provide assignment operators. List of available expression classes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -`xtensor` provides the following expression classes: +*xtensor* provides the following expression classes: **Containers** @@ -138,7 +138,7 @@ so that their templates parameters are deduced. **Scalar** -`xtensor` provides the ``xscalar`` class to adapt scalar values and give them the required API. +*xtensor* provides the ``xscalar`` class to adapt scalar values and give them the required API. **Optional containers** @@ -157,7 +157,7 @@ Most of the mehtods of these classes are defined in their base class ``xoptional - ``xmasked_view`` : View on optional expression hiding values depending on a mask When the index of an element in the underlying expression of a view can be computed thanks to a strided scheme, -the slice used in this view is said to be a strided slice. `xtensor` provides the following strided slices: +the slice used in this view is said to be a strided slice. *xtensor* provides the following strided slices: - ``xrange`` - ``xstepped_range`` diff --git a/docs/source/developer/iterating_expression.rst b/docs/source/developer/iterating_expression.rst index ce504fa01..b4dfaac8b 100644 --- a/docs/source/developer/iterating_expression.rst +++ b/docs/source/developer/iterating_expression.rst @@ -12,7 +12,7 @@ Iterating over expressions xiterable and inner types ~~~~~~~~~~~~~~~~~~~~~~~~~ -`xtensor` provides two base classes for making expressions iterable: ``xconst_iterable`` and ``xiterable``. They define +*xtensor* provides two base classes for making expressions iterable: ``xconst_iterable`` and ``xiterable``. They define the API for iterating as described in :ref:`concepts-label`. 
For an expression to be iterable, it must inherit directly or indirectly from one of these classes. For instance, the ``xbroadcast`` class is defined as following: @@ -137,7 +137,7 @@ in row-major order. Thus, if we assume that ``p`` is a pointer to the last eleme of the stepper are ``p + 1`` in row-major, and ``p + 3`` in column-major order. A stepper is specific to an expression type, therefore implementing a new kind of expression usually requires to implement a new -kind of stepper. However `xtensor` provides a generic ``xindexed_stepper`` class, that can be used with any kind of expressions. +kind of stepper. However *xtensor* provides a generic ``xindexed_stepper`` class, that can be used with any kind of expressions. Even though it is generally not optimal, authors of new expression types can make use of the generic index stepper in a first implementation. @@ -200,7 +200,7 @@ with different dimension arguments. Iterators ~~~~~~~~~ -`xtensor` iterator is implemented in the ``xiterator`` class. This latter provides a STL compliant iterator interface, and is built +*xtensor* iterator is implemented in the ``xiterator`` class. This latter provides a STL compliant iterator interface, and is built upon the steppers. Whereas the steppers are tied to the expression they refer to, ``xiterator`` is generic enough to work with any kind of stepper. diff --git a/docs/source/developer/xtensor_internals.rst b/docs/source/developer/xtensor_internals.rst index f03c844e0..ff1e897dd 100644 --- a/docs/source/developer/xtensor_internals.rst +++ b/docs/source/developer/xtensor_internals.rst @@ -7,8 +7,8 @@ Internals of xtensor ==================== -This section provides information about `xtensor`'s internals and its architecture. It is intended for developers -who want to contribute to `xtensor` or simply understand how it works under the hood. `xtensor` makes heavy use +This section provides information about *xtensor*'s internals and its architecture. 
It is intended for developers +who want to contribute to *xtensor* or simply understand how it works under the hood. *xtensor* makes heavy use of the CRTP pattern, template meta-programming, universal references and perfect forwarding. One should be familiar with these notions before going any further. diff --git a/docs/source/expression.rst b/docs/source/expression.rst index 27692a0c4..bd0f27cfa 100644 --- a/docs/source/expression.rst +++ b/docs/source/expression.rst @@ -10,7 +10,7 @@ Expressions and lazy evaluation =============================== -`xtensor` is more than an N-dimensional array library: it is an expression engine that allows numerical computation on any object implementing the expression interface. +*xtensor* is more than an N-dimensional array library: it is an expression engine that allows numerical computation on any object implementing the expression interface. These objects can be in-memory containers such as :cpp:type:`xt::xarray\` and :cpp:type:`xt::xtensor\`, but can also be backed by a database or a representation on the file system. This also enables creating adaptors as expressions for other data structures. @@ -90,7 +90,7 @@ Broadcasting The number of dimensions of an :cpp:type:`xt::xexpression` and the sizes of these dimensions are provided by the :cpp:func:`~xt::xexpression::shape` method, which returns a sequence of unsigned integers specifying the size of each dimension. We can operate on expressions of different shapes of dimensions in an elementwise fashion. -Broadcasting rules of `xtensor` are similar to those of Numpy_ and libdynd_. +Broadcasting rules of *xtensor* are similar to those of Numpy_ and libdynd_. In an operation involving two arrays of different dimensions, the array with the lesser dimensions is broadcast across the leading dimensions of the other. For example, if ``A`` has shape ``(2, 3)``, and ``B`` has shape ``(4, 2, 3)``, the result of a broadcast operation with ``A`` and ``B`` has shape ``(4, 2, 3)``. 
diff --git a/docs/source/external-structures.rst b/docs/source/external-structures.rst index 23f2a0b49..3f14a1bb7 100644 --- a/docs/source/external-structures.rst +++ b/docs/source/external-structures.rst @@ -7,14 +7,14 @@ Extending xtensor ================= -``xtensor`` provides means to plug external data structures into its expression engine without +*xtensor* provides means to plug external data structures into its expression engine without copying any data. Adapting one-dimensional containers ----------------------------------- You may want to use your own one-dimensional container as a backend for tensor data containers -and even for the shape or the strides. This is the simplest structure to plug into ``xtensor``. +and even for the shape or the strides. This is the simplest structure to plug into *xtensor*. In the following example, we define new container and adaptor types for user-specified storage and shape types. .. code:: @@ -39,7 +39,7 @@ A requirement for the user-specified containers is to provide a minimal ``std::v - iterator methods (``begin``, ``end``, ``cbegin``, ``cend``) - ``size`` and ``reshape``, ``resize`` methods -``xtensor`` does not require that the container has a contiguous memory layout, only that it +*xtensor* does not require that the container has a contiguous memory layout, only that it provides the aforementioned interface. In fact, the container could even be backed by a file on the disk, a database or a binary message. @@ -47,7 +47,7 @@ Structures that embed shape and strides --------------------------------------- Some structures may gather data container, shape and strides, making them impossible to plug -into ``xtensor`` with the method above. This section illustrates how to adapt such structures +into *xtensor* with the method above. This section illustrates how to adapt such structures with the following simple example: .. 
code:: @@ -71,7 +71,7 @@ with the following simple example: Define inner types ~~~~~~~~~~~~~~~~~~ -The following tells ``xtensor`` which types must be used for getting shape, strides, and data: +The following tells *xtensor* which types must be used for getting shape, strides, and data: .. code:: @@ -117,13 +117,13 @@ Next step is to inherit from the ``xcontainer`` and the ``xcontainer_semantic`` }; Thanks to definition of the previous structures, inheriting from ``xcontainer`` brings almost all the container -API available in the other entities of ``xtensor``, while inheriting from ``xtensor_semantic`` brings the support +API available in the other entities of *xtensor*, while inheriting from ``xtensor_semantic`` brings the support for mathematical operations. Define semantic ~~~~~~~~~~~~~~~ -``xtensor`` classes have full value semantic, so you may define the constructors specific to your structures, +*xtensor* classes have full value semantics, so you may define the constructors specific to your structures, and use the default copy and move constructors and assign operators. Note these last ones *must* be declared as they are declared as ``protected`` in the base class. @@ -174,7 +174,7 @@ The last two methods are extended copy constructor and assign operator. They all Implement the resize methods ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The next methods to define are the overloads of ``resize``. ``xtensor`` provides utility functions to compute +The next methods to define are the overloads of ``resize``. *xtensor* provides utility functions to compute strides based on the shape and the layout, so the implementation of the ``resize`` overloads is straightforward: .. code:: @@ -368,7 +368,7 @@ constructor and assign operator. Implement access operators ~~~~~~~~~~~~~~~~~~~~~~~~~~ -``xtensor`` requires that the following access operators are defined +*xtensor* requires that the following access operators are defined ..
code:: diff --git a/docs/source/file_loading.rst b/docs/source/file_loading.rst index 9b413806a..980754fbe 100644 --- a/docs/source/file_loading.rst +++ b/docs/source/file_loading.rst @@ -7,8 +7,8 @@ File input and output ===================== -`xtensor` has some built-in mechanisms to make loading and saving data easy. -The base `xtensor` package allows to save and load data in the ``.csv``, ``.json`` and ``.npy`` +*xtensor* has some built-in mechanisms to make loading and saving data easy. +The base *xtensor* package allows saving and loading data in the ``.csv``, ``.json`` and ``.npy`` format. Please note that many more input and output formats are available in the `xtensor-io `_ package. diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst index 4bccfffb8..8fdd0f4e7 100644 --- a/docs/source/getting_started.rst +++ b/docs/source/getting_started.rst @@ -7,7 +7,7 @@ Getting started =============== -This short guide explains how to get started with `xtensor` once you have installed it with one of +This short guide explains how to get started with *xtensor* once you have installed it with one of the methods described in the installation section. First example ------------- @@ -43,8 +43,8 @@ array. Compiling the first example --------------------------- -`xtensor` is a header-only library, so there is no library to link with. The only constraint -is that the compiler must be able to find the headers of `xtensor` (and `xtl`), this is usually done +*xtensor* is a header-only library, so there is no library to link with. The only constraint +is that the compiler must be able to find the headers of *xtensor* (and *xtl*); this is usually done by having the directory containing the headers in the include path. With G++, use the ``-I`` option to achieve this.
Assuming the first example code is located in ``example.cpp``, the compilation command is: @@ -53,7 +53,7 @@ is: g++ -I /path/to/xtensor/ -I /path/to/xtl/ example.cpp -o example -Note that if you installed `xtensor` and `xtl` with `cmake`, their headers will be located in the same +Note that if you installed *xtensor* and *xtl* with `cmake`, their headers will be located in the same directory, so you will need to provide only one path with the ``-I`` option. When you run the program, it produces the following output: @@ -65,7 +65,7 @@ When you run the program, it produces the following output: Building with cmake ------------------- -A better alternative for building programs using `xtensor` is to use `cmake`, especially if you are +A better alternative for building programs using *xtensor* is to use `cmake`, especially if you are developing for several platforms. Assuming the following folder structure: .. code:: bash diff --git a/docs/source/index.rst b/docs/source/index.rst index 0752ffd84..63f164196 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -12,25 +12,25 @@ Multi-dimensional arrays with broadcasting and lazy computing. Introduction ------------ -`xtensor` is a C++ library meant for numerical analysis with multi-dimensional +*xtensor* is a C++ library meant for numerical analysis with multi-dimensional array expressions. -`xtensor` provides +*xtensor* provides - an extensible expression system enabling **lazy broadcasting**. - an API following the idioms of the **C++ standard library**. -- tools to manipulate array expressions and build upon `xtensor`. +- tools to manipulate array expressions and build upon *xtensor*. -Containers of `xtensor` are inspired by `NumPy`_, the Python array programming +Containers of *xtensor* are inspired by `NumPy`_, the Python array programming library. **Adaptors** for existing data structures to be plugged into the expression system can easily be written. 
-In fact, `xtensor` can be used to **process numpy data structures in-place** +In fact, *xtensor* can be used to **process numpy data structures in-place** using Python's `buffer protocol`_. For more details on the numpy bindings, check out the xtensor-python_ project. Language bindings for R and Julia are also available. -`xtensor` requires a modern C++ compiler supporting C++14. The following C++ +*xtensor* requires a modern C++ compiler supporting C++14. The following C++ compilers are supported: - On Windows platforms, Visual C++ 2015 Update 2, or more recent diff --git a/docs/source/installation.rst b/docs/source/installation.rst index f7945134c..bc0bdb7ca 100644 --- a/docs/source/installation.rst +++ b/docs/source/installation.rst @@ -21,7 +21,7 @@ Installation ============ -Although ``xtensor`` is a header-only library, we provide standardized means to +Although *xtensor* is a header-only library, we provide standardized means to install it, with package managers or with cmake. Besides the xtensor headers, all these methods place the ``cmake`` project @@ -67,7 +67,7 @@ A package for xtensor is available on the Spack package manager. From source with cmake ---------------------- -You can also install ``xtensor`` from source with cmake. This requires that you +You can also install *xtensor* from source with cmake. This requires that you have the xtl_ library installed on your system. On Unix platforms, from the source directory: @@ -89,12 +89,12 @@ On Windows platforms, from the source directory: nmake install ``path_to_prefix`` is the absolute path to the folder where cmake searches for -dependencies and installs libraries. ``xtensor`` installation from cmake assumes +dependencies and installs libraries. *xtensor* installation from cmake assumes this folder contains ``include`` and ``lib`` subfolders. See the :doc:`build-options` section for more details about cmake options. 
-Although not officially supported, ``xtensor`` can be installed with MinGW: +Although not officially supported, *xtensor* can be installed with MinGW: .. code:: @@ -107,8 +107,8 @@ Although not officially supported, ``xtensor`` can be installed with MinGW: Including xtensor in your project --------------------------------- -The different packages of ``xtensor`` are built with cmake, so whatever the -installation mode you choose, you can add ``xtensor`` to your project using cmake: +The different packages of *xtensor* are built with cmake, so whatever the +installation mode you choose, you can add *xtensor* to your project using cmake: .. code:: diff --git a/docs/source/missing.rst b/docs/source/missing.rst index 892a36bc6..1aea11bf4 100644 --- a/docs/source/missing.rst +++ b/docs/source/missing.rst @@ -7,7 +7,7 @@ Missing values ============== -`xtensor` handles missing values and provides specialized container types for an optimized support of missing values. +*xtensor* handles missing values and provides specialized container types for an optimized support of missing values. Optional expressions -------------------- diff --git a/docs/source/operator.rst b/docs/source/operator.rst index 695ba26d2..bca7c1595 100644 --- a/docs/source/operator.rst +++ b/docs/source/operator.rst @@ -10,7 +10,7 @@ Operators and functions Arithmetic operators -------------------- -`xtensor` provides overloads of traditional arithmetic operators for +*xtensor* provides overloads of traditional arithmetic operators for :cpp:type:`xt::xexpression` objects: - unary :cpp:func:`~xt::xexpression::operator+` @@ -37,7 +37,7 @@ rules explained in a previous section. 
Logical operators ----------------- -`xtensor` also provides overloads of the logical operators: +*xtensor* also provides overloads of the logical operators: - :cpp:func:`~xt::xexpression::operator!` - :cpp:func:`~xt::xexpression::operator||` @@ -45,7 +45,7 @@ Logical operators Like arithmetic operators, these logical operators are element-wise operators and apply the lazy broadcasting rules. In addition to these element-wise -logical operators, `xtensor` provides two reducing boolean functions: +logical operators, *xtensor* provides two reducing boolean functions: - :cpp:func:`xt::any(E&& e) ` returns ``true`` if any of ``e`` elements is truthy, ``false`` otherwise. - :cpp:func:`xt::all(E&& e) ` returns ``true`` if all elements of ``e`` are truthy, ``false`` otherwise. @@ -68,12 +68,12 @@ and an element-wise ternary function (similar to the ``: ?`` ternary operator): // => res = { 11, 2, 3, 14 } Unlike in :any:`numpy.where`, :cpp:func:`xt::where` takes full advantage of the lazyness -of `xtensor`. +of *xtensor*. Comparison operators -------------------- -`xtensor` provides overloads of the inequality operators: +*xtensor* provides overloads of the inequality operators: - :cpp:func:`~xt::xexpression::operator\<` - :cpp:func:`~xt::xexpression::operator\<=` @@ -119,7 +119,7 @@ function. Bitwise operators ----------------- -`xtensor` also contains the following bitwise operators: +*xtensor* also contains the following bitwise operators: - Bitwise and: :cpp:func:`~xt::xexpression::operator&` - Bitwise or: :cpp:func:`~xt::xexpression::operator|` @@ -130,7 +130,7 @@ Bitwise operators Mathematical functions ---------------------- -`xtensor` provides overloads for many of the standard mathematical functions: +*xtensor* provides overloads for many of the standard mathematical functions: - basic functions: :cpp:func:`xt::abs`, :cpp:func:`xt::remainder`, :cpp:func:`xt::fma`, ... 
- exponential functions: :cpp:func:`xt::exp`, :cpp:func:`xt::expm1`, :cpp:func:`xt::log`, :cpp:func:`xt::log1p`, ... @@ -147,7 +147,7 @@ lazy broadcasting rules. Casting ------- -`xtensor` will implicitly promote and/or cast tensor expression elements as +*xtensor* will implicitly promote and/or cast tensor expression elements as needed, which suffices for most use-cases. But explicit casting can be performed via :cpp:func:`xt::cast`, which performs an element-wise ``static_cast``. @@ -166,7 +166,7 @@ performed via :cpp:func:`xt::cast`, which performs an element-wise ``static_cast Reducers -------- -`xtensor` provides reducers, that is, means for accumulating values of tensor +*xtensor* provides reducers, that is, means for accumulating values of tensor expressions over prescribed axes. The return value of a reducer is an :cpp:type:`xt::xexpression` with the same shape as the input expression, with the specified axes removed. @@ -209,7 +209,7 @@ A generator is provided to build the :cpp:type:`xt::xreducer_functors` object, t {1, 3}); If no axes are provided, the reduction is performed over all the axes, and the result is a 0-D expression. -Since `xtensor`'s expressions are lazy evaluated, you need to explicitely call the access operator to trigger +Since *xtensor*'s expressions are lazily evaluated, you need to explicitly call the access operator to trigger the evaluation and get the result: .. code:: @@ -256,7 +256,7 @@ as shown below: Accumulators ------------ -Similar to reducers, `xtensor` provides accumulators which are used to +Similar to reducers, *xtensor* provides accumulators which are used to implement cumulative functions such as :cpp:func:`xt::cumsum` or :cpp:func:`xt::cumprod`. Accumulators can currently only work on a single axis.
Additionally, the accumulators are not lazy and do not return an xexpression, but rather an evaluated :cpp:type:`xt::xarray` @@ -304,7 +304,7 @@ with the same rules as those for reducers: Evaluation strategy ------------------- -Generally, `xtensor` implements a :ref:`lazy execution model `, +Generally, *xtensor* implements a :ref:`lazy execution model `, but under certain circumstances, a *greedy* execution model with immediate execution can be favorable. For example, reusing (and recomputing) the same values of a reducer over and over again if you use them in a loop can cost a @@ -337,11 +337,11 @@ strategy is currently implemented. Universal functions and vectorization ------------------------------------- -`xtensor` provides utilities to **vectorize any scalar function** (taking +*xtensor* provides utilities to **vectorize any scalar function** (taking multiple scalar arguments) into a function that will perform on :cpp:type:`xt::xexpression` s, applying the lazy broadcasting rules which we described in a previous section. These functions are called :cpp:type:`xt::xfunction` s. -They are `xtensor`'s counterpart to numpy's universal functions. +They are *xtensor*'s counterpart to numpy's universal functions. Actually, all arithmetic and logical operators, inequality operator and mathematical functions we described before are :cpp:type:`xt::xfunction` s. diff --git a/docs/source/pitfall.rst b/docs/source/pitfall.rst index 860773765..019b878a3 100644 --- a/docs/source/pitfall.rst +++ b/docs/source/pitfall.rst @@ -61,7 +61,7 @@ be tempted to simplify it a bit: return (1 - tmp) / (1 + tmp); } -Unfortunately, you introduced a bug; indeed, expressions in `xtensor` are not evaluated +Unfortunately, you introduced a bug; indeed, expressions in *xtensor* are not evaluated immediately, they capture their arguments by reference or copy depending on their nature, for future evaluation. 
Since ``tmp`` is an lvalue, it is captured by reference in the last statement; when the function returns, ``tmp`` is destroyed, leading to a dangling reference @@ -139,7 +139,7 @@ Alignment of fixed-size members If you are using ``C++ >= 17`` you should not have to worry about this. -When building with `xsimd` (see :ref:`external-dependencies`), if you define a structure +When building with *xsimd* (see :ref:`external-dependencies`), if you define a structure having members of fixed-size xtensor types, you must ensure that the buffers properly aligned. For this you can use the macro ``XTENSOR_FIXED_ALIGN`` available in ``xtensor/xtensor_config.hpp``. diff --git a/docs/source/quickref/basic.rst b/docs/source/quickref/basic.rst index 1d8cb83f8..78b2fd3c9 100644 --- a/docs/source/quickref/basic.rst +++ b/docs/source/quickref/basic.rst @@ -214,7 +214,7 @@ Fill Iterators --------- -``xtensor`` containers provide iterators compatible with algorithms from the STL: +*xtensor* containers provide iterators compatible with algorithms from the STL: .. code:: diff --git a/docs/source/quickref/builder.rst b/docs/source/quickref/builder.rst index 2b09850b8..10fefe025 100644 --- a/docs/source/quickref/builder.rst +++ b/docs/source/quickref/builder.rst @@ -7,8 +7,8 @@ Builders ======== -Most of ``xtensor`` builders return unevaluated expressions (see :ref:`lazy-evaluation` -for more details) that can be assigned to any kind of ``xtensor`` container. +Most of *xtensor* builders return unevaluated expressions (see :ref:`lazy-evaluation` +for more details) that can be assigned to any kind of *xtensor* container. Ones ---- diff --git a/docs/source/quickref/math.rst b/docs/source/quickref/math.rst index 5087761a4..1761f5aae 100644 --- a/docs/source/quickref/math.rst +++ b/docs/source/quickref/math.rst @@ -7,7 +7,7 @@ Mathematical functions ====================== -Operations and functions of ``xtensor`` are not evaluated until they are assigned. 
+Operations and functions of *xtensor* are not evaluated until they are assigned. In the following, ``e1``, ``e2`` and ``e3`` can be arbitrary tensor expressions. The results of operations and functions are assigned to :cpp:type:`xt::xarray` in the examples, but that could be any other container (or even views). To keep an unevaluated diff --git a/docs/source/quickref/operator.rst b/docs/source/quickref/operator.rst index 04c58e8a9..825b2e95d 100644 --- a/docs/source/quickref/operator.rst +++ b/docs/source/quickref/operator.rst @@ -7,7 +7,7 @@ Operators ========= -Operations and functions of ``xtensor`` are not evaluated until they are assigned. +Operations and functions of *xtensor* are not evaluated until they are assigned. In the following, ``e1``, ``e2`` and ``e3`` can be arbitrary tensor expressions. The results of operations and functions are assigned to :cpp:type:`xt::xarray` in the examples, but that could be any other container (or even views). To keep an unevaluated diff --git a/docs/source/rank.rst b/docs/source/rank.rst index 330ac8c50..99dc2b35a 100644 --- a/docs/source/rank.rst +++ b/docs/source/rank.rst @@ -12,7 +12,7 @@ Tensor Rank Rank overload ------------- -All `xtensor`'s classes have a member ``rank`` that can be used +All *xtensor*'s classes have a member ``rank`` that can be used to overload based on rank using *SFINAE*. Consider the following example: diff --git a/docs/source/related.rst b/docs/source/related.rst index f0b39e34f..89e44dc53 100644 --- a/docs/source/related.rst +++ b/docs/source/related.rst @@ -24,7 +24,7 @@ xtensor-python :alt: xtensor-python The xtensor-python_ project provides the implementation of container types -compatible with ``xtensor``'s expression system, ``pyarray`` and ``pytensor`` +compatible with *xtensor*'s expression system, ``pyarray`` and ``pytensor`` which effectively wrap numpy arrays, allowing operating on numpy arrays in-place. 
@@ -135,7 +135,7 @@ xtensor-python-cookiecutter :width: 50% The xtensor-python-cookiecutter_ project helps extension authors create Python -extension modules making use of `xtensor`. +extension modules making use of *xtensor*. It takes care of the initial work of generating a project skeleton with @@ -155,7 +155,7 @@ xtensor-julia :alt: xtensor-julia The xtensor-julia_ project provides the implementation of container types -compatible with ``xtensor``'s expression system, ``jlarray`` and ``jltensor`` +compatible with *xtensor*'s expression system, ``jlarray`` and ``jltensor`` which effectively wrap Julia arrays, allowing operating on Julia arrays in-place. @@ -249,7 +249,7 @@ xtensor-julia-cookiecutter :width: 50% The xtensor-julia-cookiecutter_ project helps extension authors create Julia -extension modules making use of `xtensor`. +extension modules making use of *xtensor*. It takes care of the initial work of generating a project skeleton with @@ -269,7 +269,7 @@ xtensor-r :alt: xtensor-r The xtensor-r_ project provides the implementation of container types -compatible with ``xtensor``'s expression system, ``rarray`` and ``rtensor`` +compatible with *xtensor*'s expression system, ``rarray`` and ``rtensor`` which effectively wrap R arrays, allowing operating on R arrays in-place. Example 1: Use an algorithm of the C++ library on a R array in-place @@ -406,7 +406,7 @@ The xsimd_ project provides a unified API for making use of the SIMD features of modern preprocessors for C++ library authors. It also provides accelerated implementation of common mathematical functions operating on batches. -xsimd_ is an optional dependency to ``xtensor`` which enable SIMD vectorization +xsimd_ is an optional dependency of *xtensor* which enables SIMD vectorization of xtensor operations. This feature is enabled with the ``XTENSOR_USE_XSIMD`` compilation flag, which is set to ``false`` by default. @@ -416,7 +416,7 @@ xtl ..
image:: xtl.svg :alt: xtl -The xtl_ project, the only dependency of ``xtensor`` is a C++ template library +The xtl_ project, the only dependency of *xtensor*, is a C++ template library holding the implementation of basic tools used across the libraries in the ecosystem. xframe @@ -426,7 +426,7 @@ xframe :alt: xframe The xframe_ project provides multi-dimensional labeled arrays and a data frame for C++, -based on ``xtensor`` and ``xtl``. +based on *xtensor* and *xtl*. `xframe` provides @@ -443,7 +443,7 @@ The z5_ project implements the zarr_ and n5_ storage specifications in C++. Both specifications describe chunked nd-array storage similar to HDF5, but use the filesystem to store chunks. This design allows for parallel write access and efficient cloud based storage, crucial requirements in modern big data applications. -The project uses ``xtensor`` to represent arrays in memory +The project uses *xtensor* to represent arrays in memory and also provides a python wrapper based on ``xtensor-python``. .. _xtensor-python: https://github.com/xtensor-stack/xtensor-python diff --git a/docs/source/scalar.rst b/docs/source/scalar.rst index 562ffc876..24bc20d33 100644 --- a/docs/source/scalar.rst +++ b/docs/source/scalar.rst @@ -10,7 +10,7 @@ Scalars and 0-D expressions Assignment ---------- -In `xtensor`, scalars are handled as if they were 0-dimensional expressions. +In *xtensor*, scalars are handled as if they were 0-dimensional expressions. This means that when assigning a scalar value to an :cpp:type:`xt::xarray`, the array is **not filled** with that value, but resized to become a 0-D array containing the scalar value: diff --git a/docs/source/view.rst b/docs/source/view.rst index a3f4efe22..8a2b11f5b 100644 --- a/docs/source/view.rst +++ b/docs/source/view.rst @@ -11,7 +11,7 @@ Views Views are used to adapt the shape of an :cpp:type:`xt::xexpression` without changing it, nor copying it.
Views are convenient tools for assigning parts of an expression: since they do not copy the underlying expression, -assigning to the view actually assigns to the underlying expression. `xtensor` provides many kinds of views. +assigning to the view actually assigns to the underlying expression. *xtensor* provides many kinds of views. Sliced views ------------ @@ -166,7 +166,7 @@ The :cpp:type:`xt::xstrided_view` is very efficient on contigous memory Transposed views ---------------- -``xtensor`` provides a lazy transposed view on any expression, whose layout is either row-major order or column major order. +*xtensor* provides a lazy transposed view on any expression, whose layout is either row-major order or column major order. Trying to build a transposed view on a expression with a dynamic layout throws an exception. .. code:: @@ -188,7 +188,7 @@ Flatten views ------------- It is sometimes useful to have a one-dimensional view of all the elements of an expression. -``xtensor`` provides two functions for that, :cpp:func:`xt::ravel` and :cpp:func:`xt::flatten`. +*xtensor* provides two functions for that, :cpp:func:`xt::ravel` and :cpp:func:`xt::flatten`. The former one lets you specify the order used to read the elements while the latter one uses the layout of the expression. @@ -314,7 +314,7 @@ Filtration Sometimes, the only thing you want to do with a filter is to assign it a scalar. Though this can be done as shown in the previous section, this is not the *optimal* way to do it. -`xtensor` provides a specially optimized mechanism for that, called filtration. +*xtensor* provides a specially optimized mechanism for that, called filtration. A filtration IS NOT an :cpp:type:`xt::xexpression`, the only methods it provides are scalar and computed scalar assignments. @@ -349,7 +349,7 @@ Masked views are multidimensional views that apply a mask on an :cpp:type:`xt::x Broadcasting views ------------------ -Another type of view provided by `xtensor` is *broadcasting view*. 
+Another type of view provided by *xtensor* is *broadcasting view*. Such a view broadcasts an expression to the specified shape. As long as the view is not assigned to an array, no memory allocation or copy occurs. Broadcasting views should be built with the :cpp:func:`xt::broadcast` helper function. @@ -370,7 +370,7 @@ Broadcasting views should be built with the :cpp:func:`xt::broadcast` helper fun Complex views ------------- -In the case of a tensor containing complex numbers, `xtensor` provides views returning +In the case of a tensor containing complex numbers, *xtensor* provides views returning :cpp:type:`xt::xexpression` corresponding to the real and imaginary parts of the complex numbers. Like for other views, the elements of the underlying :cpp:type:`xt::xexpression` are not copied.