Commit

Some spelling corrections

tdegeus committed Mar 22, 2023
1 parent 5b4122b commit 807aa88
Showing 29 changed files with 65 additions and 65 deletions.
14 changes: 7 additions & 7 deletions docs/source/changelog.rst
@@ -575,7 +575,7 @@ Other changes
`#1908 <https://github.com/xtensor-stack/xtensor/pull/1908>`_
- Added ``noexcept`` in ``svector``
`#1919 <https://github.com/xtensor-stack/xtensor/pull/1919>`_
- - Add implementation of repeat (similar to numpy)
+ - Add implementation of repeat (similar to NumPy)
`#1896 <https://github.com/xtensor-stack/xtensor/pull/1896>`_
- Fix initialization of out shape in ``xt::tile``
`#1923 <https://github.com/xtensor-stack/xtensor/pull/1923>`_
@@ -768,7 +768,7 @@ Other changes
`#1676 <https://github.com/xtensor-stack/xtensor/pull/1676>`_
- Added missing coma
`#1680 <https://github.com/xtensor-stack/xtensor/pull/1680>`_
- - Added Numpy-like parameter in ``load_csv``
+ - Added NumPy-like parameter in ``load_csv``
`#1682 <https://github.com/xtensor-stack/xtensor/pull/1682>`_
- Added ``shape()`` method to ``xshape.hpp``
`#1592 <https://github.com/xtensor-stack/xtensor/pull/1592>`_
@@ -1421,17 +1421,17 @@ Other changes
`#1109 <https://github.com/xtensor-stack/xtensor/pull/1109>`_.
- Added test case for ``setdiff1d``
`#1110 <https://github.com/xtensor-stack/xtensor/pull/1110>`_.
- - Added missing reference to ``diff`` in ``From numpy to xtensor`` section
+ - Added missing reference to ``diff`` in ``From NumPy to xtensor`` section
`#1116 <https://github.com/xtensor-stack/xtensor/pull/1116>`_.
- Add ``amax`` and ``amin`` to the documentation
`#1121 <https://github.com/xtensor-stack/xtensor/pull/1121>`_.
- ``histogram`` and ``histogram_bin_edges`` implementation
`#1108 <https://github.com/xtensor-stack/xtensor/pull/1108>`_.
- - Added numpy comparison for interp
+ - Added NumPy comparison for interp
`#1111 <https://github.com/xtensor-stack/xtensor/pull/1111>`_.
- Allow multiple return type reducer functions
`#1113 <https://github.com/xtensor-stack/xtensor/pull/1113>`_.
- - Fixes ``average`` bug + adds Numpy based tests
+ - Fixes ``average`` bug + adds NumPy based tests
`#1118 <https://github.com/xtensor-stack/xtensor/pull/1118>`_.
- Static ``xfunction`` cache for fixed sizes
`#1105 <https://github.com/xtensor-stack/xtensor/pull/1105>`_.
@@ -2122,7 +2122,7 @@ Breaking changes

- The API for ``xbuffer_adaptor`` has changed. The template parameter is the type of the buffer, not just the value type
`#482 <https://github.com/xtensor-stack/xtensor/pull/482>`_.
- - Change ``edge_items`` print option to ``edgeitems`` for better numpy consistency
+ - Change ``edge_items`` print option to ``edgeitems`` for better NumPy consistency
`#489 <https://github.com/xtensor-stack/xtensor/pull/489>`_.
- *xtensor* now depends on *xtl* version `~0.3.3`
`#508 <https://github.com/xtensor-stack/xtensor/pull/508>`_.
@@ -2159,7 +2159,7 @@ Other changes
`#492 <https://github.com/xtensor-stack/xtensor/pull/492>`_.
- The ``size()`` method for containers now returns the total number of elements instead of the buffer size, which may differ when the smallest stride is greater than ``1``
`#502 <https://github.com/xtensor-stack/xtensor/pull/502>`_.
- - The behavior of ``linspace`` with integral types has been made consistent with numpy
+ - The behavior of ``linspace`` with integral types has been made consistent with NumPy
`#510 <https://github.com/xtensor-stack/xtensor/pull/510>`_.

0.12.1
2 changes: 1 addition & 1 deletion docs/source/closure-semantics.rst
@@ -9,7 +9,7 @@
Closure semantics
=================

- The *xtensor* library is a tensor expression library implementing numpy-style broadcasting and universal functions but in a lazy fashion.
+ The *xtensor* library is a tensor expression library implementing NumPy-style broadcasting and universal functions but in a lazy fashion.

If ``x`` and ``y`` are two tensor expressions with compatible shapes, the result of ``x + y`` is not a tensor but an expression that does
not hold any value. Values of ``x + y`` are computed upon access or when the result is assigned to a container such as :cpp:type:`xt::xtensor` or
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -41,7 +41,7 @@ def setup(app):
'goatcounter.js'
]

- # Automatically link to numpy doc
+ # Automatically link to NumPy doc
extensions += ['sphinx.ext.intersphinx']
intersphinx_mapping = {
"numpy": ("https://numpy.org/doc/stable/", None),
2 changes: 1 addition & 1 deletion docs/source/container.rst
@@ -79,7 +79,7 @@ Runtime vs Compile-time dimensionality
Three container classes implementing multidimensional arrays are provided: :cpp:type:`xt::xarray` and
:cpp:type:`xt::xtensor` and :cpp:type:`xt::xtensor_fixed`.

- - :cpp:type:`xt::xarray` can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to numpy arrays.
+ - :cpp:type:`xt::xarray` can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to NumPy arrays.
- :cpp:type:`xt::xtensor` has a dimension set at compilation time, which enables many optimizations.
For example, shapes and strides of :cpp:type:`xt::xtensor` instances are allocated on the stack instead of the heap.
- :cpp:type:`xt::xtensor_fixed` has a shape fixed at compile time.
4 changes: 2 additions & 2 deletions docs/source/developer/concepts.rst
@@ -270,8 +270,8 @@ If you read the entire code of ``xcontainer``, you'll notice that two types are
strides and backstrides: ``shape_type`` and ``inner_shape_type``, ``strides_type`` and
``inner_strides_type``, and ``backstrides_type`` and ``inner_backstrides_type``. The distinction
between ``inner_shape_type`` and ``shape_type`` was motivated by the xtensor-python wrapper around
- numpy data structures, where the inner shape type is a proxy on the shape section of the numpy
- arrayobject. It cannot have a value semantics on its own as it is bound to the entire numpy array.
+ NumPy data structures, where the inner shape type is a proxy on the shape section of the NumPy
+ arrayobject. It cannot have a value semantics on its own as it is bound to the entire NumPy array.

``xstrided_container`` inherits from ``xcontainer``; it represents a container that holds its shape
and strides. It provides methods for reshaping the container:
2 changes: 1 addition & 1 deletion docs/source/expression.rst
@@ -90,7 +90,7 @@ Broadcasting

The number of dimensions of an :cpp:type:`xt::xexpression` and the sizes of these dimensions are provided by the :cpp:func:`~xt::xexpression::shape` method, which returns a sequence of unsigned integers
specifying the size of each dimension. We can operate on expressions of different shapes of dimensions in an elementwise fashion.
- Broadcasting rules of *xtensor* are similar to those of Numpy_ and libdynd_.
+ Broadcasting rules of *xtensor* are similar to those of NumPy_ and libdynd_.

In an operation involving two arrays of different dimensions, the array with the lesser dimensions is broadcast across the leading dimensions of the other.
For example, if ``A`` has shape ``(2, 3)``, and ``B`` has shape ``(4, 2, 3)``, the result of a broadcast operation with ``A`` and ``B`` has shape ``(4, 2, 3)``.
2 changes: 1 addition & 1 deletion docs/source/file_loading.rst
@@ -13,7 +13,7 @@ format.
Please note that many more input and output formats are available in the `xtensor-io
<https://github.com/xtensor-stack/xtensor-io>`_ package.
`xtensor-io` offers functions to load and store from image files (``jpg``, ``gif``, ``png``...),
- sound files (``wav``, ``ogg``...), HDF5 files (``h5``, ``hdf5``, ...), and compressed numpy format (``npz``).
+ sound files (``wav``, ``ogg``...), HDF5 files (``h5``, ``hdf5``, ...), and compressed NumPy format (``npz``).


Loading CSV data into xtensor
4 changes: 2 additions & 2 deletions docs/source/index.rst
@@ -25,8 +25,8 @@ Containers of *xtensor* are inspired by `NumPy`_, the Python array programming
library. **Adaptors** for existing data structures to be plugged into the
expression system can easily be written.

- In fact, *xtensor* can be used to **process numpy data structures in-place**
- using Python's `buffer protocol`_. For more details on the numpy bindings,
+ In fact, *xtensor* can be used to **process NumPy data structures in-place**
+ using Python's `buffer protocol`_. For more details on the NumPy bindings,
check out the xtensor-python_ project. Language bindings for R and Julia are
also available.

10 changes: 5 additions & 5 deletions docs/source/numpy-differences.rst
@@ -15,12 +15,12 @@ xtensor and numpy are very different libraries in their internal semantics. Whil
is a lazy expression system, numpy manipulates in-memory containers, however, similarities in
APIs are obvious. See e.g. the numpy to xtensor cheat sheet.

- And this page tracks the subtle differences of behavior between numpy and xtensor.
+ And this page tracks the subtle differences of behavior between NumPy and xtensor.

Zero-dimensional arrays
-----------------------

- With numpy, 0-D arrays are nearly indistinguishable from scalars. This led to some issues w.r.t.
+ With NumPy, 0-D arrays are nearly indistinguishable from scalars. This led to some issues w.r.t.
universal functions returning scalars with 0-D array inputs instead of actual arrays...

In xtensor, 0-D expressions are not implicitly convertible to scalar values. Values held by 0-D
@@ -87,15 +87,15 @@ be assigned to a container such as xarray or xtensor.
Missing values
--------------

- Support of missing values in numpy can be emulated with the masked array module,
+ Support of missing values in NumPy can be emulated with the masked array module,
which provides a means to handle arrays that have missing or invalid data.

Support of missing values in xtensor is done through a notion of optional values, implemented in ``xoptional<T, B>``, which serves both as a value type for container and as a reference proxy for optimized storage types. See the section of the documentation on :doc:`missing`.

Strides
-------

- Strided containers of xtensor and numpy having the same exact memory layout may have different strides when accessing them through the ``strides`` attribute.
+ Strided containers of xtensor and NumPy having the same exact memory layout may have different strides when accessing them through the ``strides`` attribute.
The reason is an optimization in xtensor, which is to set the strides to ``0`` in dimensions of length ``1``, which simplifies the implementation of broadcasting of universal functions.

.. tip::
@@ -109,7 +109,7 @@ The reason is an optimization in xtensor, which is to set the strides to ``0`` i
xt::strides(a, xt::stride_type::internal); // ``== a.strides()``
- xt::strides(a, xt::stride_type::bytes) // strides in bytes, as in numpy
+ xt::strides(a, xt::stride_type::bytes) // strides in bytes, as in NumPy
Array indices
20 changes: 10 additions & 10 deletions docs/source/related.rst
@@ -25,10 +25,10 @@ xtensor-python

The xtensor-python_ project provides the implementation of container types
compatible with *xtensor*'s expression system, ``pyarray`` and ``pytensor``
- which effectively wrap numpy arrays, allowing operating on numpy arrays
+ which effectively wrap NumPy arrays, allowing operating on NumPy arrays
in-place.

- Example 1: Use an algorithm of the C++ library on a numpy array in-place
+ Example 1: Use an algorithm of the C++ library on a NumPy array in-place
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**C++ code**
@@ -38,8 +38,8 @@ Example 1: Use an algorithm of the C++ library on a numpy array in-place
#include <numeric> // Standard library import for std::accumulate
#include <pybind11/pybind11.h> // Pybind11 import to define Python bindings
#include <xtensor/xmath.hpp> // xtensor import for the C++ universal functions
- #define FORCE_IMPORT_ARRAY // numpy C api loading
- #include <xtensor-python/pyarray.hpp> // Numpy bindings
+ #define FORCE_IMPORT_ARRAY // NumPy C api loading
+ #include <xtensor-python/pyarray.hpp> // NumPy bindings
double sum_of_sines(xt::pyarray<double> &m)
{
@@ -144,7 +144,7 @@ It takes care of the initial work of generating a project skeleton with
A few examples included in the resulting project including

- A universal function defined from C++
- - A function making use of an algorithm from the STL on a numpy array
+ - A function making use of an algorithm from the STL on a NumPy array
- Unit tests
- The generation of the HTML documentation with sphinx

@@ -200,7 +200,7 @@ Example 1: Use an algorithm of the C++ library with a Julia array
1.2853996391883833
- Example 2: Create a numpy-style universal function from a C++ scalar function
+ Example 2: Create a NumPy-style universal function from a C++ scalar function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**C++ code**
@@ -257,8 +257,8 @@ It takes care of the initial work of generating a project skeleton with

A few examples included in the resulting project including

- - A numpy-style universal function defined from C++
- - A function making use of an algorithm from the STL on a numpy array
+ - A NumPy-style universal function defined from C++
+ - A function making use of an algorithm from the STL on a NumPy array
- Unit tests
- The generation of the HTML documentation with sphinx

@@ -318,7 +318,7 @@ xtensor-blas
The xtensor-blas_ project is an extension to the xtensor library, offering
bindings to BLAS and LAPACK libraries through cxxblas and cxxlapack from the
FLENS project. ``xtensor-blas`` powers the ``xt::linalg`` functionalities,
- which are the counterpart to numpy's ``linalg`` module.
+ which are the counterpart to NumPy's ``linalg`` module.

xtensor-fftw
------------
@@ -328,7 +328,7 @@ xtensor-fftw

The xtensor-fftw_ project is an extension to the xtensor library, offering
bindings to the fftw library. ``xtensor-fftw`` powers the ``xt::fftw``
- functionalities, which are the counterpart to numpy's ``fft`` module.
+ functionalities, which are the counterpart to NumPy's ``fft`` module.

Example 1: Calculate a derivative in Fourier space
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
6 changes: 3 additions & 3 deletions include/xtensor/xassign.hpp
@@ -324,10 +324,10 @@ namespace xt
};

/**
- * Considering the assigment LHS = RHS, if the requested value type used for
+ * Considering the assignment LHS = RHS, if the requested value type used for
* loading simd from RHS is not complex while LHS value_type is complex,
* the assignment fails. The reason is that SIMD batches of complex values cannot
- * be implicitly instanciated from batches of scalar values.
+ * be implicitly instantiated from batches of scalar values.
* Making the constructor implicit does not fix the issue since in the end,
* the assignment is done with vec.store(buffer) where vec is a batch of scalars
* and buffer an array of complex. SIMD batches of scalars do not provide overloads
@@ -1144,7 +1144,7 @@ namespace xt
auto fct_stepper = e2.stepper_begin(e1.shape());
auto res_stepper = e1.stepper_begin(e1.shape());

- // TODO in 1D case this is ambigous -- could be RM or CM.
+ // TODO in 1D case this is ambiguous -- could be RM or CM.
// Use default layout to make decision
std::size_t step_dim = 0;
if (!is_row_major) // row major case
2 changes: 1 addition & 1 deletion include/xtensor/xaxis_iterator.hpp
@@ -326,7 +326,7 @@ namespace xt
* Returns an iterator to the element following the last element of
* the expression for the specified axis
*
- * @param e the expession to iterate over
+ * @param e the expression to iterate over
* @param axis the axis to iterate over
* @return an instance of xaxis_iterator
*/
2 changes: 1 addition & 1 deletion include/xtensor/xaxis_slice_iterator.hpp
@@ -339,7 +339,7 @@ namespace xt
* Returns an iterator to the element following the last element of
* the expression for the specified axis
*
- * @param e the expession to iterate over
+ * @param e the expression to iterate over
* @param axis the axis to iterate over
* @return an instance of xaxis_slice_iterator
*/
2 changes: 1 addition & 1 deletion include/xtensor/xbroadcast.hpp
@@ -347,7 +347,7 @@ namespace xt
*
* @warning This method is meant for performance, for expressions with a dynamic
* number of dimensions (i.e. not known at compile time). Since it may have
- * undefined behavior (see parameters), operator() should be prefered whenever
+ * undefined behavior (see parameters), operator() should be preferred whenever
* it is possible.
* @warning This method is NOT compatible with broadcasting, meaning the following
* code has undefined behavior:
4 changes: 2 additions & 2 deletions include/xtensor/xbuilder.hpp
@@ -849,7 +849,7 @@ namespace xt
/**
* @brief Stack xexpressions in sequence horizontally (column wise).
* This is equivalent to concatenation along the second axis, except for 1-D
- * xexpressions where it concatenate along the firts axis.
+ * xexpressions where it concatenates along the first axis.
*
* @param t \ref xtuple of xexpressions to stack
* @return xgenerator evaluating to stacked elements
@@ -1109,7 +1109,7 @@ namespace xt
auto shape = arr.shape();
auto dimension = arr.dimension();

- // The following shape calculation code is an almost verbatim adaptation of numpy:
+ // The following shape calculation code is an almost verbatim adaptation of NumPy:
// https://github.com/numpy/numpy/blob/2aabeafb97bea4e1bfa29d946fbf31e1104e7ae0/numpy/core/src/multiarray/item_selection.c#L1799
auto ret_shape = xtl::make_sequence<shape_type>(dimension - 1, 0);
int dim_1 = static_cast<int>(shape[axis_1]);
2 changes: 1 addition & 1 deletion include/xtensor/xcomplex.hpp
@@ -228,7 +228,7 @@ namespace xt
/**
* Calculates the phase angle elementwise for the complex numbers in @p e.
*
- * Note that this function might be slightly less perfomant than xt::arg.
+ * Note that this function might be slightly less performant than xt::arg.
*
* @ingroup xt_xcomplex
* @param e the xt::xexpression
8 changes: 4 additions & 4 deletions include/xtensor/xcontainer.hpp
@@ -472,7 +472,7 @@ namespace xt
*
* @warning This method is meant for performance, for expressions with a dynamic
* number of dimensions (i.e. not known at compile time). Since it may have
- * undefined behavior (see parameters), operator() should be prefered whenever
+ * undefined behavior (see parameters), operator() should be preferred whenever
* it is possible.
* @warning This method is NOT compatible with broadcasting, meaning the following
* code has undefined behavior:
@@ -502,7 +502,7 @@ namespace xt
*
* @warning This method is meant for performance, for expressions with a dynamic
* number of dimensions (i.e. not known at compile time). Since it may have
- * undefined behavior (see parameters), operator() should be prefered whenever
+ * undefined behavior (see parameters), operator() should be preferred whenever
* it is possible.
* @warning This method is NOT compatible with broadcasting, meaning the following
* code has undefined behavior:
@@ -662,7 +662,7 @@ namespace xt
}

/**
- * Returns a reference to the element at the specified position in the containter
+ * Returns a reference to the element at the specified position in the container
* storage (as if it was one dimensional).
* @param i index specifying the position in the storage.
* Must be smaller than the number of elements in the container.
@@ -675,7 +675,7 @@
}

/**
- * Returns a constant reference to the element at the specified position in the containter
+ * Returns a constant reference to the element at the specified position in the container
* storage (as if it was one dimensional).
* @param i index specifying the position in the storage.
* Must be smaller than the number of elements in the container.
4 changes: 2 additions & 2 deletions include/xtensor/xfixed.hpp
@@ -544,7 +544,7 @@ namespace xt

/**
* Create an uninitialized xfixed_container.
- * Note this function is only provided for homogenity, and the shape & layout argument is
+ * Note this function is only provided for homogeneity, and the shape & layout argument is
* disregarded (the template shape is always used).
*
* @param shape the shape of the xfixed_container (unused!)
@@ -571,7 +571,7 @@

/**
* Create an xfixed_container, and initialize with the value of v.
- * Note, the shape argument to this function is only provided for homogenity,
+ * Note, the shape argument to this function is only provided for homogeneity,
* and the shape argument is disregarded (the template shape is always used).
*
* @param shape the shape of the xfixed_container (unused!)